Science.gov

Sample records for algorithm called fast

  1. Data-adaptive algorithms for calling alleles in repeat polymorphisms.

    PubMed

    Stoughton, R; Bumgarner, R; Frederick, W J; McIndoe, R A

    1997-01-01

    Data-adaptive algorithms are presented for separating overlapping signatures of heterozygotic allele pairs in electrophoresis data. Application is demonstrated for human microsatellite CA-repeat polymorphisms in LiCor 4000 and ABI 373 data. The algorithms allow overlapping alleles to be called correctly in almost every case where a trained observer could do so, and provide a fast, automated, objective alternative to human reading of the gels. The algorithm also supplies a confidence level that can be used to flag marginal cases for verification by eye, or as input to later stages of statistical analysis. PMID:9059812

  2. Automated DNA Base Pair Calling Algorithm

    1999-07-07

    The procedure solves the problem of calling the DNA base pair sequence from two-channel electropherogram separations in an automated fashion. The core of the program involves a peak picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal-to-noise sequence patterns, frequency vs. ratio of the two-channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.
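
    The derivative-based peak picking at the core of the procedure can be illustrated briefly: treat first-derivative zero crossings with negative curvature as candidate peaks and filter them by height. The numpy sketch below shows only this general idea; the function name and the min_height parameter are hypothetical, and the actual procedure also uses third derivatives, peak spacing and channel-ratio information.

```python
import numpy as np

def pick_peaks(trace: np.ndarray, min_height: float) -> np.ndarray:
    """Return indices where the first derivative crosses zero going
    downward and the second derivative is negative, i.e. local maxima."""
    d1 = np.gradient(trace)
    d2 = np.gradient(d1)
    crossing = (d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0)
    idx = np.flatnonzero(crossing)
    return idx[trace[idx] >= min_height]

t = np.linspace(0, 10, 500)
trace = np.exp(-(t - 3) ** 2 * 8) + 0.6 * np.exp(-(t - 7) ** 2 * 8)
print(pick_peaks(trace, 0.1))   # two peak indices, near t = 3 and t = 7
```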

  3. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade-offs for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
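
    The Maximum Temporal Pixel (MTP) compression mentioned above collapses a block of video frames into a single image of per-pixel temporal maxima, which can then be thresholded cheaply. Below is a minimal numpy sketch of that general idea, not Gural's implementation; the function names and the k-sigma detection rule are illustrative assumptions.

```python
import numpy as np

def mtp_compress(frames: np.ndarray) -> np.ndarray:
    """Collapse a (T, H, W) stack of grayscale frames into one
    maximum-temporal-pixel image: each output pixel keeps the
    brightest value it attained over the block of frames."""
    return frames.max(axis=0)

def mtp_threshold(frames: np.ndarray, k: float = 4.0) -> np.ndarray:
    """Flag pixels whose temporal maximum rises k standard deviations
    above the per-pixel temporal mean (a simple detection threshold)."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0) + 1e-9         # avoid divide-by-zero
    return (mtp_compress(frames) - mean) / std > k

# Example: 64 synthetic 480x640 noise frames with one bright transient.
rng = np.random.default_rng(0)
stack = rng.normal(100.0, 5.0, size=(64, 480, 640))
stack[30, 240, 320] += 60.0                 # transient "meteor" pixel
mask = mtp_threshold(stack)
print(mask.sum(), mask[240, 320])           # typically: 1 True
```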

  4. A hybrid fast Hankel transform algorithm for electromagnetic modeling

    USGS Publications Warehouse

    Anderson, W.L.

    1989-01-01

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram (called HYBFHT) written in standard Fortran-77 provides a simple user interface to call either subalgorithm. The hybrid approach is an attempt to combine the best features of the two subalgorithms to minimize the user's coding requirements and to provide fast execution and good accuracy for a large class of electromagnetic problems involving various related Hankel transform sets with multiple arguments. Special cases of Hankel transforms of double-order and double-argument are discussed, where use of HYBFHT is shown to be advantageous for oscillatory kernel functions.

  5. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo et al., who showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed; naively, this requires O(kn) distance computations per iteration.
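
    The kd-tree acceleration referred to above targets the nearest-center step of each iteration. The scipy sketch below shows that step in its simplest form, a tree built over the centers and queried with all points; Kanungo et al.'s filtering algorithm is more refined (it stores the data points in a kd-tree and prunes candidate centers per node), and ISOCLUS additionally needs the extra per-cluster statistics noted in the abstract.

```python
import numpy as np
from scipy.spatial import cKDTree

def assign_and_update(points: np.ndarray, centers: np.ndarray):
    """One clustering iteration: find each point's nearest center with a
    kd-tree, then recompute centers as the mean of their members."""
    tree = cKDTree(centers)
    _, nearest = tree.query(points)            # index of nearest center
    new_centers = np.array([
        points[nearest == j].mean(axis=0) if np.any(nearest == j)
        else centers[j]                        # keep empty clusters in place
        for j in range(len(centers))
    ])
    return nearest, new_centers

rng = np.random.default_rng(1)
pts = rng.random((10_000, 5))                  # n points in d = 5
ctrs = pts[rng.choice(len(pts), 8, replace=False)]
labels, ctrs = assign_and_update(pts, ctrs)
```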

  6. Fast diffraction computation algorithms based on FFT

    NASA Astrophysics Data System (ADS)

    Logofatu, Petre Catalin; Nascov, Victor; Apostol, Dan

    2010-11-01

    The discovery of the Fast Fourier Transform (FFT) algorithm by Cooley and Tukey meant for diffraction computation what the invention of computers meant for computation in general. The computation time reduction is more significant for large input data, but generally the FFT reduces the computation time by several orders of magnitude. This was the beginning of an entire revolution in optical signal processing and resulted in an abundance of fast algorithms for diffraction computation in a variety of situations. The property that allowed the creation of these fast algorithms is that, as it turns out, most diffraction formulae contain at their core one or more Fourier transforms which may be rapidly calculated using the FFT. The key to discovering a new fast algorithm is to reformulate the diffraction formula so as to identify and isolate the Fourier transforms it contains. In this way, the fast scaled transform, the fast Fresnel transform and the fast Rayleigh-Sommerfeld transform were designed. Remarkable improvements were the generalization of the DFT to the scaled DFT, which allows freedom to choose the dimensions of the output window for Fraunhofer-Fourier and Fresnel diffraction; the mathematical concept of linearized convolution, which thwarts the circular character of the discrete Fourier transform and allows the use of the FFT; and, last but not least, the linearized discrete scaled convolution, a new concept for which we claim priority.
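
    The "linearized convolution" idea, as far as the abstract describes it, amounts to defeating the DFT's circular wrap-around by zero-padding: pad both inputs to at least the full linear-convolution length before multiplying their spectra. A minimal numpy sketch of this standard technique (not the authors' scaled variant):

```python
import numpy as np

def linear_conv_fft(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Linear (non-circular) convolution via FFT: zero-pad both inputs
    to at least len(a)+len(b)-1 samples so the DFT's inherent circular
    wrap-around never folds the tail back onto the head."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two, for speed
    A = np.fft.rfft(a, nfft)
    B = np.fft.rfft(b, nfft)
    return np.fft.irfft(A * B, nfft)[:n]

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -1.0])
print(np.allclose(linear_conv_fft(a, b), np.convolve(a, b)))  # True
```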

  7. Fast Intersection Algorithms for Sorted Sequences

    NASA Astrophysics Data System (ADS)

    Baeza-Yates, Ricardo; Salinger, Alejandro

    This paper presents and analyzes a simple intersection algorithm for sorted sequences that is fast on average. It is related to the multiple searching problem and to merging. We present the worst- and average-case analyses, showing that in the former, the complexity nicely adapts to the smallest list size. In the latter case, it performs fewer comparisons than the total number of elements of both inputs, n and m, when n = αm (α > 1), achieving O(m log(n/m)) complexity. The algorithm is motivated by its application to fast query processing in Web search engines, where large intersections, or differences, must be performed fast. In this case we show experimentally that the algorithm is faster than previous solutions.
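
    The O(m log(n/m)) behavior quoted above is characteristic of galloping (doubling) search, sketched below as a generic illustration; the paper's own algorithm is a related but distinct adaptive scheme, so this code should be read as context, not as the authors' method.

```python
from bisect import bisect_left

def gallop_search(seq, x, lo):
    """Find the insertion point of x in sorted seq[lo:] by doubling
    steps, then binary search inside the bracketed range."""
    step, hi = 1, lo + 1
    while hi < len(seq) and seq[hi] < x:
        lo, hi, step = hi, hi + step, step * 2
    return bisect_left(seq, x, lo, min(hi + 1, len(seq)))

def intersect_sorted(small, large):
    """Intersect two sorted lists in O(m log(n/m)) comparisons by
    galloping through the larger list for each element of the smaller."""
    out, pos = [], 0
    for x in small:
        pos = gallop_search(large, x, pos)
        if pos < len(large) and large[pos] == x:
            out.append(x)
            pos += 1
    return out

print(intersect_sorted([2, 3, 11, 50], list(range(0, 100, 2))))  # [2, 50]
```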

  8. Fast Fourier Transform algorithm design and tradeoffs

    NASA Technical Reports Server (NTRS)

    Kamin, Ray A., III; Adams, George B., III

    1988-01-01

    The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs is measured. The Fast Fourier Transform programs are compared to the currently best Cray-2 FFT program.

  9. Fast training algorithms for multilayer neural nets.

    PubMed

    Brent, R P

    1991-01-01

    An algorithm that is faster than back-propagation and for which it is not necessary to specify the number of hidden units in advance is described. The relationship with other fast pattern-recognition algorithms, such as algorithms based on k-d trees, is discussed. The algorithm has been implemented and tested on artificial problems, such as the parity problem, and on real problems arising in speech recognition. Experimental results, including training times and recognition accuracy, are given. Generally, the algorithm achieves accuracy as good as or better than nets trained using back-propagation. Accuracy is comparable to that for the nearest-neighbor algorithm, which is slower and requires more storage space.

  10. Fast local motion estimation algorithm using elementary motion detectors

    NASA Astrophysics Data System (ADS)

    Nakamura, Eiji; Nakamura, Takehito; Sawada, Katsutoshi

    2003-06-01

    This paper presents a fast local motion estimation algorithm based on so-called elementary motion detectors (EMDs). EMDs, which model insects' visual signal processing systems, have low computational complexity and can thus serve as key components of such a fast local motion estimation algorithm. The contribution of the presented work is to introduce dual parameter estimators (DPEs) by configuring EMDs so that they can estimate local motions in terms of both direction and speed simultaneously. The estimated local motion vectors are displayed as arrows superimposed over video image frames. The developed algorithm is implemented as a DirectShow application using Microsoft's DirectX runtime library and is evaluated using various types of video image sequences. It is found to be able to estimate local motion vectors in real time even on modest PC computing platforms, so no high-profile hardware devices are needed for its real-time operation.

  11. TADtool: visual parameter identification for TAD-calling algorithms

    PubMed Central

    Kruse, Kai; Hug, Clemens B.; Hernández-Rodríguez, Benjamín; Vaquerizas, Juan M.

    2016-01-01

    Summary: Eukaryotic genomes are hierarchically organized into topologically associating domains (TADs). The computational identification of these domains and their associated properties critically depends on the choice of suitable parameters of TAD-calling algorithms. To reduce the element of trial-and-error in parameter selection, we have developed TADtool: an interactive plot to find robust TAD-calling parameters with immediate visual feedback. TADtool allows the direct export of TADs called with a chosen set of parameters for two of the most common TAD calling algorithms: directionality and insulation index. It can be used as an intuitive, standalone application or as a Python package for maximum flexibility. Availability and implementation: TADtool is available as a Python package from GitHub (https://github.com/vaquerizaslab/tadtool) or can be installed directly via PyPI, the Python package index (tadtool). Contact: kai.kruse@mpi-muenster.mpg.de, jmv@mpi-muenster.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27318199

  12. MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
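
    Matricization, the operation supported by the tensor_as_matrix class, is easy to illustrate in a few lines (shown in numpy rather than MATLAB; column-ordering conventions differ between implementations, so this sketch need not match the class's exact layout):

```python
import numpy as np

def matricize(X: np.ndarray, mode: int) -> np.ndarray:
    """Mode-n matricization (unfolding): rotate the chosen mode to the
    front and flatten the remaining modes into columns, turning an
    N-way array into a matrix whose rows index that mode's fibers."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

X = np.arange(24).reshape(2, 3, 4)      # a small 3-way tensor
print(matricize(X, 1).shape)            # (3, 8): mode-1 unfolding
```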

  13. Fast deterministic algorithm for EEE components classification

    NASA Astrophysics Data System (ADS)

    Kazakovtsev, L. A.; Antamoshkin, A. N.; Masich, I. S.

    2015-10-01

    The authors consider the problem of automatic classification of electronic, electrical and electromechanical (EEE) components based on test control results. Electronic components of the same type used in a high-quality unit must be produced as a single production batch from a single batch of raw materials. Data from the test control are used for splitting a shipped lot of components into several classes representing the production batches. Methods such as k-means++ clustering or evolutionary algorithms combine local search and random search heuristics. The proposed fast algorithm, by contrast, returns a unique, comparatively precise result for each data set. If the data processing is performed by the customer of the EEE components, this feature of the algorithm allows easy checking of the results by a producer or supplier.

  14. Fast Algorithms for Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two new methods for automated diagnosis of complex engineering systems involve novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed (for example, see figure) are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behavior of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components whose abnormalities could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail amounts of computation that grow exponentially with the number of components of the system.

  15. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    PubMed

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: a colon segmentation method, a fast Euclidean distance transform algorithm, and a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate than existing algorithms. PMID:17281406

  16. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features, transforming the image for better analysis and evaluation. An important benefit of segmentation is the identification of a region of interest in a particular image. Various algorithms have been proposed for image segmentation, including the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to its upper and left neighbor pixels, merging pixels with similar neighbors based on a fixed threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function for the clustering process of the Fast Scanning algorithm. The function is computed from the gray values and variance of the image's pixels; pixel levels above the threshold are converted to intensity values between 0 and 1, and the remaining values are set to zero. The proposed enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq by comparing its output with that of the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than the standard Fast Scanning algorithm.

  17. Fast proximity algorithm for MAP ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Li, Si; Krol, Andrzej; Shen, Lixin; Xu, Yuesheng

    2012-03-01

    We arrived at the fixed-point formulation of the total variation maximum a posteriori (MAP) regularized emission computed tomography (ECT) reconstruction problem and proposed an iterative alternating scheme to numerically calculate the fixed point. We theoretically proved that our algorithm converges to a unique solution. Because the obtained algorithm converges slowly, we further developed the proximity algorithm in the transformed image space, i.e. the preconditioned proximity algorithm. We used the bias-noise curve method to select optimal regularization hyperparameters for both our algorithm and expectation maximization with total variation regularization (EM-TV). We showed in numerical experiments that our proposed algorithms, with an appropriately selected preconditioner, outperformed the conventional EM-TV algorithm in many critical aspects, such as markedly lower noise and bias for the Shepp-Logan phantom. This has major ramifications for nuclear medicine, because clinical implementation of our preconditioned fixed-point algorithms might result in very significant radiation dose reduction in the medical applications of emission tomography.

  18. Cumulative Reconstructor: fast wavefront reconstruction algorithm for Extremely Large Telescopes.

    PubMed

    Rosensteiner, Matthias

    2011-10-01

    The Cumulative Reconstructor (CuRe) is a new direct reconstructor for an optical wavefront from Shack-Hartmann wavefront sensor measurements. In this paper, the algorithm is adapted to realistic telescope geometries and the transition from modified Hudgin to Fried geometry is discussed. After a discussion of the noise propagation, we analyze the complexity of the algorithm. Our numerical tests confirm that the algorithm is very fast and accurate and can therefore be used for adaptive optics systems of Extremely Large Telescopes.

  19. Fast algorithms for transport models. Final report

    SciTech Connect

    Manteuffel, T.A.

    1994-10-01

    This project has developed a multigrid-in-space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid-in-space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel-in-angle algorithm was developed, along with a parallel version. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel-in-angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).

  20. BayesCall: A model-based base-calling algorithm for high-throughput short-read sequencing.

    PubMed

    Kao, Wei-Chun; Stevens, Kristian; Song, Yun S

    2009-10-01

    Extracting sequence information from raw images of fluorescence is the foundation underlying several high-throughput sequencing platforms. Some of the main challenges associated with this technology include reducing the error rate, assigning accurate base-specific quality scores, and reducing the cost of sequencing by increasing the throughput per run. To demonstrate how computational advancement can help to meet these challenges, a novel model-based base-calling algorithm, BayesCall, is introduced for the Illumina sequencing platform. Being founded on the tools of statistical learning, BayesCall is flexible enough to incorporate various features of the sequencing process. In particular, it can easily incorporate time-dependent parameters and model residual effects. This new approach significantly improves the accuracy over Illumina's base-caller Bustard, particularly in the later cycles of a sequencing run. For 76-cycle data on a standard viral sample, phiX174, BayesCall improves Bustard's average per-base error rate by approximately 51%. The probability of observing each base can be readily computed in BayesCall, and this probability can be transformed into a useful base-specific quality score with a high discrimination ability. A detailed study of BayesCall's performance is presented here. PMID:19661376

  1. FastDIRC: a fast Monte Carlo and reconstruction algorithm for DIRC detectors

    NASA Astrophysics Data System (ADS)

    Hardin, J.; Williams, M.

    2016-10-01

    FastDIRC is a novel fast Monte Carlo and reconstruction algorithm for DIRC detectors. A DIRC employs rectangular fused-silica bars both as Cherenkov radiators and as light guides. Cherenkov-photon imaging and time-of-propagation information are utilized by a DIRC to identify charged particles. GEANT4-based DIRC Monte Carlo simulations are extremely CPU intensive. The FastDIRC algorithm permits fully simulating a DIRC detector more than 10 000 times faster than using GEANT4. This facilitates designing a DIRC-reconstruction algorithm that improves the Cherenkov-angle resolution of a DIRC detector by ≈ 30% compared to existing algorithms. FastDIRC also greatly reduces the time required to study competing DIRC-detector designs.

  2. A fast SEQUEST cross correlation algorithm.

    PubMed

    Eng, Jimmy K; Fischer, Bernd; Grossmann, Jonas; Maccoss, Michael J

    2008-10-01

    The SEQUEST program was the first and remains one of the most widely used tools for assigning a peptide sequence within a database to a tandem mass spectrum. The cross correlation score is the primary score function implemented within SEQUEST and it is this score that makes the tool particularly sensitive. Unfortunately, this score is computationally expensive to calculate, and thus, to make the score manageable, SEQUEST uses a less sensitive but fast preliminary score and restricts the cross correlation to just the top 500 peptides returned by the preliminary score. Classically, the cross correlation score has been calculated using Fast Fourier Transforms (FFT) to generate the full correlation function. We describe an alternate method of calculating the cross correlation score that does not require FFTs and can be computed efficiently in a fraction of the time. The fast calculation allows all candidate peptides to be scored by the cross correlation function, potentially mitigating the need for the preliminary score, and enables an E-value significance calculation based on the cross correlation score distribution calculated on all candidate peptide sequences obtained from a sequence database. PMID:18774840
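
    The idea can be sketched compactly: subtract from every spectrum bin the mean of its surrounding bins once, and every candidate's cross correlation score then reduces to a single dot product. The numpy sketch below assumes the commonly described ±75-bin background window; the edge handling (clipped windows) is an assumption and may differ from the published implementation.

```python
import numpy as np

def preprocess(spectrum: np.ndarray, w: int = 75) -> np.ndarray:
    """Subtract from each bin the mean of its +/- w neighborhood, in one
    O(n) pass using a cumulative sum. After this, the SEQUEST-style
    xcorr of *any* theoretical spectrum is a single dot product, so no
    FFT (and no per-candidate correlation) is needed."""
    n = len(spectrum)
    csum = np.concatenate(([0.0], np.cumsum(spectrum)))
    lo = np.maximum(np.arange(n) - w, 0)
    hi = np.minimum(np.arange(n) + w + 1, n)
    background = (csum[hi] - csum[lo]) / (2 * w + 1)
    return spectrum - background

def fast_xcorr(theoretical: np.ndarray, preprocessed: np.ndarray) -> float:
    """Score one candidate against a spectrum already preprocessed once."""
    return float(np.dot(theoretical, preprocessed))
```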

  3. Fast search algorithms for computational protein design.

    PubMed

    Traoré, Seydou; Roberts, Kyle E; Allouche, David; Donald, Bruce R; André, Isabelle; Schiex, Thomas; Barbe, Sophie

    2016-05-01

    One of the main challenges in computational protein design (CPD) is the huge size of the protein sequence and conformational space that has to be computationally explored. Recently, we showed that state-of-the-art combinatorial optimization technologies based on Cost Function Network (CFN) processing allow speeding up provable rigid-backbone protein design methods by several orders of magnitude. Building on this, we improved and injected CFN technology into the well-established CPD package Osprey to allow all Osprey CPD algorithms to benefit from the associated speedups. Because Osprey fundamentally relies on the ability of A* to produce conformations in increasing order of energy, we defined new A* strategies combining CFN lower bounds with a new side-chain-positioning-based branching scheme. Beyond the speedups obtained in the new A*-CFN combination, this novel branching scheme enables a much faster enumeration of suboptimal sequences, far beyond what is reachable without it. Together with the immediate and important speedups provided by CFN technology, these developments directly benefit all the algorithms that previously relied on the DEE/A* combination inside Osprey and make it possible to solve larger CPD problems with provable algorithms. PMID:26833706

  4. Fast, Parallel and Secure Cryptography Algorithm Using Lorenz's Attractor

    NASA Astrophysics Data System (ADS)

    Marco, Anderson Gonçalves; Martinez, Alexandre Souto; Bruno, Odemir Martinez

    A novel cryptography method based on the Lorenz attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, the message and a chaotic system. It ensures that the algorithm yields a secure codification even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow, the other parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which is not assured by traditional algorithms) and consequently its authenticity. Numerical experiments are presented and discussed, and show the behavior of the method in terms of security and performance. The fast version of the algorithm has performance comparable to AES, a popular cryptographic algorithm in widespread commercial use, but it is more secure, which makes it immediately suitable for general-purpose cryptography applications. An internet page has been set up, enabling readers to test the algorithm and to try to break the cipher.

  5. The dynamic Allan variance II: a fast computational algorithm.

    PubMed

    Galleani, Lorenzo

    2010-01-01

    The stability of an atomic clock can change with time due to several factors, such as temperature, humidity, radiation, aging, and sudden breakdowns. The dynamic Allan variance, or DAVAR, is a representation of the time-varying stability of an atomic clock, and it can be used to monitor the clock behavior. Unfortunately, the computational time of the DAVAR grows very quickly with the length of the analyzed time series. In this article, we present a fast algorithm for the computation of the DAVAR, and we also extend it to the case of missing data. Numerical simulations show that the fast algorithm dramatically reduces the computational time. The fast algorithm is useful when the analyzed time series is long, when many clocks must be monitored, or when the computational power is low, as happens onboard satellites and space probes.
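
    For reference, the quantity being windowed is the Allan variance of the fractional-frequency averages; the DAVAR evaluates it over a sliding analysis window centered at each epoch t. In LaTeX notation (the window length T and the exact estimator are implementation details that vary):

```latex
% Allan variance of fractional-frequency averages \bar{y}_k at
% averaging time \tau, and its dynamic (windowed) version at epoch t:
\sigma_y^2(\tau) = \tfrac{1}{2}\,
    \bigl\langle (\bar{y}_{k+1} - \bar{y}_k)^2 \bigr\rangle ,
\qquad
\sigma_y^2(t,\tau) = \tfrac{1}{2}\,
    \bigl\langle (\bar{y}_{k+1} - \bar{y}_k)^2 \bigr\rangle_{\,|t_k - t| \le T/2} .
```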

  6. A fast portable implementation of the Secure Hash Algorithm, III.

    SciTech Connect

    McCurley, Kevin S.

    1992-10-01

    In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard using the algorithm is known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm and a discussion of how the efficiency of SHA might be improved.

  7. Fast algorithm for relaxation processes in big-data systems

    NASA Astrophysics Data System (ADS)

    Hwang, S.; Lee, D.-S.; Kahng, B.

    2014-10-01

    Relaxation processes driven by a Laplacian matrix can be found in many real-world big-data systems, for example, in search engines on the World Wide Web and in the dynamic load-balancing protocols of mesh networks. To numerically implement such processes, a fast-running algorithm for the calculation of the pseudoinverse of the Laplacian matrix is essential. Here we propose an algorithm which quickly and efficiently computes the pseudoinverse of Markov chain generator matrices satisfying the detailed-balance condition, a general class of matrices that includes the Laplacian. The algorithm utilizes the renormalization of the Gaussian integral. In addition to its applicability to a wide range of problems, the algorithm outperforms other algorithms in its ability to compute, within a manageable computing time, arbitrary elements of the pseudoinverse of a matrix of size millions by millions. Our algorithm can therefore be used very widely in analyzing the relaxation processes occurring on large-scale networked systems.

  8. A fast and accurate algorithm for diploid individual haplotype reconstruction.

    PubMed

    Wu, Jingli; Liang, Binbin

    2013-08-01

    Haplotypes can provide significant information in many research fields, including molecular biology and medical therapy. However, haplotyping using only biological techniques is much more difficult than genotyping. With the development of sequencing technologies, it has become possible to obtain haplotypes by combining sequence fragments. The haplotype reconstruction problem for a diploid individual has received considerable attention in recent years: it assembles the two haplotypes of a chromosome given the collection of fragments coming from the two haplotypes. Fragment errors significantly increase the difficulty of the problem, which has been shown to be NP-hard. In this paper, a fast and accurate algorithm, named FAHR, is proposed for haplotyping a single diploid individual. Algorithm FAHR reconstructs the SNP sites of a pair of haplotypes one after another. The SNP fragments that cover a given SNP site are partitioned into two groups according to the alleles at that site, and the SNP values of the pair of haplotypes are ascertained by using the fragments in the group that contains more SNP fragments. Experimental comparisons were conducted among the FAHR, Fast Hare and DGS algorithms using the haplotypes on chromosome 1 of 60 individuals in CEPH samples, which were released by the International HapMap Project. Experimental results under different parameter settings indicate that the reconstruction rate of the FAHR algorithm is higher, and its running time shorter, than those of the Fast Hare and DGS algorithms. Moreover, the FAHR algorithm is efficient even for the reconstruction of long haplotypes and is very practical for realistic applications.
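
    The per-site step the abstract describes, partitioning the fragments that cover a site by allele and calling the haplotype pair from the majority group, can be sketched as follows. This is only the core idea under stated assumptions (biallelic sites coded 0/1, heterozygous calls); FAHR's actual bookkeeping across successive sites and its tie-breaking are not specified in the abstract.

```python
def call_site(fragments, site):
    """Partition the fragments covering `site` by their allele (0/1) and
    call the haplotype pair from the majority group. `fragments` maps
    fragment id -> {site: allele}; returns (h1, h2) for this site."""
    group = {0: [], 1: []}
    for frag_id, alleles in fragments.items():
        if site in alleles:
            group[alleles[site]].append(frag_id)
    major = 0 if len(group[0]) >= len(group[1]) else 1
    return major, 1 - major   # heterozygous call: majority allele on h1

frags = {"f1": {3: 0, 4: 1}, "f2": {3: 0}, "f3": {3: 1, 4: 0}}
print(call_site(frags, 3))    # (0, 1): two fragments support allele 0
```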

  9. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, the algorithm converts the two-dimensional information of the image into one dimension and then matches and identifies through one-dimensional correlation. Because the projections are normalized, matching remains correct even when the image brightness or signal amplitude increases proportionally. Experimental results show that the proposed projection-based image registration method greatly improves matching speed while maintaining matching accuracy.
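
    A minimal numpy sketch of the projection idea: collapse template and search window to normalized row/column sums, then compare them by one-dimensional correlation. The scoring function here is an illustrative assumption, but it shows why proportional brightness changes do not disturb the match.

```python
import numpy as np

def projections(img: np.ndarray):
    """Collapse a 2-D grayscale image to its normalized row and column
    sums, reducing the matching problem to two 1-D signals."""
    rows, cols = img.sum(axis=1), img.sum(axis=0)
    return rows / np.linalg.norm(rows), cols / np.linalg.norm(cols)

def match_score(template: np.ndarray, window: np.ndarray) -> float:
    """Correlate normalized projections; normalization makes the score
    invariant to a proportional change in brightness."""
    tr, tc = projections(template)
    wr, wc = projections(window)
    return float(np.dot(tr, wr) + np.dot(tc, wc)) / 2.0

rng = np.random.default_rng(2)
t = rng.random((16, 16))
print(match_score(t, 1.7 * t))   # ~1.0: brightness scaling is harmless
```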

  10. A fast directional algorithm for high-frequency electromagnetic scattering

    SciTech Connect

    Tsuji, Paul; Ying, Lexing

    2011-06-20

    This paper is concerned with the fast solution of high-frequency electromagnetic scattering problems using the boundary integral formulation. We extend the O(N log N) directional multilevel algorithm previously proposed for the acoustic scattering case to the vector electromagnetic case. We also detail how to incorporate the curl operator of the magnetic field integral equation into the algorithm. When combined with a standard iterative method, this results in an almost linear complexity solver for the combined field integral equations. In addition, the butterfly algorithm is utilized to compute the far field pattern and radar cross section with O(N log N) complexity.

  11. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithm are its conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time, as would be required for batch processing techniques such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real-time confidence measure of the accuracy of the estimator.

  12. A fast quantum mechanics based contour extraction algorithm

    NASA Astrophysics Data System (ADS)

    Lan, Tian; Sun, Yangguang; Ding, Mingyue

    2009-02-01

    A fast algorithm is proposed to decrease the computational cost of a contour extraction approach based on quantum mechanics, a method recently proposed by us and presented at the same conference in a companion paper titled "A statistical approach to contour extraction based on quantum mechanics". In our approach, contour extraction is modeled as the locus of a moving particle described by quantum mechanics, obtained as the most probable locus of the particle simulated over a large number of iterations. In quantum mechanics, the probability that a particle appears at a point is equivalent to the square amplitude of the wave function. Furthermore, the expression of the wave function can be derived from digital images, making the probability of the locus of a particle available. We employ the Markov Chain Monte Carlo (MCMC) method to estimate the square amplitude of the wave function. Finally, our fast quantum mechanics based contour extraction algorithm (referred to as our fast algorithm hereafter) was evaluated on a number of different images, including synthetic and medical images. It was demonstrated that our fast algorithm achieves significant improvements in accuracy and robustness compared with well-known state-of-the-art contour extraction techniques, and a dramatic reduction in time complexity compared to the statistical approach to contour extraction based on quantum mechanics.

  13. MATLAB tensor classes for fast algorithm prototyping : source code.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.

  14. A fast algorithm for sparse matrix computations related to inversion

    NASA Astrophysics Data System (ADS)

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices (up to twelve-fold in simulation). The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and the round-off errors it produces are correspondingly smaller.

  15. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-05-01

    In the field of Additive Manufacturing, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms can't make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated in a series of experiments. The experimental results show that thread count and layer count are the two dominant factors in the speedup ratio. The tendency of speedup versus thread count shows a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count also shows a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours in parallel. Another parallel algorithm, based on data parallelism, is used in the experiments to show that the pipeline parallel mode is more efficient. A concluding case study demonstrates the strong performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm makes full use of multi-core CPU hardware and accelerates the slicing process; compared with the data-parallel slicing algorithm, it achieves a much higher speedup ratio and efficiency.

  16. The Empirical Mode Decomposition algorithm via Fast Fourier Transform

    NASA Astrophysics Data System (ADS)

    Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Artemyev, Dmitry N.; Khramov, Alexander G.

    2014-09-01

    In this paper we consider the problem of implementing a fast algorithm for the Empirical Mode Decomposition (EMD). EMD is one of the newest methods for the decomposition of non-linear and non-stationary signals. The basis of EMD is formed "on the fly", i.e. it depends on the distribution of the signal and is not given a priori, in contrast to the Fourier Transform (FT) or the Wavelet Transform (WT). The EMD requires interpolating the local extrema sets of a signal to find its upper and lower envelopes. Data interpolation on an irregular lattice is a very low-performance procedure. The classical description of EMD by Huang does this through splines, i.e. through solving a system of equations. The existence of a fast algorithm is the main advantage of the FT, and recasting an algorithm in terms of the Fast Fourier Transform (FFT) is standard practice for reducing the operation count. We offer a fast implementation of EMD (FEMD) through the FFT and other cost-efficient algorithms. The basic two-stage interpolation algorithm for EMD is composed of an upscale procedure through the FFT and a downscale procedure through a selection of the signal's points. First we consider the local maxima (or minima) set without reference to the OX axis, i.e. on a regular lattice. The upscale through the FFT changes the signal's length to the least common multiple (LCM) of all distances between neighboring extrema on the OX axis. If the LCM value is too large, it is necessary to limit the local set of extrema; in this case the procedure is an analog of spline interpolation. A demonstration of FEMD in a noise-reduction task for OCT is shown.

  17. Improved genetic algorithm for fast path planning of USV

    NASA Astrophysics Data System (ADS)

    Cao, Lu

    2015-12-01

    Due to the complex constraints, the many uncertain factors and the critical real-time demands of path planning for USVs (Unmanned Surface Vehicles), an approach to fast path planning based on the Voronoi diagram and an improved Genetic Algorithm is proposed, which makes use of the principle of hierarchical path planning. First the Voronoi diagram is used to generate initial paths, and then the optimal path is found with the improved Genetic Algorithm, which applies multiprocessor parallel computing techniques to improve the traditional genetic algorithm. Simulation results verify that the optimization time is greatly reduced and that path planning based on the Voronoi diagram and the improved Genetic Algorithm is well suited to real-time operation.

  18. A fast hidden line algorithm for plotting finite element models

    NASA Technical Reports Server (NTRS)

    Jones, G. K.

    1982-01-01

    Effective plotting of finite element models requires the use of fast hidden line plot techniques that provide interactive response. A high speed hidden line technique was developed to facilitate the plotting of NASTRAN finite element models. Based on testing using 14 different models, the new hidden line algorithm (JONES-D) appears to be very fast: its speed equals that for normal (all lines visible) plotting and when compared to other existing methods it appears to be substantially faster. It also appears to be very reliable: no plot errors were observed using the new method to plot NASTRAN models. The new algorithm was made part of the NPLOT NASTRAN plot package and was used by structural analysts for normal production tasks.

  19. Fast wavelet-based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when the method is applied to hyperbolic equations in one space dimension and parabolic equations in multiple dimensions.

  1. CLAMMS: a scalable algorithm for calling common and rare copy number variants from exome sequencing data

    PubMed Central

    Packer, Jonathan S.; Maxwell, Evan K.; O’Dushlaine, Colm; Lopez, Alexander E.; Dewey, Frederick E.; Chernomorsky, Rostislav; Baras, Aris; Overton, John D.; Habegger, Lukas; Reid, Jeffrey G.

    2016-01-01

    Motivation: Several algorithms exist for detecting copy number variants (CNVs) from human exome sequencing read depth, but previous tools have not been well suited for large population studies on the order of tens or hundreds of thousands of exomes. Their limitations include being difficult to integrate into automated variant-calling pipelines and being ill-suited for detecting common variants. To address these issues, we developed a new algorithm—Copy number estimation using Lattice-Aligned Mixture Models (CLAMMS)—which is highly scalable and suitable for detecting CNVs across the whole allele frequency spectrum. Results: In this note, we summarize the methods and intended use-case of CLAMMS, compare it to previous algorithms and briefly describe results of validation experiments. We evaluate the adherence of CNV calls from CLAMMS and four other algorithms to Mendelian inheritance patterns on a pedigree; we compare calls from CLAMMS and other algorithms to calls from SNP genotyping arrays for a set of 3164 samples; and we use TaqMan quantitative polymerase chain reaction to validate CNVs predicted by CLAMMS at 39 loci (95% of rare variants validate; across 19 common variant loci, the mean precision and recall are 99% and 94%, respectively). In the Supplementary Materials (available at the CLAMMS Github repository), we present our methods and validation results in greater detail. Availability and implementation: https://github.com/rgcgithub/clamms (implemented in C). Contact: jeffrey.reid@regeneron.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26382196

  2. SMG: Fast scalable greedy algorithm for influence maximization in social networks

    NASA Astrophysics Data System (ADS)

    Heidari, Mehdi; Asadpour, Masoud; Faili, Hesham

    2015-02-01

    Influence maximization is the problem of finding the k most influential nodes in a social network. Much prior work falls into two categories: greedy approaches and heuristic approaches. The greedy approaches achieve better influence spread but scale poorly to large networks. The heuristic approaches are scalable and fast, but not for all types of networks. Improving the scalability of the greedy approach is still an open and active issue. In this work we present a fast greedy algorithm called State Machine Greedy that improves on existing algorithms by reducing calculations in two parts: (1) counting the traversing nodes in the estimate-propagation procedure, and (2) Monte-Carlo graph construction in the simulation of diffusion. The results show that our method achieves a large speed improvement over the existing greedy approaches.
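
    For context, the baseline that SMG accelerates looks like the sketch below: greedy seed selection where each marginal gain is estimated by Monte-Carlo simulation of the independent-cascade model. Both expensive steps the abstract targets (spread estimation and repeated cascade simulation) are visible here; SMG's own state-machine bookkeeping is not reproduced.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One Monte-Carlo run of the independent-cascade model: each newly
    activated node gets a single chance to activate each neighbor with
    probability p. Returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, ()):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, runs=200):
    """Plain greedy influence maximization: repeatedly add the node with
    the largest estimated marginal spread (the costly inner loop that
    SMG's bookkeeping is designed to cut down)."""
    seeds = []
    for _ in range(k):
        def gain(v):
            return sum(simulate_ic(graph, seeds + [v]) for _ in range(runs))
        best = max((v for v in graph if v not in seeds), key=gain)
        seeds.append(best)
    return seeds

g = {0: [1, 2], 1: [2, 3], 2: [3], 3: [0], 4: [0]}
print(greedy_im(g, 2))
```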

  3. A fast learning algorithm for deep belief nets.

    PubMed

    Hinton, Geoffrey E; Osindero, Simon; Teh, Yee-Whye

    2006-07-01

    We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.
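
    The greedy, layer-by-layer part of the recipe can be sketched with a small numpy restricted Boltzmann machine trained by one-step contrastive divergence (CD-1): train a layer, push the data through it, and train the next layer on the result. This is a bare-bones illustration under simplified assumptions (binary units, no momentum or weight decay) and omits the wake-sleep fine-tuning stage the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.05):
    """Train one RBM with one step of contrastive divergence (CD-1)."""
    n_vis = data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    b_v, b_h = np.zeros(n_vis), np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in data:
            ph0 = sigmoid(v0 @ W + b_h)                 # hidden probs
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            v1 = sigmoid(h0 @ W.T + b_v)                # reconstruction
            ph1 = sigmoid(v1 @ W + b_h)
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
            b_v += lr * (v0 - v1)
            b_h += lr * (ph0 - ph1)
    return W, b_v, b_h

# Greedy stack: layer 2 trains on layer 1's hidden probabilities.
X = (rng.random((200, 32)) < 0.3).astype(float)
W1, _, bh1 = train_rbm(X, 16)
H1 = sigmoid(X @ W1 + bh1)
W2, _, _ = train_rbm(H1, 8)
```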

  4. A Fast Conformal Mapping Algorithm with No FFT

    NASA Astrophysics Data System (ADS)

    Luchini, P.; Manzo, F.

    1992-08-01

    An algorithm is presented for the computation of a conformal mapping discretized on a non-uniformly spaced point set, useful for the numerical solution of many problems of fluid dynamics. Most existing iterative techniques, both those having a linear and those having a quadratic type of convergence, rely on the fast Fourier transform ( FFT) algorithm for calculating a convolution integral which represents the most time-consuming phase of the computation. The FFT, however, definitely cannot be applied to a non-uniform spacing. The algorithm presented in this paper has been made possible by the construction of a calculation method for convolution integrals which, despite not using an FFT, maintains a computation time of the same order as that of the FFT. The new technique is successfully applied to the problem of conformally mapping a closely spaced cascade of airfoils onto a circle, which requires an exceedingly large number of points if it is solved with uniform spacing.

  5. Fast algorithm for calculating chemical kinetics in turbulent reacting flow

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.; Pratt, D. T.

    1986-01-01

    This paper addresses the need for a fast batch chemistry solver to perform the kinetics part of a split operator formulation of turbulent reacting flows, with special attention focused on the solution of the ordinary differential equations governing a homogeneous gas-phase chemical reaction. For this purpose, a two-part predictor-corrector algorithm which incorporates an exponentially fitted trapezoidal method was developed. The algorithm performs filtering of ill-posed initial conditions, automatic step-size selection, and automatic selection of Jacobi-Newton or Newton-Raphson iteration for convergence to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm, termed CREK1D (combustion reaction kinetics, one-dimensional), compared favorably with the code LSODE when tested on two representative problems drawn from combustion kinetics, and is faster than LSODE.

  6. A fast image encryption algorithm based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from Sine map and iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a close-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of phase diagram, Lyapunov exponent spectrum and complexity. It shows that this map has good ergodicity, hyperchaotic behavior, large maximum Lyapunov exponent and high complexity. Based on this map, a fast image encryption algorithm is proposed. In this algorithm, the confusion and diffusion processes are combined for one stage. Chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and the row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the abilities of resisting statistical analysis, differential, brute-force, known-plaintext and chosen-plaintext attacks.

  7. A fast marching algorithm for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Treister, Eran; Haber, Eldad

    2016-11-01

    The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS methods in the past, FM methods have become the more recent choice because of the efficiency with which sensitivities can be obtained from them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss-Newton.
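
    To make the local update rule concrete, the sketch below implements the classic first-order Fast Marching scheme that the paper builds on (the factored variant instead solves for tau in T = T0 * tau). This is an illustrative Python sketch, not the authors' code; the grid layout, slowness field, and function names are assumptions.

      import heapq
      import numpy as np

      def fast_march(slowness, src, h=1.0):
          # First-order FM: T solves |grad T| = slowness, propagated
          # outward from src with a min-heap of tentative arrival times.
          ny, nx = slowness.shape
          T = np.full((ny, nx), np.inf)
          done = np.zeros((ny, nx), dtype=bool)
          T[src] = 0.0
          heap = [(0.0, src)]
          while heap:
              t, (i, j) = heapq.heappop(heap)
              if done[i, j]:
                  continue
              done[i, j] = True
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ni, nj = i + di, j + dj
                  if 0 <= ni < ny and 0 <= nj < nx and not done[ni, nj]:
                      tx = min(T[ni, nj - 1] if nj > 0 else np.inf,
                               T[ni, nj + 1] if nj < nx - 1 else np.inf)
                      ty = min(T[ni - 1, nj] if ni > 0 else np.inf,
                               T[ni + 1, nj] if ni < ny - 1 else np.inf)
                      f = h * slowness[ni, nj]
                      a, b = sorted((tx, ty))
                      if b - a >= f:   # only one upwind direction is usable
                          t_new = a + f
                      else:            # solve (T-tx)^2 + (T-ty)^2 = f^2
                          t_new = 0.5 * (a + b + np.sqrt(2 * f * f - (b - a) ** 2))
                      if t_new < T[ni, nj]:
                          T[ni, nj] = t_new
                          heapq.heappush(heap, (t_new, (ni, nj)))
          return T

      # e.g. travel times from a central point source in a uniform medium:
      # T = fast_march(np.ones((65, 65)), (32, 32))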

  8. A novel fast median filter algorithm without sorting

    NASA Astrophysics Data System (ADS)

    Yang, Weiping; Zhang, Zhilong; Lu, Xinping; Li, Jicheng; Chen, Dong; Yang, Guopeng

    2016-04-01

    As one of the most widely applied nonlinear smoothing filters, the median filter is quite effective at removing salt-and-pepper and impulsive noise while preserving image edge information without blurring boundaries, but its computational load is its main drawback in real-time processing systems. To address this issue, researchers have proposed many effective fast algorithms; however, most of them are based on sorting operations, which makes real-time implementation difficult. In this paper, exploiting the large-scale Boolean logic and convenient shift operations that are two of the advantages of FPGAs (Field Programmable Gate Arrays), we propose a novel median-finding algorithm without sorting, which finds the median value effectively and whose execution time remains nearly constant regardless of the filter radius. Based on this algorithm, a real-time median filter has been realized. Extensive tests demonstrate the validity and correctness of the proposed algorithm.
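
    As an illustration of sort-free median finding, the sketch below walks a grey-level histogram to the 50th percentile, so the cost depends on the number of grey levels rather than on ordering the window. This is a software analogue only; the paper's FPGA design relies on Boolean majority logic and shift operations rather than this histogram walk.

      import numpy as np

      def window_median(window, levels=256):
          # Median of an 8-bit window without sorting: count occurrences
          # of each grey level, then accumulate until half the pixels
          # have been passed.
          hist = np.bincount(window.ravel(), minlength=levels)
          half = window.size // 2
          csum = 0
          for v in range(levels):
              csum += hist[v]
              if csum > half:
                  return v

      # e.g. window_median(np.array([[12, 200, 7], [7, 7, 250], [12, 12, 12]], dtype=np.uint8))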

  9. A fast contour descriptor algorithm for supernova image classification

    SciTech Connect

    Aragon, Cecilia R.; Aragon, David Bradburn

    2006-07-16

    We describe a fast contour descriptor algorithm and its application to a distributed supernova detection system (the Nearby Supernova Factory) that processes 600,000 candidate objects in 80 GB of image data per night. Our shape-detection algorithm reduced the number of false positives generated by the supernova search pipeline by 41% while producing no measurable impact on running time. Fourier descriptors are an established method of numerically describing the shapes of object contours, but transform-based techniques are ordinarily avoided in this type of application due to their computational cost. We devised a fast contour descriptor implementation for supernova candidates that meets the tight processing budget of the application. Using the lowest-order descriptors (F_1 and F_(-1)) and the total variance in the contour, we obtain one feature representing the eccentricity of the object and another denoting its irregularity. Because the number of Fourier terms to be calculated is fixed and small, the algorithm runs in linear time, rather than the O(n log n) time of an FFT. Constraints on object size allow further optimizations so that the total cost of producing the required contour descriptors is about 4n addition/subtraction operations, where n is the length of the contour.
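
    Because only the m = +1 and m = -1 terms are needed, the two descriptors can be evaluated by direct O(n) summation over the contour, with no FFT. The sketch below shows one plausible construction of an eccentricity-like and an irregularity-like feature from F_1, F_(-1), and the total variance; the exact feature formulas used by the pipeline are an assumption here.

      import numpy as np

      def contour_features(z):
          # z: closed contour as complex samples x_k + 1j*y_k
          z = z - z.mean()                      # translation invariance
          n = len(z)
          k = np.arange(n)
          F1 = (z * np.exp(-2j * np.pi * k / n)).mean()
          Fm1 = (z * np.exp(+2j * np.pi * k / n)).mean()
          total_var = np.mean(np.abs(z) ** 2)
          ecc = np.abs(Fm1) / (np.abs(F1) + 1e-12)
          irregularity = 1.0 - (np.abs(F1) ** 2 + np.abs(Fm1) ** 2) / (total_var + 1e-12)
          return ecc, irregularity

      # A perfect ellipse gives irregularity ~ 0:
      theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
      print(contour_features(2 * np.cos(theta) + 1j * np.sin(theta)))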

  10. A non-parametric peak calling algorithm for DamID-Seq.

    PubMed

    Li, Renhua; Hempel, Leonie U; Jiang, Tingbo

    2015-01-01

    Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of doublesex (DSX), an important transcription factor in sex determination, we applied the DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders the existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new algorithm for peak calling. A challenge in peak calling based on sequencing data is estimating the average behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checks and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) read resampling; 2) read scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to compare the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by ChIP-Seq on S2 cells in terms of peak number, location, and width. PMID:25785608
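
    A minimal sketch of the bootstrap idea, assuming binned read counts for the Dam-only control and the fusion sample; the resampling scheme, normalization, and threshold rule here are simplified stand-ins for the published procedure.

      import numpy as np

      def call_peaks(sample_counts, control_counts, alpha=1e-3, n_boot=1000, seed=0):
          g = np.random.default_rng(seed)
          # Bootstrap the control bins to estimate how large a per-bin
          # count can get by chance at the chosen significance level.
          boot = g.choice(control_counts, size=(n_boot, len(control_counts)), replace=True)
          null_q = np.quantile(boot, 1 - alpha, axis=1).mean()
          scale = sample_counts.sum() / control_counts.sum()   # depth normalization
          return np.where(sample_counts > scale * null_q)[0]   # candidate peak bins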

  12. A fast direct sampling algorithm for equilateral closed polygons

    NASA Astrophysics Data System (ADS)

    Cantarella, Jason; Duplantier, Bertrand; Shonkwiler, Clayton; Uehara, Erica

    2016-07-01

    Sampling equilateral closed polygons is of interest in the statistical study of ring polymers. Over the past 30 years, previous authors have proposed a variety of simple Markov chain algorithms (but have not been able to show that they converge to the correct probability distribution) and complicated direct samplers (which require extended-precision arithmetic to evaluate numerically unstable polynomials). We present a simple direct sampler which is fast and numerically stable, and analyze its runtime using a new formula for the volume of equilateral polygon space as a Dirichlet-type integral.

  13. A fast-marching like algorithm for geometrical shock dynamics

    NASA Astrophysics Data System (ADS)

    Noumir, Y.; Le Guilcher, A.; Lardjane, N.; Monneau, R.; Sarrazin, A.

    2015-03-01

    We develop a new algorithm for the computation of the Geometrical Shock Dynamics (GSD) model. The method relies on the fast-marching paradigm and enables the discrete evaluation of the first arrival time of a shock wave and its local velocity on a Cartesian grid. The proposed algorithm is based on a first order upwind finite difference scheme and reduces to a local nonlinear system of two equations solved by an iterative procedure. Reference solutions are built for a smooth radial configuration and for the 2D Riemann problem. The link between the GSD model and p-systems is given. Numerical experiments demonstrate the efficiency of the scheme and its ability to handle singularities.

  14. Visual gaze behavior of near-expert and expert fast pitch softball umpires calling a pitch.

    PubMed

    Millslagle, Duane G; Smith, Melissa S; Hines, Bridget B

    2013-05-01

    The purpose of this study was to examine the difference in visual gaze behavior between near-expert (NE) and expert (E) umpires in a simulated pitch-hit situation in fast pitch softball. An Applied Science Laboratory mobile eye tracker was worn by 4 NE and 4 E fast pitch umpires and recorded their visual gaze behavior while following pitches (internal view). A digital camera located behind the pitcher recorded the external view of the pitcher, hitter, catcher, and umpire actions for each pitch. The internal and external video clips of 10 representative pitches (5 balls and 5 strikes) were synchronized and displayed in a split screen, and were then coded for statistical analyses using Quiet eye solution software. Analysis of variance and multivariate analysis of variance statistical analyses of the umpires' gaze behavior during onset, duration, offset, and frequency (fixation/pursuit tracking, saccades, and blinks) were conducted between and within the 5 stages (pitcher's preparation, delivery and release, ball in flight, and umpire call) by umpire skill level. Significant differences (p < 0.05) observed for combined gaze behavior frequency, type of gaze by phase, quiet eye duration and onset, and ball duration tracking indicated that E umpires' visual control was more stable and economical than that of NE umpires. Significant quiet-eye results indicated that E umpires had an earlier fixation onset (mean = 50.0 ± 13.9% vs. 56.0 ± 9.5%) and a longer fixation duration (mean = 15.1 ± 11.3% vs. 9.3 ± 6.5%) on the pitcher's release area than NE umpires. These findings suggest that the gaze behavior of expert fast pitch umpires was more economical: they fixated earlier, and for a longer period, on the area where the ball would be released, and tracked the ball earlier and for longer. PMID:22836605

  16. Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen

    2011-08-01

    Bone age assessment is a common radiological examination used in pediatrics to diagnose discrepancies between the skeletal and chronological age of a child; it is therefore beneficial to develop a computer-based bone age assessment system to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. Therefore, we propose a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges quickly and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on a disk traverse-subtraction filter to segment the phalanx. Moreover, two additional segmentation methods, adaptive two-mean and adaptive two-mean clustering, were performed, and their results were compared with those of the disk traverse-subtraction approach using five indices: misclassification error, relative foreground area error, modified Hausdorff distance, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods is discussed. The results showed that our method performed better than the other two, and satisfactory segmentation results were obtained with a low standard error.

  17. Fast Field Calibration of MIMU Based on the Powell Algorithm

    PubMed Central

    Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang

    2014-01-01

    The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The calibration exploits two key constraints: the norm of the accelerometer measurement vector must equal the gravity magnitude, and the norm of the gyro measurement vector must equal the magnitude of the rotational velocity input. A mathematical error model of the calibration is established, and the Powell algorithm is applied to solve for the error parameters by judging the convergence of the nonlinear equations. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed method, which also requires less calibration time than the traditional approach. PMID:25177801
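
    A minimal sketch of the accelerometer half of such a calibration using SciPy's Powell minimizer: in each static pose, the calibrated measurement norm should equal the gravity magnitude. Axis misalignment terms are omitted for brevity, and the sample readings are hypothetical.

      import numpy as np
      from scipy.optimize import minimize

      G = 9.80665  # gravity magnitude, m/s^2

      def accel_cost(params, readings):
          scale, bias = params[:3], params[3:]
          cal = scale * (readings - bias)      # per-axis scale and bias only
          return np.sum((np.linalg.norm(cal, axis=1) - G) ** 2)

      # one accelerometer sample per static orientation (hypothetical data)
      readings = np.array([[9.9, 0.1, 0.2], [0.1, 9.8, 0.1],
                           [0.2, 0.1, 9.7], [-9.6, 0.0, 0.1]])
      x0 = np.concatenate([np.ones(3), np.zeros(3)])
      res = minimize(accel_cost, x0, args=(readings,), method='Powell')
      print(res.x)   # fitted scale factors and biases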

  18. A fast poly-energetic iterative FBP algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Samei, Ehsan

    2014-04-01

    The beam hardening (BH) effect can influence medical interpretations in two notable ways. First, high-attenuation materials, such as bone, can induce strong artifacts, which severely deteriorate image quality. Second, voxel values can deviate significantly from their real values, which can lead to unreliable quantitative evaluation results. Some iterative methods have been proposed to eliminate the BH effect, but they cannot be widely applied in clinical practice because of their slow computational speed. The purpose of this study was to develop a new fast and practical poly-energetic iterative filtered backward projection algorithm (piFBP). The piFBP is composed of a novel poly-energetic forward projection process and a robust FBP-type backward updating process. In the forward projection process, an adaptive base material decomposition method is presented, based on which diverse body tissues (e.g., lung, fat, breast, soft tissue, and bone) and metal implants can be incorporated to accurately evaluate poly-energetic forward projections. In the backward updating process, a robust and fast FBP-type backward updating equation with a smoothing kernel is introduced to avoid noise accumulation in the iteration process and to improve the convergence properties. Two phantoms were designed to quantitatively validate our piFBP algorithm in terms of the beam hardening index (BIdx) and the noise index (NIdx). The simulation results showed that piFBP possessed fast convergence speed, as the images could be reconstructed within four iterations. The variation range of the BIdx's of various tissues across phantom size and spectrum was reduced from [-7.5, 17.5] for FBP to [-0.1, 0.1] for piFBP, while the NIdx's were maintained at the same low level (about [0.3, 1.7]). When a metal implant was present in a complex phantom, piFBP still had excellent reconstruction performance, as the variation range of the BIdx's of body tissues was reduced from [-2.9, 15.9] for FBP to [-0

  19. Fast Dating Using Least-Squares Criteria and Algorithms.

    PubMed

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that
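
    The least-squares principle behind such dating is easiest to see in the root-to-tip special case mentioned above: under a strict clock, root-to-tip distance grows linearly with sampling date, so an ordinary least-squares fit recovers the rate and root date. The toy numbers below are illustrative; the paper's algorithms generalize this to all internal nodes using the tree structure.

      import numpy as np

      dates = np.array([1990.0, 1995.0, 2000.0, 2005.0, 2010.0])  # tip sampling years
      dists = np.array([0.021, 0.032, 0.039, 0.052, 0.061])       # root-to-tip distances
      A = np.vstack([dates, np.ones_like(dates)]).T
      (rate, intercept), *_ = np.linalg.lstsq(A, dists, rcond=None)
      t_root = -intercept / rate    # date at which expected distance is zero
      print(rate, t_root)           # substitution rate and root date estimate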

  2. A fast sorting algorithm for a hypersonic rarefied flow particle simulation on the connection machine

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1989-01-01

    The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.
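
    A serial analogue of that ranking step is a counting sort over cell indices: tally the particles per cell, prefix-sum the tallies into cell offsets, then scatter each particle into its slot. The sketch below (names are illustrative, and the scatter loop is what the Connection Machine's data-parallel rank replaces) returns both the permutation and the per-cell offsets that give immediate access to each cell's particles.

      import numpy as np

      def rank_by_cell(cell_ids, n_cells):
          counts = np.bincount(cell_ids, minlength=n_cells)
          offsets = np.concatenate(([0], np.cumsum(counts)[:-1]))
          order = np.empty(len(cell_ids), dtype=np.int64)
          next_slot = offsets.copy()
          for i, c in enumerate(cell_ids):   # serial scatter step
              order[next_slot[c]] = i
              next_slot[c] += 1
          return order, offsets

      # cells = np.array([2, 0, 1, 2, 0]); rank_by_cell(cells, 3)
      # -> order [1, 4, 2, 0, 3], offsets [0, 2, 3]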

  3. Fast imaging system and algorithm for monitoring microlymphatics

    NASA Astrophysics Data System (ADS)

    Akl, T.; Rahbar, E.; Zawieja, D.; Gashev, A.; Moore, J.; Coté, G.

    2010-02-01

    The lymphatic system is not well understood, and tools to quantify aspects of its behavior are needed. A technique that monitors lymph velocity, and hence flow, the main determinant of transport, in near real time can be extremely valuable. We recently built a new system that measures lymph velocity, vessel diameter, and contractions using optical microscopy digital imaging with a high-speed camera (500 fps) and a complex processing algorithm. The processing time for a typical data period was reduced to less than 3 minutes, compared with our previous system, in which readings were available 30 minutes after the vessels were imaged. The processing is based on a correlation algorithm in the frequency domain, which, along with new triggering methods, reduced the processing and acquisition time significantly. In addition, a new data filtering technique allowed us to obtain results from recordings that were irresolvable by the previous algorithm because of their high noise level. The algorithm was tested by measuring velocities and diameter changes in rat mesenteric micro-lymphatics. We recorded velocities of 0.25 mm/s on average in vessels with diameters ranging from 54 µm to 140 µm and phasic contraction strengths of about 6 to 40%. In the future, this system will be used to monitor acute effects that are too fast for previous systems and will also increase statistical power when dealing with chronic changes. Furthermore, we plan to expand its functionality to measure the propagation of contractile activity.
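
    The core frequency-domain step can be sketched as a cross-correlation of two intensity profiles from consecutive frames, computed with FFTs instead of an O(N^2) sliding sum; the peak gives the displacement, which multiplied by pixel pitch and frame rate gives velocity. Function and variable names are illustrative, not the authors' implementation.

      import numpy as np

      def frame_shift(profile_a, profile_b):
          # signed shift (in pixels) of profile_b relative to profile_a
          n = len(profile_a)
          A = np.fft.rfft(profile_a - profile_a.mean())
          B = np.fft.rfft(profile_b - profile_b.mean())
          xc = np.fft.irfft(np.conj(A) * B, n=n)   # circular cross-correlation
          lag = int(np.argmax(xc))
          return lag - n if lag > n // 2 else lag

      # velocity ~= frame_shift(a, b) * pixel_pitch_um * frame_rate_hz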

  4. Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift

    PubMed Central

    Ortíz Díaz, Agustín; Ramos-Jiménez, Gonzalo; Frías Blanco, Isvani; Caballero Mota, Yailé; Morales-Bueno, Rafael

    2015-01-01

    The treatment of large data streams in the presence of concept drifts is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts, and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for a batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare our new algorithm with various well-known learning algorithms on common benchmark datasets. The experiments show promising results for the proposed algorithm in terms of accuracy and runtime when handling different types of concept drift. PMID:25879051

  5. A fast algorithm for the phonemic segmentation of continuous speech

    NASA Astrophysics Data System (ADS)

    Smidt, D.

    1986-04-01

    The method of differential learning (DL method) was applied to the fast phonemic classification of acoustic speech spectra. The method was also tested with a simple algorithm for continuous speech recognition. In every learning step of the DL method, only the single pattern component that deviates most from the reference value is used to form a new rule. Several rules of this type were connected conjunctively or disjunctively. Tests with a single speaker demonstrate good classification capability and very high speed. The automatic inclusion of additional features selected according to their relevance is discussed. It is shown that there is a correspondence between processes related to the DL method and pattern recognition in living beings, with their capacity for generalization and differentiation.

  6. A fast Monte Carlo algorithm for source localization on graphs

    NASA Astrophysics Data System (ADS)

    Agaskar, Ameya; Lu, Yue M.

    2013-09-01

    Epidemic models on networks have long been studied by biologists and social scientists to determine the steady-state levels of an infection on a network. Recently, however, several authors have begun considering the more difficult problem of estimating the source of an infection given information about its behavior some time after the initial infection. In this paper, we describe a technique to estimate the source of an infection on a general graph based on observations from a small set of observers during a fixed time window at some unknown time after the initial infection. We describe an alternate representation of the susceptible-infected (SI) infection model based on geodesic distances on a randomly weighted version of the graph; this representation allows us to exploit fast algorithms for computing geodesic distances to estimate the marginal distributions for each observer and compute a pseudo-likelihood function that is maximized to find the source.
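
    A minimal sketch of that representation: draw random exponential edge weights, compute geodesic distances from a candidate source with Dijkstra's algorithm, and score the candidate by how well those distances match the observed infection times. The scoring rule here is a simplified stand-in for the paper's pseudo-likelihood.

      import heapq
      import random

      def dijkstra(adj, src):
          dist, pq = {src: 0.0}, [(0.0, src)]
          while pq:
              d, u = heapq.heappop(pq)
              if d > dist.get(u, float('inf')):
                  continue
              for v, w in adj[u]:
                  nd = d + w
                  if nd < dist.get(v, float('inf')):
                      dist[v] = nd
                      heapq.heappush(pq, (nd, v))
          return dist

      def source_score(nodes, edges, candidate, obs_times, n_draws=200):
          score = 0.0
          for _ in range(n_draws):
              adj = {u: [] for u in nodes}
              for u, v in edges:               # SI spreading as random geodesics
                  w = random.expovariate(1.0)
                  adj[u].append((v, w))
                  adj[v].append((u, w))
              d = dijkstra(adj, candidate)
              score -= sum((d.get(o, 1e9) - t) ** 2 for o, t in obs_times.items())
          return score / n_draws               # maximize over candidate sources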

  7. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor.

    PubMed

    Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing

    2015-01-01

    Internet of Things (IoT) is driving innovation in an ever-growing set of application domains, such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on the robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. To address these challenges, this paper proposes a set of novel improvements as follows. Firstly, the Oriented FAST and Rotated BRIEF (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied for feature matching. Then, an improved RANdom SAmple Consensus (RANSAC) estimation method is adopted for the motion transformation. Meanwhile, high-precision Generalized Iterative Closest Point (GICP) is utilized to register the point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. A Dr Robot X80 equipped with a Kinect camera was also deployed in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. These experiments show that the proposed algorithm achieves higher processing speed and better accuracy. PMID:26287198
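
    The front-end matching stage (ORB features, FLANN with an LSH index, Lowe's ratio test, and a bidirectional consistency check) can be sketched with OpenCV as below; the frame filenames and parameter values are illustrative, not the paper's settings.

      import cv2

      img1 = cv2.imread('frame_000.png', cv2.IMREAD_GRAYSCALE)   # hypothetical frames
      img2 = cv2.imread('frame_001.png', cv2.IMREAD_GRAYSCALE)

      orb = cv2.ORB_create(nfeatures=1000)
      kp1, des1 = orb.detectAndCompute(img1, None)
      kp2, des2 = orb.detectAndCompute(img2, None)

      # FLANN with an LSH index, suitable for binary ORB descriptors
      flann = cv2.FlannBasedMatcher(
          dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
          dict(checks=50))

      def ratio_filter(pairs, r=0.7):   # Lowe's ratio test
          return {m.queryIdx: m.trainIdx for m, n in
                  (p for p in pairs if len(p) == 2) if m.distance < r * n.distance}

      fwd = ratio_filter(flann.knnMatch(des1, des2, k=2))
      bwd = ratio_filter(flann.knnMatch(des2, des1, k=2))
      good = [(q, t) for q, t in fwd.items() if bwd.get(t) == q]   # mutual matches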

  10. Distress Calls of a Fast-Flying Bat (Molossus molossus) Provoke Inspection Flights but Not Cooperative Mobbing.

    PubMed

    Carter, Gerald; Schoeppler, Diana; Manthey, Marie; Knörnschild, Mirjam; Denzinger, Annette

    2015-01-01

    Many birds and mammals produce distress calls when captured. Bats often approach speakers playing conspecific distress calls, which has led to the hypothesis that bat distress calls promote cooperative mobbing. An alternative explanation is that approaching bats are selfishly assessing predation risk. Previous playback studies on bat distress calls involved species with highly maneuverable flight, capable of making close passes and tight circles around speakers, which can look like mobbing. We broadcast distress calls recorded from the velvety free-tailed bat, Molossus molossus, a fast-flying aerial-hawker with relatively poor maneuverability. Based on their flight behavior, we predicted that, in response to distress call playbacks, M. molossus would make individual passing inspection flights but would not approach in groups or approach within a meter of the distress call source. By recording responses via ultrasonic recording and infrared video, we found that M. molossus, and to a lesser extent Saccopteryx bilineata, made more flight passes during distress call playbacks compared to noise. However, only the more maneuverable S. bilineata made close approaches to the speaker, and we found no evidence of mobbing in groups. Instead, our findings are consistent with the hypothesis that single bats approached distress calls simply to investigate the situation. These results suggest that approaches by bats to distress calls should not suffice as clear evidence for mobbing. PMID:26353118

  12. Biased Randomized Algorithm for Fast Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Williams, Colin; Vartan, Farrokh

    2005-01-01

    A biased randomized algorithm has been developed to enable the rapid computational solution of a propositional-satisfiability (SAT) problem equivalent to a diagnosis problem. The closest competing methods of automated diagnosis are described in the preceding articles "Fast Algorithms for Model-Based Diagnosis" and "Two Methods of Efficient Solution of the Hitting-Set Problem" (NPO-30584), which appear elsewhere in this issue. It is necessary to recapitulate some of the information from the cited articles as a prerequisite to a description of the present method. As used here, "diagnosis" signifies, more precisely, a type of model-based diagnosis in which one explores any logical inconsistencies between the observed and expected behaviors of an engineering system. The function of each component and the interconnections among all the components of the engineering system are represented as a logical system. Hence, the expected behavior of the engineering system is represented as a set of logical consequences. Faulty components lead to inconsistency between the observed and expected behaviors of the system, represented by logical inconsistencies. Diagnosis, the task of finding the faulty components, reduces to finding the components whose abnormalities could explain all the logical inconsistencies. One seeks a minimal set of faulty components (denoted a minimal diagnosis), because the trivial solution, in which all components are deemed to be faulty, always explains all inconsistencies. In the methods of the cited articles, the minimal-diagnosis problem is treated as equivalent to a minimal-hitting-set problem, which is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The integer-programming approach taken in one of the prior methods is complete (in the sense that it is guaranteed to find a solution if one exists) but slow, and yields a lower bound on the size of the

  13. An algorithm for fast DNS cavitating flows simulations using homogeneous mixture approach

    NASA Astrophysics Data System (ADS)

    Žnidarčič, A.; Coutier-Delgosha, O.; Marquillie, M.; Dular, M.

    2015-12-01

    A new algorithm for fast DNS simulations of cavitating flows is developed. The algorithm is based on the projection method form of Kim and Moin. A homogeneous mixture approach with a transport equation for the vapour volume fraction is used to model cavitation, and various cavitation models can be used. An influence matrix and a matrix diagonalisation technique enable fast parallel computations.

  14. A fast and memory-sparing probabilistic selection algorithm for the GPU

    SciTech Connect

    Monroe, Laura M; Wendelberger, Joanne; Michalak, Sarah

    2010-09-29

    A fast and memory-sparing probabilistic top-N selection algorithm is implemented on the GPU. This probabilistic algorithm gives a deterministic result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces both the memory requirements and the average time required for the algorithm. This algorithm is well-suited to more general parallel processors with multiple layers of memory hierarchy. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be especially useful for processors having a limited amount of fast memory available.
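
    The flavor of such a sampler can be sketched on the CPU: estimate a cutoff from a random sample so that only a small candidate set needs full selection, and retry with a larger sample if too few elements survive. The result is always the exact top-N (Las Vegas behavior); only the running time is random. This is an illustrative analogue under those assumptions, not the GPU implementation.

      import numpy as np

      def prob_top_n(data, n, oversample=4, seed=None):
          rng = np.random.default_rng(seed)
          m = len(data)
          s = min(m, max(n * oversample, 1024))
          while True:
              sample = rng.choice(data, size=s, replace=False)
              k = min(n, s)
              thresh = np.partition(sample, -k)[-k]      # sample-based cutoff
              cand = data[data >= thresh]                # small candidate set
              if len(cand) >= n:                         # cutoff was safe: exact result
                  return np.sort(np.partition(cand, -n)[-n:])[::-1]
              s = min(m, s * 2)                          # rare: widen sample and retry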

  15. A fast look-up algorithm for detecting repetitive DNA sequences

    SciTech Connect

    Guan, X.; Uberbacher, E.C.

    1996-12-31

    We present a fast, linear-time algorithm for recognizing tandem repeats. It is a one-pass algorithm, and no information about the periodicity of the tandem repeats is needed. The use of indices calculated from non-contiguous, overlapping k-tuples allows tandem repeats with insertions and deletions to be recognized.
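
    The one-pass lookup idea can be sketched with a hash of k-tuple positions: whenever a k-tuple recurs within a bounded distance, the gap between occurrences is a candidate repeat period. Spaced (non-contiguous) tuples, which give the stated tolerance to insertions and deletions, are omitted here for brevity.

      def tandem_candidates(seq, k=8, max_period=64):
          last, hits = {}, []
          for i in range(len(seq) - k + 1):      # single pass over the sequence
              kmer = seq[i:i + k]
              j = last.get(kmer)
              if j is not None and 0 < i - j <= max_period:
                  hits.append((j, i - j))        # (start, candidate period)
              last[kmer] = i
          return hits

      # tandem_candidates("ACGTACGTACGTACGT", k=4) -> period-4 hits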

  16. Fast and stable algorithms for computing the principal square root of a complex matrix

    NASA Technical Reports Server (NTRS)

    Shieh, Leang S.; Lian, Sui R.; Mcinnis, Bayliss C.

    1987-01-01

    This note presents recursive algorithms that are rapidly convergent and numerically stable for finding the principal square root of a complex matrix. The developed algorithms are also used to derive fast and stable matrix sign algorithms, which are useful in applications to control system problems.
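
    A representative member of this family of recursions is the Denman-Beavers iteration, a classic, numerically stable scheme for the principal square root (shown here for illustration; the note's own recursions may differ in detail). It assumes the matrix has no eigenvalues on the closed negative real axis.

      import numpy as np

      def sqrtm_db(A, iters=50, tol=1e-12):
          # Denman-Beavers: Y -> A^(1/2), Z -> A^(-1/2), quadratic convergence
          Y, Z = A.astype(complex), np.eye(A.shape[0], dtype=complex)
          for _ in range(iters):
              Y_new = 0.5 * (Y + np.linalg.inv(Z))
              Z_new = 0.5 * (Z + np.linalg.inv(Y))
              if np.linalg.norm(Y_new - Y) <= tol * np.linalg.norm(Y_new):
                  return Y_new
              Y, Z = Y_new, Z_new
          return Y

      A = np.array([[4.0, 1.0], [0.0, 9.0]])
      S = sqrtm_db(A)
      print(np.allclose(S @ S, A))   # True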

  17. FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory

    NASA Astrophysics Data System (ADS)

    McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.; Blazek, Jonathan A.

    2016-09-01

    We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.

  18. Fast inverse scattering solutions using the distorted Born iterative method and the multilevel fast multipole algorithm

    PubMed Central

    Hesford, Andrew J.; Chew, Weng C.

    2010-01-01

    The distorted Born iterative method (DBIM) computes iterative solutions to nonlinear inverse scattering problems through successive linear approximations. By decomposing the scattered field into a superposition of scattering by an inhomogeneous background and by a material perturbation, large or high-contrast variations in medium properties can be imaged through iterations that are each subject to the distorted Born approximation. However, the need to repeatedly compute forward solutions still imposes a very heavy computational burden. To ameliorate this problem, the multilevel fast multipole algorithm (MLFMA) has been applied as a forward solver within the DBIM. The MLFMA computes forward solutions in linear time for volumetric scatterers. The typically regular distribution and shape of scattering elements in the inverse scattering problem allow the method to take advantage of data redundancy and reduce the computational demands of the normally expensive MLFMA setup. Additional benefits are gained by employing Kaczmarz-like iterations, where partial measurements are used to accelerate convergence. Numerical results demonstrate both the efficiency of the forward solver and the successful application of the inverse method to imaging problems with dimensions in the neighborhood of ten wavelengths. PMID:20707438

  19. Comparative Analysis of CNV Calling Algorithms: Literature Survey and a Case Study Using Bovine High-Density SNP Data

    PubMed Central

    Xu, Lingyang; Hou, Yali; Bickhart, Derek M.; Song, Jiuzhou; Liu, George E.

    2013-01-01

    Copy number variations (CNVs) are gains and losses of genomic sequence between two individuals of a species when compared to a reference genome. The data from single nucleotide polymorphism (SNP) microarrays are now routinely used for genotyping, but they also can be utilized for copy number detection. Substantial progress has been made in array design and CNV calling algorithms and at least 10 comparison studies in humans have been published to assess them. In this review, we first survey the literature on existing microarray platforms and CNV calling algorithms. We then examine a number of CNV calling tools to evaluate their impacts using bovine high-density SNP data. Large incongruities in the results from different CNV calling tools highlight the need for standardizing array data collection, quality assessment and experimental validation. Only after careful experimental design and rigorous data filtering can the impacts of CNVs on both normal phenotypic variability and disease susceptibility be fully revealed.

  20. A fast algorithm for nonnegative matrix factorization and its convergence.

    PubMed

    Li, Li-Xin; Wu, Lin; Zhang, Hui-Sheng; Wu, Fang-Xiang

    2014-10-01

    Nonnegative matrix factorization (NMF) has recently become a very popular unsupervised learning method because of the representational properties of its factors and the simple multiplicative update algorithms for solving it. However, for the common NMF approach of minimizing the Euclidean distance between approximate and true values, the convergence of multiplicative update algorithms has not been well resolved. This paper first discusses the convergence of existing multiplicative update algorithms. We then propose a new multiplicative update algorithm for minimizing the Euclidean distance between approximate and true values. Based on the optimization principle and the auxiliary function method, we prove that our new algorithm not only converges to a stationary point, but also does so faster than existing ones. To verify our theoretical results, experiments on three data sets were conducted comparing our proposed algorithm with other existing methods.
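
    For reference, the classic multiplicative updates for the Euclidean NMF objective (the family whose convergence the paper analyzes) look as follows; the small epsilon guarding against division by zero is an implementation convenience, not part of the original derivation.

      import numpy as np

      def nmf(V, r, iters=500, eps=1e-10, seed=0):
          # Minimize ||V - W H||_F^2 with Lee-Seung multiplicative updates
          g = np.random.default_rng(seed)
          n, m = V.shape
          W, H = g.random((n, r)), g.random((r, m))
          for _ in range(iters):
              H *= (W.T @ V) / (W.T @ W @ H + eps)
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H

      V = np.abs(np.random.default_rng(1).random((20, 30)))
      W, H = nmf(V, r=5)
      print(np.linalg.norm(V - W @ H))   # reconstruction error decreases with iters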

  1. Fast single-pass alignment and variant calling using sequencing data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Sequencing research requires efficient computation. Few programs use already known information about DNA variants when aligning sequence data to the reference map. New program findmap.f90 reads the previous variant list before aligning sequence, calling variant alleles, and summing the allele counts...

  2. [A fast non-local means algorithm for denoising of computed tomography images].

    PubMed

    Kang, Changqing; Cao, Wenping; Fang, Lei; Hua, Li; Cheng, Hong

    2012-11-01

    A fast non-local means image denoising algorithm is presented, based on the single motif of existing computed tomography images in medical archiving systems. The algorithm is carried out in two steps: preprocessing and actual processing. In the preprocessing stage, the sample neighborhood database is created using a locality sensitive hashing data structure. The CT image noise is then removed by a non-local means algorithm based on the sample neighborhoods accessed quickly through locality sensitive hashing. The experimental results showed that the proposed algorithm could greatly reduce the execution time compared with NLM, while effectively preserving image edges and details.

  3. Fast Optimal Load Balancing Algorithms for 1D Partitioning

    SciTech Connect

    Pinar, Ali; Aykanat, Cevdet

    2002-12-09

    One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the "chains-on-chains partitioning" problem. Despite extensive research efforts, heuristics are still used in the parallel computing community with the "hope" of good decompositions and the "myth" that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocode of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms that are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse matrix-vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with the efficient implementations discussed in this paper can effectively replace heuristics.
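
    The flavor of exact algorithm the paper advocates can be sketched as probe-and-search: a greedy feasibility check asks whether the chain fits in P parts with bottleneck at most B, and a bisection over B homes in on the optimum. (The paper's algorithms are more refined; this sketch only illustrates why exactness is cheap.)

      import bisect

      def probe(prefix, P, B):
          # Can the chain be split into at most P parts, each of load <= B?
          parts, start, n = 0, 0, len(prefix) - 1
          while start < n:
              end = bisect.bisect_right(prefix, prefix[start] + B) - 1
              if end == start:
                  return False          # a single task already exceeds B
              start = end
              parts += 1
              if parts > P:
                  return False
          return True

      def optimal_bottleneck(weights, P, tol=1e-9):
          prefix = [0.0]
          for w in weights:
              prefix.append(prefix[-1] + w)
          lo, hi = max(weights), prefix[-1]
          while hi - lo > tol:          # bisection on the bottleneck value
              mid = 0.5 * (lo + hi)
              if probe(prefix, P, mid):
                  hi = mid
              else:
                  lo = mid
          return hi

      print(optimal_bottleneck([3, 1, 4, 1, 5, 9, 2, 6], P=3))  # -> ~14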

  4. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm improves on our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbor (kNN) search. AIDW needs to find several nearest neighboring data points for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve our previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of a kNN search stage and a weighted interpolation stage. To evaluate its performance, we performed five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) it is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the use of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm. PMID:27610308
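
    A CPU sketch of the same pipeline, using a k-d tree in place of the even grid: query the k nearest neighbors, derive an adaptive power from the local point density, then inverse-distance weight. The density-to-power mapping below is a simplified assumption, not the paper's exact rule.

      import numpy as np
      from scipy.spatial import cKDTree

      def aidw(xy_known, z_known, xy_query, k=8):
          tree = cKDTree(xy_known)
          dist, idx = tree.query(xy_query, k=k)    # kNN search stage
          local = dist.mean(axis=1)
          density = local / local.mean()           # >1 sparse, <1 dense
          power = np.clip(1.0 + 2.0 * density, 1.0, 5.0)[:, None]
          w = 1.0 / np.maximum(dist, 1e-12) ** power
          return (w * z_known[idx]).sum(axis=1) / w.sum(axis=1)   # weighting stage

      pts = np.random.default_rng(0).random((1000, 2))
      z = np.sin(6 * pts[:, 0]) * np.cos(6 * pts[:, 1])
      print(aidw(pts, z, np.array([[0.5, 0.5]])))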

  5. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed, in which low-level encoding information is adopted as the features for visual perception analysis. First, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Second, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. The spatiotemporal saliency detection results are then combined to define the video region of interest (VROI). Simulation results validate that, compared with other existing algorithms, the proposed algorithm avoids a large amount of computation in the visual perception analysis while achieving better saliency detection performance for videos, and it realizes fast saliency detection. It can be used as part of a standard video codec at medium-to-low bit rates or combined with other algorithms for fast video coding. PMID:24489495

  6. Development of Fast Algorithms Using Recursion, Nesting and Iterations for Computational Electromagnetics

    NASA Technical Reports Server (NTRS)

    Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.

    1995-01-01

    In the first phase of our work, we concentrated on laying the foundation for developing fast algorithms, including the use of recursive structures such as the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We also investigated the use of curvilinear patches to build a basic method-of-moments code in which these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we also studied the parallelization of a 3D finite-difference time-domain (FDTD) code with a perfectly matched layer (PML) material absorbing boundary condition (ABC). We found that simple algorithms like FDTD with a material ABC can be parallelized very well, allowing us to solve a problem of over a million nodes within a minute. In addition, we studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite the matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed, as in monostatic RCS calculations.

  7. Vectorized Rebinning Algorithm for Fast Data Down-Sampling

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Jeffrey

    2013-01-01

    A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. Data rebinning is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data with fewer discrete samples; it is fundamental to digital signal processing, e.g., for data compression applications. For clarity, a two-dimensional version of the algorithm is discussed to illustrate its specific details; using the language of image processing, 2D data are referred to as "images" and each value in an image as a "pixel." The new approach is fully vectorized: the down-sampling procedure is done as a single step over all image rows and then as a single step over all image columns.
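
    The report describes the row-then-column vectorization in prose; in an array language the same idea collapses to one reshape per axis. A minimal NumPy sketch for mean-rebinning a 2D image by integer factors (the function name and the choice of averaging rather than summing are ours):

        import numpy as np

        def rebin2d(image, fr, fc):
            """Down-sample by integer factors (fr, fc): one vectorized step
            over all rows and one over all columns, no per-pixel loops."""
            rows, cols = image.shape
            assert rows % fr == 0 and cols % fc == 0
            blocks = image.reshape(rows // fr, fr, cols // fc, fc)
            return blocks.mean(axis=(1, 3))

        small = rebin2d(np.arange(16.0).reshape(4, 4), 2, 2)  # 4x4 -> 2x2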

  8. Fast algorithm for automatically computing Strahler stream order

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
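
    The ordering rule itself is compact even though the paper's contribution is handling messy GIS topology (braided streams, multiple outlets). A hypothetical Python sketch for the simple tree-structured case, where `upstream` maps each segment to the segments draining into it; the topologically complex situations the paper handles would need extra bookkeeping:

        def strahler_order(upstream, segment):
            """Strahler order: 1 for headwaters; the maximum child order,
            plus one when the two largest child orders are equal."""
            children = upstream.get(segment, [])
            if not children:
                return 1
            orders = sorted((strahler_order(upstream, c) for c in children),
                            reverse=True)
            if len(orders) > 1 and orders[0] == orders[1]:
                return orders[0] + 1
            return orders[0]

        net = {'outlet': ['a', 'b'], 'a': ['a1', 'a2'], 'b': []}
        print(strahler_order(net, 'outlet'))   # 2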

  9. A fast neural-network algorithm for VLSI cell placement.

    PubMed

    Aykanat, Cevdet; Bultan, Tevfik; Haritaoğlu, Ismail

    1998-12-01

    Cell placement is an important phase of current VLSI circuit design styles such as standard cell, gate array, and Field Programmable Gate Array (FPGA). Although nondeterministic algorithms such as Simulated Annealing (SA) have been successful in solving this problem, they are known to be slow. In this paper, a neural network algorithm is proposed that produces solutions as good as SA in substantially less time. The algorithm is based on the Mean Field Annealing (MFA) technique, which has been successfully applied to various combinatorial optimization problems. An MFA formulation for the cell placement problem is derived that can easily be applied to all VLSI design styles. To demonstrate that the proposed algorithm is applicable in practice, a detailed formulation for the FPGA design style is derived, and the layouts of several benchmark circuits are generated. The performance of the proposed cell placement algorithm is evaluated in comparison with the commercial automated circuit design software Xilinx Automatic Place and Route (APR), which uses the SA technique. Performance evaluation is conducted using ACM/SIGDA Design Automation benchmark circuits. Experimental results indicate that the proposed MFA algorithm produces results comparable to those of APR. However, MFA is almost 20 times faster than APR on average.

  10. metilene: fast and sensitive calling of differentially methylated regions from bisulfite sequencing data.

    PubMed

    Jühling, Frank; Kretzmer, Helene; Bernhart, Stephan H; Otto, Christian; Stadler, Peter F; Hoffmann, Steve

    2016-02-01

    The detection of differentially methylated regions (DMRs) is a necessary prerequisite for characterizing different epigenetic states. We present a novel program, metilene, to identify DMRs within whole-genome and targeted data with unrivaled specificity and sensitivity. A binary segmentation algorithm combined with a two-dimensional statistical test allows the detection of DMRs in large methylation experiments with multiple groups of samples in minutes rather than days using off-the-shelf hardware. metilene outperforms other state-of-the-art tools for low coverage data and can estimate missing data. Hence, metilene is a versatile tool to study the effect of epigenetic modifications in differentiation/development, tumorigenesis, and systems biology on a global, genome-wide level. Whether in the framework of international consortia with dozens of samples per group, or even without biological replicates, it produces highly significant and reliable results. PMID:26631489

  11. metilene: fast and sensitive calling of differentially methylated regions from bisulfite sequencing data

    PubMed Central

    Jühling, Frank; Kretzmer, Helene; Bernhart, Stephan H.; Otto, Christian; Stadler, Peter F.; Hoffmann, Steve

    2016-01-01

    The detection of differentially methylated regions (DMRs) is a necessary prerequisite for characterizing different epigenetic states. We present a novel program, metilene, to identify DMRs within whole-genome and targeted data with unrivaled specificity and sensitivity. A binary segmentation algorithm combined with a two-dimensional statistical test allows the detection of DMRs in large methylation experiments with multiple groups of samples in minutes rather than days using off-the-shelf hardware. metilene outperforms other state-of-the-art tools for low coverage data and can estimate missing data. Hence, metilene is a versatile tool to study the effect of epigenetic modifications in differentiation/development, tumorigenesis, and systems biology on a global, genome-wide level. Whether in the framework of international consortia with dozens of samples per group, or even without biological replicates, it produces highly significant and reliable results. PMID:26631489
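
    metilene's exact machinery couples binary segmentation with a two-dimensional statistical test, but the recursion skeleton is easy to sketch. A hypothetical Python outline over a per-position mean methylation difference, using a plain two-sample test in place of the paper's 2D test; the cutoff, p-value threshold, and minimum segment length are our choices:

        import numpy as np
        from scipy.stats import mannwhitneyu

        def segment(diff, lo, hi, min_len=10, out=None):
            """Recursively split [lo, hi) at the cut maximizing the contrast
            in mean methylation difference; keep splits that test significant."""
            if out is None:
                out = []
            if hi - lo < 2 * min_len:
                return out
            cuts = range(lo + min_len, hi - min_len + 1)
            k = max(cuts, key=lambda c: abs(diff[lo:c].mean() - diff[c:hi].mean()))
            p = mannwhitneyu(diff[lo:k], diff[k:hi]).pvalue
            if p < 0.05:
                out.append((lo, k, hi, p))
                segment(diff, lo, k, min_len, out)
                segment(diff, k, hi, min_len, out)
            return out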

  12. A fast readout algorithm for Cluster Counting/Timing drift chambers on a FPGA board

    NASA Astrophysics Data System (ADS)

    Cappelli, L.; Creti, P.; Grancagnolo, F.; Pepino, A.; Tassielli, G.

    2013-08-01

    A fast readout algorithm for Cluster Counting and Timing purposes has been implemented and tested on a Virtex 6 core FPGA board. The algorithm analyzes and stores data coming from a helium-based drift tube instrumented with a 1 GSPS fADC, and represents the outcome of balancing cluster identification efficiency against high-speed performance. The algorithm can be implemented in electronics boards serving multiple fADC channels as an online preprocessing stage for drift chamber signals.

  13. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  14. Fast Algorithm for Continuous Monitoring with Ambient Noise

    NASA Astrophysics Data System (ADS)

    Martin, E. R.; Lindsey, N.; Biondi, B. C.; Chang, J. P.; Ajo Franklin, J. B.; Dou, S.; Daley, T. M.; Freifeld, B. M.; Robertson, M.; Ulrich, C.; Wagner, A. M.; Bjella, K.

    2015-12-01

    A common approach to analyzing ambient seismic noise involves O(n^2) pairwise cross-correlations of n sensors. Following cross-correlation, the resulting coherent waveforms are synthesized into a velocity estimate, often in the form of a dispersion image. As we move towards larger surveys and arrays for continuous subsurface monitoring, this computation can become prohibitively expensive. We show that theoretically equivalent results can be achieved by a simple algorithm that skips the cross-correlations and scales as O(n). Additionally, this algorithm is embarrassingly parallel and significantly cheaper than the commonly used algorithms. We demonstrate the algorithm on two field data sets: (1) a continuously recording linear trenched distributed acoustic sensing (DAS) array designed as a pilot test for a permafrost thaw monitoring system, and (2) the Long Beach Array, an irregularly spaced 3D array. These results show superior performance in both speed and numerical accuracy. An open-source implementation of this algorithm is available.

  15. Simple, fast codebook training algorithm by entropy sequence for vector quantization

    NASA Astrophysics Data System (ADS)

    Pang, Chao-yang; Yao, Shaowen; Qi, Zhang; Sun, Shi-xin; Liu, Jingde

    2001-09-01

    Traditional training algorithms for vector quantization, such as the LBG algorithm, use the convergence of the distortion sequence as the stopping condition. In this paper we present a novel training algorithm for vector quantization in which the convergence of the entropy sequence of each region sequence is employed as the stopping condition instead. Compared with the well-known LBG algorithm, it is simple, fast, and easy to understand and control. We tested the performance of the algorithm on the typical test images Lena and Barb. The results show that the PSNR difference between the algorithm and LBG is less than 0.1 dB, while its running time is only a small fraction of that of LBG.

  16. Outline of a fast hardware implementation of Winograd's DFT algorithm

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1980-01-01

    The main characteristic of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, based on the algorithm prescription discussed by Zohar (1979). A hardware implementation of a FORTRAN subroutine is proposed, with attention to a pipelining scheme in which five consecutive data batches are operated on simultaneously, each batch undergoing one of five processing phases.

  17. Gradient maintenance: A new algorithm for fast online replanning

    SciTech Connect

    Ahunbay, Ergun E. Li, X. Allen

    2015-06-15

    Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving the consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy can be maintained the same as that in the original plan, the intended quality of the original plan will be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose-volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired with an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: Adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by multiple OARs.

  18. A Simple and Fast Spline Filtering Algorithm for Surface Metrology

    PubMed Central

    Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei

    2015-01-01

    Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement. PMID:26958443
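
    The DCT route works because the spline-filter operator is diagonalized by the cosine basis. A hedged NumPy/SciPy sketch of a non-robust low-pass spline filter for a uniformly sampled profile, following the standard form in which the smoother solves (I + alpha^4 D^T D) w = z; boundary handling and the robust iteration are omitted, and the constant alpha follows the usual cutoff convention, which the paper may state differently:

        import numpy as np
        from scipy.fft import dct, idct

        def spline_filter(z, dx, lc):
            """Low-pass spline filter via DCT diagonalization.
            z: profile samples, dx: spacing, lc: cutoff wavelength."""
            n = len(z)
            alpha = 1.0 / (2.0 * np.sin(np.pi * dx / lc))
            k = np.arange(n)
            # Eigenvalues of the second-difference operator D^T D in the DCT basis.
            eig = 16.0 * np.sin(np.pi * k / (2.0 * n)) ** 4
            gain = 1.0 / (1.0 + alpha ** 4 * eig)
            return idct(dct(z, norm='ortho') * gain, norm='ortho')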

  19. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
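
    Markley's algorithm computes this estimate without an explicit singular value decomposition, but the SVD form of the solution to Wahba's problem is the compact reference against which such fast methods are checked. A hedged NumPy sketch (b and r hold unit vectors in the body and reference frames, a holds nonnegative weights):

        import numpy as np

        def wahba_svd(b, r, a):
            """Attitude matrix minimizing Wahba's loss, via the SVD of the
            attitude profile matrix B = sum_i a_i b_i r_i^T."""
            B = (a[:, None] * b).T @ r
            U, _, Vt = np.linalg.svd(B)
            d = np.linalg.det(U) * np.linalg.det(Vt)   # force det(A) = +1
            return U @ np.diag([1.0, 1.0, d]) @ Vt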

  1. An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.

    PubMed

    Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan

    2015-11-01

    The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary (the solid matter) and dynamic (the fluid flow) regions. Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without loss of image quality. Therefore, the temporal resolution can be substantially increased, and fluid flow experiments with faster dynamics can be performed.

  2. Fast algorithms for combustion kinetics calculations: A comparison

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    To identify the fastest algorithm currently available for the numerical integration of chemical kinetic rate equations, several algorithms were examined. Findings to date are summarized. The algorithms examined include two general-purpose codes, EPISODE and LSODE, and three special-purpose (for chemical kinetic calculations) codes, CHEMEQ, CREK1D, and GCKP84. In addition, an explicit Runge-Kutta-Merson differential equation solver (IMSL Routine DASCRU) is used to illustrate the problems associated with integrating chemical kinetic rate equations by a classical method. The algorithms were applied to two test problems drawn from combustion kinetics. These problems included all three combustion regimes: induction, heat release, and equilibration. Variations of the temperature and species mole fractions with time are given for test problems 1 and 2, respectively. Both test problems were integrated over a time interval of 1 ms in order to obtain near-equilibration of all species and of the temperature. Of the codes examined in this study, only CREK1D and GCKP84 were written explicitly for integrating exothermic, non-isothermal combustion rate equations; these therefore have built-in procedures for calculating the temperature.

  3. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  4. A Fast Retrieving Algorithm of Hierarchical Relationships Using Trie Structures.

    ERIC Educational Resources Information Center

    Koyama, Masafumi; Morita, Kazuhiro; Fuketa, Masao; Aoe, Jun-Ichi

    1998-01-01

    Presents a faster method for determining hierarchical relationships in information retrieval by using trie structures instead of linear storage of a concept code. Highlights include case structures (a knowledge representation for natural-language understanding with semantic constraints); a compression algorithm for tries; and evaluation.…

  5. MDSIMAID: automatic parameter optimization in fast electrostatic algorithms.

    PubMed

    Crocker, Michael S; Hampton, Scott S; Matthey, Thierry; Izaguirre, Jesús A

    2005-07-30

    MDSIMAID is a recommender system that optimizes parallel Particle Mesh Ewald (PME) and both sequential and parallel multigrid (MG) summation fast electrostatic solvers. MDSIMAID optimizes the running time or parallel scalability of these methods within a given error tolerance by performing a run-time-constrained search of each method's parameter space, starting from semiempirical performance models. Recommended parameters are presented to the user. MDSIMAID's optimization of MG leads to configurations that are up to 14 times faster or 17 times more accurate than published recommendations. Optimization of PME can improve its parallel scalability, making it run twice as fast in parallel in our tests. MDSIMAID and its Python source code are accessible through a Web portal located at http://mdsimaid.cse.nd.edu.

  6. A Fast MEANSHIFT Algorithm-Based Target Tracking System

    PubMed Central

    Sun, Jian

    2012-01-01

    Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper: a fast mean-shift-based target tracking scheme is designed and realized that is robust to partial occlusion and changes in object appearance. Physical simulation shows that the image signal processing speed exceeds 50 frames/s. PMID:22969397
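
    The abstract does not spell out the update rule, but the core mean-shift step it builds on is standard: repeatedly move the search window to the weighted centroid of a per-pixel similarity map (for example, a histogram back-projection of the target model). A hypothetical sketch of that inner loop; the window shape, iteration cap, and stopping threshold are our choices, and the window is assumed to stay inside the image:

        import numpy as np

        def mean_shift(weights, center, radius, n_iter=20, eps=0.5):
            """Shift a circular window to a local maximum of `weights`."""
            ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
            mask = (xs ** 2 + ys ** 2) <= radius ** 2
            cy, cx = center
            for _ in range(n_iter):
                win = weights[cy - radius:cy + radius + 1,
                              cx - radius:cx + radius + 1] * mask
                total = win.sum()
                if total == 0:
                    break
                dy = (win * ys).sum() / total   # centroid offset
                dx = (win * xs).sum() / total
                cy, cx = int(round(cy + dy)), int(round(cx + dx))
                if dy * dy + dx * dx < eps ** 2:
                    break
            return cy, cx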

  7. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.

  8. Fast automatic algorithm for bifurcation detection in vascular CTA scans

    NASA Astrophysics Data System (ADS)

    Brozio, Matthias; Gorbunova, Vladlena; Godenschwager, Christian; Beck, Thomas; Bernhardt, Dominik

    2012-02-01

    Endovascular imaging aims at identifying vessels and their branches. Automatic vessel segmentation and bifurcation detection ease both clinical research and routine work. In this article a state-of-the-art bifurcation detection algorithm is developed and applied to vascular computed tomography angiography (CTA) scans to mark the common iliac artery and its branches, the internal and external iliacs. In contrast to other methods, our algorithm does not rely on a complete segmentation of a vessel in the 3D volume, but evaluates the cross-sections of the vessel slice by slice. Candidates for vessels are obtained by thresholding, followed by 2D connected component labeling and prefiltering by size and position. The remaining candidates are connected in a squared-distance-weighted graph. The graph is traversed with Dijkstra's algorithm to obtain candidates for the arteries. We use another set of features considering the length and shape of the paths to determine the best candidate and detect the bifurcation. The method was tested on 119 datasets acquired with different CT scanners and varying protocols. Both easy-to-evaluate datasets, with high resolution and no apparent clinical diseases, and difficult ones, with low resolution, major calcifications, stents, or poor contrast between the vessel and surrounding tissue, were included. The presented results are promising: in 75.7% of the cases the bifurcation was labeled correctly, and in 82.7% the common artery and one of its branches were assigned correctly. The computation time was on average 0.49 s +/- 0.28 s, close to human interaction time, which makes the algorithm applicable for time-critical applications.

  9. Calculation of Computational Complexity for Radix-2^p Fast Fourier Transform Algorithms for Medical Signals.

    PubMed

    Amirfattahi, Rassoul

    2013-10-01

    Owing to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2^p algorithms have the same order of computational complexity as higher-radix algorithms, but still retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, we propose in this paper a method for the exact calculation of the multiplicative complexity of radix-2^p algorithms. The methodology is described for the radix-2, radix-2^2, and radix-2^3 algorithms. Results show that radix-2^2 and radix-2^3 have significantly less computational complexity than radix-2. Another interesting result is that while the number of complex multiplications in the radix-2^3 algorithm is slightly higher than in radix-2^2, the number of real multiplications for radix-2^3 is lower than for radix-2^2. This is because twiddle factors of the form W_8, which need a smaller number of real multiplications, are more frequent in the radix-2^3 algorithm.
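
    For orientation, the familiar textbook counts already show the trend the paper quantifies exactly (these approximations are our own illustration, not the paper's template-based counts):

        M_{\text{radix-}2}(N) \approx \frac{N}{2}\log_2 N,
        \qquad
        M_{\text{radix-}2^2}(N) \approx M_{\text{radix-}4}(N) \approx \frac{3N}{8}\log_2 N

    For N = 1024 these give roughly 5120 versus 3840 complex multiplications, a 25% saving, while the radix-2^2 factorization keeps the simple radix-2 butterfly.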

  10. Parallel algorithms and architectures for very fast AI search

    SciTech Connect

    Gu, J.

    1989-01-01

    A wide range of problems in natural and artificial intelligence, computer vision, computer graphics, database engineering, operations research, symbolic logic, robot manipulation, and hardware design automation are special cases of Consistent Labeling Problems (CLP). CLP has long been viewed as an efficient computational model based on a unit constraint relation containing 2N-tuples of units and labels, which specifies which N-tuples of labels are compatible with which N-tuples of units. Due to high computation cost and design complexity, most of the currently best-known algorithms and computer architectures have proven infeasible for solving consistent labeling problems, and the efficiency of CLP computation has been improved only a few times during the last decade. This research presents several parallel algorithms and computer architectures for solving CLP within a parallel processing framework. For problems of practical interest, efficiency improvements of 4 to 10 orders of magnitude can easily be reached. Several simple wafer-scale computer architectures are given that implement these parallel algorithms at a surprisingly low cost.

  11. Fast time-reversible algorithms for molecular dynamics of rigid-body systems

    NASA Astrophysics Data System (ADS)

    Kajima, Yasuhiro; Hiyama, Miyabi; Ogata, Shuji; Kobayashi, Ryo; Tamura, Tomoyuki

    2012-06-01

    In this paper, we present time-reversible simulation algorithms for rigid bodies in the quaternion representation. By advancing a time-reversible algorithm [Y. Kajima, M. Hiyama, S. Ogata, and T. Tamura, J. Phys. Soc. Jpn. 80, 114002 (2011), 10.1143/JPSJ.80.114002] that requires iterations in calculating the angular velocity at each time step, we propose two kinds of iteration-free fast time-reversible algorithms. They are easily implemented in code. The codes are compared with those of existing algorithms through a demonstrative simulation of a nanometer-sized water droplet to assess their total-energy stability and computation speed.

  12. Fast time-reversible algorithms for molecular dynamics of rigid-body systems.

    PubMed

    Kajima, Yasuhiro; Hiyama, Miyabi; Ogata, Shuji; Kobayashi, Ryo; Tamura, Tomoyuki

    2012-06-21

    In this paper, we present time-reversible simulation algorithms for rigid bodies in the quaternion representation. By advancing a time-reversible algorithm [Y. Kajima, M. Hiyama, S. Ogata, and T. Tamura, J. Phys. Soc. Jpn. 80, 114002 (2011)] that requires iterations in calculating the angular velocity at each time step, we propose two kinds of iteration-free fast time-reversible algorithms. They are easily implemented in code. The codes are compared with those of existing algorithms through a demonstrative simulation of a nanometer-sized water droplet to assess their total-energy stability and computation speed. PMID:22779579

  13. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for the detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, to compute simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. This effective detection shows that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate for detecting fecal contamination on high-speed apple processing lines.

  14. Fast algorithms for classical X-->0 diffusion-reaction processes.

    PubMed

    Thalmann, Fabrice; Lee, Nam-Kyung

    2009-02-21

    The Doi formalism treats a reaction-diffusion process as a quantum many-body problem. We use this second-quantized formulation as a starting point to derive a numerical scheme for simulating X-->0 reaction-diffusion processes, following a well-established time discretization procedure. In the case of a reaction zone localized in configuration space, this formulation also provides a systematic way of designing an optimized, multiple-time-step algorithm that spends most of the computation time sampling the configurations where the reaction is likely to occur.

  15. A fast algorithm for the simulation of arterial pulse waves

    NASA Astrophysics Data System (ADS)

    Du, Tao; Hu, Dan; Cai, David

    2016-06-01

    One-dimensional models have been widely used in studies of the propagation of blood pulse waves in large arterial trees. Under the periodic driving of the heartbeat, traditional numerical methods, such as the Lax-Wendroff method, are employed to obtain asymptotically periodic solutions at large times. However, these methods are severely constrained by the CFL condition due to the large pulse wave speed. In this work, we develop a new numerical algorithm to overcome this constraint. First, we reformulate the model system of pulse wave propagation using a set of Riemann variables and derive a new form of the boundary conditions at the inlet, the outlets, and the bifurcation points of the arterial tree. The new form of the boundary conditions enables us to design a convergent iterative method to enforce them. Then, after exchanging the spatial and temporal coordinates of the model system, we apply the Lax-Wendroff method in the exchanged coordinate system, which turns the large pulse wave speed from a liability into a benefit, to solve the wave equation in each artery of the model arterial system. Our numerical studies show that the new algorithm is stable and performs ∼15 times faster than the traditional implementation of the Lax-Wendroff method under the requirement that the relative numerical error of blood pressure be smaller than one percent, which is much smaller than the modeling error.

  16. Fast algorithms for glassy materials: methods and explorations

    NASA Astrophysics Data System (ADS)

    Middleton, A. Alan

    2014-03-01

    Glassy materials with frozen disorder, including random magnets such as spin glasses and interfaces in disordered materials, exhibit striking non-equilibrium behavior such as the ability to store a history of external parameters (memory). Precisely due to their glassy nature, direct simulation of models of these materials is very slow. In some fortunate cases, however, algorithms exist that exactly compute thermodynamic quantities. Such cases include spin glasses in two dimensions and interfaces and random field magnets in arbitrary dimensions at zero temperature. Using algorithms built using ideas developed by computer scientists and mathematicians, one can even directly sample equilibrium configurations in very large systems, as if one picked the configurations out of a "hat" of all configurations weighted by their Boltzmann factors. This talk will provide some of the background for these methods and discuss the connections between physics and computer science, as used by a number of groups. Recent applications of these methods to investigating phase transitions in glassy materials and to answering qualitative questions about the free energy landscape and memory effects will be discussed. This work was supported in part by NSF grant DMR-1006731. Creighton Thomas and David Huse also contributed to much of the work to be presented.

  17. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams, Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long-term behavior of such processes is only tractable for very simple types of stochastic processes, such as Markovian processes. However, in real-world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(√N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  18. Compiling fast partial derivatives of functions given by algorithms

    SciTech Connect

    Speelpenning, B.

    1980-01-01

    If the gradient of the function y = f(x_1, ..., x_n) is desired, where f is given by an algorithm Af(x, n, y), most numerical analysts will use numerical differencing. This is a sampling scheme that approximates derivatives by the slope of secants through closely spaced points. Symbolic methods that make full use of the program text of Af should be able to come up with a better way to evaluate the gradient of f. The system Jake described here produces gradients significantly faster than numerical differencing. Jake can handle algorithms Af with arbitrary flow of control. Measurements performed on one particular machine suggest that Jake is faster than numerical differencing for n > 8. Somewhat weaker results were obtained for the problem of computing Jacobians of arbitrary shape.
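
    Jake is an early instance of what is now called reverse-mode automatic differentiation: one recorded forward evaluation plus one backward sweep yields all n partial derivatives, instead of the n+1 function evaluations one-sided differencing needs. A toy Python sketch of the idea (class and function names are ours, and only addition and multiplication are instrumented):

        class Var:
            """Node of the recorded computation graph."""
            def __init__(self, value, parents=()):
                self.value, self.parents, self.grad = value, parents, 0.0
            def __add__(self, other):
                return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
            def __mul__(self, other):
                return Var(self.value * other.value,
                           [(self, other.value), (other, self.value)])

        def gradient(y):
            """One backward sweep in reverse topological order."""
            order, seen = [], set()
            def visit(node):
                if id(node) not in seen:
                    seen.add(id(node))
                    for parent, _ in node.parents:
                        visit(parent)
                    order.append(node)
            visit(y)
            y.grad = 1.0
            for node in reversed(order):
                for parent, local in node.parents:
                    parent.grad += local * node.grad

        xs = [Var(2.0), Var(3.0), Var(4.0)]
        y = xs[0] * xs[1] * xs[2]
        gradient(y)
        print([x.grad for x in xs])   # [12.0, 8.0, 6.0]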

  19. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) have to be handled in main memory, and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread-node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore, we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases; larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.

  20. Fractional-Fourier-transform calculation through the fast-Fourier-transform algorithm

    NASA Astrophysics Data System (ADS)

    García, Javier; Mas, David; Dorsch, Rainer G.

    1996-12-01

    A method for the calculation of the fractional Fourier transform (FRT) by means of the fast Fourier transform (FFT) algorithm is presented. The process involves mainly two FFTs in cascade; thus the process has the same complexity as the FFT algorithm. The method is valid for fractional orders varying from -1 to 1. Scaling factors for the FRT and Fresnel diffraction when calculated through the FFT are discussed.
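
    The factorization that makes an FFT-based implementation possible is the standard chirp decomposition of the FRT kernel: with \varphi = a\pi/2, the order-a transform is a chirp multiplication, a scaled Fourier transform, and a second chirp multiplication (our transcription of the textbook identity, not the paper's exact discretization):

        f_a(u) = A_\varphi \, e^{i\pi u^2 \cot\varphi}
                 \int f(x)\, e^{i\pi x^2 \cot\varphi}\,
                 e^{-2\pi i\, u x \csc\varphi}\, dx

    Discretizing the remaining integral is what the cascaded FFTs accomplish, and the csc(φ) factor in the Fourier kernel is the origin of the scaling factors the paper analyzes.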

  1. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp_tensor and tucker_tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
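
    Mode-n matricization, one of the operations the classes expose, is easy to mirror in any array language: the mode-n fibers of the tensor become the columns of a matrix. A hedged NumPy sketch (the ordering of the remaining axes follows one common convention; the report's MATLAB classes fix their own, which may differ):

        import numpy as np

        def unfold(tensor, mode):
            """Mode-n matricization: result has shape (I_n, product of the rest)."""
            return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

        def fold(matrix, mode, shape):
            """Inverse of unfold for a tensor of the given full shape."""
            rest = [s for i, s in enumerate(shape) if i != mode]
            return np.moveaxis(matrix.reshape([shape[mode]] + rest), 0, mode)

        X = np.arange(24).reshape(2, 3, 4)
        assert np.array_equal(fold(unfold(X, 1), 1, X.shape), X)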

  2. Review of alignment and SNP calling algorithms for next-generation sequencing data.

    PubMed

    Mielczarek, M; Szyda, J

    2016-02-01

    Application of massively parallel sequencing technology has become one of the most important issues in the life sciences. It was therefore crucial to develop bioinformatics tools for next-generation sequencing (NGS) data processing. Currently, two of the most significant tasks are alignment to a reference genome and detection of single nucleotide polymorphisms (SNPs). In many types of genomic analyses, great numbers of reads need to be mapped to the reference genome, so selection of the aligner is an essential step in NGS pipelines. Two main classes of algorithms, based on suffix tries and on hash tables, have been introduced for this purpose. Suffix-array-based aligners are memory-efficient and work faster than hash-based aligners, but they are less accurate. In contrast, hash table algorithms tend to be slower but more sensitive. SNP and genotype callers may also be divided into two main approaches: heuristic and probabilistic methods. A variety of software tools has been developed over the past several years. In this paper, we briefly review the current development of NGS data processing algorithms and present the available software.

  3. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  4. Comparing precorrected-FFT and fast multipole algorithms for solving three-dimensional potential integral equations

    SciTech Connect

    White, J.; Phillips, J.R.; Korsmeyer, T.

    1994-12-31

    Mixed first- and second-kind surface integral equations with 1/r and ∂(1/r)/∂n kernels are generated by a variety of three-dimensional engineering problems. For such problems, Nystrom-type algorithms cannot be used directly; instead, an expansion for the unknown, rather than for the entire integrand, can be assumed, and the product of the singular kernel and the unknown integrated analytically. Combining such an approach with a Galerkin or collocation scheme for computing the expansion coefficients is a general approach, but it generates dense matrix problems. Recently developed fast algorithms for solving these dense matrix problems have been based on multipole-accelerated iterative methods, in which the fast multipole algorithm is used to rapidly compute the matrix-vector products in a Krylov-subspace-based iterative method. Another approach to rapidly computing the dense matrix-vector products associated with discretized integral equations follows more along the lines of a multigrid algorithm, and involves projecting the surface unknowns onto a regular grid, computing on the grid, and finally interpolating the results from the regular grid back to the surfaces. Here, the authors describe a precorrected-FFT approach that can replace the fast multipole algorithm for accelerating the dense matrix-vector product associated with discretized potential integral equations. The precorrected-FFT method, described below, is an O(n log n) algorithm and is asymptotically slower than the O(n) fast multipole algorithm. However, initial experimental results indicate the method may have a significant constant-factor advantage for a variety of engineering problems.

  5. Preliminary versions of the MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.

  6. Globally convergent algorithms for estimating generalized gamma distributions in fast signal and image processing.

    PubMed

    Song, Kai-Sheng

    2008-08-01

    Many applications in real-time signal, image, and video processing require automatic algorithms for the rapid characterization of signals and images through fast estimation of their underlying statistical distributions. We present fast and globally convergent algorithms for estimating the three-parameter generalized gamma distribution (GΓD). The proposed method is based on novel scale-independent shape estimation (SISE) equations. We show that the SISE equations have a unique global root in their semi-infinite domains and that the probability that the sample SISE equations have a unique global root tends to one. The consistency of the global root and of the resulting scale and index-shape estimators is obtained. Furthermore, we establish that, with probability tending to one, Newton-Raphson (NR) algorithms for solving the sample SISE equations converge globally to the unique root from any initial value in the given domain. In contrast to existing methods, another remarkable novelty is that the sample SISE equations are completely independent of gamma and polygamma functions and involve only elementary mathematical operations, making the algorithms well suited for real-time hardware and software implementations. The SISE estimators also allow the maximum likelihood (ML) ratio procedure to be carried out for testing the generalized Gaussian distribution (GGD) against the GΓD. Finally, the fast global convergence and accuracy of our algorithms for finite samples are demonstrated by both simulation studies and real image analysis.

  7. a Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen

    2016-06-01

    Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. Extracting the various kinds of road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most roads exhibit an elevation difference near their edges, and paved road edges lie in two different planes. In our algorithm, we first extract a rough plane using the RANSAC algorithm, and then multiple refined planes containing only pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, the rough and refined planes are often extracted poorly due to rough road surfaces and varying point cloud density. To eliminate the influence of rough roads, a technique similar to differencing a DSM (digital surface model) and a DTM (digital terrain model) is used, and we also propose a method that adjusts the point clouds to a similar density. Experiments show the validity of the proposed method on multiple datasets (e.g., urban roads, highways, and some rural roads). We use the same parameters throughout the experiments, and our algorithm achieves real-time processing speeds.

  8. A fast inter mode decision algorithm in H.264/AVC for IPTV broadcasting services

    NASA Astrophysics Data System (ADS)

    Kim, Geun-Yong; Yoon, Bin-Yeong; Ho, Yo-Sung

    2007-01-01

    The new video coding standard H.264/AVC employs the rate-distortion optimization (RDO) method for choosing the best coding mode. However, since RDO increases the encoder complexity tremendously, it is not suitable for real-time applications such as IPTV broadcasting services; a fast mode decision algorithm is therefore needed to reduce the encoding time. In this paper, we propose a fast mode decision algorithm that takes the quantization parameter (QP) into account, motivated by our observation that the frequency of the best modes depends on the QP. To exploit these characteristics, we use the coded block pattern (CBP), which has the value "0" when all quantized discrete cosine transform (DCT) coefficients are zero. We also use both early SKIP mode and early 16x16 mode decisions. Experimental results show that the proposed algorithm reduces the encoding time by 74.6% for the baseline profile and 72.8% for the main profile, compared to the H.264/AVC reference software.

  9. Fast and precise algorithms for calculating offset correction in single photon counting ASICs built in deep sub-micron technologies

    NASA Astrophysics Data System (ADS)

    Maj, P.

    2014-07-01

    An important trend in the design of readout electronics working in the single photon counting (SPC) mode for hybrid pixel detectors is to minimize the single-pixel area without sacrificing functionality. This is the reason why many digital and analog blocks are made with the smallest, or next-to-smallest, transistors possible. This causes a matching problem across the whole pixel matrix, which is accepted by designers but must, of course, be corrected with dedicated circuitry, which, by the same rule of minimizing devices, itself suffers from mismatch. Therefore, the output of such a correction circuit, controlled by an ultra-small-area DAC, is not only a non-linear function but is also often non-monotonic. As long as it can be used for proper correction of the DC operating points inside each pixel, this is acceptable, but the time required for correction plays an important role in both chip verification and the design of a big, multi-chip system. We therefore present two algorithms: a precise one and a fast one. The first algorithm is based on the noise-hit profiles obtained during so-called threshold scan procedures. The fast correction procedure is based on a scan of the trim DACs and takes less than a minute in SPC detector systems consisting of several thousand pixels.

  10. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    PubMed

    Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei

    2016-02-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. The Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or inefficient for high-dimensional data that include tens of thousands of variables. In this study, we propose an efficient algorithm to implement the estimation of a GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implementation of the Ren et al. estimator, without losing any accuracy. We then apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical models and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM". PMID:26872036

  11. FctClus: A Fast Clustering Algorithm for Heterogeneous Information Networks.

    PubMed

    Yang, Jing; Chen, Limin; Zhang, Jianpei

    2015-01-01

    It is important to cluster heterogeneous information networks. A fast clustering algorithm based on an approximate commute time embedding for heterogeneous information networks with a star network schema is proposed in this paper, exploiting the sparsity of such networks. First, a heterogeneous information network is transformed into multiple compatible bipartite graphs from the compatible point of view. Second, the approximate commute time embedding of each bipartite graph is computed using random mapping and a linear-time solver. All of the indicator subsets in each embedding simultaneously determine the target dataset. Finally, a general model is formulated from these indicator subsets, and a fast algorithm is derived by simultaneously clustering all of the indicator subsets using the sum of the weighted distances over all indicators for an identical target object. The proposed fast algorithm, FctClus, is shown to be efficient and generalizable and exhibits high clustering accuracy and fast computation speed based on theoretical analysis and experimental verification. PMID:26090857

  13. A fast and accurate frequency estimation algorithm for sinusoidal signal with harmonic components

    NASA Astrophysics Data System (ADS)

    Hu, Jinghua; Pan, Mengchun; Zeng, Zhidun; Hu, Jiafei; Chen, Dixiang; Tian, Wugang; Zhao, Jianqiang; Du, Qingfa

    2016-10-01

    Frequency estimation is a fundamental problem in many applications, such as traditional vibration measurement, power system supervision, and microelectromechanical system sensor control. In this paper, a fast and accurate frequency estimation algorithm is proposed to address the low efficiency of traditional methods. The proposed algorithm consists of coarse and fine frequency estimation steps, and we demonstrate that applying a modified zero-crossing technique for coarse frequency estimation (locating the peak of the FFT amplitude spectrum) is more efficient than conventional searching methods. Thus, the proposed estimation algorithm requires fewer hardware and software resources and achieves even higher efficiency as the amount of experimental data increases. Experimental results with a modulated magnetic signal show that the root mean square error of frequency estimation is below 0.032 Hz with the proposed algorithm, which has lower computational complexity and better global performance than conventional frequency estimation methods.
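
    A minimal sketch of the coarse-plus-fine idea follows: the FFT amplitude peak gives a coarse estimate, and zero crossings over a known time span refine it. The signal, rates, and the simple crossing count are illustrative assumptions; the paper's modified zero-crossing technique is more elaborate and handles strong harmonic components explicitly (e.g., by band-limiting around the fundamental first).

        # Sketch: coarse FFT-peak estimate, then zero-crossing refinement.
        import numpy as np

        fs = 1000.0                                  # sampling rate (Hz), assumed
        t = np.arange(0, 2.0, 1 / fs)
        x = np.sin(2 * np.pi * 50.3 * t) + 0.2 * np.sin(2 * np.pi * 150.9 * t)

        # Coarse step: peak of the FFT amplitude spectrum (skip the DC bin).
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        f_coarse = freqs[np.argmax(spec[1:]) + 1]

        # Fine step: half-periods between the first and last zero crossing.
        idx = np.nonzero(np.diff(np.signbit(x)))[0]  # samples just before a crossing
        n_half = len(idx) - 1
        span = (idx[-1] - idx[0]) / fs
        f_fine = n_half / (2 * span)
        print(f_coarse, f_fine)                      # both near 50.3 Hz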

  14. Fast phase-added stereogram algorithm for generation of photorealistic 3D content.

    PubMed

    Kang, Hoonjong; Stoykova, Elena; Yoshikawa, Hiroshi

    2016-01-20

    A new phase-added stereogram algorithm for accelerated computation of holograms from a point cloud model is proposed. The algorithm relies on hologram segmentation, sampling of directional information, and usage of the fast Fourier transform with a finer grid in the spatial frequency domain than is provided by the segment size. The algorithm gives improved reconstruction quality due to a new phase compensation introduced in the segment fringe patterns. The result is finer beam steering, leading to high peak intensity and a large peak signal-to-noise ratio in reconstruction. The feasibility of the algorithm is verified by the generation of 3D content for a color wavefront printer. PMID:26835945

  15. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    PubMed Central

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
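
    The sketch below shows one way to realize a temporal Gillespie step for SIS spreading on a sequence of network snapshots: a unit-rate exponential waiting time is drawn once and "spent" against the integrated event rate across snapshots, so no rejection sampling is needed. The snapshot data model and the rate parameters are assumptions for illustration; the authors' released code is in C++.

        # Sketch of temporal Gillespie SIS on snapshots of (edge list, duration).
        import random, math

        beta, mu = 0.5, 0.1                     # infection / recovery rates (assumed)

        def sis_temporal_gillespie(snapshots, infected):
            tau = -math.log(random.random())    # unit-rate exponential waiting "time"
            acc = 0.0                           # integrated event rate so far
            for edges, dt in snapshots:
                t_left = dt
                while True:
                    contacts = [(u, v) for u, v in edges
                                if (u in infected) != (v in infected)]
                    rate = beta * len(contacts) + mu * len(infected)
                    if acc + rate * t_left < tau:    # no event before snapshot ends
                        acc += rate * t_left
                        break
                    t_left -= (tau - acc) / rate     # event fires inside this snapshot
                    if random.random() < beta * len(contacts) / rate:
                        u, v = random.choice(contacts)
                        infected.add(v if u in infected else u)   # infection event
                    else:
                        infected.discard(random.choice(list(infected)))  # recovery
                    tau = -math.log(random.random())              # next waiting time
                    acc = 0.0
            return infected

        snaps = [([(0, 1), (1, 2)], 1.0), ([(2, 3), (0, 3)], 1.0)] * 50
        print(sis_temporal_gillespie(snaps, {0}))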

  16. A fast and high performance multiple data integration algorithm for identifying human disease genes

    PubMed Central

    2015-01-01

    Background Integrating multiple data sources is indispensable for improving disease gene identification. This is not only because disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time still need to be improved. Results In this study, we propose a fast and high performance multiple data integration algorithm for identifying human disease genes. A posterior probability of each candidate gene associated with individual diseases is calculated by using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores, but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors. The average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment takes only about 12.54 seconds. It is better than many existing algorithms. PMID:26399620

  17. A fast parallel algorithm for determining all roots of a polynomial with real roots

    SciTech Connect

    Benor, M.; Feig, E.; Kozen, D.; Tiwari, P.

    1988-12-01

    Given a polynomial ρ(z) of degree n with m-bit integer coefficients and an integer μ, the problem of determining all its roots with error less than 2^(-μ) is considered. It is shown that this problem is in the class NC if ρ(z) has all real roots. Some very interesting properties of a Sturm sequence of a polynomial with distinct real roots are proved and used in the design of a fast parallel algorithm for this problem. Using Newton identities and a novel numerical integration scheme for evaluating a contour integral to high precision, this algorithm determines good approximations to the linear factors of ρ(z).

  18. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    SciTech Connect

    Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-09-15

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: a pancreas case, an esophagus case, and a case with a tumor along the rib cage. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  19. Rolling element bearing fault diagnosis based on the combination of genetic algorithms and fast kurtogram

    NASA Astrophysics Data System (ADS)

    Zhang, Yongxiang; Randall, R. B.

    2009-07-01

    The rolling element bearing is a key part in many mechanical facilities and the diagnosis of its faults is very important in the field of predictive maintenance. To date, the resonant demodulation technique (envelope analysis) has been widely exploited in practice. However, much practical diagnostic equipment for carrying out the analysis gives little flexibility to change the analysis parameters for different working conditions, such as variation in rotating speed and different fault types. Because the signals from a flawed bearing are non-stationary, cover a wide frequency range, and are weak in strength, it can be very difficult to choose the best analysis parameters for diagnosis. However, the kurtosis of the vibration signals of a bearing differs between normal and bad condition, and is robust in varying conditions. The fast kurtogram gives rough analysis parameters very efficiently, but the filter centre frequency and bandwidth cannot be chosen entirely independently. Genetic algorithms have a strong ability for optimization, but are slow unless the initial parameters are close to optimal. Therefore, the authors present a model and algorithm to design the parameters for optimal resonance demodulation using the combination of the fast kurtogram for initial estimates and a genetic algorithm for final optimization. The feasibility and effectiveness of the proposed method are demonstrated by experiment, and it gives better results than the classical method of arbitrarily choosing a resonance to demodulate. The method gives more flexibility in choosing optimal parameters than the fast kurtogram alone.

  20. Fast inhomogeneous plane wave algorithm for the analysis of electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Hu, Bin; Chew, Weng Cho; Velamparambil, Sanjay

    2001-01-01

    The fast inhomogeneous plane wave algorithm has been developed to accelerate the solution of three-dimensional electromagnetic scattering problems in free space. By expanding the kernel of the Green's function using the Weyl identity and choosing a proper steepest descent path, the diagonalization of the translation matrix is achieved after the interpolation and extrapolation techniques are applied. The proposed algorithm is implemented on top of the scalable multipole engine, a portable implementation of the dynamic multilevel fast multipole algorithm for distributed-memory computers. The computational time per matrix vector multiplication is reduced to O(NlogN) and the memory requirement is reduced to O(N), where N is the number of unknowns in the discretized integral equation. The algorithm is validated by applying it to the solution of electromagnetic scattering from perfect electric conducting scatterers. This approach can be easily extended to more general problems with complicated Green's functions expressed in terms of plane wave spectral integrals, such as the ones encountered in multilayered medium studies.

  1. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.

    PubMed

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-03-09

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern. Then, it traces the next contour pixel using the previous pixel's type. Therefore, it can classify the type of contour pixels as straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms.
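
    For readers unfamiliar with pixel-following tracers, the sketch below implements a classic Moore-neighbour contour trace on a binary image. It is a generic baseline under simple assumptions (single 8-connected object, simple stopping rule), not the proposed algorithm, which additionally classifies each contour pixel as straight line, inner corner, outer corner, or inner-outer corner to shorten the neighbour search and to support compression.

        # Sketch: Moore-neighbour contour tracing on a 0/1 image.
        import numpy as np

        # 8-neighbourhood in clockwise order, starting from "west".
        DIRS = [(0, -1), (-1, -1), (-1, 0), (-1, 1),
                (0, 1), (1, 1), (1, 0), (1, -1)]

        def trace_contour(img):
            """Return the outer contour of the first object found in the image."""
            rs, cs = np.nonzero(img)
            start = (rs[0], cs[0])              # topmost-leftmost object pixel
            contour, cur, d = [start], start, 0
            while True:
                for k in range(8):              # probe neighbours clockwise,
                    nd = (d + k) % 8            # starting near the last heading
                    r = cur[0] + DIRS[nd][0]
                    c = cur[1] + DIRS[nd][1]
                    if 0 <= r < img.shape[0] and 0 <= c < img.shape[1] and img[r, c]:
                        cur, d = (r, c), (nd + 5) % 8   # back up the search heading
                        break
                else:
                    return contour              # isolated pixel
                if cur == start:
                    return contour
                contour.append(cur)

        img = np.zeros((6, 6), int)
        img[2:5, 2:5] = 1                       # a 3x3 square object
        print(trace_contour(img))               # its 8 boundary pixels, clockwise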

  2. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.

    PubMed

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-01-01

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern. Then, it traces the next contour pixel using the previous pixel's type. Therefore, it can classify the type of contour pixels as straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms. PMID:27005632

  3. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors

    PubMed Central

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-01-01

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of each contour pixel based on its local pattern. Then, it traces the next contour pixel using the previous pixel’s type. Therefore, it can classify the type of contour pixels as straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of compressing the data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms. PMID:27005632

  4. A fast approximate nearest neighbor search algorithm in the Hamming space.

    PubMed

    Esmaeili, Mani Malek; Ward, Rabab Kreidieh; Fatourechi, Mehrdad

    2012-12-01

    A fast approximate nearest neighbor search algorithm for the (binary) Hamming space is proposed. The proposed Error Weighted Hashing (EWH) algorithm is up to 20 times faster than the popular locality sensitive hashing (LSH) algorithm and works well even for large nearest neighbor distances where LSH fails. EWH significantly reduces the number of candidate nearest neighbors by weighting them based on the difference between their hash vectors. EWH can be used for multimedia retrieval and copy detection systems that are based on binary fingerprinting. On a fingerprint database with more than 1,000 videos, for a specific detection accuracy, we demonstrate that EWH is more than 10 times faster than LSH. For the same retrieval time, we show that EWH has a significantly better detection accuracy, with a 15 times lower error rate.
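
    The following sketch illustrates the general mechanism of hash-based candidate filtering in the Hamming space, with candidates weighted by how many of their hash values agree with the query rather than requiring a single exact match. All parameters (64-bit fingerprints, 20 tables of 8 sampled bits, a vote threshold of 3) are invented for the example and do not come from the EWH paper.

        # Sketch: weighted hash-vote filtering before exact Hamming comparison.
        import random
        from collections import Counter

        B, k, L = 64, 8, 20                     # fingerprint bits, bits/hash, tables
        masks = [random.sample(range(B), k) for _ in range(L)]

        def hashes(x):
            return [tuple((x >> b) & 1 for b in m) for m in masks]

        def build(db):
            tables = [dict() for _ in range(L)]
            for ident, x in db.items():
                for t, h in zip(tables, hashes(x)):
                    t.setdefault(h, []).append(ident)
            return tables

        def query(tables, db, q, min_votes=3):
            votes = Counter()
            for t, h in zip(tables, hashes(q)):
                for ident in t.get(h, []):
                    votes[ident] += 1           # weight = number of agreeing hashes
            cands = [i for i, v in votes.items() if v >= min_votes]
            # Exact Hamming distance only on the surviving candidates.
            return min(cands, key=lambda i: bin(db[i] ^ q).count("1"), default=None)

        db = {i: random.getrandbits(B) for i in range(1000)}
        tables = build(db)
        target = db[42] ^ (1 << 3) ^ (1 << 40)  # a fingerprint 2 bits away from #42
        print(query(tables, db, target))        # very likely 42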

  5. A fast, robust algorithm for power line interference cancellation in neural recording

    NASA Astrophysics Data System (ADS)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. The proposed
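
    A much-simplified sketch of the subtraction idea follows: fit the amplitude and phase of each power line harmonic by least squares on a data block, then subtract the reconstruction. The published algorithm instead tracks the fundamental with an adaptive notch filter and updates each harmonic with a modified recursive least squares; the block-wise ordinary least-squares fit and all signal parameters below are illustrative assumptions.

        # Sketch: remove mains harmonics by least-squares fit and subtraction.
        import numpy as np

        fs, f0, n_harm = 1000.0, 50.0, 3          # sampling rate, mains freq, harmonics
        t = np.arange(0, 1.0, 1 / fs)
        neural = 0.1 * np.random.randn(t.size)    # stand-in for the neural signal
        pli = 1.0 * np.sin(2 * np.pi * 50 * t + 0.3) + 0.4 * np.sin(2 * np.pi * 150 * t)
        x = neural + pli

        # Design matrix of sines/cosines at the fundamental and its harmonics;
        # the fitted coefficients encode each harmonic's amplitude and phase.
        cols = []
        for h in range(1, n_harm + 1):
            cols += [np.sin(2 * np.pi * h * f0 * t), np.cos(2 * np.pi * h * f0 * t)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        cleaned = x - A @ coef

        print(np.std(x - neural), np.std(cleaned - neural))  # residual PLI power drops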

  6. Fast randomized Hough transformation track initiation algorithm based on multi-scale clustering

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Chen, Qian; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    A fast randomized Hough transformation track initiation algorithm based on multi-scale clustering is proposed to overcome problems of traditional infrared search and track systems (IRST), which cannot provide movement information of the initial target or select the threshold value of correlation automatically with a two-dimensional track association algorithm based on bearing-only information. Throughout this new algorithm, the movements of all targets are presumed to be uniform rectilinear motion. Concepts of space random sampling, a parameter space dynamic linking table, and convergent mapping of the image to parameter space are developed on the basis of the fast randomized Hough transformation. Because of the phenomenon of peak value clustering, due to the shortcomings of threshold-based peak detection, accuracy can only be ensured when the parameter space has an obvious peak value, so a multi-scale idea is added to the above-mentioned algorithm. First, a primary association is conducted to select several alternative tracks with a low threshold. Then, the alternative tracks are processed by multi-scale clustering methods, through which accurate numbers and parameters of tracks are determined automatically by transforming the scale parameters. The first three frames are processed by this algorithm to obtain the first three targets of the track, and then two slightly different gate radii are worked out, the mean value of which is used as the global threshold value of correlation. Moreover, a new model for curvilinear equation correction is applied to the above-mentioned track initiation algorithm to solve the problem of shape distortion when a three-dimensional space curve is mapped to a two-dimensional bearing-only space. Using sideways flight, launch, and landing as examples to build models and simulate, the application of the proposed approach in simulation proves its effectiveness, accuracy, and adaptivity.
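
    The randomized Hough ingredient can be sketched in a few lines: sample point pairs, map each pair to line parameters, and accumulate votes in a dynamic table whose strongest cells are track candidates. The quantization steps below are a crude stand-in for the paper's parameter space dynamic linking table and multi-scale clustering.

        # Sketch: randomized Hough voting for straight-line track candidates.
        import random
        from collections import Counter

        def randomized_hough(points, n_iter=2000, qa=0.05, qb=0.25):
            votes = Counter()
            for _ in range(n_iter):
                (x1, y1), (x2, y2) = random.sample(points, 2)
                if x1 == x2:
                    continue                   # vertical pair; skipped in this sketch
                a = (y2 - y1) / (x2 - x1)      # slope
                b = y1 - a * x1                # intercept
                votes[(round(a / qa) * qa, round(b / qb) * qb)] += 1
            return votes.most_common(3)

        track = [(i, 0.5 * i + 2 + random.gauss(0, 0.01)) for i in range(20)]
        noise = [(random.uniform(0, 20), random.uniform(0, 12)) for _ in range(40)]
        print(randomized_hough(track + noise))  # strongest cell near a=0.5, b=2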

  7. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks

    PubMed Central

    Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L.; Sweet, Robert A.; Wang, Jieru; Chen, Wei

    2016-01-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interactions or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain a p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the existing implementation of the Ren et al. method, without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer’s disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions, and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing a Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named “FastGGM”. PMID:26872036

  8. Fast NJ-like algorithms to deal with incomplete distance matrices

    PubMed Central

    Criscuolo, Alexis; Gascuel, Olivier

    2008-01-01

    Background Distance-based phylogeny inference methods first estimate evolutionary distances between every pair of taxa, then build a tree from the so-obtained distance matrix. These methods are fast and fairly accurate. However, they hardly deal with incomplete distance matrices. Such matrices are frequent with recent multi-gene studies, when two species do not share any gene in the analyzed data. The few existing algorithms that infer trees with satisfying accuracy from incomplete distance matrices have time complexity in O(n^4) or more, where n is the number of taxa, which precludes large scale studies. Agglomerative distance algorithms (e.g. NJ [1,2]) are much faster, with time complexity in O(n^3), which allows huge datasets and heavy bootstrap analyses to be dealt with. These algorithms proceed in three steps: (a) search for the taxon pair to be agglomerated, (b) estimate the lengths of the two so-created branches, (c) reduce the distance matrix and return to (a) until the tree is fully resolved. But available agglomerative algorithms cannot deal with incomplete matrices. Results We propose an adaptation to incomplete matrices of three agglomerative algorithms, namely NJ, BIONJ [3] and MVR [4]. Our adaptation generalizes to incomplete matrices the taxon pair selection criterion of NJ (also used by BIONJ and MVR), and combines this generalized criterion with that of ADDTREE [5]. Steps (b) and (c) are also modified, but O(n^3) time complexity is kept. The performance of these new algorithms is studied with large scale simulations, which mimic multi-gene phylogenomic datasets. Our new algorithms – named NJ*, BIONJ* and MVR* – infer phylogenetic trees that are at least as accurate as those inferred by other available methods, but with much faster running times. MVR* presents the best overall performance. This algorithm accounts for the variance of the pairwise evolutionary distance estimates, and is well suited for multi-gene studies where some distances are accurately

  9. Statistical iterative reconstruction using fast optimization transfer algorithm with successively increasing factor in Digital Breast Tomosynthesis

    NASA Astrophysics Data System (ADS)

    Xu, Shiyu; Zhang, Zhenxi; Chen, Ying

    2014-03-01

    Statistical iterative reconstruction is particularly promising since it provides the flexibility of accurate physical noise modeling and geometric system description in transmission tomography systems. However, solving the objective function is computationally intensive compared to analytical reconstruction methods, due to the multiple iterations needed for convergence, with each iteration involving forward/back-projections through a complex geometric system model. Optimization transfer (OT) is a general algorithm converting a high dimensional optimization to a parallel 1-D update. OT-based algorithms provide monotonic convergence and a parallel computing framework, but a slower convergence rate, especially near the global optimum. Based on an indirect estimation of the spectrum of the OT convergence rate matrix, we propose a successively increasing factor-scaled optimization transfer algorithm that seeks an optimal step size for a faster rate. Compared to a representative OT-based method such as the separable parabolic surrogate with pre-computed curvature (PC-SPS), our algorithm provides comparable image quality (IQ) with fewer iterations. Each iteration retains a computational cost similar to that of PC-SPS. An initial experiment with a simulated Digital Breast Tomosynthesis (DBT) system shows that a total of 40% of computing time is saved by the proposed algorithm. In general, the successively increasing factor-scaled OT exhibits tremendous potential as an iterative method with parallel computation and monotonic, global convergence at a fast rate.

  10. A fast rank-reduction algorithm for three-dimensional seismic data interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Yongna; Yu, Siwei; Liu, Lina; Ma, Jianwei

    2016-09-01

    Rank-reduction methods have been successfully used for seismic data interpolation and noise attenuation. However, highly intensive computation is required for the singular value decomposition (SVD) in most rank-reduction methods. In this paper, we propose a simple yet efficient interpolation algorithm, which is based on the Hankel matrix, for randomly missing traces. Following the multichannel singular spectrum analysis (MSSA) technique, we first transform the seismic data into a low-rank block Hankel matrix for each frequency slice. Then, a fast orthogonal rank-one matrix pursuit (OR1MP) algorithm is employed to minimize the low-rank constraint of the block Hankel matrix. In the new algorithm, only the top left and right singular vectors need to be computed, thereby avoiding the cost of a full SVD and improving the calculation efficiency significantly. Finally, we anti-average the rank-reduced block Hankel matrix and obtain the reconstructed data in the frequency domain. Numerical experiments on 3D seismic data show that the proposed interpolation algorithm provides much better performance than the traditional MSSA algorithm in computational speed, especially for large-scale data processing.
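
    The sketch below shows the core rank-one pursuit step on a Hankel matrix built from a single toy, real-valued signal: only the top singular pair is computed per iteration, which is where the speedup over a full SVD comes from. The greedy residual update is a simplification; the actual OR1MP also refits the weights of all collected rank-one atoms, and MSSA works with complex frequency slices and block Hankel matrices.

        # Sketch: greedy rank-one updates on a partially observed Hankel matrix.
        import numpy as np
        from scipy.linalg import hankel
        from scipy.sparse.linalg import svds

        def rank_one_pursuit(M, mask, rank=4):
            """Complete matrix M observed on `mask` by greedy rank-one updates."""
            X = np.zeros_like(M)
            for _ in range(rank):
                R = np.where(mask, M - X, 0.0)   # residual on observed entries
                u, s, vt = svds(R, k=1)          # top singular pair only
                X += s[0] * np.outer(u[:, 0], vt[0])
            return X

        sig = np.cos(2 * np.pi * 0.05 * np.arange(40))   # toy oscillatory trace
        H = hankel(sig[:21], sig[20:])                   # low-rank Hankel matrix
        mask = np.random.rand(*H.shape) > 0.3            # ~30% entries "missing"
        H_rec = rank_one_pursuit(H, mask)
        print(np.abs(H_rec - H)[~mask].mean())           # error on missing entries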

  11. Peak detection in fiber Bragg grating using a fast phase correlation algorithm

    NASA Astrophysics Data System (ADS)

    Lamberti, A.; Vanlanduit, S.; De Pauw, B.; Berghmans, F.

    2014-05-01

    The fiber Bragg grating sensing principle is based on exact tracking of the peak wavelength location. Several peak detection techniques have already been proposed in the literature. Among these, conventional peak detection (CPD) methods such as the maximum detection algorithm (MDA) do not achieve very high precision and accuracy, especially when the signal to noise ratio (SNR) and the wavelength resolution are poor. On the other hand, recently proposed algorithms, like the cross-correlation demodulation algorithm (CCA), are more precise and accurate but require higher computational effort. To overcome these limitations, we developed a novel fast phase correlation (FPC) algorithm that performs as well as the CCA while being considerably faster. This paper presents the FPC technique and analyzes its performance for different SNRs and wavelength resolutions. Using simulations and experiments, we compared the FPC with the MDA and CCA algorithms. The FPC detection capabilities were as precise and accurate as those of the CCA and considerably better than those of the CPD. The FPC computational time was up to 50 times lower than that of the CCA, making the FPC a valid candidate for future implementation in real-time systems.
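
    The phase-correlation principle can be sketched directly: a spectral shift between a reference FBG spectrum and a measurement becomes a linear phase ramp in the Fourier domain, and the slope of that ramp yields a sub-sample shift estimate. The spectra, noise level, and the frequency range used for the fit are assumptions for illustration.

        # Sketch: sub-sample peak shift from the Fourier-domain phase ramp.
        import numpy as np

        wl = np.linspace(1549.0, 1551.0, 512)              # wavelength axis (nm)
        ref = np.exp(-((wl - 1550.0) / 0.05) ** 2)         # reference FBG peak
        shift_true = 0.013                                 # nm, not a whole sample
        meas = np.exp(-((wl - 1550.0 - shift_true) / 0.05) ** 2)
        meas += 0.01 * np.random.randn(wl.size)

        F1, F2 = np.fft.rfft(ref), np.fft.rfft(meas)
        phase = np.unwrap(np.angle(F2 * np.conj(F1)))      # linear in the shift
        k = np.arange(phase.size)
        sel = slice(1, 20)                                 # low-frequency bins (high SNR)
        slope = np.polyfit(k[sel], phase[sel], 1)[0]
        dwl = wl[1] - wl[0]
        shift_est = -slope * wl.size * dwl / (2 * np.pi)
        print(shift_true, shift_est)                       # close agreement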

  12. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical flow based algorithm for estimation of the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back projection of the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an assessment of algorithm quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.

  13. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

    Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, it has been demonstrated that centered detrending moving average (DMA) analysis with a simple moving average has good performance when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it is shown to have better detrending capabilities, removing higher-order polynomial trends than the original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of this method. To solve this issue, in this study, we introduce a fast algorithm for higher-order DMA, which consists of two techniques: (1) parallel translation of moving averaging windows by a fixed interval; (2) recurrence formulas for the calculation of summations. Our algorithm can significantly reduce the computational cost. Monte Carlo experiments show that the computational time of our algorithm is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and the detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate variability time series, we discuss possible applications of higher-order DMA.
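
    The cumulative-sum recurrence that underlies the speedup is easy to show for zeroth-order (centered, flat-window) DMA: every moving average becomes a difference of two entries of one precomputed running sum, so the fluctuation function costs O(N) per scale instead of O(N·w). Higher-order DMA replaces the flat average with a local polynomial fit, accelerated by analogous recurrences; the sketch below covers only the flat-window case.

        # Sketch: O(N) centered DMA fluctuation via cumulative sums.
        import numpy as np

        def dma_fluctuation(x, w):
            """RMS deviation from a centered moving average of the profile (w odd)."""
            y = np.cumsum(x - np.mean(x))             # integrated profile
            c = np.concatenate(([0.0], np.cumsum(y)))
            half = w // 2
            ma = (c[w:] - c[:-w]) / w                 # moving average by differences
            dev = y[half:len(y) - half] - ma
            return np.sqrt(np.mean(dev ** 2))

        x = np.random.randn(10000)                    # white noise: exponent ~0.5
        for w in (5, 17, 65, 257):
            print(w, dma_fluctuation(x, w))           # grows roughly as w**0.5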

  14. Fast automated yeast cell counting algorithm using bright-field and fluorescence microscopic images

    PubMed Central

    2013-01-01

    Background The faithful determination of the concentration and viability of yeast cells is important for biological research as well as industry. To this end, it is important to develop an automated cell counting algorithm that can provide not only fast but also accurate and precise measurement of yeast cells. Results With the proposed method, we measured the precision of yeast cell measurements by using 0%, 25%, 50%, 75% and 100% viability samples. As a result, the actual viability measured with the proposed yeast cell counting algorithm is significantly correlated with the theoretical viability (R^2 = 0.9991). Furthermore, we evaluated the performance of our algorithm on various computing platforms. The results showed that the proposed algorithm is feasible to use with low-end computing platforms without loss of performance. Conclusions Our yeast cell counting algorithm can rapidly provide the total number and the viability of yeast cells with exceptional accuracy and precision. Therefore, we believe that our method can be beneficial for a wide variety of academic fields and industries, such as biotechnology, pharmaceutical and alcohol production. PMID:24215650

  15. A fast and automatic fusion algorithm for unregistered multi-exposure image sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Yu, Feihong

    2014-09-01

    The human visual system (HVS) can visualize all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than the dynamic range of the human eye. This implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, which is suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile device applications. In this paper, we selected the ORB (Oriented FAST and Rotated BRIEF) detector to extract local image structures. The descriptor selected in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations; the ORB descriptor is the best candidate in our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
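
    A registration front end of the kind described can be sketched with OpenCV: ORB features matched across two exposures, outliers rejected with RANSAC, and one frame warped onto the other. The file names are hypothetical, and the final plain average merely stands in for the paper's stationary-wavelet-transform fusion step.

        # Sketch: ORB + RANSAC registration of two exposures (OpenCV).
        import cv2
        import numpy as np

        img1 = cv2.imread("exposure_low.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical files
        img2 = cv2.imread("exposure_high.jpg", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]

        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)    # reject outliers

        aligned = cv2.warpPerspective(img1, H, img2.shape[::-1])
        fused = ((aligned.astype(np.float32) + img2) / 2).astype(np.uint8)
        cv2.imwrite("fused.jpg", fused)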

  16. Fast algorithm for solving the Hankel/Toeplitz Structured Total Least Squares problem

    NASA Astrophysics Data System (ADS)

    Lemmerling, Philippe; Mastronardi, Nicola; van Huffel, Sabine

    2000-07-01

    The Structured Total Least Squares (STLS) problem is a natural extension of the Total Least Squares (TLS) problem when constraints on the matrix structure need to be imposed. Similar to the ordinary TLS approach, the STLS approach can be used to determine the parameter vector of a linear model, given some noisy measurements. In many signal processing applications, the imposition of this matrix structure constraint is necessary for obtaining Maximum Likelihood (ML) estimates of the parameter vector. In this paper we consider the Toeplitz (Hankel) STLS problem (i.e., an STLS problem in which the Toeplitz (Hankel) structure needs to be preserved). A fast implementation of an algorithm for solving this frequently occurring STLS problem is proposed. The increased efficiency is obtained by exploiting the low displacement rank of the involved matrices and the sparsity of the associated generators. The fast implementation is compared to two other implementations of algorithms for solving the Toeplitz (Hankel) STLS problem. The comparison is carried out on a recently proposed speech compression scheme. The numerical results confirm the high efficiency of the newly proposed fast implementation: the straightforward implementations have a complexity of O((m+n)^3) and O(m^3), whereas the proposed implementation has a complexity of O(mn+n^2).

  17. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine.

    PubMed

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-01

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427
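
    The recursive update that gives OS-ELM its speed can be sketched as follows: a random hidden layer stays fixed, an initial batch yields output weights by regularized least squares, and each new batch updates those weights with a rank-k recursive formula instead of retraining from scratch. All dimensions and the random data below are stand-ins for RSS fingerprints (inputs) and positions (targets).

        # Sketch: OS-ELM initial batch plus sequential (RLS-style) updates.
        import numpy as np

        d_in, n_hidden, d_out = 20, 100, 2        # e.g. 20 APs -> (x, y) position
        rng = np.random.default_rng(0)
        W = rng.normal(size=(d_in, n_hidden))     # random hidden layer, never retrained
        b = rng.normal(size=n_hidden)

        def hidden(X):
            return np.tanh(X @ W + b)

        # Initial batch (offline site survey).
        X0, T0 = rng.normal(size=(200, d_in)), rng.normal(size=(200, d_out))
        H0 = hidden(X0)
        P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
        beta = P @ H0.T @ T0

        # Sequential chunks (environmental dynamics): recursive update of beta.
        for _ in range(10):
            Xk, Tk = rng.normal(size=(20, d_in)), rng.normal(size=(20, d_out))
            Hk = hidden(Xk)
            K = P @ Hk.T @ np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
            P = P - K @ Hk @ P
            beta = beta + K @ (Tk - Hk @ beta)

        print(hidden(rng.normal(size=(1, d_in))) @ beta)   # predicted position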

  18. A Fast and Precise Indoor Localization Algorithm Based on an Online Sequential Extreme Learning Machine †

    PubMed Central

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-01

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427

  19. A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering

    PubMed Central

    Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani

    2012-01-01

    Debye summation, which involves the summation of sinc functions of the distances between all pairs of atoms in three dimensional space, arises in computations performed in crystallography, small/wide angle X-ray scattering (SAXS/WAXS) and small angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations unless an error bound derived in this paper is used. Our theoretical and computational results show orders of magnitude improvement in computational complexity over existing methods, while maintaining prescribed accuracy. PMID:22707386
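
    For reference, the quantity being accelerated is simple to state and to compute directly in O(N^2): I(q) = sum over all pairs (i, j) of f_i f_j sin(q r_ij)/(q r_ij). The direct sum below is the natural correctness check for a hierarchical implementation; the coordinates and flat form factors are toy assumptions.

        # Sketch: direct (quadratic) Debye summation as a validation baseline.
        import numpy as np

        def debye_direct(coords, f, q_values):
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            ff = np.outer(f, f)
            # np.sinc(x) = sin(pi x)/(pi x), so sinc(q d / pi) = sin(q d)/(q d).
            return np.array([np.sum(ff * np.sinc(q * d / np.pi)) for q in q_values])

        rng = np.random.default_rng(1)
        coords = rng.uniform(-10, 10, size=(300, 3))   # toy "molecule", angstroms
        f = np.ones(300)                               # flat form factors (assumed)
        q = np.linspace(0.01, 0.5, 8)                  # scattering vector grid
        print(debye_direct(coords, f, q))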

  20. Fast mode decision algorithm for scalable video coding based on luminance coded block pattern

    NASA Astrophysics Data System (ADS)

    Kim, Tae-Jung; Yoo, Jeong-Ju; Hong, Jin-Woo; Suh, Jae-Won

    2013-01-01

    A fast mode decision algorithm is proposed to reduce the computational complexity of the adaptive inter layer prediction method, a motion estimation algorithm for video compression in scalable video coding (SVC) encoder systems. SVC is standardized as an extension of H.264/AVC to provide multimedia services within variable transport environments and across various terminal systems. SVC supports adaptive inter mode prediction, which includes not only the temporal prediction modes with varying block sizes but also inter layer prediction modes based on the correlation between the lower layer information and the current layer. To achieve high coding efficiency, a rate distortion optimization technique is employed to select the best coding mode and reference frame for each macroblock (MB). As a result, the performance gains of SVC come with increased computational complexity. To overcome this problem, we propose a fast mode decision based on the coded block pattern (CBP) of the 16×16 mode and the reference block of the best CBP. The experimental results in SVC with a combined scalability structure show that the proposed algorithm achieves an average speed-up of up to 61.65% in encoding time, with a negligible bit increment and minimal image quality loss. In addition, experimental results for spatial and quality scalability show that the computational complexity is reduced by about 55.32% and 52.69%, respectively.

  1. Fast intra-prediction algorithms for high efficiency video coding standard

    NASA Astrophysics Data System (ADS)

    Kibeya, Hassan; Belghith, Fatma; Ben Ayed, Mohammed Ali; Masmoudi, Nouri

    2016-01-01

    High efficiency video coding (HEVC) is the latest video compression standard and provides a significant improvement in compression ratio compared to all existing video coding standards. The intra-prediction procedure plays an important role in the HEVC encoder; it provides up to 35 intra-modes with larger coding units, at the cost of a high computational complexity that needs to be alleviated. Toward this end, the paper proposes two fast intra-mode decision algorithms that exploit the features of video sequences. First, an early detection method for all-zero transformed and quantized coefficients is applied to generate threshold values employed for early termination of the intra-decision process, which accelerates the encoding procedure. A second fast intra-mode decision algorithm relies on a refinement technique: based on statistical analyses of frequently chosen modes, only a small part of the candidate modes is considered for the intra-prediction process, which reduces the complexity of the intra-encoding procedure. The performance of the proposed algorithms is verified through comparative analysis of encoding time, visual image quality, and compression ratio. Compared to HM 10.0, the encoding time reduction can reach 69% with only a slight degradation of image quality and compression ratio.

  2. A simple, fast, and accurate algorithm to estimate large phylogenies by maximum likelihood.

    PubMed

    Guindon, Stéphane; Gascuel, Olivier

    2003-10-01

    The increase in the number of large data sets and the complexity of current probabilistic sequence evolution models necessitates fast and reliable phylogeny reconstruction methods. We describe a new approach, based on the maximum-likelihood principle, which clearly satisfies these requirements. The core of this method is a simple hill-climbing algorithm that adjusts tree topology and branch lengths simultaneously. This algorithm starts from an initial tree built by a fast distance-based method and modifies this tree to improve its likelihood at each iteration. Due to this simultaneous adjustment of the topology and branch lengths, only a few iterations are sufficient to reach an optimum. We used extensive and realistic computer simulations to show that the topological accuracy of this new method is at least as high as that of the existing maximum-likelihood programs and much higher than the performance of distance-based and parsimony approaches. The reduction of computing time is dramatic in comparison with other maximum-likelihood packages, while the likelihood maximization ability tends to be higher. For example, only 12 min were required on a standard personal computer to analyze a data set consisting of 500 rbcL sequences with 1,428 base pairs from plant plastids, thus reaching a speed of the same order as some popular distance-based and parsimony algorithms. This new method is implemented in the PHYML program, which is freely available on our web page: http://www.lirmm.fr/w3ifa/MAAS/.

  3. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine.

    PubMed

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-15

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics.

  4. A fast general-purpose clustering algorithm based on FPGAs for high-throughput data processing

    NASA Astrophysics Data System (ADS)

    Annovi, A.; Beretta, M.

    2010-05-01

    We present a fast general-purpose algorithm for high-throughput clustering of data "with a two-dimensional organization". The algorithm is designed to be implemented with FPGAs or custom electronics. The key feature is a processing time that scales linearly with the amount of data to be processed. This means that clustering can be performed in pipeline with the readout, without suffering from combinatorial delays due to looping multiple times through all the data. This feature makes the algorithm especially well suited for problems where the data have high density, e.g. in the case of tracking devices working under high-luminosity conditions such as those of the LHC or super-LHC. The algorithm is organized in two steps: the first step (core) clusters the data; the second step analyzes each cluster of data to extract the desired information. The current algorithm is developed as a clustering device for modern high-energy physics pixel detectors. However, the algorithm has a much broader field of application. In fact, its core does not specifically rely on the kind of data or detector it is working for, while the second step can and should be tailored for a given application. For example, in the case of spatial measurement with silicon pixel detectors, the second step performs a center of charge calculation. Applications can thus be foreseen for other detectors and other scientific fields ranging from HEP calorimeters to medical imaging. An additional advantage of this two-step approach is that the typical clustering related calculations (second step) are separated from the combinatorial complications of clustering. This separation simplifies the design of the second step and enables it to perform sophisticated calculations, achieving offline quality in online applications. The algorithm is general purpose in the sense that only minimal assumptions on the kind of clustering to be performed are made.

  5. Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA).

    PubMed

    Muckley, Matthew J; Noll, Douglas C; Fessler, Jeffrey A

    2015-02-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms.
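
    Momentum acceleration with adaptive restarting, which the abstract credits for much of the speedup, can be sketched on a generic l1-regularized least-squares problem. The sketch below uses a single Lipschitz constant, i.e., exactly the ingredient BARISTA replaces with B1-based majorizing matrices, and the restart test (drop the momentum when it disagrees with the step direction) is the standard gradient-based rule, not code from the paper.

        # Sketch: FISTA-style soft thresholding with adaptive momentum restart.
        import numpy as np

        def fista_restart(A, b, lam, n_iter=200):
            L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
            x = z = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(n_iter):
                x_old = x
                g = A.T @ (A @ z - b)
                x = np.sign(z - g / L) * np.maximum(np.abs(z - g / L) - lam / L, 0)
                if np.dot(z - x, x - x_old) > 0:  # adaptive restart test
                    t, z = 1.0, x                 # drop the momentum
                else:
                    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                    z = x + ((t - 1) / t_new) * (x - x_old)
                    t = t_new
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(100, 400))
        x_true = np.zeros(400); x_true[:8] = rng.normal(size=8)
        b = A @ x_true + 0.01 * rng.normal(size=100)
        print(np.abs(fista_restart(A, b, lam=0.1) - x_true).max())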

  6. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI for reducing scan time while preserving image quality. Variable splitting algorithms are the current state-of-the-art algorithms for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of associated convergence parameters is a commonly-cited hindrance in their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction since the associated Lipschitz constants are loose bounds for the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances that are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484

  7. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-07-01

    In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described in the form of groups of boundaries of contour segments, and their interior points are derived using the contour equation. The locations of contour boundaries and the interior points on contour surfaces are as accurate as the interpolation results obtained by hexahedral elements, and thus there are no discrepancies between the analysis and visualization results.

  8. A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations

    SciTech Connect

    Ouyang, G; Jandhyala, V; Champagne, N; Sharpe, R; Fasenfest, B J; Rockway, J D

    2004-12-14

    An Asymptotic Wave Expansion (AWE) technique is implemented in the EIGER computational electromagnetics code. The AWE fast frequency sweep is formed by separating the components of the integral equations by frequency dependence, then using this information to find a rational function approximation of the results. The standard AWE method is generalized to work for several integral equations, including the EFIE for conductors and the PMCHWT for dielectrics. The method is also expanded to work for two types of coupled circuit-EM problems as well as lumped load circuit elements. After a simple bisecting adaptive sweep algorithm is developed, dramatic speed improvements are seen for several example problems.

  9. Lazy skip-lists: An algorithm for fast hybridization-expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sémon, P.; Yee, Chuck-Hou; Haule, Kristjan; Tremblay, A.-M. S.

    2014-08-01

    The solution of a generalized impurity model lies at the heart of electronic structure calculations with dynamical mean field theory. In the strongly correlated regime, the method of choice for solving the impurity model is the hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB). Enhancements to the CT-HYB algorithm are critical for bringing new physical regimes within reach of current computational power. Taking advantage of the fact that the bottleneck in the algorithm is a product of hundreds of matrices, we present optimizations based on the introduction and combination of two concepts of more general applicability: (a) skip lists and (b) fast rejection of proposed configurations based on matrix bounds. Considering two very different test cases with d electrons, we find speedups of ˜25 up to ˜500 compared to the direct evaluation of the matrix product. Even larger speedups are likely with f electron systems and with clusters of correlated atoms.
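
    The heart of the speedup is avoiding a full re-evaluation of a product of hundreds of matrices after a local change. The paper's skip lists also support insertions and removals; the sketch below shows the same caching idea in its simplest fixed-size form, a binary product tree in which replacing one factor re-multiplies only O(log N) cached nodes (illustrative, not the authors' data structure):

        import numpy as np

        class ProductTree:
            """Cache partial products of a list of square matrices so that
            replacing one factor recomputes only O(log N) nodes."""
            def __init__(self, mats):
                self.size = 1
                while self.size < len(mats):
                    self.size *= 2
                eye = np.eye(mats[0].shape[0])
                self.tree = [eye] * (2 * self.size)
                for i, m in enumerate(mats):          # leaves hold the factors
                    self.tree[self.size + i] = m
                for i in range(self.size - 1, 0, -1): # internal nodes hold products
                    self.tree[i] = self.tree[2 * i] @ self.tree[2 * i + 1]

            def update(self, i, m):
                i += self.size
                self.tree[i] = m
                while i > 1:                          # refresh ancestors only
                    i //= 2
                    self.tree[i] = self.tree[2 * i] @ self.tree[2 * i + 1]

            def product(self):
                return self.tree[1]                   # M0 @ M1 @ ... @ M(N-1)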

  10. Hessian Schatten-norm regularization for CBCT image reconstruction using fast iterative shrinkage-thresholding algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xinxin; Wang, Jiang; Tan, Shan

    2015-03-01

Statistical iterative reconstruction in cone-beam computed tomography (CBCT) uses prior knowledge to form different kinds of regularization terms. Total variation (TV) regularization has shown state-of-the-art performance in suppressing noise and preserving edges. However, it produces the well-known staircase effect. In this paper, a method that involves second-order differential operators was employed to avoid the staircase effect; higher-order derivatives avoid over-sharpening regions of smooth intensity transition. Meanwhile, a fast iterative shrinkage-thresholding algorithm was used for the corresponding optimization problem. The proposed Hessian Schatten norm-based regularization keeps many of the favorable properties of TV, such as translation and scale invariance, while avoiding the staircase effect that appears in TV-based reconstructions. The experiments demonstrated the outstanding ability of the proposed algorithm over the TV method, especially in suppressing the staircase effect.

  11. ADaM: augmenting existing approximate fast matching algorithms with efficient and exact range queries

    PubMed Central

    2014-01-01

Background Drug discovery, disease detection, and personalized medicine are fast-growing areas of genomic research. With the advancement of next-generation sequencing techniques, researchers can obtain an abundance of data for many different biological assays in a short period of time. When this data is error-free, the result is a high-quality base-pair resolution picture of the genome. However, when the data is lossy, the heuristic algorithms currently used to align next-generation sequences cause the corresponding accuracy to drop. Results This paper describes a program, ADaM (APF DNA Mapper), which significantly increases final alignment accuracy. ADaM works by first using an existing program to align "easy" sequences, and then using an algorithm with accuracy guarantees (the APF) to align the remaining sequences. The final result is a technique that increases the mapping accuracy from only 60% to over 90% for harder-to-align sequences. PMID:25079667

  12. Fast nearfield to farfield conversion algorithm for circular synthetic aperture sonar.

    PubMed

    Plotnick, Daniel S; Marston, Philip L; Marston, Timothy M

    2014-08-01

Monostatic circular synthetic aperture sonar (CSAS) images are formed by processing azimuthal-angle-dependent backscattering from a target at a fixed distance from a collocated source/receiver. Typical CSAS imaging algorithms [Ferguson and Wyber, J. Acoust. Soc. Am. 117, 2915-2928 (2005)] assume scattering data are taken in the farfield. Experimental constraints may make farfield measurements impractical and thus require objects to be scanned in the nearfield. Left uncorrected, this results in distortions of the target image and in the angular dependence of features. A fast approximate Hankel-function-based algorithm is presented to convert nearfield data to the farfield. Images and spectrograms of an extended target are compared for both cases.
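
    One common way to perform such a conversion is through circular harmonics: the angular data measured at radius r0 are decomposed by FFT, each order-n coefficient is divided by its outgoing cylindrical-wave propagator H_n^(1)(k r0), and the farfield phase convention is restored. The sketch below is the textbook one-way version under those assumptions; the paper's monostatic (two-way) algorithm has its own phase factors, and all names here are illustrative:

        import numpy as np
        from scipy.special import hankel1

        def near_to_far(p_near, k, r0):
            """Map angular data sampled on a circle of radius r0 to an
            equivalent farfield pattern via circular harmonics (one-way case)."""
            N = len(p_near)
            n = np.fft.fftfreq(N, d=1.0 / N)     # harmonic orders 0, 1, ..., -1
            coeff = np.fft.fft(p_near)
            # Divide out the outgoing cylindrical-wave radial factor and restore
            # the farfield phase convention of each harmonic order.
            coeff *= (-1j) ** n / hankel1(n, k * r0)
            return np.fft.ifft(coeff)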

  13. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper. However, the algorithm requires statistical models having a very particular multiscale structure; it is the development of a class of multiscale statistical models, appropriate for oceanographic mapping problems, with which we concern ourselves in this paper. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous Ftp a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.

  14. Fast parallel algorithms and enumeration techniques for partial k-trees

    SciTech Connect

    Narayanan, C.

    1989-01-01

Recent research by several authors has resulted in a systematic way of developing linear-time sequential algorithms for a host of problems on a fairly general class of graphs variously known as bounded decomposable graphs, graphs of bounded treewidth, partial k-trees, etc. Partial k-trees arise in a variety of real-life applications such as network reliability, VLSI design, and database systems, and hence fast sequential algorithms on these graphs have been found to be desirable. The linear-time methodologies were independently developed by Bern, Lawler, and Wong (10), Arnborg and Proskurowski (6), Bodlaender (14), and Courcelle (25). Wimer (89) significantly extended the work of Bern, Lawler, and Wong. All of these approaches share the common thread of using dynamic programming on a tree structure. In particular, the methodology of Wimer uses a parse-tree as the data structure. The methodologies provide linear-time algorithms on partial k-trees for fixed k, for a number of combinatorial optimization problems, given the tree structure as input. It is known that obtaining the tree structure is NP-hard. This dissertation investigates three important classes of problems: (1) developing parallel algorithms for constructing a k-tree embedding, finding a tree decomposition, and, most notably, obtaining a parse-tree for a partial k-tree; (2) developing parallel algorithms for parse-tree computations, testing isomorphism of k-trees, and finding a 2-tree embedding of a cactus; (3) obtaining techniques for counting vertex/edge subsets satisfying a certain property in some classes of partial k-trees. The parallel algorithms the author has developed are in class NC and are either new or improve upon the existing results of Bodlaender (13). The difference equations he has obtained for counting certain subgraphs had not previously appeared in the literature.
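
    The common thread mentioned above, dynamic programming on a tree structure, is easiest to see on the classic toy case of a maximum independent set on a tree (a partial 1-tree), computed in linear time by one bottom-up pass. A minimal sketch (illustrative; the dissertation's algorithms run on parse-trees of partial k-trees):

        def max_independent_set(adj, root=0):
            """Maximum independent set size on a tree given as {v: [neighbors]}.
            inc[v]/exc[v]: best count in v's subtree with v included/excluded."""
            parent, order, stack = {root: None}, [], [root]
            while stack:                         # iterative DFS to get an ordering
                v = stack.pop()
                order.append(v)
                for w in adj[v]:
                    if w != parent[v]:
                        parent[w] = v
                        stack.append(w)
            inc, exc = {}, {}
            for v in reversed(order):            # children are processed first
                kids = [w for w in adj[v] if w != parent[v]]
                inc[v] = 1 + sum(exc[w] for w in kids)
                exc[v] = sum(max(inc[w], exc[w]) for w in kids)
            return max(inc[root], exc[root])

        # e.g. a path on 4 vertices has a maximum independent set of size 2:
        # max_independent_set({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}) == 2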

  15. A Fast Cluster Motif Finding Algorithm for ChIP-Seq Data Sets

    PubMed Central

    Zhang, Yipu; Wang, Ping

    2015-01-01

The high-throughput technique ChIP-seq, which couples chromatin immunoprecipitation experiments with high-throughput sequencing, has extended the identification of transcription factor binding locations to genome-wide scale. However, most existing motif discovery algorithms are time-consuming and poorly suited to identifying binding motifs in ChIP-seq data, which is typically large-scale. To improve efficiency, we propose a fast cluster motif finding algorithm, named FCmotif, to identify (l, d) motifs in large-scale ChIP-seq data sets. It is inspired by the emerging-substring mining strategy: enriched substrings are found first, and their neighborhood instances are then searched to construct position weight matrices (PWMs) and cluster motifs of different lengths. FCmotif is not bound by the OOPS (one occurrence per sequence) model constraint and can find long motifs. The effectiveness of the proposed algorithm has been demonstrated by experiments on ChIP-seq data sets from mouse ES cells. Detection of the real binding motifs and processing of the full data set of several megabytes finished in a few minutes. The experimental results show that FCmotif is well suited to (l, d) motif finding in ChIP-seq data and outperforms widely used algorithms such as MEME, Weeder, ChIPMunk, and DREME. PMID:26236718

  16. A Fast Cluster Motif Finding Algorithm for ChIP-Seq Data Sets.

    PubMed

    Zhang, Yipu; Wang, Ping

    2015-01-01

The high-throughput technique ChIP-seq, which couples chromatin immunoprecipitation experiments with high-throughput sequencing, has extended the identification of transcription factor binding locations to genome-wide scale. However, most existing motif discovery algorithms are time-consuming and poorly suited to identifying binding motifs in ChIP-seq data, which is typically large-scale. To improve efficiency, we propose a fast cluster motif finding algorithm, named FCmotif, to identify (l, d) motifs in large-scale ChIP-seq data sets. It is inspired by the emerging-substring mining strategy: enriched substrings are found first, and their neighborhood instances are then searched to construct position weight matrices (PWMs) and cluster motifs of different lengths. FCmotif is not bound by the OOPS (one occurrence per sequence) model constraint and can find long motifs. The effectiveness of the proposed algorithm has been demonstrated by experiments on ChIP-seq data sets from mouse ES cells. Detection of the real binding motifs and processing of the full data set of several megabytes finished in a few minutes. The experimental results show that FCmotif is well suited to (l, d) motif finding in ChIP-seq data and outperforms widely used algorithms such as MEME, Weeder, ChIPMunk, and DREME. PMID:26236718
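
    In the (l, d) formulation, motif instances differ from the consensus in at most d of l positions, so any neighborhood-search stage needs the set of l-mers within Hamming distance d of a seed. A minimal generator for that set, a standard building block rather than FCmotif's own code:

        from itertools import combinations, product

        def neighborhood(seed, d, alphabet="ACGT"):
            """All l-mers within Hamming distance <= d of the seed l-mer."""
            out = {seed}
            for k in range(1, d + 1):
                for pos in combinations(range(len(seed)), k):
                    for repl in product(alphabet, repeat=k):
                        s = list(seed)
                        for p, c in zip(pos, repl):
                            s[p] = c
                        out.add("".join(s))
            return out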

  17. Parallelization of an Adaptive Multigrid Algorithm for Fast Solution of Finite Element Structural Problems

    SciTech Connect

    Crane, N K; Parsons, I D; Hjelmstad, K D

    2002-03-21

    Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.
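
    The multigrid idea of cycling between discretization levels is easiest to see in one dimension. A minimal geometric V-cycle sketch for the 1D Poisson equation, with Gauss-Seidel smoothing, injection restriction, and linear prolongation (illustrative; the paper's solver works on adaptively refined 3D finite element meshes):

        import numpy as np

        def v_cycle(u, f, n_smooth=3):
            """One V-cycle for -u'' = f on [0, 1] with zero boundary values;
            len(u) must be 2**k + 1 so the grid halves cleanly."""
            h = 1.0 / (len(u) - 1)
            def smooth(u, f, sweeps):            # Gauss-Seidel relaxation
                for _ in range(sweeps):
                    for i in range(1, len(u) - 1):
                        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
                return u
            u = smooth(u, f, n_smooth)
            if len(u) <= 3:
                return u
            r = np.zeros_like(u)                 # residual of the fine problem
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            ec = v_cycle(np.zeros_like(r[::2]), r[::2].copy(), n_smooth)
            e = np.zeros_like(u)                 # prolongate coarse correction
            e[::2] = ec
            e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])
            return smooth(u + e, f, n_smooth)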

  18. A Fast Algorithm for Learning Overcomplete Dictionary for Sparse Representation Based on Proximal Operators.

    PubMed

    Li, Zhenni; Ding, Shuxue; Li, Yujie

    2015-09-01

    We present a fast, efficient algorithm for learning an overcomplete dictionary for sparse representation of signals. The whole problem is considered as a minimization of the approximation error function with a coherence penalty for the dictionary atoms and with the sparsity regularization of the coefficient matrix. Because the problem is nonconvex and nonsmooth, this minimization problem cannot be solved efficiently by an ordinary optimization method. We propose a decomposition scheme and an alternating optimization that can turn the problem into a set of minimizations of piecewise quadratic and univariate subproblems, each of which is a single variable vector problem, of either one dictionary atom or one coefficient vector. Although the subproblems are still nonsmooth, remarkably they become much simpler so that we can find a closed-form solution by introducing a proximal operator. This leads to an efficient algorithm for sparse representation. To our knowledge, applying the proximal operator to the problem with an incoherence term and obtaining the optimal dictionary atoms in closed form with a proximal operator technique have not previously been studied. The main advantages of the proposed algorithm are that, as suggested by our analysis and simulation study, it has lower computational complexity and a higher convergence rate than state-of-the-art algorithms. In addition, for real applications, it shows good performance and significant reductions in computational time.

  19. A novel small area fast block matching algorithm based on high-accuracy gyro in digital image stabilization

    NASA Astrophysics Data System (ADS)

    Wang, Peng; Zhao, Yuejin; Yu, Fei; Zhu, Weiwen; Lang, Guanqing; Dong, Liquan

    2010-11-01

This paper presents a novel fast block matching algorithm based on a high-accuracy gyro for stabilizing shaky images. The motion vector is first acquired from the gyro. It is then used to determine the initial search position and to classify the image motion into three modes: small, medium, and large. Finally, a fast block matching algorithm is designed by improving four types of search templates (square, diamond, hexagon, octagon). Experimental results show that the algorithm is about 50% faster than common methods (such as NTSS, FSS, and DS) while maintaining the same accuracy.
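
    The flavor of the method, seeding the search with the gyro-predicted motion vector and then refining with diamond-type templates, can be sketched as follows. This is a generic diamond search over SAD costs; the seed argument stands in for the gyro measurement, and all names are illustrative:

        import numpy as np

        def diamond_search(ref, cur, top, left, bs=16, seed=(0, 0)):
            """Block matching for the bs x bs block of `cur` at (top, left):
            large-diamond search until the centre wins, then a small-diamond
            refinement. `seed` stands in for the gyro-predicted motion vector."""
            block = cur[top:top + bs, left:left + bs].astype(np.int32)
            def cost(v):
                y, x = top + v[0], left + v[1]
                if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
                    return np.inf                # candidate falls outside the frame
                return np.abs(ref[y:y + bs, x:x + bs].astype(np.int32) - block).sum()
            best, best_c = seed, cost(seed)
            large = [(-2, 0), (2, 0), (0, -2), (0, 2), (-1, -1), (-1, 1), (1, -1), (1, 1)]
            while True:
                c, v = min((cost((best[0] + dy, best[1] + dx)),
                            (best[0] + dy, best[1] + dx)) for dy, dx in large)
                if c >= best_c:
                    break
                best, best_c = v, c
            for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                v = (best[0] + dy, best[1] + dx)
                if cost(v) < best_c:
                    best, best_c = v, cost(v)
            return best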

  20. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.

  1. Applying the uniform resampling (URS) algorithm to a lissajous trajectory: fast image reconstruction with optimal gridding.

    PubMed

    Moriguchi, H; Wendt, M; Duerk, J L

    2000-11-01

    Various kinds of nonrectilinear Cartesian k-space trajectories have been studied, such as spiral, circular, and rosette trajectories. Although the nonrectilinear Cartesian sampling techniques generally have the advantage of fast data acquisition, the gridding process prior to 2D-FFT image reconstruction usually requires a number of additional calculations, thus necessitating an increase in the computation time. Further, the reconstructed image often exhibits artifacts resulting from both the k-space sampling pattern and the gridding procedure. To date, it has been demonstrated in only a few studies that the special geometric sampling patterns of certain specific trajectories facilitate fast image reconstruction. In other words, the inherent link among the trajectory, the sampling scheme, and the associated complexity of the regridding/reconstruction process has been investigated to only a limited extent. In this study, it is demonstrated that a Lissajous trajectory has the special geometric characteristics necessary for rapid reconstruction of nonrectilinear Cartesian k-space trajectories with constant sampling time intervals. Because of the applicability of a uniform resampling (URS) algorithm, a high-quality reconstructed image is obtained in a short reconstruction time when compared to other gridding algorithms. PMID:11064412

  2. Rapid multi-field T1 estimation algorithm for Fast Field-Cycling MRI

    NASA Astrophysics Data System (ADS)

    Broche, Lionel M.; James Ross, P.; Pine, Kerrin J.; Lurie, David J.

    2014-01-01

    Fast Field-Cycling MRI (FFC-MRI) is an emerging MRI technique that allows the main magnetic field to vary, allowing probing T1 at various magnetic field strengths. This technique offers promising possibilities but requires long scan times to improve the signal-to-noise ratio. This paper presents an algorithm derived from the two-point method proposed by Edelstein that can estimate T1 using only one image per field, thereby shortening the scan time by a factor of nearly two, taking advantage of the fact that the equilibrium magnetisation is proportional to the magnetic field strength. Therefore the equilibrium magnetisation only needs measuring once, then T1 can be found from inversion recovery experiments using the Bloch equations. The precision and accuracy of the algorithm are estimated using both simulated and experimental data, by Monte-Carlo simulations and by comparison with standard techniques on a phantom. The results are acceptable but usage is limited to the case where variations of the main magnetic field are fast compared with T1 and where the dispersion curve is relatively linear. The speed-up of T1-dispersion measurements resulting from the new method is likely to make FFC-MRI more acceptable when it is applied in the clinic.
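
    Under the usual signed inversion-recovery signal model s = M0(1 - 2 exp(-TI/T1)) (an assumption here; the paper works from the Bloch equations), knowing M0 reduces each field to a one-line inversion. A minimal sketch, exploiting the abstract's point that M0 scales linearly with field strength and so needs measuring only once:

        import numpy as np

        def t1_two_point(s_ti, ti, m0_ref, b, b_ref):
            """Estimate T1 from one inversion-recovery sample s_ti at inversion
            time ti. m0_ref is the equilibrium magnetisation measured once at
            field b_ref; at field b it scales as m0_ref * b / b_ref."""
            m0 = m0_ref * b / b_ref
            ratio = (m0 - s_ti) / (2.0 * m0)     # exp(-ti/T1) under the IR model
            if not 0.0 < ratio < 1.0:
                raise ValueError("sample inconsistent with the IR signal model")
            return -ti / np.log(ratio)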

  3. Optimal design of groundwater remediation systems using a multi-objective fast harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Qiankun; Wu, Jianfeng; Sun, Xiaomin; Yang, Yun; Wu, Jichun

    2012-12-01

A new multi-objective optimization methodology is developed, whereby a multi-objective fast harmony search (MOFHS) is coupled with a groundwater flow and transport model to search for optimal designs of groundwater remediation systems under general hydrogeological conditions. The MOFHS incorporates the niche technique into the previously improved fast harmony search and is enhanced by adding a Pareto solution set filter and an elite individual preservation strategy to guarantee uniformity and integrity of the Pareto front of multi-objective optimization problems. In addition, a library of individual fitness evaluations is introduced to improve calculation speed. Moreover, the MOFHS is coupled with the commonly used flow and transport codes MODFLOW and MT3DMS to search for optimal designs of pump-and-treat systems, aiming at minimization of the remediation cost and minimization of the mass remaining in aquifers. Compared with three existing multi-objective optimization methods, including the improved niched Pareto genetic algorithm (INPGA), the non-dominated sorting genetic algorithm II (NSGAII), and the multi-objective harmony search (MOHS), the proposed methodology demonstrates its applicability and efficiency through a two-dimensional hypothetical test problem and a three-dimensional field problem in Indiana (USA).
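
    The single-objective core of harmony search, on top of which MOFHS adds niching, a Pareto filter, and elite preservation, is compact enough to sketch. A minimal version with the three standard moves (memory consideration, pitch adjustment, random selection); all parameter names are the conventional ones, not the paper's settings:

        import numpy as np

        def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, iters=2000, seed=0):
            """Minimize f over box `bounds` = [(lo, hi), ...] with the three
            standard moves: memory consideration, pitch adjustment, random pick."""
            rng = np.random.default_rng(seed)
            lo, hi = (np.array(z, float) for z in zip(*bounds))
            hm = rng.uniform(lo, hi, size=(hms, len(bounds)))   # harmony memory
            fit = np.array([f(x) for x in hm])
            bw = 0.05 * (hi - lo)                               # pitch bandwidth
            for _ in range(iters):
                new = np.empty(len(bounds))
                for j in range(len(bounds)):
                    if rng.random() < hmcr:                     # reuse memory
                        new[j] = hm[rng.integers(hms), j]
                        if rng.random() < par:                  # perturb the pitch
                            new[j] += rng.uniform(-1.0, 1.0) * bw[j]
                    else:                                       # fresh random value
                        new[j] = rng.uniform(lo[j], hi[j])
                new = np.clip(new, lo, hi)
                fn = f(new)
                worst = np.argmax(fit)
                if fn < fit[worst]:                             # replace the worst
                    hm[worst], fit[worst] = new, fn
            return hm[np.argmin(fit)], fit.min()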

  4. Fast intersections on nested tetrahedrons (FINT): An algorithm for adaptive finite element based distributed parameter estimation.

    PubMed

    Lee, Jae Hoon; Joshi, Amit; Sevick-Muraca, Eva M

    2008-01-01

A variety of biomedical imaging techniques such as optical and fluorescence tomography, electrical impedance tomography, and ultrasound imaging can be cast as inverse problems, wherein image reconstruction involves the estimation of spatially distributed parameter(s) of the PDE system describing the physics of the imaging process. Finite element discretization of the imaged domain with tetrahedral elements is a popular way of solving the forward and inverse imaging problems on complicated geometries. A dual-adaptive mesh-based approach, wherein one mesh is used for solving the forward imaging problem and the other for iteratively estimating the unknown distributed parameter, can deliver high-resolution image reconstruction at minimum computational effort if both meshes are allowed to adapt independently. To date, no efficient method has been reported to identify and resolve intersections between tetrahedrons in independently refined or coarsened dual meshes. Herein, we report a fast and robust algorithm to identify and resolve intersections of tetrahedrons within nested dual meshes generated by the 8-similar subtetrahedron subdivision scheme. The algorithm exploits finite element weight functions and gives rise to a set of weight functions on each vertex of disjoint tetrahedron pieces that completely cover the intersection region of two tetrahedrons. The procedure enables fully adaptive tetrahedral finite elements by supporting independent refinement and coarsening of each individual mesh while preserving fast identification and resolution of intersections. The computational efficiency of the algorithm is demonstrated by diffuse photon density wave solutions obtained from a single and a dual mesh, and by reconstructing a fluorescent inclusion in a simulated phantom from boundary frequency-domain fluorescence measurements.

  5. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    PubMed Central

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730

  6. A fast density-based clustering algorithm for real-time Internet of Things stream.

    PubMed

    Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut

    2014-01-01

Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of data stream clustering algorithms: they can detect clusters of arbitrary shape, handle outliers, and do not need the number of clusters in advance. Density-based clustering is therefore a natural choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering within a limited time remains a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable to real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets.
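
    The density-based principle referred to here, growing clusters through density-connected core points so that arbitrary shapes emerge and sparse points remain outliers, is captured by plain DBSCAN. A minimal batch sketch (stream variants such as the paper's instead maintain micro-cluster summaries online):

        import numpy as np

        def dbscan(points, eps, min_pts):
            """Label an (n, d) point array: cluster ids 0, 1, ... and -1 for
            outliers. Clusters grow only through core points, so arbitrary
            shapes are found and the cluster count is not fixed in advance."""
            n = len(points)
            d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
            neigh = [np.flatnonzero(d2[i] <= eps * eps) for i in range(n)]
            labels = np.full(n, -1)
            cid = 0
            for i in range(n):
                if labels[i] != -1 or len(neigh[i]) < min_pts:
                    continue                     # already clustered, or not core
                labels[i] = cid
                stack = list(neigh[i])
                while stack:
                    j = stack.pop()
                    if labels[j] == -1:
                        labels[j] = cid
                        if len(neigh[j]) >= min_pts:
                            stack.extend(neigh[j])   # expand through cores only
                cid += 1
            return labels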

  7. Improving the quantitative testing of fast aspherics surfaces with null screen using Dijkstra algorithm

    NASA Astrophysics Data System (ADS)

    Moreno Oliva, Víctor Iván; Castañeda Mendoza, Álvaro; Campos García, Manuel; Díaz Uribe, Rufino

    2011-09-01

The null screen is a geometric method for testing fast aspherical surfaces: the local slopes of the surface are measured, and its shape is then recovered by numerical integration. The usual technique for the numerical evaluation of the surface is the trapezoidal rule, whose truncation error is well known to increase with the second power of the spacing between spots along the integration path. These paths are constructed by following spots reflected on the surface, starting from a selected initial spot. To reduce these numerical errors, we propose in this work the use of the Dijkstra algorithm.1 This algorithm can find the shortest path from one spot (or vertex) to another in a weighted connected graph. Using a modification of the algorithm, it is possible to find minimal paths from one selected spot to all other spots. This automates and simplifies the integration process in the test with null screens. The efficiency of the proposal is shown by evaluating a surface previously measured with the traditional process.
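
    Dijkstra's algorithm itself is standard; a minimal heap-based sketch of the one-source-to-all-spots variant mentioned above, with the graph given as adjacency lists of (neighbor, weight) pairs whose weights would encode the spot spacing:

        import heapq

        def dijkstra_all(graph, src):
            """Shortest paths from src to all vertices; graph: {u: [(v, w), ...]}.
            Returns distances and a predecessor map for path reconstruction."""
            dist, prev = {src: 0.0}, {src: None}
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                     # stale entry, already improved
                for v, w in graph.get(u, []):
                    nd = d + w
                    if nd < dist.get(v, float("inf")):
                        dist[v], prev[v] = nd, u
                        heapq.heappush(heap, (nd, v))
            return dist, prev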

  8. A fast calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations

    NASA Astrophysics Data System (ADS)

    Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.

    2016-05-01

Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed energy applications. For example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer, accounting for solid angle and elevation; it then measures the contribution of diffused energy from previous layers based on the transmission of the current level, producing a cumulative radiance that is reflected from a surface and measured at the aperture of the observer. A unique set of asymmetry and backscattering phase function parameter calculations is then made, which accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level and allows for a more accurate characterization of diffuse layers that contribute to multiple-scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing the results from LEEDR to observed sky radiance data.

  9. Fast hybrid CPU- and GPU-based CT reconstruction algorithm using air skipping technique.

    PubMed

    Lee, Byeonghun; Lee, Ho; Shin, Yeong Gil

    2010-01-01

This paper presents a fast hybrid CPU- and GPU-based CT reconstruction algorithm that reduces the amount of back-projection operations using air skipping involving polygon clipping. The algorithm easily and rapidly selects air areas, which have significantly higher contrast in each projection image, by applying the K-means clustering method on the CPU, and then generates boundary tables for verifying the valid region using the segmented air areas. Based on these boundary tables for each projection image, a clipped polygon that indicates the active region of the GPU back-projection operation is determined on each volume slice. This polygon clipping process reduces the number of voxels to be back-projected, which leads to a faster GPU-based reconstruction method. The approach has been applied to a clinical data set and to Shepp-Logan phantom data sets with various ratios of air regions for quantitative and qualitative comparison and analysis of our method against conventional GPU-based reconstruction methods. The algorithm has been shown to halve computation time without losing any diagnostic information, compared to conventional GPU-based approaches.

  10. A New Fast Algorithm to Completely Account for Non-Lambertian Surface Reflection of The Earth

    NASA Technical Reports Server (NTRS)

    Qin, Wen-Han; Herman, Jay R.; Ahmad, Ziauddin; Einaudi, Franco (Technical Monitor)

    2000-01-01

The surface bidirectional reflectance distribution function (BRDF) influences not only the radiance just above the surface, but also that emerging from the top of the atmosphere (TOA). In this study we propose a new, fast, and accurate algorithm, CASBIR (correction for anisotropic surface bidirectional reflection), to account for such influences on radiance measured above the TOA. This new algorithm is based on a 4-stream theory that separates the radiation field into direct and diffuse components in both upwelling and downwelling directions. This is important because the direct component accounts for a substantial portion of incident radiation under a clear sky, and the BRDF effect is strongest in the reflection of the direct radiation reaching the surface. The model is validated by comparison with a full-scale, vector radiation transfer model for the atmosphere-surface system. The result demonstrates that CASBIR performs very well (with an overall relative difference of less than one percent) for all solar and viewing zenith and azimuth angles considered, at wavelengths from ultraviolet to near-infrared, over three typical but very different surface types. Applications of this algorithm include accounting for non-Lambertian surface scattering in the emergent radiation above the TOA and a potential approach for surface BRDF retrieval from satellite-measured radiance.

  11. A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream

    PubMed Central

    Ying Wah, Teh

    2014-01-01

Data streams are continuously generated over time from Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of data stream clustering algorithms: they can detect clusters of arbitrary shape, handle outliers, and do not need the number of clusters in advance. Density-based clustering is therefore a natural choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering within a limited time remains a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable to real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets. PMID:25110753

  12. Fast algorithm for minutiae matching based on multiple-ridge information

    NASA Astrophysics Data System (ADS)

    Wang, Guoyou; Hu, Jing

    2001-09-01

Autonomous real-time fingerprint verification, i.e., judging whether or not two fingerprints come from the same finger, is an important and difficult problem in AFIS (Automated Fingerprint Identification Systems). In addition to nonlinear deformation, two fingerprints from the same finger may also be dissimilar due to translation or rotation; all these factors increase the dissimilarity and lead to misjudgment, so the correct verification rate depends strongly on the degree of deformation. In this paper, we present a new, fast, and simple algorithm for fingerprint matching, derived from Chang et al.'s method, to solve the problem of optimal matching between two fingerprints under nonlinear deformation. The proposed algorithm uses not only the feature points of the fingerprints but also multiple-ridge information to reduce the computational complexity of fingerprint verification. Experiments with a number of fingerprint images have shown that this algorithm is more efficient than existing methods due to the reduced search operations.

  13. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. The ICA and FastICA algorithms are Blind Source Separation (BSS) techniques for unmixing source signals using reference data sets. To overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using a Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules to split the image into three regions: shadow + vegetation, bare soil + roads, and buildings; (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.

  14. Evaluation of algorithms for microperfusion assessment by fast simulations of laser Doppler power spectral density.

    PubMed

    Wojtkiewicz, S; Liebert, A; Rix, H; Maniewski, R

    2011-12-21

    In classical laser Doppler (LD) perfusion measurements, zeroth- and first-order moments of the power spectral density of the LD signal are utilized for the calculation of a signal corresponding to the concentration, speed and flow of red blood cells (RBCs). We have analysed the nonlinearities of the moments in relation to RBC speed distributions, parameters of filters utilized in LD instruments and the signal-to-noise ratio. We have developed a new method for fast simulation of the spectrum of the LD signal. The method is based on a superposition of analytically calculated Doppler shift probability distributions derived for the assumed light scattering phase function. We have validated the method by a comparison of the analytically calculated spectra with results of Monte Carlo (MC) simulations. For the semi-infinite, homogeneous medium and the single Doppler scattering regime, the analytical calculation describes LD spectra with the same accuracy as the MC simulation. The method allows for simulating the LD signal in time domain and furthermore analysing the index of perfusion for the assumed wavelength of the light, optical properties of the tissue and concentration of RBCs. Fast simulations of the LD signal in time domain and its frequency spectrum can be utilized in applications where knowledge of the LD photocurrent is required, e.g. in the development of detectors for tissue microperfusion monitoring or in measurements of the LD autocorrelation function for perfusion measurements. The presented fast method for LD spectra calculation can be used as a tool for evaluation of signal processing algorithms used in the LD method and/or for the development of new algorithms of the LD flowmetry and imaging. We analysed LD spectra obtained by analytical calculations using a classical algorithm applied in classical LD perfusion measurements. We observed nonlinearity of the first moment M₁ for low and high speeds of particles (v < 2 mm s⁻¹, v > 10 mm s⁻¹). It was

  15. Evaluation of algorithms for microperfusion assessment by fast simulations of laser Doppler power spectral density

    NASA Astrophysics Data System (ADS)

    Wojtkiewicz, S.; Liebert, A.; Rix, H.; Maniewski, R.

    2011-12-01

    In classical laser Doppler (LD) perfusion measurements, zeroth- and first-order moments of the power spectral density of the LD signal are utilized for the calculation of a signal corresponding to the concentration, speed and flow of red blood cells (RBCs). We have analysed the nonlinearities of the moments in relation to RBC speed distributions, parameters of filters utilized in LD instruments and the signal-to-noise ratio. We have developed a new method for fast simulation of the spectrum of the LD signal. The method is based on a superposition of analytically calculated Doppler shift probability distributions derived for the assumed light scattering phase function. We have validated the method by a comparison of the analytically calculated spectra with results of Monte Carlo (MC) simulations. For the semi-infinite, homogeneous medium and the single Doppler scattering regime, the analytical calculation describes LD spectra with the same accuracy as the MC simulation. The method allows for simulating the LD signal in time domain and furthermore analysing the index of perfusion for the assumed wavelength of the light, optical properties of the tissue and concentration of RBCs. Fast simulations of the LD signal in time domain and its frequency spectrum can be utilized in applications where knowledge of the LD photocurrent is required, e.g. in the development of detectors for tissue microperfusion monitoring or in measurements of the LD autocorrelation function for perfusion measurements. The presented fast method for LD spectra calculation can be used as a tool for evaluation of signal processing algorithms used in the LD method and/or for the development of new algorithms of the LD flowmetry and imaging. We analysed LD spectra obtained by analytical calculations using a classical algorithm applied in classical LD perfusion measurements. We observed nonlinearity of the first moment M1 for low and high speeds of particles (v < 2 mm s-1, v > 10 mm s-1). It was also
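
    The zeroth and first spectral moments referred to in both records reduce to two numerical integrals over the detection bandwidth. A minimal sketch, with the conventional M0 (concentration), M1 (flow), and M1/M0 (mean speed) interpretations; the band limits are illustrative:

        import numpy as np

        def perfusion_indices(f, p, f_lo=20.0, f_hi=20000.0):
            """Zeroth/first moments of a power spectral density p(f) over the
            detection band: M0 ~ RBC concentration, M1 ~ flow, M1/M0 ~ speed."""
            band = (f >= f_lo) & (f <= f_hi)
            fb, pb = f[band], p[band]
            df = np.diff(fb)
            m0 = np.sum(0.5 * (pb[1:] + pb[:-1]) * df)            # trapezoid rule
            m1 = np.sum(0.5 * (fb[1:] * pb[1:] + fb[:-1] * pb[:-1]) * df)
            return m0, m1, m1 / m0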

  16. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier Integral Transform and its high-performance digital equivalent, the Fast Fourier Transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, the timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image-processing algorithms such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. The computationally intensive algorithms therefore warrant alternative implementation using reconfigurable computing (RC) technologies such as Digital Signal Processors (DSP) and Field Programmable Gate Arrays (FPGA), which provide low-cost, compact super-computing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding, and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP), using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also avoids a single point of failure while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) closed optics control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI) to study the greenhouse gases CO2, C2H, H2O, O3, O2, and N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capabilities for future space exploration missions based on on-board image processing

  17. Program for the analysis of time series. [by means of fast Fourier transform algorithm

    NASA Technical Reports Server (NTRS)

    Brown, T. J.; Brown, C. G.; Hardin, J. C.

    1974-01-01

    A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
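
    Several of the listed outputs follow from one segment-averaged FFT pass. A minimal NumPy sketch of auto/cross power spectra and magnitude-squared coherence (a Welch-style average with a Hann window; the original program's windowing details are not reproduced here):

        import numpy as np

        def cross_spectra(x, y, fs, nseg=8):
            """Segment-averaged auto/cross power spectra and magnitude-squared
            coherence of two equally long records sampled at fs."""
            n = len(x) // nseg
            w = np.hanning(n)
            sxx = syy = sxy = 0.0
            for k in range(nseg):
                X = np.fft.rfft(x[k * n:(k + 1) * n] * w)
                Y = np.fft.rfft(y[k * n:(k + 1) * n] * w)
                sxx = sxx + np.abs(X) ** 2
                syy = syy + np.abs(Y) ** 2
                sxy = sxy + X * np.conj(Y)
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            coh = np.abs(sxy) ** 2 / (sxx * syy + 1e-30)
            # A transfer function estimate would be sxy.conj() / sxx.
            return freqs, sxx / nseg, syy / nseg, sxy / nseg, coh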

  18. Fast String Search on Multicore Processors: Mapping fundamental algorithms onto parallel hardware

    SciTech Connect

    Scarpazza, Daniele P.; Villa, Oreste; Petrini, Fabrizio

    2008-04-01

String searching is one of the most basic algorithms. It has a host of applications, including search engines, network intrusion detection, virus scanners, spam filters, and DNA analysis, among others. The Cell processor, with its multiple cores, promises to speed up string searching substantially. In this article, we show how we mapped string searching efficiently onto the Cell. We present two implementations: • The fast implementation supports a small dictionary size (approximately 100 patterns) and provides a throughput of 40 Gbps, which is 100 times faster than reference implementations on x86 architectures. • The heavy-duty implementation is slower (3.3-4.3 Gbps), but supports dictionaries with tens of thousands of strings.

  19. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.

  20. A segmentation algorithm for automated tracking of fast swimming unlabelled cells in three dimensions.

    PubMed

    Pimentel, J A; Carneiro, J; Darszon, A; Corkidi, G

    2012-01-01

Recent advances in microscopy and cytolabelling methods enable the real-time imaging of cells as they move and interact in their real physiological environment. Scenarios in which multiple cells move autonomously in all directions are not uncommon in biology. A remarkable example is the swimming of marine spermatozoa in search of the conspecific oocyte. Imaging cells in these scenarios, particularly when they move fast and are poorly labelled or even unlabelled, requires very fast three-dimensional time-lapse (3D+t) imaging. This 3D+t imaging poses challenges not only to the acquisition systems but also to the image analysis algorithms. It is in this context that this work describes an original automated multiparticle segmentation method to analyse motile translucent cells in 3D microscopical volumes. The proposed segmentation technique takes advantage of the way the cell's appearance changes with distance from the focal plane. The cells' translucent properties and their interaction with light produce a specific pattern: when the cell is within or close to the focal plane, its two-dimensional (2D) appearance matches a bright spot surrounded by a dark ring, whereas when it is farther from the focal plane the cell contrast is inverted, looking like a dark spot surrounded by a bright ring. The proposed method analyses the acquired video sequence frame by frame, taking advantage of 2D image segmentation algorithms to identify and select candidate cellular sections. The crux of the method is in the sequential filtering of the candidate sections, first by template matching with the in-focus and out-of-focus templates and second by considering adjacent candidate sections in 3D. These sequential filters effectively narrow down the number of segmented candidate sections, making the automatic tracking of cells in three dimensions a straightforward operation. PMID:21999166

  1. A segmentation algorithm for automated tracking of fast swimming unlabelled cells in three dimensions.

    PubMed

    Pimentel, J A; Carneiro, J; Darszon, A; Corkidi, G

    2012-01-01

Recent advances in microscopy and cytolabelling methods enable the real-time imaging of cells as they move and interact in their real physiological environment. Scenarios in which multiple cells move autonomously in all directions are not uncommon in biology. A remarkable example is the swimming of marine spermatozoa in search of the conspecific oocyte. Imaging cells in these scenarios, particularly when they move fast and are poorly labelled or even unlabelled, requires very fast three-dimensional time-lapse (3D+t) imaging. This 3D+t imaging poses challenges not only to the acquisition systems but also to the image analysis algorithms. It is in this context that this work describes an original automated multiparticle segmentation method to analyse motile translucent cells in 3D microscopical volumes. The proposed segmentation technique takes advantage of the way the cell's appearance changes with distance from the focal plane. The cells' translucent properties and their interaction with light produce a specific pattern: when the cell is within or close to the focal plane, its two-dimensional (2D) appearance matches a bright spot surrounded by a dark ring, whereas when it is farther from the focal plane the cell contrast is inverted, looking like a dark spot surrounded by a bright ring. The proposed method analyses the acquired video sequence frame by frame, taking advantage of 2D image segmentation algorithms to identify and select candidate cellular sections. The crux of the method is in the sequential filtering of the candidate sections, first by template matching with the in-focus and out-of-focus templates and second by considering adjacent candidate sections in 3D. These sequential filters effectively narrow down the number of segmented candidate sections, making the automatic tracking of cells in three dimensions a straightforward operation.

  2. A multi-threaded mosaicking algorithm for fast image composition of fluorescence bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Bommes, Michael; Stehle, Thomas; Gross, Sebastian; Leonhardt, Steffen; Aach, Til

    2010-02-01

The treatment of urinary bladder cancer is usually carried out using fluorescence endoscopy. A narrow-band bluish illumination activates a tumor marker, resulting in a red fluorescence. Because of the low illumination power, the distance between the endoscope and the bladder wall is kept small during the whole bladder scan, which is carried out before treatment. Thus, only a small field of view (FOV) of the operation field is provided, which impedes navigation and the relocating of multi-focal tumors. Although off-line calculated panorama images can assist surgery planning, the immediate display of successively growing overview images, composed from single video frames in real time during the bladder scan, is well suited to ease navigation and reduce the risk of missing tumors. We therefore developed an image mosaicking algorithm for fluorescence endoscopy. Due to fast computation requirements, a flexible multi-threaded software architecture based on our RealTimeFrame platform was developed. Different algorithm tasks, like image feature extraction, matching, and stitching, are separated and executed by independent processing threads. Thus, different implementations of single tasks can be easily evaluated. In an optimization step we evaluate the trade-off between feature repeatability and total processing time, consider the thread synchronization, and achieve a constant workload for each thread. Thus, a fast computation of panoramic images is performed on a standard hardware platform, while preserving full input image resolution (780x576). Displayed on a second clinical monitor, the extended FOV of the image composition promises high potential for surgery assistance.

  3. Hybrid-dual-Fourier tomographic algorithm for a fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

A reconstruction technique for reducing the computational burden in 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual-Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  4. Spatially reduced image extraction from MPEG-2 video: fast algorithms and applications

    NASA Astrophysics Data System (ADS)

    Song, Junehwa; Yeo, Boon-Lock

    1997-12-01

    The MPEG-2 video standards are targeted for high-quality video broadcast and distribution, and are optimized for efficient storage and transmission. However, it is difficult to process MPEG-2 for video browsing and database applications without first decompressing the video. Yeo and Liu have proposed fast algorithms for the direct extraction of spatially reduced images from MPEG-1 video. Reduced images have been demonstrated to be effective for shot detection, shot browsing and editing, and temporal processing of video for video presentation and content annotation. In this paper, we develop new tools to handle the extra complexity in MPEG-2 video for extracting spatially reduced images. In particular, we propose new classes of discrete cosine transform (DCT) domain and DCT inverse motion compensation operations for handling the interlaced modes in the different frame types of MPEG-2, and design new and efficient algorithms for generating spatially reduced images of an MPEG-2 video. We also describe key video applications on the extracted reduced images.
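
    The spatially reduced images referred to here are essentially DC images: for intra-coded blocks the DCT DC coefficient equals the 8x8 block mean up to a scale factor, which is what can be read from the compressed stream without full decoding; predicted frames additionally need the DCT-domain motion compensation this paper extends to MPEG-2's interlaced modes. A pixel-domain reference for what the compressed-domain shortcut computes:

        import numpy as np

        def dc_image(frame):
            """Reduced image from 8x8 block means; for intra-coded blocks this
            equals the DCT DC coefficient up to a constant scale factor."""
            h, w = frame.shape
            h8, w8 = h - h % 8, w - w % 8
            blocks = frame[:h8, :w8].reshape(h8 // 8, 8, w8 // 8, 8)
            return blocks.mean(axis=(1, 3))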

  5. Fast Fourier transformation resampling algorithm and its application in satellite image processing

    NASA Astrophysics Data System (ADS)

    Li, Zhenping

    2014-01-01

The image resampling algorithm fast Fourier transformation resampling (FFTR) is introduced. FFTR uses a global function in Fourier expansion form to represent an image, and resampling is achieved by introducing a phase shift in the Fourier expansion. A comparison with the cubic spline interpolation approach is presented, which shows that FFTR is more accurate for satellite image resampling. The FFTR algorithm is also generally reversible, because the resampled image and its original share the same Fourier spectrum. The resampling of images with hot spots is discussed. The hot spots in an image are the pixels whose second-order derivatives are an order of magnitude larger than the average value. Images with hot spots are resampled by introducing a local Gaussian function to model the hot spot data, so that the remaining data for the Fourier expansion are continuous. Its application to the infrared channel image of the Geostationary Operational Environmental Satellite Imager, to mitigate a diurnally changing band co-registration, is presented.
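
    The phase-shift mechanism at the core of FFTR is the Fourier shift theorem: a fractional-sample shift is a linear phase ramp across the spectrum, and applying the opposite ramp undoes it, which is the reversibility the abstract mentions. A minimal 1D sketch (the 2D image case applies the same ramp along each axis):

        import numpy as np

        def fft_shift(signal, delta):
            """Shift a periodic 1D signal by a fractional number of samples
            `delta` by applying a linear phase ramp to its spectrum; applying
            -delta afterwards recovers the original samples."""
            n = len(signal)
            k = np.fft.fftfreq(n, d=1.0 / n)     # integer frequency indices
            spec = np.fft.fft(signal) * np.exp(-2j * np.pi * k * delta / n)
            return np.fft.ifft(spec).real        # input assumed real-valued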

  6. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Chen, Jian-Lin; Li, Lei; Wang, Lin-Yuan; Cai, Ai-Long; Xi, Xiao-Qi; Zhang, Han-Ming; Li, Jian-Xin; Yan, Bin

    2015-02-01

The projection matrix model is used to describe the physical relationship between the reconstructed object and the projection. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technology that simulates forward and back projections. This model has low computational complexity and relatively high spatial resolution; however, it admits only a few methods for parallel operation with a matched model scheme. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing the parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtime for the projection and backprojection operations with our model is approximately 4.5 s and 10.5 s per loop, respectively, with an image size of 256×256×256 and 360 projections with a size of 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).

  7. A fast loop-closure algorithm to accelerate residue matching in computational enzyme design.

    PubMed

    Xue, Jing; Huang, Xiaoqiang; Lin, Min; Zhu, Yushan

    2016-02-01

    Constructing an active site on an inert scaffold is still a challenge in chemical biology. Herein, we describe the incorporation of a Newton-direction-based fast loop-closure algorithm for catalytic residue matching into our enzyme design program ProdaMatch. This was developed to determine the sites and geometries of the catalytic residues as well as the position of the transition state with high accuracy in order to satisfy the geometric constraints on the interactions between catalytic residues and the transition state. Loop-closure results for 64,827 initial loops derived from 21 loops in the test set showed that 99.51% of the initial loops closed to within 0.05 Å in fewer than 400 iteration steps, while the large majority of the initial loops closed within 100 iteration steps. The revised version of ProdaMatch containing the novel loop-closure algorithm identified all native matches for ten scaffolds in the native active-site recapitulation test. Its high speed and accuracy when matching catalytic residues with a scaffold make this version of ProdaMatch potentially useful for scaffold selection through the incorporation of more complex theoretical enzyme models which may yield higher initial activities in de novo enzyme design.

  8. A Fast Full Tensor Gravity computation algorithm for High Resolution 3D Geologic Interpretations

    NASA Astrophysics Data System (ADS)

    Jayaram, V.; Crain, K.; Keller, G. R.

    2011-12-01

    approach on different CPU-GPU system configurations. The algorithm calculates the expected gravity at station locations where the observed gravity and FTG data were acquired. This algorithm can be used for all fast forward model calculations of 3D geologic interpretations for data from airborne, space and submarine gravity, and FTG instrumentation.

  9. A fast algorithm for voxel-based deterministic simulation of X-ray imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2008-04-01

    A deterministic method based on the ray-tracing technique is known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. Algorithm speed is a critical issue when simulating hundreds of images, notably for simulating a tomographic acquisition or, even more so, X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs. As a result, simulated radiographs can typically be obtained in a fraction of a second on a simple personal computer.
    Program summary
    Program title: X-ray
    Catalogue identifier: AEAD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 416 257
    No. of bytes in distributed program, including test data, etc.: 6 018 263
    Distribution format: tar.gz
    Programming language: C (Visual C++)
    Computer: Any PC. Tested on a DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
    Operating system: Windows XP
    Classification: 14, 21.1
    Nature of problem: Radiographic simulation of voxelized objects based on the ray-tracing technique.
    Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
    Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video
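
    Since the program summary credits a fast ray-box intersection routine as the core of the simulation, here is the standard slab-method test such a routine typically implements; a generic sketch, not the distributed CPC code:

```python
import numpy as np

def ray_box_intersect(origin, direction, box_min, box_max):
    """Slab method: intersect a ray with an axis-aligned box.
    Returns the (t_near, t_far) parameter range or None if missed.
    Assumes no exactly-zero direction components for brevity."""
    inv = 1.0 / direction
    t1, t2 = (box_min - origin) * inv, (box_max - origin) * inv
    t_near = np.minimum(t1, t2).max()
    t_far = np.maximum(t1, t2).min()
    return (t_near, t_far) if t_near <= t_far and t_far >= 0.0 else None

hit = ray_box_intersect(np.array([-1.0, 0.5, 0.5]),
                        np.array([1.0, 0.01, 0.02]),
                        np.zeros(3), np.ones(3))   # enters the unit voxel box
```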

  10. Fast multi-scale edge detection algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zang, Jie; Song, Yanjun; Li, Shaojuan; Luo, Guoyun

    2011-11-01

    Traditional edge detection algorithms amplify noise to some extent, introducing large errors, so their edge detection ability is limited. When analyzing the low-frequency content of an image, wavelet analysis can reduce the time resolution; for the high-frequency content, it operates at high time resolution and focuses on the transient characteristics of the signal at reduced frequency resolution. Because of this self-adaptivity to the signal, the wavelet transform can extract useful information from the edges of an image. The wavelet transform operates at various scales, and the transform at each scale provides certain edge information, hence the name multi-scale edge detection. In multi-scale edge detection, the original signal is first smoothed at different scales, and mutations of the original signal are then detected from the first or second derivative of the smoothed signal; the mutations are the edges. Edge detection is thus equivalent to signal detection in different frequency bands after wavelet decomposition. This article uses this algorithm, which takes into account both the details and the profile of an image, to detect signal mutations at different scales; it provides the edge information needed for image analysis, target recognition and machine vision, and achieves good results.
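
    As a concrete illustration of the scheme, the sketch below computes per-scale edge strength from the detail coefficients of a 2-D discrete wavelet transform. It assumes the PyWavelets package and a Haar wavelet; the article does not specify its filters, so treat both as placeholders:

```python
import numpy as np
import pywt

def multiscale_edges(image, wavelet="haar", levels=3):
    """At each scale, the modulus of the horizontal/vertical detail
    coefficients marks intensity mutations, i.e. candidate edges."""
    edge_maps = []
    approx = np.asarray(image, dtype=float)
    for _ in range(levels):
        approx, (ch, cv, cd) = pywt.dwt2(approx, wavelet)
        edge_maps.append(np.hypot(ch, cv))     # edge strength at this scale
    return edge_maps

maps = multiscale_edges(np.random.rand(256, 256))
```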

  11. Fast Quantitative Analysis Of Museum Objects Using Laser-Induced Breakdown Spectroscopy And Multiple Regression Algorithms

    NASA Astrophysics Data System (ADS)

    Lorenzetti, G.; Foresta, A.; Palleschi, V.; Legnaioli, S.

    2009-09-01

    The recent development of mobile instrumentation, specifically devoted to in situ analysis and study of museum objects, allows the acquisition of many LIBS spectra in a very short time. However, such a large amount of data calls for new analytical approaches that guarantee prompt analysis of the results obtained. In this communication, we present and discuss the advantages of statistical analytical methods, such as partial least squares (PLS) multiple regression algorithms, vs. the classical calibration-curve approach. PLS algorithms provide the information on the composition of the objects under study in real time; this feature of the method, compared to the traditional off-line analysis of the data, is extremely useful for optimizing the measurement times and the number of points associated with the analysis. In fact, the real-time availability of compositional information makes it possible to concentrate attention on the most `interesting' parts of the object, without over-sampling zones that would not provide useful information for the scholars or the conservators. Some examples of applications of this method will be presented, including studies recently performed by researchers of the Applied Laser Spectroscopy Laboratory on museum bronze objects.
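
    A minimal sketch of the PLS calibration step with scikit-learn, using synthetic stand-ins for the LIBS spectra and reference concentrations (the component count and array shapes are arbitrary assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.random((40, 2048))      # 40 calibration spectra, 2048 channels
Y = rng.random((40, 3))         # known concentrations of 3 elements

pls = PLSRegression(n_components=8).fit(X, Y)
new_spectrum = rng.random((1, 2048))
composition = pls.predict(new_spectrum)   # near-real-time estimate
```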

  12. A fast algorithm for the recursive calculation of dominant singular subspaces

    NASA Astrophysics Data System (ADS)

    Mastronardi, N.; van Barel, M.; Vandebril, R.

    2008-09-01

    In many engineering applications it is required to compute the dominant subspace of a matrix A of dimension m×n, with m ≫ n. Often the matrix A is produced incrementally, so all the columns are not available simultaneously. This problem arises, e.g., in image processing, where each column of the matrix A represents an image of a given sequence leading to a singular value decomposition-based compression [S. Chandrasekaran, B.S. Manjunath, Y.F. Wang, J. Winkeler, H. Zhang, An eigenspace update algorithm for image analysis, Graphical Models and Image Process. 59 (5) (1997) 321-332]. Furthermore, the so-called proper orthogonal decomposition approximation uses the left dominant subspace of a matrix A where a column consists of a time instance of the solution of an evolution equation, e.g., the flow field from a fluid dynamics simulation. Since these flow fields tend to be very large, only a small number can be stored efficiently during the simulation, and therefore an incremental approach is useful [P. Van Dooren, Gramian based model reduction of large-scale dynamical systems, in: Numerical Analysis 1999, Chapman & Hall, CRC Press, London, Boca Raton, FL, 2000, pp. 231-247]. In this paper an algorithm for computing an approximation of the left dominant subspace of size k of A, with k ≪ m, n, is proposed, requiring at each iteration O(mk + k²) floating point operations. Moreover, the proposed algorithm exhibits a lot of parallelism that can be exploited for a suitable implementation on a parallel computer.
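
    The flavour of such a recursive update can be shown in a few lines: fold each arriving column into a rank-k factorization via a small core SVD. This simplified variant costs O(mk + k³) per column, slightly more than the paper's O(mk + k²) scheme, and is only a sketch of the idea:

```python
import numpy as np

def fold_in_column(U, s, a, k):
    """Update the rank-k dominant subspace estimate (U, s) with a new
    column a (incremental SVD step, simplified)."""
    p = U.T @ a
    r = a - U @ p
    rho = np.linalg.norm(r)
    core = np.block([[np.diag(s), p[:, None]],
                     [np.zeros((1, k)), np.array([[rho]])]])
    Uc, sc, _ = np.linalg.svd(core)
    q = r / rho if rho > 1e-12 else np.zeros_like(a)
    U_new = np.column_stack([U, q]) @ Uc
    return U_new[:, :k], sc[:k]

m, k = 1000, 5
U, _ = np.linalg.qr(np.random.randn(m, k))
s = np.ones(k)
for _ in range(50):                       # columns arriving one at a time
    U, s = fold_in_column(U, s, np.random.randn(m), k)
```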

  13. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  14. Research on fast Fourier transforms algorithm of huge remote sensing image technology with GPU and partitioning technology.

    PubMed

    Yang, Xue; Li, Xue-You; Li, Jia-Guo; Ma, Jun; Zhang, Li; Yang, Jan; Du, Quan-Ye

    2014-02-01

    The fast Fourier transform (FFT) is a basic approach to remote sensing image processing. With the improvement of remote sensing image capture capabilities, featuring hyperspectral, high-spatial-resolution and high-temporal-resolution data, how to use FFT technology to efficiently process huge remote sensing images has become a critical step and a research hot spot of current image processing technology. The FFT algorithm, one of the basic algorithms of image processing, can be used for stripe noise removal, image compression, image registration, etc., in processing remote sensing images. The CUFFT library is a GPU-based FFT algorithm library; FFTW is an FFT algorithm library developed for CPUs on the PC platform, and is currently the fastest CPU-based FFT library. However, both methods share a common problem: once the available video memory or main memory is smaller than the image, out-of-memory errors or memory overflow will occur when realizing the image FFT arithmetic. To address this problem, a GPU- and partitioning-technology-based Huge Remote Fast Fourier Transform (HRFFT) algorithm is proposed in this paper. By improving the FFT algorithm in the CUFFT library, the problem of out-of-memory errors and memory overflow is solved. Moreover, the method is validated by an experiment with a CCD image from the HJ-1A satellite. When applied to practical image processing, it improves the effect of the processing and speeds it up, saving computation time and achieving sound results.
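
    The separability that any partitioned huge-image FFT exploits is easy to demonstrate: a 2-D FFT is row FFTs followed by column FFTs, and each pass can be done a batch of lines at a time. The in-memory sketch below shows only the decomposition; a real HRFFT-style code would stream the tiles between disk, host and GPU memory:

```python
import numpy as np

def fft2_in_chunks(image, chunk=1024):
    """2-D FFT as batched 1-D passes, transforming only `chunk`
    rows or columns at once."""
    out = image.astype(np.complex128)
    for r in range(0, out.shape[0], chunk):             # row pass
        out[r:r + chunk] = np.fft.fft(out[r:r + chunk], axis=1)
    for c in range(0, out.shape[1], chunk):             # column pass
        out[:, c:c + chunk] = np.fft.fft(out[:, c:c + chunk], axis=0)
    return out

img = np.random.rand(2048, 2048)
assert np.allclose(fft2_in_chunks(img), np.fft.fft2(img))
```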

  15. An accelerated photo-magnetic imaging reconstruction algorithm based on an analytical forward solution and a fast Jacobian assembly method

    NASA Astrophysics Data System (ADS)

    Nouizi, F.; Erkol, H.; Luk, A.; Marks, M.; Unlu, M. B.; Gulsen, G.

    2016-10-01

    We previously introduced photo-magnetic imaging (PMI), an imaging technique that illuminates the medium under investigation with near-infrared light and measures the induced temperature increase using magnetic resonance thermometry (MRT). Using a multiphysics solver combining photon migration and heat diffusion, PMI models the spatiotemporal distribution of temperature variation and recovers high-resolution optical absorption images from these temperature maps. In this paper, we present a new fast non-iterative reconstruction algorithm for PMI. The new algorithm uses analytic methods for the resolution of the forward problem and the assembly of the sensitivity matrix. We validate our new analytic-based algorithm against the first-generation finite element method (FEM) based reconstruction algorithm previously developed by our team. The validation is performed using first synthetic data and afterwards real MRT-measured temperature maps. Our new method accelerates the reconstruction process 30-fold when compared to a single iteration of the FEM-based algorithm.

  16. The 183-WSL fast rain rate retrieval algorithm: Part I: Retrieval design

    NASA Astrophysics Data System (ADS)

    Laviola, Sante; Levizzani, Vincenzo

    2011-03-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) fast retrieval method retrieves rain rates and classifies precipitation types for applications in nowcasting and weather monitoring. The retrieval scheme consists of two fast algorithms, over land and over ocean, that use the water vapour absorption lines at 183.31 GHz corresponding to the channels 3 (183.31 ± 1 GHz), 4 (183.31 ± 3 GHz) and 5 (183.31 ± 7 GHz) of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on NOAA-15-18 and Metop-A satellite series, respectively. The method retrieves rain rates by exploiting the extinction of radiation due to rain drops following four subsequent steps. After ingesting the satellite data stream, the window channels at 89 and 150 GHz are used to compute scattering-based thresholds and the 183-WSLW module for rainfall area discrimination and precipitation type classification as stratiform or convective on the basis of the thresholds calculated for land/mixed and sea surfaces. The thresholds are based on the brightness temperature difference Δwin = TB89 - TB150 and are different over land (L) and over sea (S): cloud droplets and water vapour (Δwin < 3 K L; Δwin < 0 K S), stratiform rain (3 K < Δwin < 10 K L; 0 K < Δwin < 10 K S), and convective rain (Δwin > 10 K L and S). The thresholds, initially empirically derived from observations, are corroborated by the simulations of the RTTOV radiative transfer model applied to 20000 ECMWF atmospheric profiles at midlatitudes and the use of data from the Nimrod radar network. A snow cover mask and a digital elevation model are used to eliminate false rain area attribution, especially over elevated terrain. A probability of detection logistic function is also applied in the transition region from no-rain to rain adjacent to the clouds to ensure continuity of the rainfall field. Finally, the last step is dedicated to the rain rate retrieval with the modules 183-WSLS (stratiform
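
    The screening step can be written down directly from the thresholds quoted above; a toy per-pixel version (the function name and the handling of values exactly at 3 K or 10 K are my own choices):

```python
def classify_183wsl(tb89, tb150, over_land):
    """Classify one pixel from the window-channel brightness
    temperature difference d = TB89 - TB150 (values in kelvin)."""
    d = tb89 - tb150
    lower = 3.0 if over_land else 0.0
    if d < lower:
        return "cloud droplets / water vapour"
    if d <= 10.0:
        return "stratiform rain"
    return "convective rain"

print(classify_183wsl(265.0, 251.0, over_land=True))   # convective rain
```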

  17. A fast video clip retrieval algorithm based on VA-file

    NASA Astrophysics Data System (ADS)

    Liu, Fangjie; Dong, DaoGuo; Miao, Xiaoping; Xue, XiangYang

    2003-12-01

    Video clip retrieval is a significant research topic in content-based multimedia retrieval. Generally, video clip retrieval proceeds as follows: (1) segment a video clip into shots; (2) extract a key frame from each shot as its representative; (3) denote every key frame as a feature vector, so that a video clip is denoted as a sequence of feature vectors; (4) retrieve matching clips by computing the similarity between the feature vector sequence of a query clip and that of any clip in the database. To carry out fast video clip retrieval, an index structure is indispensable. According to our literature survey, the S2-tree [17] is the only index structure that has been applied to support video clip retrieval; it combines the characteristics of the X-tree and the Suffix-tree and converts series-vector retrieval into string matching. But the S2-tree structure is not applicable if the feature vectors' dimension is beyond 20, because the X-tree itself cannot support similarity queries effectively when vector dimensions exceed 20. Furthermore, it cannot support flexible similarity definitions between two vector sequences. The VA-file represents vectors approximately by compressing the original data, and it maintains the original order when representing vectors in a sequence, which is a very valuable merit for vector sequence matching. In this paper, a new video clip similarity model as well as a video clip retrieval algorithm based on the VA-file are proposed. Experiments show that our algorithm dramatically shortens retrieval time compared to sequential scanning without an index structure.

  18. An algorithm for computing the 2D structure of fast rotating stars

    NASA Astrophysics Data System (ADS)

    Rieutord, Michel; Espinosa Lara, Francisco; Putigny, Bertrand

    2016-08-01

    Stars may be understood as self-gravitating masses of a compressible fluid whose radiative cooling is compensated by nuclear reactions or gravitational contraction. The understanding of their time evolution requires the use of detailed models that account for a complex microphysics including that of opacities, equation of state and nuclear reactions. The present stellar models are essentially one-dimensional, namely spherically symmetric. However, the interpretation of recent data like the surface abundances of elements or the distribution of internal rotation has reached the limits of validity of one-dimensional models because of their very simplified representation of large-scale fluid flows. In this article, we describe the ESTER code, which is the first code able to compute in a consistent way a two-dimensional model of a fast rotating star including its large-scale flows. Compared to classical 1D stellar evolution codes, many numerical innovations have been introduced to deal with this complex problem. First, a spectral discretization based on spherical harmonics and Chebyshev polynomials is used to represent the 2D axisymmetric fields. A nonlinear mapping adapts the coordinates to the spheroidal shape of the star and allows a smooth spectral representation of the fields. The properties of Picard and Newton iterations for solving the nonlinear partial differential equations of the problem are discussed. It turns out that the Picard scheme is efficient for computing simple polytropic stars, but the Newton algorithm is unsurpassed when stellar models include complex microphysics. Finally, we discuss the numerical efficiency of our solver of Newton iterations. This linear solver combines the iterative Conjugate Gradient Squared algorithm with an LU factorization serving as a preconditioner of the Jacobian matrix.

  19. A Fast, Locally Adaptive, Interactive Retrieval Algorithm for the Analysis of DIAL Measurements

    NASA Astrophysics Data System (ADS)

    Samarov, D. V.; Rogers, R.; Hair, J. W.; Douglass, K. O.; Plusquellic, D.

    2010-12-01

    Differential absorption light detection and ranging (DIAL) is a laser-based tool used for remote, range-resolved measurement of particular gases in the atmosphere, such as carbon dioxide and methane. In many instances it is of interest to study how these gases are distributed over a region such as a landfill, factory, or farm. While a single DIAL measurement only tells us about the distribution of a gas along a single path, a sequence of consecutive measurements provides information on how that gas is distributed over a region, making DIAL a natural choice for such studies. DIAL measurements present a number of interesting challenges: first, in order to convert the raw data to concentration it is necessary to estimate the derivative along the path of the measurement. Second, as the distribution of gases across a region can be highly heterogeneous, it is important that the spatial nature of the measurements be taken into account. Finally, since it is common for the set of collected measurements to be quite large, it is important for the method to be computationally efficient. Existing work based on Local Polynomial Regression (LPR) addresses the first two issues, but computational speed remains an open problem. Another desirable property is to allow user input into the algorithm. In this talk we present a novel method based on LPR which utilizes a variant of the RODEO algorithm to provide a fast, locally adaptive and interactive approach to the analysis of DIAL measurements. The methodology is motivated by and applied to several simulated examples and a study out of NASA Langley Research Center (LaRC) on the estimation of aerosol extinction in the atmosphere. A comparison study of our method against several other algorithms is also presented.
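
    To illustrate the LPR step that turns raw range-resolved returns into a derivative (and hence concentration), here is a fixed-window local polynomial fit; the RODEO-style local bandwidth adaptation the talk describes is not reproduced, and all parameter values are illustrative:

```python
import numpy as np

def local_poly_derivative(x, y, half_window=5, degree=2):
    """Estimate dy/dx at each sample by fitting a low-order polynomial
    in a sliding window centred on that sample."""
    dydx = np.empty_like(y)
    for i in range(x.size):
        lo, hi = max(0, i - half_window), min(x.size, i + half_window + 1)
        coeffs = np.polyfit(x[lo:hi] - x[i], y[lo:hi], degree)
        dydx[i] = coeffs[-2]            # slope of the local fit at x[i]
    return dydx

x = np.linspace(0.0, 10.0, 200)
y = np.exp(-0.3 * x) + 0.01 * np.random.randn(x.size)
slope = local_poly_derivative(x, y)     # ~ -0.3 * exp(-0.3 x)
```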

  20. Multi channel thermal hydraulic analysis of gas cooled fast reactor using genetic algorithm

    SciTech Connect

    Drajat, R. Z.; Su'ud, Z.; Soewono, E.; Gunawan, A. Y.

    2012-05-22

    There are three analyses to be done in the design process of a nuclear reactor, i.e., neutronic analysis, thermal hydraulic analysis and thermodynamic analysis. The focus of this article is the thermal hydraulic analysis, which has a very important role in terms of system efficiency and the selection of the optimal design. The analysis is performed for a type of Gas Cooled Fast Reactor (GFR) cooled with helium (He). The heat from nuclear fission reactions in the reactor is distributed through conduction in the fuel elements. Furthermore, the heat is delivered by convection to the fluid flowing in the cooling channels. Temperature changes that occur in the coolant channels cause a pressure decrease at the top of the reactor core. The governing equations in each channel consist of the mass balance, momentum balance, energy balance, mass conservation and ideal gas equations. The problem is reduced to finding the flow rate in each channel such that the pressure drops at the top of the reactor core are all equal. The problem is solved numerically with the genetic algorithm method, yielding the flow rate and temperature distribution in each channel.

  1. Computation of radiation pressure force on arbitrary shaped homogenous particles by multilevel fast multipole algorithm.

    PubMed

    Yang, Minglin; Ren, Kuan Fang; Gou, Mingjiang; Sheng, Xinqing

    2013-06-01

    A full-wave numerical method based on the surface integral equation for computing the radiation pressure force (RPF) exerted by a shaped light beam on arbitrarily shaped homogenous particles is presented. The multilevel fast multipole algorithm is employed to reduce the memory requirement and to improve the method's capability. The resultant matrix equation is solved with an iterative solver to obtain the equivalent electric and magnetic currents. The RPF is then computed as the vector flux of the Maxwell stress tensor over a spherical surface tightly enclosing the particle. Analytical expressions for the electromagnetic fields of the incident beam in the near region are therefore used. Numerical results are presented to illustrate the validity and capability of the developed method. Good agreement between our method and the Lorenz-Mie theory is found for spherical and small spheroidal particles, while our method is capable of computing the RPF of any shaped beam on a relatively large particle of complex shape. Tests on ellipsoidal and red blood cell-like particles illuminated by a Gaussian beam have shown that the particle size can be as large as 50-100 wavelengths for relative refractive indices of 1.33 and 1.1, respectively.
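
    For reference, the quantity being integrated is the time-averaged Maxwell stress tensor; in a standard notation (which may differ from the paper's) the radiation pressure force is

```latex
\mathbf{F} \;=\; \oint_{S} \langle \overline{\overline{T}} \rangle \cdot \hat{\mathbf{n}} \, dS,
\qquad
\langle \overline{\overline{T}} \rangle \;=\; \frac{1}{2}\,\operatorname{Re}\!\left[
\varepsilon\,\mathbf{E}\mathbf{E}^{*} + \mu\,\mathbf{H}\mathbf{H}^{*}
- \frac{1}{2}\!\left(\varepsilon\,|\mathbf{E}|^{2} + \mu\,|\mathbf{H}|^{2}\right)\overline{\overline{I}}
\right],
```

    with S the spherical surface enclosing the particle and E, H the total (incident plus scattered) fields.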

  2. HaploGrep: a fast and reliable algorithm for automatic classification of mitochondrial DNA haplogroups.

    PubMed

    Kloss-Brandstätter, Anita; Pacher, Dominic; Schönherr, Sebastian; Weissensteiner, Hansi; Binna, Robert; Specht, Günther; Kronenberg, Florian

    2011-01-01

    An ongoing source of controversy in mitochondrial DNA (mtDNA) research is the detection of numerous errors in mtDNA profiles that have led to erroneous conclusions and false disease associations. Most of these controversies could be avoided if the samples' haplogroup status were taken into consideration. Knowing the mtDNA haplogroup affiliation is a critical prerequisite for studying mechanisms of human evolution and discovering genes involved in complex diseases, and validating phylogenetic consistency using haplogroup classification is an important step in quality control. However, despite the availability of Phylotree, a regularly updated classification tree of global mtDNA variation, the process of haplogroup classification is still time-consuming and error-prone, as researchers have to manually compare the polymorphisms found in a population sample to those summarized in Phylotree, polymorphism by polymorphism, sample by sample. We present HaploGrep, a fast, reliable and straightforward algorithm implemented in a Web application to determine the haplogroup affiliation of thousands of mtDNA profiles genotyped for the entire mtDNA or any part of it. HaploGrep uses the latest version of Phylotree and offers an all-in-one solution for quality assessment of mtDNA profiles in clinical genetics, population genetics and forensics. HaploGrep can be accessed freely at http://haplogrep.uibk.ac.at.

  3. On the applicability of genetic algorithms to fast solar spectropolarimetric inversions for vector magnetography

    NASA Astrophysics Data System (ADS)

    Harker, Brian J.

    The measurement of vector magnetic fields on the sun is one of the most important diagnostic tools for characterizing solar activity. The ubiquitous solar wind is guided into interplanetary space by open magnetic field lines in the upper solar atmosphere. Highly-energetic solar flares and Coronal Mass Ejections (CMEs) are triggered in lower layers of the solar atmosphere by the driving forces at the visible "surface" of the sun, the photosphere. The driving forces there tangle and interweave the vector magnetic fields, ultimately leading to an unstable field topology with large excess magnetic energy, and this excess energy is suddenly and violently released by magnetic reconnection, emitting intense broadband radiation that spans the electromagnetic spectrum, accelerating billions of metric tons of plasma away from the sun, and finally relaxing the magnetic field to lower-energy states. These eruptive flaring events can have severe impacts on the near-Earth environment and the human technology that inhabits it. This dissertation presents a novel inversion method for inferring the properties of the vector magnetic field from telescopic measurements of the polarization states (Stokes vector) of the light received from the sun, in an effort to develop a method that is fast, accurate, and reliable. One of the long-term goals of this work is to develop such a method that is capable of rapidly-producing characterizations of the magnetic field from time-sequential data, such that near real-time projections of the complexity and flare-productivity of solar active regions can be made. This will be a boon to the field of solar flare forecasting, and should help mitigate the harmful effects of space weather on mankind's space-based endeavors. To this end, I have developed an inversion method based on genetic algorithms (GA) that have the potential for achieving such high-speed analysis.

  4. Multilevel fast multipole algorithm for elastic wave scattering by large three-dimensional objects

    NASA Astrophysics Data System (ADS)

    Tong, Mei Song; Chew, Weng Cho

    2009-02-01

    Multilevel fast multipole algorithm (MLFMA) is developed for solving elastic wave scattering by large three-dimensional (3D) objects. Since the governing set of boundary integral equations (BIE) for the problem includes both compressional and shear waves with different wave numbers in one medium, a double-tree structure for each medium is used in the MLFMA implementation. When both the object and the surrounding media are elastic, four wave numbers in total and thus four FMA trees are involved. We employ the Nyström method to discretize the BIE and generate the corresponding matrix equation. The MLFMA is used to accelerate the solution process by reducing the complexity of the matrix-vector product from O(N²) to O(N log N) in iterative solvers. The multiple-tree structure differs from the single-tree framework in electromagnetics (EM) and acoustics, and greatly complicates the MLFMA implementation due to the different definitions of well-separated groups in different FMA trees. Our Nyström method has made use of the cancellation of leading terms in the series expansion of integral kernels to handle hypersingularities in near terms. This feature is kept in the MLFMA by seeking the common near patches in different FMA trees and treating the involved near terms synergistically. Due to the high cost of the multiple-tree structure, our numerical examples show that we can only solve elastic wave scattering problems with 0.3-0.4 million unknowns on our Dell Precision 690 workstation using one core.

  5. A fast color image enhancement algorithm based on Max Intensity Channel.

    PubMed

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-30

    In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without the multi-scale Gaussian filtering that has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component, which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better for images with high illumination variations than other methods. Further comparisons using images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details. PMID:25110395
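
    A rough sketch of the illumination-estimation chain described above, assuming OpenCV and SciPy are available; a plain bilateral filter stands in for the paper's fast cross-bilateral filter, and the structuring-element and filter parameters are guesses:

```python
import numpy as np
import cv2
from scipy import ndimage

def estimate_illumination(img_bgr):
    """MIC-style illumination map: per-pixel max over color channels,
    gray-scale closing, then edge-preserving smoothing."""
    mic = img_bgr.max(axis=2).astype(np.float32)       # Max Intensity Channel
    closed = ndimage.grey_closing(mic, size=(15, 15))  # suppress dark gaps
    return np.maximum(cv2.bilateralFilter(closed, 9, 25.0, 9.0), 1.0)

img = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
L = estimate_illumination(img)
reflectance = img.astype(np.float32) / L[..., None]    # per RGB channel
```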

  6. A fast color image enhancement algorithm based on Max Intensity Channel

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-01

    In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without the multi-scale Gaussian filtering that has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is introduced, assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component, which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit in the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better for images with high illumination variations than other methods. Further comparisons using images from the National Aeronautics and Space Administration and a wearable camera, eButton, have shown the high performance of the new method, with better color restoration and preservation of image details.

  7. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrically conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets, or Coiflets, are employed to implement Galerkin's procedure in the method of moments (MoM). Due to their high-precision one-point quadrature, the Coiflets yield fast evaluations of most off-diagonal entries, reducing the matrix fill effort from O(N²) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence for the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting the multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption are not noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces asymptotically reduces to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posed problems. Compared with previous publications and laboratory measurements, good agreement is observed.

  8. A Reduced-Complexity Fast Algorithm for Software Implementation of the IFFT/FFT in DMT Systems

    NASA Astrophysics Data System (ADS)

    Chan, Tsun-Shan; Kuo, Jen-Chih; Wu, An-Yeu (Andy)

    2002-12-01

    The discrete multitone (DMT) modulation/demodulation scheme is the standard transmission technique in asymmetric digital subscriber lines (ADSL) and very-high-speed digital subscriber lines (VDSL). Although DMT can achieve higher data rates than other modulation/demodulation schemes, its computational complexity is too high for cost-efficient implementations. For example, it requires a 512-point IFFT/FFT as the modulation/demodulation kernel in ADSL systems, and even larger sizes in VDSL systems. The large block size results in a heavy computational load on programmable digital signal processors (DSPs). In this paper, we derive a computationally efficient fast algorithm for the IFFT/FFT. The proposed algorithm avoids the complex-domain operations that are inevitable in conventional IFFT/FFT computation, so the resulting software function requires less computational complexity. We show that it needs only 17% of the multiplications of the Cooley-Tukey algorithm to compute the IFFT and FFT. Hence, the proposed fast algorithm is very suitable for firmware development in reducing the MIPS count in programmable DSPs.
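
    The key structural fact is that a DMT symbol's spectrum is Hermitian-symmetric, so the time-domain output is real and roughly half of a general complex IFFT's work can be avoided. The numpy demonstration below shows that structure only; it is not the paper's reduced-complexity factorization:

```python
import numpy as np

N = 256                                   # half-size; the DMT symbol has 2N samples
bits = np.random.choice([-1, 1], (2, N - 1))
qam = bits[0] + 1j * bits[1]              # 4-QAM on subcarriers 1..N-1
spectrum = np.concatenate(([0.0], qam))   # DC bin left empty
symbol = np.fft.irfft(spectrum, n=2 * N)  # real-valued, exploits the symmetry
assert np.isrealobj(symbol)
```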

  9. A novel algorithm for calling mRNA m6A peaks by modeling biological variances in MeRIP-seq data

    PubMed Central

    Cui, Xiaodong; Meng, Jia; Zhang, Shaowu; Chen, Yidong; Huang, Yufei

    2016-01-01

    Motivation: N6-methyl-adenosine (m6A) is the most prevalent mRNA methylation, and precise prediction of its mRNA location is important for understanding its function. A recent sequencing technology, known as Methylated RNA Immunoprecipitation Sequencing technology (MeRIP-seq), has been developed for transcriptome-wide profiling of m6A. We previously developed a peak calling algorithm called exomePeak. However, exomePeak over-simplifies data characteristics and ignores the variance in reads among replicates and the dependency of reads across a site region. To further improve performance, a new model is needed to address these important issues of MeRIP-seq data. Results: We propose a novel, graphical model-based peak calling method, MeTPeak, for transcriptome-wide detection of m6A sites from MeRIP-seq data. MeTPeak explicitly models the read count of an m6A site and introduces a hierarchical layer of Beta variables to capture the variance and a hidden Markov model to characterize the dependency of reads across a site. In addition, we developed a constrained Newton’s method and designed a log-barrier function to compute analytically intractable, positively constrained Beta parameters. We applied our algorithm to simulated and real biological datasets and demonstrated significant improvement in detection performance and robustness over exomePeak. Prediction results on publicly available MeRIP-seq datasets are also validated and shown to be able to recapitulate the known patterns of m6A, further validating the improved performance of MeTPeak. Availability and implementation: The package ‘MeTPeak’ is implemented in R and C++, and additional details are available at https://github.com/compgenomics/MeTPeak Contact: yufei.huang@utsa.edu or xdchoi@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307641

  10. Ultra-fast local-haplotype variant calling using paired-end DNA-sequencing data reveals somatic mosaicism in tumor and normal blood samples

    PubMed Central

    Sengupta, Subhajit; Gulukota, Kamalakar; Zhu, Yitan; Ober, Carole; Naughton, Katherine; Wentworth-Sheilds, William; Ji, Yuan

    2016-01-01

    Somatic mosaicism refers to the existence of somatic mutations in a fraction of somatic cells in a single biological sample. Its importance has mainly been discussed in theory although experimental work has started to emerge linking somatic mosaicism to disease diagnosis. Through novel statistical modeling of paired-end DNA-sequencing data using blood-derived DNA from healthy donors as well as DNA from tumor samples, we present an ultra-fast computational pipeline, LocHap, that searches for multiple single nucleotide variants (SNVs) that are scaffolded by the same reads. We refer to scaffolded SNVs as local haplotypes (LH). When an LH exhibits more than two genotypes, we call it a local haplotype variant (LHV). The presence of LHVs is considered evidence of somatic mosaicism because a genetically homogeneous cell population will not harbor LHVs. Applying LocHap to whole-genome and whole-exome sequence data in DNA from normal blood and tumor samples, we find widespread LHVs across the genome. Importantly, we find more LHVs in tumor samples than in normal samples, and more in older adults than in younger ones. We confirm the existence of LHVs and somatic mosaicism by validation studies in normal blood samples. LocHap is publicly available at http://www.compgenome.org/lochap. PMID:26420835
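
    The counting idea behind LHV detection can be caricatured in a few lines: for each pair of nearby SNVs covered by the same reads, collect the distinct two-site allele patterns; more than two patterns cannot come from a genetically homogeneous diploid cell population. This toy omits LocHap's statistical error model entirely:

```python
from collections import defaultdict

def candidate_lhvs(read_calls):
    """read_calls: one dict per read mapping (chrom, pos) -> base.
    Returns SNV pairs showing more than two distinct patterns."""
    patterns = defaultdict(set)
    for read in read_calls:
        sites = sorted(read)
        for a, b in zip(sites, sites[1:]):
            patterns[(a, b)].add((read[a], read[b]))
    return {pair: alleles for pair, alleles in patterns.items()
            if len(alleles) > 2}

reads = [{(1, 100): "A", (1, 160): "T"},
         {(1, 100): "G", (1, 160): "T"},
         {(1, 100): "A", (1, 160): "C"}]
print(candidate_lhvs(reads))              # the pair (1,100)-(1,160) is flagged
```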

  11. Ultra-fast local-haplotype variant calling using paired-end DNA-sequencing data reveals somatic mosaicism in tumor and normal blood samples.

    PubMed

    Sengupta, Subhajit; Gulukota, Kamalakar; Zhu, Yitan; Ober, Carole; Naughton, Katherine; Wentworth-Sheilds, William; Ji, Yuan

    2016-02-18

    Somatic mosaicism refers to the existence of somatic mutations in a fraction of somatic cells in a single biological sample. Its importance has mainly been discussed in theory although experimental work has started to emerge linking somatic mosaicism to disease diagnosis. Through novel statistical modeling of paired-end DNA-sequencing data using blood-derived DNA from healthy donors as well as DNA from tumor samples, we present an ultra-fast computational pipeline, LocHap, that searches for multiple single nucleotide variants (SNVs) that are scaffolded by the same reads. We refer to scaffolded SNVs as local haplotypes (LH). When an LH exhibits more than two genotypes, we call it a local haplotype variant (LHV). The presence of LHVs is considered evidence of somatic mosaicism because a genetically homogeneous cell population will not harbor LHVs. Applying LocHap to whole-genome and whole-exome sequence data in DNA from normal blood and tumor samples, we find widespread LHVs across the genome. Importantly, we find more LHVs in tumor samples than in normal samples, and more in older adults than in younger ones. We confirm the existence of LHVs and somatic mosaicism by validation studies in normal blood samples. LocHap is publicly available at http://www.compgenome.org/lochap. PMID:26420835

  12. Fast voxel and polygon ray-tracing algorithms in intensity modulated radiation therapy treatment planning

    SciTech Connect

    Fox, Christopher; Romeijn, H. Edwin; Dempsey, James F.

    2006-05-15

    We present work on combining three algorithms to improve ray-tracing efficiency in radiation therapy dose computation. The three algorithms are: an improved point-in-polygon algorithm, an incremental voxel ray-tracing algorithm, and stereographic projection of beamlets for voxel truncation. The point-in-polygon and incremental voxel ray-tracing algorithms have been used in computer graphics and nuclear medicine applications, while the stereographic projection algorithm was developed by our group. These algorithms demonstrate significant improvements over the current standard algorithms in the peer-reviewed literature, i.e., the polygon and voxel ray-tracing algorithms of Siddon for voxel classification (point-in-polygon testing) and dose computation, respectively, and radius testing for voxel truncation. The presented polygon ray-tracing technique was tested on 10 intensity-modulated radiation therapy (IMRT) treatment planning cases that required the classification of between 0.58 and 2.0 million voxels on a 2.5 mm isotropic dose grid into 1-4 targets and 5-14 structures represented as extruded polygons (a.k.a. Siddon prisms). Incremental voxel ray tracing and voxel truncation employing virtual stereographic projection were tested on the same IMRT treatment planning cases, where voxel dose was required for 230-2400 beamlets using a finite-size pencil-beam algorithm. Between a 100- and 360-fold CPU time improvement over Siddon's method was observed for the polygon ray-tracing algorithm performing classification of voxels for target and structure membership. Between a 2.6- and 3.1-fold reduction in CPU time over current algorithms was found for the implementation of incremental ray tracing. Additionally, voxel truncation via stereographic projection was observed to be 11-25 times faster than the radial-testing beamlet-extent approach and was further improved 1.7-2.0 fold through point classification using the method of translation over the cross-product technique.
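
    For orientation, this is the classical crossing-number point-in-polygon test that such voxel classification builds on; the paper's improved variant adds precomputation that is not reproduced here:

```python
def point_in_polygon(px, py, verts):
    """True if (px, py) lies inside the polygon given by `verts`:
    a ray toward +x crosses the boundary an odd number of times."""
    inside = False
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        if (y1 > py) != (y2 > py):                     # edge spans the ray
            if x1 + (py - y1) * (x2 - x1) / (y2 - y1) > px:
                inside = not inside
    return inside

slice_of_prism = [(0, 0), (4, 0), (4, 3), (0, 3)]      # one structure contour
print(point_in_polygon(2.0, 1.5, slice_of_prism))      # True
```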

  13. Effective Analysis of NGS Metagenomic Data with Ultra-Fast Clustering Algorithms (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Li, Weizhong [San Diego Supercomputer Center]

    2016-07-12

    San Diego Supercomputer Center's Weizhong Li on "Effective Analysis of NGS Metagenomic Data with Ultra-fast Clustering Algorithms" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  14. Effective Analysis of NGS Metagenomic Data with Ultra-Fast Clustering Algorithms (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    SciTech Connect

    Li, Weizhong

    2011-10-12

    San Diego Supercomputer Center's Weizhong Li on "Effective Analysis of NGS Metagenomic Data with Ultra-fast Clustering Algorithms" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  15. LGH: A Fast and Accurate Algorithm for Single Individual Haplotyping Based on a Two-Locus Linkage Graph.

    PubMed

    Xie, Minzhu; Wang, Jianxin; Chen, Xin

    2015-01-01

    Phased haplotype information is crucial in our complete understanding of differences between individuals at the genetic level. Given a collection of DNA fragments sequenced from a homologous pair of chromosomes, the problem of single individual haplotyping (SIH) aims to reconstruct a pair of haplotypes using a computer algorithm. In this paper, we encode the information of aligned DNA fragments into a two-locus linkage graph and approach the SIH problem by vertex labeling of the graph. In order to find a vertex labeling with the minimum sum of weights of incompatible edges, we develop a fast and accurate heuristic algorithm. It starts with detecting error-tolerant components by an adapted breadth-first search. A proper labeling of vertices is then identified for each component, with which sequencing errors are further corrected and edge weights are adjusted accordingly. After contracting each error-tolerant component into a single vertex, the above procedure is iterated on the resulting condensed linkage graph until error-tolerant components are no longer detected. The algorithm finally outputs a haplotype pair based on the vertex labeling. Extensive experiments on simulated and real data show that our algorithm is more accurate and faster than five existing algorithms for single individual haplotyping. PMID:26671798

  16. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784
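
    One cheap subpixel refinement of the kind referred to above is a separable parabolic (second-order Taylor) fit around the integer correlation peak; a generic sketch that assumes the peak does not lie on the image border:

```python
import numpy as np

def subpixel_peak(corr):
    """Refine the integer argmax of a correlation surface with a
    1-D parabola fit along each axis."""
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def offset(m, c, p):                 # samples at -1, 0, +1
        denom = m - 2.0 * c + p
        return 0.0 if denom == 0.0 else 0.5 * (m - p) / denom

    dy = offset(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    dx = offset(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    return iy + dy, ix + dx

yy, xx = np.mgrid[0:21, 0:21]
corr = np.exp(-0.5 * ((yy - 10.3) ** 2 + (xx - 9.6) ** 2))
print(subpixel_peak(corr))               # close to (10.3, 9.6)
```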

  17. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-04-22

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features.

  18. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    PubMed Central

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784

  19. Fast augmentation algorithms for maximising the output flow in repairable flow networks after edge failures

    NASA Astrophysics Data System (ADS)

    Todinov, M. T.

    2013-10-01

    The article discusses a number of fundamental results related to determining the maximum output flow in a network after edge failures. On the basis of four theorems, we propose very efficient augmentation algorithms for restoring the maximum possible output flow in a repairable flow network after an edge failure. In many cases, the running time of the proposed algorithm is independent of the size of the network or varies linearly with it. The high computational speed of the proposed algorithms makes them suitable for optimising the performance of repairable flow networks in real time and for decongesting overloaded branches in networks. We show that the correct algorithm for maximising the flow in a static flow network, with edges fully saturated with flow, is a special case of the proposed reoptimisation algorithm after transforming the network into a network with balanced nodes. An efficient two-stage augmentation algorithm has also been proposed for maximising the output flow in a network with empty edges; the algorithm is faster than the classical flow augmentation algorithms. The article also presents a study, using a specially developed software tool, of the link between the performance, topology and size of repairable flow networks. The topology of repairable flow networks has a significant impact on their performance: two networks built with identical types and numbers of components can have very different performance levels because of slight differences in their topology.
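
    The primitive such reoptimisation algorithms apply sparingly is the classical flow augmentation: find a residual path by breadth-first search and push the bottleneck along it. A textbook sketch on a dict-based graph (not the paper's implementation):

```python
from collections import deque

def augment_once(cap, flow, source, sink):
    """One BFS augmentation on the residual graph; returns the amount
    pushed (0 when the output flow is already maximal)."""
    nodes = list(flow)
    res = lambda u, v: cap.get(u, {}).get(v, 0) - flow[u][v]
    parent, queue = {source: None}, deque([source])
    while queue and sink not in parent:
        u = queue.popleft()
        for v in nodes:
            if v not in parent and res(u, v) > 0:
                parent[v] = u
                queue.append(v)
    if sink not in parent:
        return 0
    path, v = [], sink
    while parent[v] is not None:
        path.append((parent[v], v))
        v = parent[v]
    push = min(res(u, v) for u, v in path)
    for u, v in path:
        flow[u][v] += push
        flow[v][u] -= push                # residual bookkeeping
    return push

cap = {"s": {"a": 4, "b": 2}, "a": {"t": 3}, "b": {"t": 2}, "t": {}}
flow = {u: {v: 0 for v in cap} for u in cap}
while augment_once(cap, flow, "s", "t"):
    pass                                  # restores the maximum output flow
```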

  20. FPGA design and implementation of a fast pixel purity index algorithm for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Valencia, David; Plaza, Antonio; Vega-Rodríguez, Miguel A.; Pérez, Rosa M.

    2005-11-01

    Hyperspectral imagery is a class of image data used in many scientific areas, most notably medical imaging and remote sensing. It is characterized by a wealth of spatial and spectral information. Over the last years, many algorithms have been developed with the purpose of finding "spectral endmembers," which are assumed to be pure signatures in remotely sensed hyperspectral data sets. Such pure signatures can then be used to estimate the abundance or concentration of materials in mixed pixels, thus allowing sub-pixel analysis, which is crucial in many remote sensing applications due to current sensor optics and configuration. One of the most popular endmember extraction algorithms has been the pixel purity index (PPI), available from Kodak's Research Systems ENVI software package. This algorithm is very time consuming, a fact that has generally prevented its exploitation within acceptable response times in a wide range of applications, including environmental monitoring, military applications, and hazard and threat assessment/tracking (including wildland fire detection, oil spill mapping, and chemical and biological standoff detection). Field programmable gate arrays (FPGAs) are hardware components with millions of gates. Their reprogrammability and high computational power make them particularly attractive in remote sensing applications that require a response in near real time. In this paper, we present an FPGA design for implementation of the PPI algorithm which takes advantage of a recently developed fast PPI (FPPI) algorithm that relies on software-based optimization. The proposed FPGA design represents our first step toward the development of a new reconfigurable system for fast, onboard analysis of remotely sensed hyperspectral imagery.
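
    The software baseline being accelerated is itself compact; a numpy sketch of plain PPI (random skewer counts), without the FPPI optimizations or the FPGA mapping:

```python
import numpy as np

def pixel_purity_index(pixels, n_skewers=1000, seed=0):
    """pixels: (n_pixels, n_bands) spectra. Project onto random
    directions ('skewers'); the pixels at the extremes of each
    projection get their purity count incremented."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(pixels.shape[0], dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(pixels.shape[1])
        proj = pixels @ skewer
        counts[np.argmin(proj)] += 1
        counts[np.argmax(proj)] += 1
    return counts

cube = np.random.rand(10000, 224)            # e.g. a 100x100 scene, 224 bands
candidates = np.argsort(pixel_purity_index(cube))[-20:]   # endmember candidates
```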

  1. A very fast algorithm for simultaneously performing connected-component labeling and euler number computing.

    PubMed

    He, Lifeng; Chao, Yuyan

    2015-09-01

    Labeling connected components and calculating the Euler number in a binary image are two fundamental processes for computer vision and pattern recognition. This paper presents an ingenious method for identifying a hole in a binary image in the first scan of connected-component labeling. Our algorithm can perform connected component labeling and Euler number computing simultaneously, and it can also calculate the connected component (object) number and the hole number efficiently. The additional cost for calculating the hole number is only O(H), where H is the hole number in the image. Our algorithm can be implemented almost in the same way as a conventional equivalent-label-set-based connected-component labeling algorithm. We prove the correctness of our algorithm and use experimental results for various kinds of images to demonstrate the power of our algorithm.
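
    For contrast with the paper's one-scan scheme, here is the conventional equivalent-label-set (union-find) first scan it builds on; the simultaneous hole and Euler number bookkeeping that is the paper's contribution is not reproduced:

```python
import numpy as np

def label_components(img):
    """4-connected labeling of a boolean image with union-find;
    returns the label image and the object count."""
    labels = np.zeros(img.shape, dtype=int)
    parent = [0]
    def find(l):
        while parent[l] != l:
            l = parent[l]
        return l
    nxt = 1
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            if not img[y, x]:
                continue
            up = find(labels[y - 1, x]) if y and labels[y - 1, x] else 0
            left = find(labels[y, x - 1]) if x and labels[y, x - 1] else 0
            if not up and not left:
                parent.append(nxt)
                labels[y, x] = nxt
                nxt += 1
            else:
                lab = min(l for l in (up, left) if l)
                labels[y, x] = lab
                for other in (up, left):
                    if other and other != lab:
                        parent[other] = lab        # record equivalence
    return labels, len({find(l) for l in range(1, nxt)})

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 0, 0, 1]], dtype=bool)
labels, n_objects = label_components(img)          # n_objects == 2
```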

  2. Fast impedance measurements at very low frequencies using curve fitting algorithms

    NASA Astrophysics Data System (ADS)

    Piasecki, Tomasz

    2015-06-01

    A method for reducing the time of impedance measurements at very low frequencies is proposed and implemented. The reduction is achieved by using impedance estimation algorithms that do not require the acquisition of the momentary voltage and current values over at least one whole period of the excitation signal. The algorithms are based on direct least-squares ellipse and sine fitting to the recorded waveforms. The performance of the algorithms was evaluated with respect to sampling time, signal-to-noise (S/N) ratio and sampling frequency using a series of Monte Carlo experiments. An improved algorithm for the detection of the ellipse direction was implemented and compared to a voting algorithm. The sine fitting algorithm provided significantly better results: it was less sensitive to the sampling start point and to the measured impedance argument, did not exhibit any systematic error of impedance estimation, and allowed a significant reduction of the measurement time. A 1% standard deviation of impedance estimation was achieved with the sine fitting algorithm at a measurement time reduced to 11% of the excitation signal period.
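
    The sine-fitting estimator is linear least squares once the excitation frequency is known, which is why it can work on a fraction of a period. A numpy sketch with synthetic voltage/current records (noise-free for clarity):

```python
import numpy as np

def fit_sine(t, x, freq):
    """Least-squares fit of A*sin(wt) + B*cos(wt) + C; returns the
    amplitude and phase of the fitted sinusoid."""
    w = 2.0 * np.pi * freq
    M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(M, x, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)

f = 0.01                                       # 100 s excitation period
t = np.linspace(0.0, 11.0, 500)                # only 11% of one period
v = 2.0 * np.sin(2 * np.pi * f * t + 0.3)      # voltage record
i = 0.004 * np.sin(2 * np.pi * f * t + 0.1)    # current record
(Va, Vp), (Ia, Ip) = fit_sine(t, v, f), fit_sine(t, i, f)
Z_mag, Z_arg = Va / Ia, Vp - Ip                # 500 ohm, 0.2 rad
```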

  3. Fast two-dimensional super-resolution image reconstruction algorithm for ultra-high emitter density.

    PubMed

    Huang, Jiaqing; Gumpper, Kristyn; Chi, Yuejie; Sun, Mingzhai; Ma, Jianjie

    2015-07-01

    Single-molecule localization microscopy achieves sub-diffraction-limit resolution by localizing a sparse subset of stochastically activated emitters in each frame. Its temporal resolution is limited by the maximal emitter density that the image reconstruction algorithms can handle. Multiple algorithms have been developed to locate the emitters accurately even when they overlap significantly. Currently, the compressive-sensing-based algorithm CSSTORM achieves the highest emitter density, but it is extremely computationally expensive, which limits its practical application. Here, we develop a new algorithm (MempSTORM) based on two-dimensional spectrum analysis. With the same localization accuracy and recall rate, MempSTORM is 100 times faster than CSSTORM with ℓ1-homotopy. In addition, MempSTORM can be implemented on a GPU for parallelism, which can further increase its computational speed and make online super-resolution reconstruction of high-density emitters possible.

  4. Fast computing global structural balance in signed networks based on memetic algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng

    2014-12-01

    Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted some attention in recent years, means measuring how unbalanced a signed network is; this is a nondeterministic polynomial-time hard problem. Many approaches have been developed to compute global balance, but the results they obtain are partial and unsatisfactory. In this study, the computation of global structural balance is cast as an optimization problem and solved with a memetic algorithm. The optimization algorithm, named Meme-SB, is proposed to optimize an evaluation (energy) function that measures the distance to exact balance. Our proposed algorithm combines a genetic algorithm with a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
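
    For a two-faction assignment, a natural energy function is simply the number of frustrated edges (positive edges between factions, or negative edges within one). A minimal sketch of that energy and of a greedy single-node-flip local search, the kind of step a memetic algorithm pairs with its genetic operators (our own formulation; Meme-SB's exact energy and operators may differ):

    ```python
    import numpy as np

    def energy(J, s):
        """Frustrated-edge count of a signed adjacency matrix J (entries
        +1/-1/0, zero diagonal) under a faction assignment s in {-1,+1}^n."""
        return int(np.triu((np.outer(s, s) * J) < 0, 1).sum())

    def greedy_descent(J, s):
        """Flip single nodes while any flip strictly lowers the energy."""
        improved = True
        while improved:
            improved = False
            for i in range(len(s)):
                delta = s[i] * (J[i] @ s)   # change in frustration if i flips
                if delta < 0:
                    s[i] = -s[i]
                    improved = True
        return s

    J = np.array([[0, 1, -1],
                  [1, 0, -1],
                  [-1, -1, 0]])                      # a balanced triangle
    s = np.ones(3, dtype=int)
    print(energy(J, s))                              # 2 frustrated edges
    print(energy(J, greedy_descent(J, s)))           # 0: node 2 changes faction
    ```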

  5. A fast random walk algorithm for computing the pulsed-gradient spin-echo signal in multiscale porous media.

    PubMed

    Grebenkov, Denis S

    2011-02-01

    A new method for computing the signal attenuation due to restricted diffusion in a linear magnetic field gradient is proposed. A fast random walk (FRW) algorithm for simulating random trajectories of diffusing spin-bearing particles is combined with gradient encoding. As random moves of a FRW are continuously adapted to local geometrical length scales, the method is efficient for simulating pulsed-gradient spin-echo experiments in hierarchical or multiscale porous media such as concrete, sandstones, sedimentary rocks and, potentially, brain or lungs. PMID:21159532

  7. Deblending of Simultaneous-source Seismic Data using Fast Iterative Shrinkage-thresholding Algorithm with Firm-thresholding

    NASA Astrophysics Data System (ADS)

    Qu, Shan; Zhou, Hui; Liu, Renwu; Chen, Yangkang; Zu, Shaohuan; Yu, Sa; Yuan, Jiang; Yang, Yahui

    2016-08-01

    In this paper, an improved algorithm is proposed to separate blended seismic data. We formulate the deblending problem as a regularization problem in both the common-receiver domain and the frequency domain, suitable for different kinds of coding methods such as the random time delays discussed in this paper. Two basic approximation frameworks, the iterative shrinkage-thresholding algorithm (ISTA) and the fast iterative shrinkage-thresholding algorithm (FISTA), are compared, and we derive the Lipschitz constant used in both. In order to achieve faster convergence and higher accuracy, we propose to use the firm-thresholding function as the thresholding function in ISTA and FISTA. Two synthetic blended examples demonstrate that all four variants (ISTA and FISTA, each with soft- and firm-thresholding) are effective, and that FISTA with a firm-thresholding operator exhibits the most robust behavior. Finally, we show one numerically blended field-data example processed by FISTA with the firm-thresholding function.
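
    The abstract's two ingredients are easy to show in miniature: a FISTA loop needs the Lipschitz constant of the data-fidelity gradient for its step size, and the firm-thresholding operator replaces the usual soft threshold in the shrinkage step. A generic dense-matrix sketch (a toy sparse-recovery problem of ours, not the paper's seismic operators or threshold schedules; t2 > t1 is assumed):

    ```python
    import numpy as np

    def firm_threshold(x, t1, t2):
        """Firm thresholding: zero below t1, identity above t2,
        scaled shrinkage in between (requires t2 > t1)."""
        ax = np.abs(x)
        mid = np.sign(x) * t2 * (ax - t1) / (t2 - t1)
        return np.where(ax <= t1, 0.0, np.where(ax <= t2, mid, x))

    def fista_firm(A, b, t1, t2, n_iter=200):
        """FISTA-style iterations with a firm-thresholding shrinkage step."""
        L = np.linalg.norm(A, 2) ** 2   # Lipschitz const. of grad 0.5||Ax-b||^2
        x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ y - b)
            x_new = firm_threshold(y - grad / L, t1, t2)
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
            x, t = x_new, t_new
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(60, 100))
    x_true = np.zeros(100); x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]
    x_hat = fista_firm(A, A @ x_true, t1=0.01, t2=0.1)
    # x_hat concentrates on the true support for suitable thresholds
    ```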

  8. A Genetic Algorithm Based Augmented Lagrangian Method for Computationally Fast Constrained Optimization

    NASA Astrophysics Data System (ADS)

    Srivastava, Soumil; Deb, Kalyanmoy

    Among the penalty-based approaches for constrained optimization, augmented Lagrangian (AL) methods are better in at least three ways: (i) they have theoretical convergence properties, (ii) they distort the original objective function minimally, allowing a better search behavior, and (iii) they find the optimal Lagrange multiplier for each constraint as a by-product of optimization. Instead of keeping a constant penalty parameter throughout the optimization process, these algorithms update the parameters adaptively, so that the corresponding penalized function dynamically shifts its optimum from the unconstrained minimum point to the constrained minimum point over the iterations. The flip side is that the overall algorithm is a serial sequence of optimization tasks, a process that is usually time-consuming. In this paper, we devise a genetic-algorithm-based parameter update strategy for a particular AL method. The strategy is self-adaptive, making the overall genetic algorithm based augmented Lagrangian (GAAL) method parameter-free. The GAAL method is applied to a number of constrained test problems taken from the EA literature. The number of function evaluations required by GAAL is, in many problems, an order of magnitude or more lower than that of existing methods.
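
    For a single inequality constraint g(x) <= 0, the AL machinery is compact: minimize the penalized function for a fixed multiplier, then update the multiplier from the constraint violation. A sketch using SciPy as the inner solver (the GAAL method replaces this inner solve with a genetic algorithm; the function names and test problem here are ours):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def al_solve(f, g, x0, rho=10.0, iters=15):
        """Augmented-Lagrangian loop for min f(x) s.t. g(x) <= 0."""
        mu, x = 0.0, np.asarray(x0, dtype=float)
        for _ in range(iters):
            def LA(z):
                # penalized function for the current multiplier estimate mu
                s = max(0.0, mu + rho * g(z))
                return f(z) + (s * s - mu * mu) / (2 * rho)
            x = minimize(LA, x).x
            mu = max(0.0, mu + rho * g(x))    # multiplier update
        return x, mu

    # Example: minimize (x-2)^2 subject to x <= 1  ->  x* = 1, mu* = 2
    x_opt, mu_opt = al_solve(lambda z: (z[0] - 2) ** 2,
                             lambda z: z[0] - 1, [0.0])
    print(x_opt, mu_opt)
    ```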

  9. A fast and robust algorithm for general inequality/equality constrained minimum time problems

    SciTech Connect

    Briessen, B.; Sadegh, N.

    1995-12-01

    This paper presents a new algorithm for solving general inequality/equality constrained minimum time problems. The algorithm's solution time is linear in the number of Runge-Kutta steps and in the number of parameters used to discretize the control input history. The method is being applied to a three-link redundant robotic arm with torque bounds, joint angle bounds, and a specified tip path. It solves case after case within a graphical user interface in which the user chooses the initial joint angles and the tip path with a mouse. Solve times are from 30 to 120 seconds on a Hewlett-Packard workstation. A zero torque history is always used in the initial guess, and the algorithm has never crashed, indicating its robustness. The algorithm solves for a feasible solution for a large trajectory execution time t_f, then reduces t_f by a small amount and re-solves. The fixed-time re-solve uses a new method of finding a near-minimum-2-norm solution to a set of linear equations and inequalities that achieves quadratic convergence to a feasible solution of the full nonlinear problem.

  10. Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.

  11. Differential sampling for fast frequency acquisition via adaptive extended least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra

    1987-01-01

    This paper presents a differential signal model, along with appropriate sampling techniques, for least squares estimation of the frequency and frequency derivatives, and possibly the phase and amplitude, of a sinusoid received in the presence of noise. The proposed algorithm is recursive in measurements, so the computational requirement increases only linearly with the number of measurements. The dimension of the state vector in the proposed algorithm does not depend upon the number of measurements and is quite small, typically around four. This is an advantage over previous algorithms, in which the dimension of the state vector increases monotonically with the product of the frequency uncertainty and the observation period. Such a computational simplification may possibly result in some loss of optimality; however, by applying the sampling techniques of the paper, any such loss can be made small.

  12. An adaptive immune optimization algorithm with dynamic lattice searching operation for fast optimization of atomic clusters

    NASA Astrophysics Data System (ADS)

    Wu, Xia; Wu, Genhua

    2014-08-01

    Geometrical optimization of atomic clusters is performed by a development of the adaptive immune optimization algorithm (AIOA) with a dynamic lattice searching (DLS) operation (the AIOA-DLS method). Through a cycle of construction and searching of the dynamic lattice (DL), the DLS algorithm rapidly makes the clusters more regular and greatly reduces the potential energy. DLS can thus be used as an operation acting on the new individuals after the mutation operation in AIOA to improve its performance. The AIOA-DLS method combines the merits of evolutionary algorithms with the idea of the dynamic lattice. The performance of the proposed method is investigated in the optimization of Lennard-Jones clusters of up to 250 atoms and of silver clusters, described by the many-body Gupta potential, of up to 150 atoms. Results reported in the literature are reproduced, and the motif of the Ag61 cluster is found to be stacking-fault face-centered-cubic, with an energy lower than that of the previously obtained icosahedron.

  13. A fast map merging algorithm in the field of multirobot SLAM.

    PubMed

    Liu, Yanli; Fan, Xiaoping; Zhang, Heng

    2013-01-01

    In recent years, research on single-robot simultaneous localization and mapping (SLAM) has been very successful. Multirobot SLAM, however, faces many challenging problems, including unknown robot poses, unshared maps, and unstable communication. In this paper, a map merging algorithm based on virtual robot motion is proposed for multirobot SLAM. A thinning algorithm is used to construct the skeleton of the grid map's empty area, and a mobile robot is simulated in one map. The simulated data are used as information sources in the other map to perform partial-map Monte Carlo localization; if localization succeeds, the relative pose hypotheses between the two maps can be computed easily. We verify these hypotheses using the rendezvous technique and use them as initial values to optimize the estimation with a heuristic random search algorithm. PMID:24302855

  15. A Variable Splitting based Algorithm for Fast Multi-Coil Blind Compressed Sensing MRI reconstruction

    PubMed Central

    Bhave, Sampada; Lingala, Sajan Goud; Jacob, Mathews

    2015-01-01

    Recent work on blind compressed sensing (BCS) has shown that exploiting sparsity in dictionaries learnt directly from the data at hand can outperform compressed sensing (CS) with fixed dictionaries. A challenge with BCS, however, is the large computational complexity of its optimization, which limits its practical use in several MRI applications. In this paper, we propose a novel optimization algorithm that utilizes variable splitting strategies to significantly improve the convergence speed of the BCS optimization. The splitting allows us to efficiently decouple the sparse-coefficient and dictionary update steps from the data fidelity term, resulting in subproblems with closed-form analytical solutions, which would otherwise require slower iterative conjugate gradient algorithms. Through experiments on multi-coil parametric MRI data, we demonstrate the superior performance of BCS while achieving convergence speed-up factors of over 15-fold compared with the previously proposed implementation of the BCS algorithm. PMID:25570473

  16. A fast and accurate implementation of tunable algorithms used for generation of fractal-like aggregate models

    NASA Astrophysics Data System (ADS)

    Skorupski, Krzysztof; Mroczka, Janusz; Wriedt, Thomas; Riefler, Norbert

    2014-06-01

    In many branches of science, experiments are expensive, require specialist equipment, or are very time consuming; studying the light scattering phenomenon of fractal aggregates is one example. Light scattering simulations can overcome these problems and provide theoretical, additional data to complete such studies, for which a fractal-like aggregate model as well as fast aggregation codes are needed. Until now, various computer models that try to mimic the physics behind this phenomenon have been developed; however, their implementations are mostly based on a trial-and-error procedure. Such an approach is very time consuming, and the morphological parameters of the resulting aggregates are not exact because the postconditions (e.g., the position error) cannot be very strict. In this paper we present a very fast and accurate implementation of a tunable aggregation algorithm based on the work of Filippov et al. (2000). Randomization is reduced to its necessary minimum (our technique can be more than 1000 times faster than standard algorithms), and the position of a new particle, or cluster, is calculated with algebraic methods. Therefore, the postconditions can be extremely strict and the resulting errors negligible (e.g., the position error can be regarded as non-existent). In our paper, two different methods, based on the particle-cluster (PC) and the cluster-cluster (CC) aggregation processes, are presented.

  17. A fast hidden line algorithm with contour option. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Thue, R. E.

    1984-01-01

    The JonesD algorithm was modified to allow the processing of N-sided elements and implemented in conjunction with a 3-D contour generation algorithm. The total hidden-line and contour subsystem is implemented in the MOVIE.BYU display package and is compared to the subsystems already existing in that package. The comparison reveals that the modified JonesD hidden-line and contour subsystem yields substantial processing-time savings when processing moderate-sized models of 1000 elements or fewer. There are, however, some limitations to the modified JonesD subsystem.

  18. Fast conjugate gradient algorithm extension for analyzer-based imaging reconstruction

    NASA Astrophysics Data System (ADS)

    Caudevilla, Oriol; Brankov, Jovan G.

    2016-04-01

    This paper presents an extension of the classic conjugate gradient algorithm. Motivated by the analyzer-based imaging (ABI) inverse problem, the novel method maximizes the Poisson regularized log-likelihood under a non-linear transformation of parameters faster than other solutions. The new approach takes advantage of the special properties of the Poisson log-likelihood to conjugate each ascent direction with respect to all the previous directions taken by the algorithm. Our solution is compared with the general solution for non-quadratic unconstrained problems, the Polak-Ribière formula. Both methods are applied to the ABI reconstruction problem.

  19. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition.

    PubMed

    Herráez, Miguel Arevallilo; Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-12-10

    We describe what is, to our knowledge, a novel approach to phase unwrapping. Using the principle of unwrapping following areas with similar phase values (homogeneous areas), the algorithm reacts satisfactorily to random noise and to breaks in the wrap distributions. Execution times for a 512 x 512 pixel phase distribution are on the order of half a second on a desktop computer; the precise value depends upon the particular image under analysis. Two inherent parameters allow tuning of the algorithm to images of different quality and nature. PMID:12502302

  1. Movie approximation technique for the implementation of fast bandwidth-smoothing algorithms

    NASA Astrophysics Data System (ADS)

    Feng, Wu-chi; Lam, Chi C.; Liu, Ming

    1997-12-01

    Bandwidth smoothing algorithms can effectively reduce the network resource requirements for the delivery of compressed video streams. For stored video, a large number of bandwidth smoothing algorithms have been introduced that are optimal under certain constraints but require access to all the frame-size data in order to achieve their optimal properties. This requirement, however, can be both resource- and computationally expensive, especially for moderately priced set-top boxes. In this paper, we introduce a movie approximation technique for representing the frame sizes of a video, reducing both the complexity of the bandwidth smoothing algorithms and the amount of frame data that must be transmitted prior to the start of playback. Our results show that the proposed technique can accurately approximate the frame data with a small number of piecewise-linear segments without affecting the performance measures that the bandwidth smoothing algorithms attempt to achieve by more than 1%. In addition, we show that implementations of this technique can speed up execution times by 100 to 400 times, reducing bandwidth plan calculation times to tens of milliseconds. An evaluation using a compressed full-length motion-JPEG video is provided.
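
    A greedy way to build such a piecewise-linear approximation is to extend each segment over the frame-size trace until the worst-case deviation exceeds a tolerance, then start a new segment. A minimal sketch of that idea (our own construction for illustration; the paper's approximation procedure may differ):

    ```python
    import numpy as np

    def piecewise_linear(y, tol):
        """Greedy piecewise-linear fit of a trace y; returns knot indices.

        Each segment is stretched until some point deviates more than tol
        from the straight line between its endpoints."""
        knots, i, n = [0], 0, len(y)
        while i < n - 1:
            e = i + 1
            while e + 1 < n:
                t = np.arange(i, e + 2)
                line = y[i] + (y[e + 1] - y[i]) * (t - i) / (e + 1 - i)
                if np.max(np.abs(y[i:e + 2] - line)) > tol:
                    break
                e += 1
            knots.append(e)
            i = e
        return knots

    y = np.array([10., 12., 14., 30., 28., 26., 24.])
    print(piecewise_linear(y, tol=1.0))   # [0, 2, 3, 6]
    # Reconstruct the approximation with np.interp over the knot positions.
    ```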

  2. Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm

    SciTech Connect

    Manzer, Samuel; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Martin

    2015-07-14

    Construction of the exact exchange matrix, K, is typically the rate-determining step in hybrid density functional theory, and therefore new approaches with increased efficiency are highly desirable. We present a framework with potential for greatly improved efficiency by computing a compressed exchange matrix that yields the exact exchange energy, gradient, and direct inversion of the iterative subspace (DIIS) error vector. The compressed exchange matrix is constructed with one index in the compact molecular orbital basis and the other index in the full atomic orbital basis. To illustrate the advantages, we present a practical algorithm that uses this framework in conjunction with the resolution-of-the-identity (RI) approximation. We demonstrate that convergence using this method, referred to hereafter as occupied orbital RI-K (occ-RI-K), in combination with the DIIS algorithm is well-behaved, that the accuracy of computed energetics is excellent (identical to conventional RI-K), and that significant speedups can be obtained over existing integral-direct and RI-K methods. For a 4400 basis function C68H22 hydrogen-terminated graphene fragment, our algorithm yields a 14× speedup over the conventional algorithm and a speedup of 3.3× over RI-K. PMID:26178096

  4. A fast multigrid algorithm for energy minimization under planar density constraints.

    SciTech Connect

    Ron, D.; Safro, I.; Brandt, A.; Mathematics and Computer Science; Weizmann Inst. of Science

    2010-09-07

    The two-dimensional layout optimization problem, reinforced by the demand for efficient space utilization, has a wide spectrum of practical applications. Formulating the problem as a nonlinear minimization problem under planar equality and/or inequality density constraints, we present a linear-time multigrid algorithm for solving a correction to this problem. The method is demonstrated on various graph drawing (visualization) instances.

  5. General purpose algorithms for characterization of slow and fast phase nystagmus

    NASA Technical Reports Server (NTRS)

    Lessard, Charles S.

    1987-01-01

    In the overall aim of better understanding the vestibular and optokinetic systems and their roles in space motion sickness, the eye movement responses to various dynamic stimuli are measured. The vestibulo-ocular reflex (VOR) and the optokinetic response, as these eye movement responses are known, consist of slow-phase and fast-phase nystagmus. The specific objective is to develop the software programs necessary to characterize the vestibulo-ocular and optokinetic responses by distinguishing between the two phases of nystagmus. The overall program must handle large volumes of highly variable data with minimal operator interaction. The programs include digital filters, differentiation, identification of fast phases, and reconstruction of the slow phase with a least squares fit, such that sinusoidal or pseudorandom data may be processed with accurate results. The resultant waveform, slow-phase velocity eye movements, serves as input data to the spectral analysis programs previously developed for NASA to analyze nystagmus responses to pseudorandom angular velocity inputs.

  6. House calls.

    PubMed

    Unwin, Brian K; Tatum, Paul E

    2011-04-15

    House calls provide a unique perspective on patients' environment and health problems. The demand for house calls is expected to increase considerably in future decades as the U.S. population ages. Although study results have been inconsistent, house calls involving multidisciplinary teams may reduce hospital readmissions and long-term care facility stays. Common indications for house calls are management of acute or chronic illnesses, and palliative care. Medicare beneficiaries must meet specific criteria to be eligible for home health services. The INHOMESSS mnemonic provides a checklist for components of a comprehensive house call. In addition to performing a clinical assessment, house calls may involve observing the patient performing daily activities, reconciling medication discrepancies, and evaluating home safety. House calls can be integrated into practice with careful planning, including clustering house calls by geographic location and coordinating visits with other health care professionals and agencies.

  7. Development of a radiation-hardened SRAM with EDAC algorithm for fast readout CMOS pixel sensors for charged particle tracking

    NASA Astrophysics Data System (ADS)

    Wei, X.; Li, B.; Chen, N.; Wang, J.; Zheng, R.; Gao, W.; Wei, T.; Gao, D.; Hu, Y.

    2014-08-01

    CMOS pixel sensors (CPS) are attractive for use in the innermost particle detectors for charged particle tracking due to their good trade-off among spatial resolution, material budget, radiation hardness, and readout speed. To meet the requirements of high readout speed and high radiation hardness to total ionizing dose (TID) for particle tracking, fast readout CPS are composed by integrating a data compression block and two SRAM IP cores. However, the radiation hardness of the SRAM IP cores is not as high as that of the other parts of the CPS, which lowers the radiation hardness of the whole chip. In particular, when CPS are migrated to 0.18-μm processes, single event upset (SEU) effects must also be considered, in addition to TID and single event latchup (SEL) effects. This paper presents a radiation-hardened SRAM with enhanced radiation hardness to SEU. An error detection and correction (EDAC) algorithm and a bit-interleaving storage strategy are adopted in the design, and a prototype has been fabricated in a 0.18-μm process. The area of the new SRAM is 1.6 times that of a non-radiation-hardened SRAM, due to the integration of the EDAC algorithm and the adoption of a radiation-hardened layout, and the access time increases from 5 ns to 8 ns. The test results indicate that the design satisfies the requirements of CPS for charged particle tracking.
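
    EDAC schemes of this kind are typically built on single-error-correcting codes; combined with bit interleaving, a multi-bit upset in physically adjacent cells is spread across several codewords, each of which then sees at most one flipped bit. As a hedged illustration of the principle only (a plain Hamming(7,4) code in Python, not the chip's actual EDAC circuit):

    ```python
    def hamming74_encode(d):
        """Encode 4 data bits into a 7-bit Hamming codeword
        (positions: p1, p2, d1, p3, d2, d3, d4)."""
        d1, d2, d3, d4 = d
        return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

    def hamming74_decode(c):
        """Correct a single flipped bit, then return the 4 data bits."""
        p1, p2, d1, p3, d2, d3, d4 = c
        s1 = p1 ^ d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
        s2 = p2 ^ d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
        s3 = p3 ^ d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
        syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1
        return [c[2], c[4], c[5], c[6]]

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                         # simulate a single-event upset
    print(hamming74_decode(word))        # [1, 0, 1, 1] recovered
    ```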

  8. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data into an unrecognizable encryption and converts the unrecognizable data back into its original decrypted form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, a breakthrough in basic biological operations on a molecular computer. To achieve this, we propose three DNA-based algorithms, for a parallel subtractor, a parallel comparator, and parallel modular arithmetic, and formally verify our designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that public-key cryptosystems may be insecure, and it presents clear evidence of the ability of molecular computing to perform complicated mathematical operations.

  9. A graphical algorithm for fast computation of identity coefficients and generalized kinship coefficients

    PubMed Central

    Abney, Mark

    2009-01-01

    Summary: Computing the probability of identity-by-descent sharing among n genes, given only the pedigree of those genes, is computationally challenging if n or the pedigree size is large. Here, I present a novel graphical algorithm for efficiently computing all generalized kinship coefficients for n genes. The graphical description transforms the problem from doing many recursions on the pedigree to doing a single traversal of a structure referred to as the kinship graph. Availability: The algorithm is implemented for n = 4 in the software package IdCoefs at http://home.uchicago.edu/abney/Software.html. Contact: abney@bsd.uchicago.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19359355

  10. A fast, flexible algorithm for calculating correlations in Fluorescence Correlation Spectroscopy

    SciTech Connect

    Laurence, T; Fore, S; Huser, T

    2005-10-13

    A new algorithm is introduced for computing correlations of photon arrival time data acquired in single-molecule fluorescence spectroscopy and fluorescence correlation spectroscopy (FCS). The correlation is first rewritten as a counting operation on photon pairs. For each photon, the contribution to the correlation function of each subsequent photon is calculated for arbitrary bin spacings of the correlation time lag. By retaining the bin positions in the photon sequence after each photon, the correlation can be computed efficiently. Example correlations for simulations of FCS experiments are shown, with execution speed comparable to the commonly used multiple-tau correlation technique. Wide bin spacings are also possible, allowing real-time software calculation of correlations even at high count rates (~350 kHz). The flexibility and broad applicability of the algorithm are demonstrated using results from single-molecule photon antibunching experiments.
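
    The pair-counting reformulation is direct to write down: for each photon, histogram the lags to all later photons into arbitrary (e.g., quasi-logarithmic) bins. A plain sketch of that counting view (without the bin-position bookkeeping that makes the paper's algorithm fast):

    ```python
    import numpy as np

    def photon_correlation(arrivals, bin_edges):
        """Unnormalized correlation estimate from photon arrival times.

        arrivals: sorted 1-D array of arrival times.
        bin_edges: lag-bin boundaries (arbitrary spacing allowed).
        Returns the photon-pair count in each lag bin."""
        counts = np.zeros(len(bin_edges) - 1)
        for i, t0 in enumerate(arrivals):
            lags = arrivals[i + 1:] - t0
            # keep only lags inside the binned range (lags are sorted)
            lo = np.searchsorted(lags, bin_edges[0])
            hi = np.searchsorted(lags, bin_edges[-1])
            counts += np.histogram(lags[lo:hi], bins=bin_edges)[0]
        return counts

    arrivals = np.sort(np.random.default_rng(0).uniform(0, 1.0, 2000))
    edges = np.logspace(-5, -1, 30)       # quasi-logarithmic lag bins
    print(photon_correlation(arrivals, edges))
    ```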

  11. Fast algorithm for computing the Abel inversion integral in broadband reflectometry

    SciTech Connect

    Nunes, F.D.

    1995-10-01

    The application of the Hansen-Jablokow recursive technique is proposed for the numerical computation of the Abel inversion integral, which is used in (O-mode) frequency-modulated broadband reflectometry to evaluate plasma density profiles. Compared to the usual numerical methods, the recursive algorithm allows substantial time savings that can be important when processing massive amounts of data to control the plasma in real time.

  12. Fast parallel algorithms that compute transitive closure of a fuzzy relation

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.

    1993-01-01

    The notion of the transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires computation time O(n^4), where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n^2) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a ≠ b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in fewer than O(n^2) steps, so Dunn's algorithm is in this sense optimal. For small n this is acceptable, but for big n (e.g., for big databases) it is still a lot, so it would be desirable to decrease the computation time (this problem was formulated by J. Bezdek). Since this decrease cannot be achieved on a sequential computer, the only way to do it is to use a computer with several processors working in parallel. We show that on a parallel computer, the transitive closure can be computed in time O((log2 n)^2).
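
    For reference, the transitive closure under the usual max-min composition can be computed sequentially by repeated squaring of the relation matrix; each composition below costs O(n^3), which is exactly the serial cost the parallel algorithm avoids (a NumPy sketch of the definition, not the paper's parallel method):

    ```python
    import numpy as np

    def maxmin_transitive_closure(R):
        """Transitive closure of a fuzzy relation by repeated
        max-min composition: T <- max(T, T o T) until a fixed point."""
        T = R.copy()
        while True:
            # (T o T)[i, j] = max_k min(T[i, k], T[k, j])
            comp = np.max(np.minimum(T[:, :, None], T[None, :, :]), axis=1)
            T_new = np.maximum(T, comp)
            if np.array_equal(T_new, T):
                return T_new
            T = T_new

    R = np.array([[1.0, 0.8, 0.0],
                  [0.0, 1.0, 0.6],
                  [0.0, 0.0, 1.0]])
    print(maxmin_transitive_closure(R))   # entry (0,2) becomes min(0.8, 0.6)
    ```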

  13. A fast iterated conditional modes algorithm for water-fat decomposition in MRI.

    PubMed

    Huang, Fangping; Narayan, Sreenath; Wilson, David; Johnson, David; Zhang, Guo-Qiang

    2011-08-01

    Decomposition of water and fat in magnetic resonance imaging (MRI) is important for biomedical research and clinical applications. In this paper, we propose a two-phase approach to the three-point water-fat decomposition problem. Our contribution consists of two components: 1) a background-masked Markov random field (MRF) energy model formulating the local smoothness of the field inhomogeneity; and 2) a new iterated conditional modes (ICM) algorithm for high-performance optimization of the MRF energy model. The MRF energy model is integrated with background masking to prevent error propagation from background estimates as well as to improve efficiency. The central component of our new ICM algorithm is a stability tracking (ST) mechanism that dynamically tracks iterative stability at each pixel, so that per-iteration computation is performed only on unstable pixels; the ST mechanism significantly improves the efficiency of ICM. We also develop a median-based initialization algorithm to provide good initial guesses for the ICM iterations, and an adaptive gradient-based scheme for the parametric configuration of the MRF model. We evaluate the robustness of our approach with high-resolution mouse datasets acquired on 7T MRI. PMID:21402510

  14. Validation of Supervised Automated Algorithm for Fast Quantitative Evaluation of Organ Motion on Magnetic Resonance Imaging

    SciTech Connect

    Prakash, Varuna; Stainsby, Jeffrey A.; Satkunasingham, Janakan; Craig, Tim; Catton, Charles; Chan, Philip; Dawson, Laura; Hensel, Jennifer; Jaffray, David; Milosevic, Michael; Nichol, Alan; Sussman, Marshall S.; Lockwood, Gina; Menard, Cynthia

    2008-07-15

    Purpose: To validate a correlation-coefficient template-matching algorithm applied to the supervised automated quantification of abdominal-pelvic organ motion captured on time-resolved magnetic resonance imaging. Methods and Materials: Magnetic resonance images of 21 patients across four anatomic sites were analyzed. Representative anatomic points of interest were chosen as surrogates for organ motion. The point-of-interest displacements across each image frame relative to baseline were quantified manually and with a template-matching software tool termed "Motiontrack." Automated and manually acquired displacement measures, as well as the standard deviation of intrafraction motion, were compared for each image frame and for each patient. Results: Discrepancies between the automated and manual displacements of ≥2 mm were uncommon, ranging in frequency from 0% (liver) to 9.7% (prostate). The standard deviations of intrafraction motion measured with the two methods correlated highly (r = 0.99). Considerable interpatient variability in organ motion was demonstrated by a wide range of standard deviations in the liver (1.4-7.5 mm), uterus (1.1-8.4 mm), and prostate gland (0.8-2.7 mm). The automated algorithm performed successfully in all patients but one and substantially improved efficiency compared with manual quantification (5 min vs. 60-90 min). Conclusion: Supervised automated quantification of organ motion captured on magnetic resonance imaging using a correlation-coefficient template-matching algorithm was efficient and accurate, and it may play an important role in off-line adaptive approaches to intrafraction motion management.
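
    Correlation-coefficient template matching scores each candidate placement by the Pearson correlation between the template and the underlying image window, keeping the best-scoring position. A brute-force sketch of the matcher (illustrative only; a tool like Motiontrack would use a far faster implementation, e.g. FFT-based):

    ```python
    import numpy as np

    def ncc_match(image, template):
        """Find the placement of template in image that maximizes the
        normalized cross-correlation (correlation coefficient)."""
        th, tw = template.shape
        t = template - template.mean()
        best, best_pos = -np.inf, (0, 0)
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                w = image[i:i + th, j:j + tw]
                wz = w - w.mean()
                denom = np.sqrt((wz * wz).sum() * (t * t).sum())
                if denom > 0:
                    r = (wz * t).sum() / denom   # correlation coefficient
                    if r > best:
                        best, best_pos = r, (i, j)
        return best_pos, best

    rng = np.random.default_rng(0)
    img = rng.normal(size=(64, 64))
    tmpl = img[20:28, 30:38].copy()
    print(ncc_match(img, tmpl))    # ((20, 30), ~1.0)
    ```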

  15. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process for estimating the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. The key issue of selecting a policy for partitioning the vector data is carefully studied, and the selected policy, which equalizes algorithm complexity across partitions, is shown to be the best choice for parallel vector processing. Through several experimental applications, we conclude that the presented approach accurately estimates the deepest points of Alaskan lakes and achieves excellent efficiency using MAS together with the complexity-equalizing partitioning policy. PMID:26656598
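
    The LIC center can also be approximated without Voronoi diagrams by rasterizing the lake polygon and taking the cell farthest from the shore under a Euclidean distance transform. This is a deliberately different, simpler technique than the paper's vector-based Voronoi/MAS method, shown here only to make the LIC idea concrete:

    ```python
    import numpy as np
    from scipy import ndimage

    def deepest_point(mask, cell=1.0):
        """Approximate LIC center of a lake given as a binary raster mask.

        Returns the (row, col) of the cell farthest from the shore and
        the corresponding LIC radius in the same units as cell."""
        d = ndimage.distance_transform_edt(mask) * cell
        idx = np.unravel_index(np.argmax(d), d.shape)
        return idx, d[idx]

    mask = np.zeros((50, 80), dtype=bool)
    mask[10:40, 15:70] = True              # a rectangular "lake"
    center, radius = deepest_point(mask)
    print(center, radius)                  # center row ~24-25, radius ~15
    ```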

  16. Fast internal marker tracking algorithm for onboard MV and kV imaging systems.

    PubMed

    Mao, W; Wiersma, R D; Xing, L

    2008-05-01

    Intrafraction organ motion can limit the advantage of highly conformal dose techniques such as intensity-modulated radiation therapy (IMRT) due to target position uncertainty. To ensure high accuracy in beam targeting, real-time knowledge of the target location is highly desirable throughout the beam delivery process. This knowledge can be gained through imaging of internally implanted radio-opaque markers with fluoroscopic or electronic portal imaging devices (EPIDs). In MV-based images, marker detection can be problematic due to the significantly lower contrast between different materials in comparison with their kV-based counterparts. This work presents a fully automated algorithm capable of detecting implanted metallic markers in both kV and MV images with high consistency. Using prior CT information, the algorithm predefines the volumetric search space without manual region-of-interest (ROI) selection by the user. Depending on the template selected, both spherical and cylindrical markers can be detected, and multiple markers can be tracked simultaneously without indexing confusion. Phantom studies show detection success rates of 100% for both kV and MV image data, and application of the algorithm to real patient image data results in successful detection of all implanted markers in MV images. Near-real-time operational speeds of approximately 10 frames/sec for the detection of five markers in a 1024 x 768 image are achieved on an ordinary PC workstation. PMID:18561670

  20. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    SciTech Connect

    Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun E-mail: steve.jiang@UTSouthwestern.edu; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura

    2014-07-15

    Purpose: 4D cone-beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen. However, clinical application of 4D-CBCT is currently limited by the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT) so that the calculated 4D-CBCT projections match the measurements. A forward-backward splitting (FBS) method is devised to solve the optimization problem. It splits the original problem into two well-studied subproblems, image reconstruction and deformable image registration; by iteratively solving the two subproblems, FBS gradually yields the correct deformation information while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure localization accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for image quality, intensity errors below 5 HU and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratios are improved by factors of 12.74 and 5.12 compared to results from the FDK algorithm using the 1-min data and the 4-min data, respectively. The computation time of the algorithm on an NVIDIA GTX590 card is 1–1.5 min per phase.

  1. A fast, parallel algorithm to solve the basic fluvial erosion/transport equations

    NASA Astrophysics Data System (ADS)

    Braun, J.

    2012-04-01

    Quantitative models of landform evolution are commonly based on the solution of a set of equations representing the processes of fluvial erosion, transport, and deposition, which leads to predictions of the geometry of a river channel network and its evolution through time. The river network is often regarded as the backbone of any surface processes model (SPM) that might include other physical processes acting at a range of spatial and temporal scales along hill slopes. The basic laws of fluvial erosion require the computation of local (slope) and non-local (drainage area) quantities at every point of a given landscape, a computationally expensive operation that limits the resolution of most SPMs. I present here an algorithm to compute the various components required in the parameterization of fluvial erosion (and transport) and thus solve the basic fluvial geomorphic equation. The algorithm is very efficient because it is O(n) (the number of required arithmetic operations is linearly proportional to the number of nodes defining the landscape) and fully parallelizable (the computational cost decreases in direct inverse proportion to the number of processors used to solve the problem), making it ideally suited to the latest multi-core processors. Using this new technique, geomorphic problems can be solved at an unprecedented resolution (typically of the order of 10,000 x 10,000 nodes) while keeping the computational cost reasonable (on the order of 1 s per time step). Furthermore, the algorithm is applicable to any regular or irregular representation of the landform, and is such that the temporal evolution of the landform can be discretized by a fully implicit time-marching algorithm, making it unconditionally stable. Such an efficient algorithm is ideally suited to produce a fully predictive SPM that links observationally based parameterizations of small-scale processes to the evolution of large-scale features of the landscapes.
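
    The non-local quantity, drainage area, is the part that looks expensive but admits an O(n) solution: once each node's receiver (its steepest-descent neighbor) is known, areas can be accumulated in a single topologically ordered pass from channel heads downstream. A minimal sketch of that accumulation idea (our own reduction of it; the full algorithm also handles the implicit erosion solve):

    ```python
    import numpy as np

    def drainage_area(receiver, cell_area=1.0):
        """O(n) drainage-area accumulation.

        receiver[i] is the node that node i drains into;
        receiver[i] == i marks a base-level (outlet) node."""
        n = len(receiver)
        area = np.full(n, cell_area)
        ndon = np.zeros(n, dtype=int)          # donors per node
        for i in range(n):
            if receiver[i] != i:
                ndon[receiver[i]] += 1
        stack = [i for i in range(n) if ndon[i] == 0]   # channel heads
        while stack:
            i = stack.pop()
            r = receiver[i]
            if r != i:
                area[r] += area[i]             # pass area downstream
                ndon[r] -= 1
                if ndon[r] == 0:               # all donors processed
                    stack.append(r)
        return area

    # Four cells in a line draining left: 3 -> 2 -> 1 -> 0 (base level)
    print(drainage_area(np.array([0, 0, 1, 2])))   # [4. 3. 2. 1.]
    ```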

  2. Fast GPU-based computation of the sensitivity matrix for a PET list-mode OSEM algorithm

    NASA Astrophysics Data System (ADS)

    Nassiri, Moulay Ali; Hissoiny, Sami; Carrier, Jean-François; Després, Philippe

    2012-10-01

    During the last decade, studies have shown that 3D list-mode ordered-subset expectation-maximization (LM-OSEM) algorithms for positron emission tomography (PET) reconstruction can be effectively computed and considerably accelerated by graphics processing unit (GPU) devices. However, most of these studies rely on pre-calculated sensitivity matrices, and in many cases the time required to compute this matrix is longer than the reconstruction time itself. In fact, the relatively long time required to calculate the patient-specific sensitivity matrix is considered one of the main obstacles to introducing a list-mode PET reconstruction algorithm into routine clinical use. The objective of this work is to accelerate a fully 3D LM-OSEM algorithm, including the calculation of the sensitivity matrix that accounts for the patient-specific attenuation and normalization corrections. For this purpose, sensitivity matrix calculations and list-mode OSEM reconstructions were implemented on GPUs, using the geometry of a commercial PET system. The system matrices were built on the fly using an approach with multiple rays per detector pair. Reconstructions were performed for a volume of 188×188×57 voxels of 2×2×3.15 mm3 and for another volume of 144×144×57 voxels of 4×4×3.15 mm3. The time to compute the sensitivity matrix for the 188×188×57 array was 9 s, while the LM-OSEM algorithm performed at a rate of 1.1 million events per second; for the 144×144×57 array, the respective numbers are 8 s and 0.8 million events per second. This work makes it possible to envision fast reconstructions for advanced PET applications such as real-time dynamic studies and parametric image reconstructions.

  4. PSimScan: Algorithm and Utility for Fast Protein Similarity Search

    PubMed Central

    Kaznadzey, Anna; Alexandrova, Natalia; Novichkov, Vladimir; Kaznadzey, Denis

    2013-01-01

    In the era of metagenomics and diagnostic sequencing, the importance of protein comparison methods with boosted performance cannot be overstated. Here we present PSimScan (Protein Similarity Scanner), a flexible open-source protein similarity search tool which provides a significant gain in speed over BLASTP at the price of a controlled loss in sensitivity. The PSimScan algorithm introduces a number of novel performance optimization methods that the community can further use to improve the speed and lower the hardware requirements of bioinformatics software. The optimization starts at the lookup table construction; the initial lookup-table-based hits are then passed through a pipeline of filtering and aggregation routines of increasing computational complexity. The first step in this pipeline is a novel algorithm that builds and selects 'similarity zones' aggregated from neighboring matches on small arrays of adjacent diagonals. PSimScan performs 5 to 100 times faster than the standard NCBI BLASTP, depending on the chosen parameters, and runs on commodity hardware. Its sensitivity and selectivity at the slowest settings are comparable to NCBI BLASTP's and decrease as speed increases, yet they stay at levels reasonable for many tasks. PSimScan is most advantageous when used on large collections of query sequences: comparing the entire proteome of Streptococcus pneumoniae (2,042 proteins) to the NCBI non-redundant protein database of 16,971,855 records takes 6.5 hours on a moderately powerful PC, while the same task with NCBI BLASTP takes over 66 hours. We describe the innovations in the PSimScan algorithm in considerable detail to encourage bioinformaticians to improve on the tool and to use the innovations in their own software development. PMID:23505522
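
    The lookup-table stage it describes follows the classic seed-and-aggregate pattern: index every k-mer of the database, then bucket query hits by diagonal (query offset minus database offset) so that neighboring matches reinforce one another. A toy sketch of that general pattern (not PSimScan's actual data structures; k and the hit counting are placeholders):

    ```python
    from collections import defaultdict

    def build_lookup(db_seqs, k=3):
        """k-mer lookup table: word -> list of (sequence id, offset)."""
        table = defaultdict(list)
        for sid, seq in enumerate(db_seqs):
            for off in range(len(seq) - k + 1):
                table[seq[off:off + k]].append((sid, off))
        return table

    def seed_hits(query, table, k=3):
        """Initial lookup-table hits aggregated by (sequence, diagonal)."""
        hits = defaultdict(int)
        for qoff in range(len(query) - k + 1):
            for sid, doff in table.get(query[qoff:qoff + k], ()):
                hits[(sid, qoff - doff)] += 1   # same diagonal => same zone
        return hits

    db = ["MKVLAAGICW", "MKVLWAAGIC"]
    table = build_lookup(db)
    print(seed_hits("AAGICW", table))   # strong diagonals flag candidates
    ```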

  5. HapCompass: A Fast Cycle Basis Algorithm for Accurate Haplotype Assembly of Sequence Data

    PubMed Central

    Aguiar, Derek

    2012-01-01

    Genome assembly methods produce haplotype-phase-ambiguous assemblies due to limitations in current sequencing technologies. Determining the haplotype phase of an individual is computationally challenging and experimentally expensive; however, haplotype phase information is crucial in many bioinformatics workflows such as genetic association studies and genomic imputation. Current computational methods for determining haplotype phase from sequence data, known as haplotype assembly, have difficulty producing accurate results for large (1000 Genomes-type) data, or operate under restricted optimizations that are unrealistic given modern high-throughput sequencing technologies. We present a novel algorithm, HapCompass, for haplotype assembly of densely sequenced human genome data. The HapCompass algorithm operates on a graph where single nucleotide polymorphisms (SNPs) are nodes and edges are defined by sequence reads, viewed as supporting evidence of co-occurring SNP alleles in a haplotype. In our graph model, haplotype phasings correspond to spanning trees. We define the minimum weighted edge removal optimization on this graph and develop an algorithm based on cycle-basis local optimizations for resolving conflicting evidence. We then estimate the amount of sequencing required to produce a complete haplotype assembly of a chromosome. Using these estimates together with metrics borrowed from genome assembly and haplotype phasing, we compare the accuracy of HapCompass, the Genome Analysis ToolKit, and HapCut on 1000 Genomes Project and simulated data, and we show that HapCompass performs significantly better for a variety of data and metrics. HapCompass is freely available for download (www.brown.edu/Research/Istrail_Lab/). PMID:22697235

  6. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    SciTech Connect

    Huang, Peng; Andersson, Sean B.

    2014-06-15

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides.

  7. Parallel Implementation of Fast Randomized Algorithms for Low Rank Matrix Decomposition

    SciTech Connect

    Lucas, Andrew J.; Stalizer, Mark; Feo, John T.

    2014-03-01

    We analyze the parallel performance of randomized interpolative decomposition by decomposing low-rank complex-valued Gaussian random matrices larger than 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model, permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that on non-square matrices performance scales almost linearly, with runtimes about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested.

  8. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data

    SciTech Connect

    Chartrand, Rick

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.

  9. A fast algorithm for calculating an expected outbreak size on dynamic contagion networks.

    PubMed

    Enright, Jessica; Kao, Rowland R

    2016-09-01

    Calculation of the expected outbreak size of a simple contagion on a known contact network is a common and important epidemiological task, and is typically carried out by computationally intensive simulation. We describe an efficient exact method to calculate the expected outbreak size of a contagion on an outbreak-invariant network that is directed and acyclic, allowing us to model all dynamically changing networks when contagion can only travel forward in time. We describe our algorithm in pseudocode and show examples of its use on disease-relevant, data-derived networks. PMID:27379615

  10. KD-tree based clustering algorithm for fast face recognition on large-scale data

    NASA Astrophysics Data System (ADS)

    Wang, Yuanyuan; Lin, Yaping; Yang, Junfeng

    2015-07-01

    This paper proposes an acceleration method for large-scale face recognition systems. When dealing with a large-scale database, face recognition is time-consuming. In order to tackle this problem, we employ the k-means clustering algorithm to classify face data. Specifically, the data in each cluster are stored in the form of a kd-tree, and face feature matching is conducted with kd-tree based nearest neighborhood search. Experiments on the CAS-PEAL and a self-collected database show the effectiveness of our proposed method.
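
    A minimal sketch of the cluster-then-search idea (an illustration, not the authors' implementation): the gallery is partitioned with k-means, each cluster is indexed by a kd-tree, and a probe is matched only against the tree under its nearest centroid. The 128-dimensional features and all sizes are made-up stand-ins for real face descriptors.

        import numpy as np
        from scipy.cluster.vq import kmeans2     # k-means clustering
        from scipy.spatial import cKDTree        # kd-tree nearest-neighbour search

        rng = np.random.default_rng(0)
        gallery = rng.normal(size=(10000, 128))  # stand-in for enrolled face features
        probe = rng.normal(size=128)             # stand-in for a query feature

        k = 32
        centroids, labels = kmeans2(gallery, k, minit='++')

        # One kd-tree per cluster; remember the original gallery indices.
        trees = [cKDTree(gallery[labels == c]) for c in range(k)]
        index = [np.flatnonzero(labels == c) for c in range(k)]

        # Match the probe only inside the cluster of its nearest centroid.
        c = int(np.argmin(np.linalg.norm(centroids - probe, axis=1)))
        dist, local = trees[c].query(probe, k=1)
        print('best match: gallery item', index[c][local], 'at distance', dist)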

  11. A fast smoothing algorithm for post-processing of surface reflectance spectra retrieved from airborne imaging spectrometer data.

    PubMed

    Gao, Bo-Cai; Liu, Ming

    2013-10-14

    Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which captures minor artifacts in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented.
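
    A toy version of the gain-curve idea, under our assumption of a shared wavelength grid; the smoothing parameter and the synthetic spectra are illustrative only, not the paper's data or settings.

        import numpy as np
        from scipy.interpolate import UnivariateSpline

        wl = np.linspace(400.0, 2500.0, 211)             # wavelength grid, nm
        rng = np.random.default_rng(1)
        # synthetic "retrieved reflectances" sharing a small multiplicative artifact
        artifact = 1.0 + 0.01 * np.sin(wl / 15.0)
        spectra = ((0.3 + 0.1 * np.sin(wl / 200.0)) * artifact
                   + 0.002 * rng.normal(size=(50, wl.size)))

        def smooth(spec, s=0.003):                       # cubic smoothing spline fit
            return UnivariateSpline(wl, spec, k=3, s=s)(wl)

        # Common gain curve: mean ratio of smoothed to raw spectra over the scene.
        ratios = np.array([smooth(sp) / np.clip(sp, 1e-6, None) for sp in spectra])
        gain = ratios.mean(axis=0)

        smoothed_scene = spectra * gain                  # one gain for the whole scene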

  12. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable BUS system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points taken O(log n/log m) time on a 2-D PARBS of size mn x n with 3 less than or equal to m less than or equal to n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time in a 2-D PARBS of size n(exp 1.5) x n.

  13. Fast estimation of defect profiles from the magnetic flux leakage signal based on a multi-power affine projection algorithm.

    PubMed

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-01-01

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related with not only the MFL signals before it, but also the ones after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while maintaining the estimated profiles clearly close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection. PMID:25192314
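
    For reference, a bare-bones affine projection update (the standard single-projection APA, not the paper's multi-power variant), adapted here to a made-up linear response for illustration:

        import numpy as np

        def apa_update(w, X, d, mu=0.5, delta=1e-3):
            """One affine projection step. X: (L, P) holds the P most recent
            input vectors as columns, d: (P,) desired outputs, w: (L,) weights."""
            e = d - X.T @ w                               # a-priori errors
            G = X.T @ X + delta * np.eye(X.shape[1])      # regularized Gram matrix
            return w + mu * X @ np.linalg.solve(G, e)

        rng = np.random.default_rng(0)
        h = np.array([0.6, -0.3, 0.1])                    # made-up signal-to-depth response
        x = rng.normal(size=2000)
        d = np.convolve(x, h)[:2000] + 0.01 * rng.normal(size=2000)

        L, P = 3, 4
        w = np.zeros(L)
        for n in range(L + P, 2000):
            X = np.stack([x[n-i-L+1 : n-i+1][::-1] for i in range(P)], axis=1)
            w = apa_update(w, X, d[n-P+1 : n+1][::-1])
        print(np.round(w, 2))                             # converges towards h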

  14. Lamb waves based fast subwavelength imaging using a DORT-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    He, Jiaze; Yuan, Fuh-Gwo

    2016-02-01

    A Lamb wave-based, subwavelength imaging algorithm is developed for damage imaging in large-scale, plate-like structures based on a decomposition of the time-reversal operator (DORT) method combined with the multiple signal classification (MUSIC) algorithm in the space-frequency domain. In this study, a rapid, hybrid non-contact scanning system was proposed to image an aluminum plate using a piezoelectric linear array for actuation and a laser Doppler vibrometer (LDV) line-scan for sensing. The physics of wave propagation, reflection, and scattering that underlies the response matrix in the DORT method is mathematically formulated in the context of guided waves. The singular value decomposition (SVD) and MUSIC-based imaging condition enable quantifying the damage severity by a `reflectivity' parameter and super-resolution imaging. With the flexibility of this scanning system, a considerably large area can be imaged using lower frequency Lamb waves with limited line-scans. The experimental results showed that the hardware system with a signal processing tool such as the DORT-MUSIC (TR-MUSIC) imaging technique can provide rapid, highly accurate imaging results as well as damage quantification with unknown material properties.

  15. QuickProbs—A Fast Multiple Sequence Alignment Algorithm Designed for Graphics Processors

    PubMed Central

    Gudyś, Adam; Deorowicz, Sebastian

    2014-01-01

    Multiple sequence alignment is a crucial task in a number of biological analyses like secondary structure prediction, domain searching, phylogeny, etc. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In the paper we present QuickProbs, the variant of MSAProbs customised for graphics processors. We selected the two most time consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrices calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on quad-core PC equipped with high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than original CPU-parallel MSAProbs. Additional tests performed on several protein families from Pfam database give overall speed-up of 6.7. Compared to other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at similar speed. Additionally we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors. PMID:24586435

  16. A fast algorithm to compute precise type-2 centroids for real-time control applications.

    PubMed

    Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R

    2015-02-01

    An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions containing all possible embedded fuzzy sets, which together is referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space and is determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula of centroids with reduced root mean square error and computational overhead than those of the existing methods. Computer simulations for this real-time control application indicate that parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot even at high sampling rates. Furthermore, in the presence of measurement noise in system (plant) states, the proposed IT2 FS based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in plant response.
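
    For context, the classical iterative Karnik-Mendel procedure that the closed-form formula is benchmarked against, in a standard textbook form (our sketch, with an illustrative Gaussian footprint of uncertainty):

        import numpy as np

        def km_centroid(x, lo, hi, right=False, tol=1e-9):
            """One endpoint of the centroid interval of an IT2 FS sampled at
            points x with lower/upper membership grades lo and hi."""
            theta = 0.5 * (lo + hi)
            y = np.sum(x * theta) / np.sum(theta)
            while True:
                # upper memberships on one side of the switch point, lower on the other
                use_hi = (x > y) if right else (x <= y)
                theta = np.where(use_hi, hi, lo)
                y_new = np.sum(x * theta) / np.sum(theta)
                if abs(y_new - y) < tol:
                    return y_new
                y = y_new

        x = np.linspace(0.0, 10.0, 101)
        hi = np.exp(-0.5 * ((x - 5.0) / 2.0)**2)   # upper membership function
        lo = 0.6 * hi                              # lower membership function
        cl = km_centroid(x, lo, hi)
        cr = km_centroid(x, lo, hi, right=True)
        print(cl, cr, cr - cl)                     # the span of uncertainty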

  17. Volcanic Particle Aggregation: A Fast Algorithm for the Smoluchowski Coagulation Equation

    NASA Astrophysics Data System (ADS)

    Rossi, E.; Bagheri, G.; Bonadonna, C.

    2014-12-01

    Particle aggregation is a key process that significantly affects dispersal and sedimentation of volcanic ash, with obvious implications for the associated hazards. Most theoretical studies of particle aggregation have been based on the Smoluchowski Coagulation Equation (SCE), which describes the expected time evolution of the total grain-size distribution under the hypothesis that particles can collide and stick together following specific mathematical relations (kernels). Nonetheless, the practical application of the SCE to real eruptive scenarios is made extremely difficult, if not impossible, by the large number of Ordinary Differential Equations (ODEs) which have to be solved to cover the typical sizes of volcanic ash (1 micron to 1 mm). We propose an algorithm to approximate the discrete solutions of the SCE, which can describe the time evolution of the total grain-size distribution of the erupted material with increased computational efficiency. This algorithm has been applied to observed volcanic eruptions (i.e., Eyjafjallajokull 2010, Sakurajima 2013 and Mt. Saint Helens 1980) to see if the commonly used kernels can explain field data and to study how aggregation processes can modify the tephra dispersal on the ground. Different scenarios of sticking efficiencies and aggregate porosity have been used to test the sensitivity of the SCE to these parameters. Constraints on these parameters come from field observations and laboratory experiments.
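
    To make the discrete SCE concrete, here is a minimal (and deliberately naive, O(N^2) per step) Python solver; the constant kernel, grid size and time step are illustrative assumptions standing in for the volcanological kernels discussed.

        import numpy as np

        def sce_rhs(n, K):
            """dn_k/dt = 1/2 * sum_{i+j=k} K_ij n_i n_j - n_k * sum_j K_kj n_j."""
            N = len(n)
            gain = np.zeros(N)
            for k in range(N):                 # bin k holds particles of size k+1
                for i in range(k):             # ordered pairs with (i+1)+(j+1) = k+1
                    gain[k] += 0.5 * K[i, k-1-i] * n[i] * n[k-1-i]
            loss = n * (K @ n)
            return gain - loss

        N = 64
        n = np.zeros(N); n[0] = 1.0            # monodisperse initial distribution
        K = np.ones((N, N))                    # constant kernel as a placeholder
        dt = 0.01
        for _ in range(500):                   # simple forward Euler in time
            n = n + dt * sce_rhs(n, K)
        print('mass on grid:', np.sum((np.arange(N) + 1) * n))  # ~1 minus truncation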

  18. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15–17 and NOAA-18–19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfalls measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer in winter conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams.

  19. Real-time MRI-guided hyperthermia treatment using a fast adaptive algorithm.

    PubMed

    Stakhursky, Vadim L; Arabe, Omar; Cheng, Kung-Shan; Macfall, James; Maccarini, Paolo; Craciunescu, Oana; Dewhirst, Mark; Stauffer, Paul; Das, Shiva K

    2009-04-01

    Magnetic resonance (MR) imaging is promising for monitoring and guiding hyperthermia treatments. The goal of this work is to investigate the stability of an algorithm for online MR thermal image guided steering and focusing of heat into the target volume. The control platform comprised a four-antenna mini-annular phased array (MAPA) applicator operating at 140 MHz (used for extremity sarcoma heating) and a GE Signa Excite 1.5 T MR system, both of which were driven by a control workstation. MR proton resonance frequency shift images acquired during heating were used to iteratively update a model of the heated object, starting with an initial finite element computed model estimate. At each iterative step, the current model was used to compute a focusing vector, which was then used to drive the next iteration, until convergence. Perturbation of the driving vector was used to prevent the process from stalling away from the desired focus. Experimental validation of the performance of the automatic treatment platform was conducted with two cylindrical phantom studies, one homogeneous and one muscle equivalent with tumor tissue (conductivity 50% higher) inserted, with initial focal spots intentionally rotated 90 degrees and 50 degrees away from the desired focus, mimicking initial setup errors in applicator rotation. The integrated MR-HT treatment platform steered the focus of heating into the desired target volume in two quite different phantom tissue loads which model expected patient treatment configurations. For the homogeneous phantom test where the target was intentionally offset by 90 degrees rotation of the applicator, convergence to the proper phase focus in the target occurred after 16 iterations of the algorithm. For the more realistic test with a muscle equivalent phantom with tumor inserted with 50 degrees applicator displacement, only two iterations were necessary to steer the focus into the tumor target. Convergence improved the heating efficacy.

  20. Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays

    PubMed Central

    Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin

    2016-01-01

    In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301

  1. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior.

    PubMed

    Zhu, Qingsong; Mai, Jiaming; Shao, Ling

    2015-11-01

    Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we propose a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model for modeling the scene depth of the hazy image under this novel prior and learning the parameters of the model with a supervised learning method, the depth information can be well recovered. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.
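
    A compact sketch of the prior described above; the linear depth coefficients are placeholders standing in for the values the authors learn by supervised training, and the unrefined per-pixel transmission map is our simplification.

        import numpy as np

        def dehaze(img, theta=(0.12, 0.96, -0.78), beta=1.0, t0=0.1):
            """img: H x W x 3 RGB in [0, 1]; returns the recovered scene radiance."""
            v = img.max(axis=2)                               # brightness (HSV value)
            s = (v - img.min(axis=2)) / np.maximum(v, 1e-6)   # saturation
            depth = theta[0] + theta[1]*v + theta[2]*s        # the linear depth model
            # atmospheric light: mean colour of the deepest (haziest) 0.1% of pixels
            flat = depth.ravel()
            idx = np.argsort(flat)[-max(1, flat.size // 1000):]
            A = img.reshape(-1, 3)[idx].mean(axis=0)
            t = np.clip(np.exp(-beta * depth), t0, 1.0)       # transmission map
            return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)

        hazy = np.random.rand(240, 320, 3)     # stand-in for a real hazy photograph
        print(dehaze(hazy).shape)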

  2. A fast optimization transfer algorithm for image inpainting in wavelet domains.

    PubMed

    Chan, Raymond H; Wen, You-Wei; Yip, Andy M

    2009-07-01

    A wavelet inpainting problem refers to the problem of filling in missing wavelet coefficients in an image. A variational approach was used by Chan et al. The resulting functional was minimized by the gradient descent method. In this paper, we use an optimization transfer technique which involves replacing their univariate functional by a bivariate functional by adding an auxiliary variable. Our bivariate functional can be minimized easily by alternating minimization: for the auxiliary variable, the minimum has a closed form solution, and for the original variable, the minimization problem can be formulated as a classical total variation (TV) denoising problem and, hence, can be solved efficiently using a dual formulation. We show that our bivariate functional is equivalent to the original univariate functional. We also show that our alternating minimization is convergent. Numerical results show that the proposed algorithm is very efficient and outperforms that of Chan et al.

  3. Fast algorithms for visualizing fluid motion in steady flow on unstructured grids

    NASA Technical Reports Server (NTRS)

    Ueng, S. K.; Sikorski, K.; Ma, Kwan-Liu

    1995-01-01

    The plotting of streamlines is an effective way of visualizing fluid motion in steady flows. Additional information about the flowfield, such as local rotation and expansion, can be shown by drawing the streamline in the form of a ribbon or tube. In this paper, we present efficient algorithms for the construction of streamlines, streamribbons and streamtubes on unstructured grids. A specialized version of the Runge-Kutta method has been developed to speed up the integration of particle paths. We have also derived closed-form solutions for calculating the angular rotation rate and radius to construct streamribbons and streamtubes, respectively. According to our analysis and test results, these formulations are two to four times better in performance than previous numerical methods. As a large number of traces are calculated, the improved performance can be significant.
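
    For reference, a minimal fixed-step fourth-order Runge-Kutta particle tracer; a real unstructured-grid implementation replaces the analytic velocity function below with cell location plus interpolation, which is where the paper's specialized Runge-Kutta variant saves time.

        import numpy as np

        def velocity(p):                       # a simple analytic swirling field
            x, y, z = p
            return np.array([-y, x, 0.2])

        def streamline(p0, h=0.05, steps=200):
            pts = [np.asarray(p0, dtype=float)]
            for _ in range(steps):
                p = pts[-1]
                k1 = velocity(p)
                k2 = velocity(p + 0.5*h*k1)
                k3 = velocity(p + 0.5*h*k2)
                k4 = velocity(p + h*k3)
                pts.append(p + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4))
            return np.array(pts)

        print(streamline([1.0, 0.0, 0.0])[-1])   # a helical path around the z-axis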

  4. A Fast Semiautomatic Algorithm for Centerline-Based Vocal Tract Segmentation

    PubMed Central

    Poznyakovskiy, Anton A.; Mainka, Alexander; Platzek, Ivan; Mürbe, Dirk

    2015-01-01

    Vocal tract morphology is an important factor in voice production. Its analysis has potential implications for educational matters as well as medical issues like voice therapy. The knowledge of the complex adjustments in the spatial geometry of the vocal tract during phonation is still limited. For a major part, this is due to difficulties in acquiring geometry data of the vocal tract in the process of voice production. In this study, a centerline-based segmentation method using active contours was introduced to extract the geometry data of the vocal tract obtained with MRI during sustained vowel phonation. The applied semiautomatic algorithm was found to be time- and interaction-efficient and allowed performing various three-dimensional measurements on the resulting model. The method is suitable for an improved detailed analysis of the vocal tract morphology during speech or singing which might give some insights into the underlying mechanical processes. PMID:26557710

  5. A Fast Semiautomatic Algorithm for Centerline-Based Vocal Tract Segmentation.

    PubMed

    Poznyakovskiy, Anton A; Mainka, Alexander; Platzek, Ivan; Mürbe, Dirk

    2015-01-01

    Vocal tract morphology is an important factor in voice production. Its analysis has potential implications for educational matters as well as medical issues like voice therapy. The knowledge of the complex adjustments in the spatial geometry of the vocal tract during phonation is still limited. For a major part, this is due to difficulties in acquiring geometry data of the vocal tract in the process of voice production. In this study, a centerline-based segmentation method using active contours was introduced to extract the geometry data of the vocal tract obtained with MRI during sustained vowel phonation. The applied semiautomatic algorithm was found to be time- and interaction-efficient and allowed performing various three-dimensional measurements on the resulting model. The method is suitable for an improved detailed analysis of the vocal tract morphology during speech or singing which might give some insights into the underlying mechanical processes.

  6. Sequential quadratic programming-based fast path planning algorithm subject to no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang

    2016-08-01

    Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
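
    The circular no-fly-zone case of the intersection test reduces to a point-to-segment distance check; a sketch (our own formulation of the standard geometric condition, not the paper's exact code):

        import numpy as np

        def segment_hits_circle(p, q, c, r):
            """p, q: segment endpoints; c: circle centre; r: radius (2-D arrays)."""
            d = q - p
            t = np.clip(np.dot(c - p, d) / max(np.dot(d, d), 1e-12), 0.0, 1.0)
            closest = p + t * d                      # closest point on the segment
            return np.linalg.norm(c - closest) <= r

        p, q = np.array([0.0, 0.0]), np.array([10.0, 0.0])
        print(segment_hits_circle(p, q, np.array([5.0, 2.0]), 3.0))   # True: reroute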

  7. Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays.

    PubMed

    Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin

    2016-01-01

    In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301

  8. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network.

    PubMed

    Le, Trong-Ngoc; Bao, Pham The; Huynh, Hieu Trung

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively. PMID:27597960

  9. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

    PubMed Central

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the “ground truth.” Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively. PMID:27597960

  10. Optimal design of groundwater remediation systems using a probabilistic multi-objective fast harmony search algorithm under uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Q.; Wu, J.; Qian, J.

    2013-12-01

    This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely the multi-objective fast harmony search algorithm (MOFHS), with a probabilistic Pareto domination ranking and a probabilistic niche technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient hydraulic conductivity data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal groundwater remediation system of a two-dimensional hypothetical test problem involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the percentage of mass remaining in the aquifer at the end of the operational period, using Pump-and-Treat (PAT) technology to clean up contaminated groundwater. Monte Carlo (MC) analysis is used to demonstrate the effectiveness of the proposed methodology: the MC analysis is applied to each Pareto solution for every hydraulic conductivity realization, and the statistical mean and the upper and lower bounds of 95% confidence intervals are calculated. The MC analysis results show that all of the Pareto-optimal solutions are located between the upper and lower bounds of the MC analysis. Moreover, the root mean square errors (RMSEs) between the Pareto-optimal solutions by the PMOFHS and the average values of optimal solutions by the MC analysis are 0.0204 for the first objective and 0.0318 for the second objective, considerably smaller than the corresponding RMSEs between the results of the existing probabilistic multi-objective genetic algorithm (PMOGA) and the MC analysis, 0.0384 and 0.0397, respectively.
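
    For orientation, a bare-bones single-objective harmony search loop, the metaheuristic underlying MOFHS/PMOFHS; the probabilistic Pareto ranking and niche technique of the paper are not reproduced here, and all parameter values are illustrative.

        import numpy as np

        def harmony_search(f, lo, hi, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
            rng = np.random.default_rng(0)
            dim = len(lo)
            hm = rng.uniform(lo, hi, size=(hms, dim))      # harmony memory
            cost = np.array([f(h) for h in hm])
            for _ in range(iters):
                # memory consideration: draw each variable from a random memory row
                new = np.where(rng.random(dim) < hmcr,
                               hm[rng.integers(hms, size=dim), np.arange(dim)],
                               rng.uniform(lo, hi))
                adjust = rng.random(dim) < par             # pitch adjustment
                new = np.clip(new + adjust * bw * (hi - lo) * rng.uniform(-1, 1, dim),
                              lo, hi)
                c_new = f(new)
                worst = np.argmax(cost)
                if c_new < cost[worst]:                    # replace the worst harmony
                    hm[worst], cost[worst] = new, c_new
            best = np.argmin(cost)
            return hm[best], cost[best]

        best, val = harmony_search(lambda x: float(np.sum((x - 0.3)**2)),
                                   lo=np.zeros(4), hi=np.ones(4))
        print(best, val)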

  11. A fast SCOP fold classification system using content-based E-Predict algorithm

    PubMed Central

    Chi, Pin-Hao; Shyu, Chi-Ren; Xu, Dong

    2006-01-01

    Background Domain experts manually construct the Structural Classification of Protein (SCOP) database to categorize and compare protein structures. Even though using the SCOP database is believed to be more reliable than classification results from other methods, it is labor intensive. To mimic human classification processes, we develop an automatic SCOP fold classification system to assign possible known SCOP folds and recognize novel folds for newly-discovered proteins. Results With a sufficient amount of ground truth data, our system is able to assign the known folds for newly-discovered proteins in the latest SCOP v1.69 release with 92.17% accuracy. Our system also recognizes the novel folds with 89.27% accuracy using 10 fold cross validation. The average response time for proteins with 500 and 1409 amino acids to complete the classification process is 4.1 and 17.4 seconds, respectively. By comparison with several structural alignment algorithms, our approach outperforms previous methods on both the classification accuracy and efficiency. Conclusion In this paper, we build an advanced, non-parametric classifier to accelerate the manual classification processes of SCOP. With satisfactory ground truth data from the SCOP database, our approach identifies relevant domain knowledge and yields reasonably accurate classifications. Our system is publicly accessible at . PMID:16872501

  12. A fast reconstruction algorithm for bioluminescence tomography based on smoothed l0 norm regularization

    NASA Astrophysics Data System (ADS)

    He, Xiaowei; Yu, Jingjing; Geng, Guohua; Guo, Hongbo

    2013-10-01

    As an important optical molecular imaging technique, bioluminescence tomography (BLT) offers an inexpensive and sensitive means for non-invasively imaging a variety of physiological and pathological activities at cellular and molecular levels in living small animals. The key problem of BLT is to recover the distribution of the internal bioluminescence sources from limited measurements on the surface. Considering the sparsity of the light source distribution, we directly formulate the inverse problem of BLT as an l0-norm minimization model and present a smoothed l0-norm (SL0) based reconstruction algorithm. By approximating the discontinuous l0 norm with a suitable continuous function, the SL0 method avoids both the intractable computational load of the minimal l0 search and the high sensitivity of the l0 norm to noise. Numerical experiments on a mouse atlas demonstrate that the proposed SL0 based reconstruction method can obtain whole domain reconstruction without any a priori knowledge of the source permissible region, yielding almost the same reconstruction results as l1-norm methods.
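
    A minimal smoothed-l0 solver for a generic underdetermined system Ax = b, following the standard SL0 scheme (a gradient step on a Gaussian approximation of the l0 norm, followed by projection back onto the constraint set); the BLT-specific system matrix is beyond this sketch.

        import numpy as np

        def sl0(A, b, sigma_min=1e-3, sigma_decay=0.7, mu=2.0, inner=3):
            A_pinv = A.T @ np.linalg.inv(A @ A.T)
            x = A_pinv @ b                                   # minimum-l2 starting point
            sigma = 2.0 * np.max(np.abs(x))
            while sigma > sigma_min:
                for _ in range(inner):
                    delta = x * np.exp(-x**2 / (2 * sigma**2))  # grad of smoothed l0
                    x = x - mu * delta
                    x = x - A_pinv @ (A @ x - b)             # project back onto Ax = b
                sigma *= sigma_decay
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(20, 80))
        x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.0, -0.5, 2.0]
        x_hat = sl0(A, A @ x_true)
        print(np.round(x_hat[[3, 17, 42]], 2))               # recovers the sparse entries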

  13. Development of fast line scanning imaging algorithm for diseased chicken detection

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.

    2005-11-01

    A hyperspectral line-scan imaging system for automated inspection of wholesome and diseased chickens was developed and demonstrated. The hyperspectral imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph. The system used a spectrograph to collect spectral measurements across a pixel-wide vertical linear field of view through which moving chicken carcasses passed. After a series of image calibration procedures, the hyperspectral line-scan images were collected for chickens on a laboratory simulated processing line. From spectral analysis, four key wavebands for differentiating between wholesome and systemically diseased chickens were selected: 413 nm, 472 nm, 515 nm, and 546 nm, and a reference waveband, 622 nm. The ratio of relative reflectance between each key wavelength and the reference wavelength was calculated as an image feature. A fuzzy logic-based algorithm utilizing the key wavebands was developed to identify individual pixels on the chicken surface exhibiting symptoms of systemic disease. Two differentiation methods were built to successfully differentiate 72 systemically diseased chickens from 65 wholesome chickens.

  14. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative based global sensitivity measures (Sobol' & Kucherenko '09) can be practically used to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and the computational cost usually increases linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative based sensitivities, as is the case in various other domains such as meteorology or aerodynamics, with no significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
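
    Forward-mode algorithmic differentiation can be demonstrated in a few lines with dual numbers, which is the mechanism (in miniature) that AD tools apply to full PSHA codes; the toy ground-motion relation and its coefficients below are made up for illustration.

        class Dual:
            """Number carrying a value and a derivative (forward-mode AD)."""
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot
            def _lift(self, o):
                return o if isinstance(o, Dual) else Dual(o)
            def __add__(self, o):
                o = self._lift(o)
                return Dual(self.val + o.val, self.dot + o.dot)
            __radd__ = __add__
            def __mul__(self, o):
                o = self._lift(o)
                return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
            __rmul__ = __mul__

        # toy ground-motion relation ln(PGA) = a + b*M (coefficients made up)
        a, b = -3.5, 0.9
        M = Dual(6.0, 1.0)                  # seed the derivative dM/dM = 1
        lnpga = a + b * M
        print(lnpga.val, lnpga.dot)         # model value and exact d ln(PGA)/dM = b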

  15. A Fast Algorithm for Automatic Detection of Ionospheric Disturbances Using GPS Slant Total Electron Content Data

    NASA Astrophysics Data System (ADS)

    Efendi, Emre; Arikan, Feza; Yarici, Aysenur

    2016-07-01

    Solar, geomagnetic, gravitational and seismic activities cause disturbances in the ionospheric region of the upper atmosphere for space-based communication, navigation and positioning systems. These disturbances can be categorized with respect to their amplitude, duration and frequency. Typically in the literature, ionospheric disturbances are investigated with gradient-based methods on Total Electron Content (TEC) data estimated from ground-based dual frequency Global Positioning System (GPS) receivers. In this study, a detection algorithm is developed to determine the variability in Slant TEC (STEC) data. The developed method, namely Differential Rate of TEC (DRoT), is based on the Rate of TEC (RoT) method that is widely used in the literature. RoT is usually applied to Vertical TEC (VTEC) and can be defined as the normalized derivative of VTEC. Unfortunately, the resultant data obtained from the application of RoT to VTEC suffer from inaccuracies due to the mapping function, and the resultant values are very noisy, which makes it difficult to automatically detect disturbances due to variability in the ionosphere. The developed DRoT method can be defined as the normalized metric norm (L2) between the RoT and its baseband trend structure. In this study, the error performance of DRoT is determined using synthetic data with variable bounds on the parameter set of amplitude, frequency and period of disturbance. It is observed that the DRoT method can detect disturbances in three categories. For DRoT values less than 50%, there is no significant disturbance in STEC data. For DRoT values between 50% and 70%, a medium scale disturbance can be observed. For DRoT values over 70%, severe disturbances such as Large Scale Travelling Ionospheric Disturbances (TID) or plasma bubbles can be observed. When DRoT is applied to the GPS-STEC data for stations in high latitude, equatorial and mid-latitude regions, it is observed that disturbances with amplitudes larger than 10% of the difference between
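
    The following is one plausible reading of the DRoT metric, reconstructed from the abstract alone and therefore an assumption rather than the authors' exact formulation: RoT as the time derivative of STEC, a moving-average trend, and DRoT as their normalized L2 distance expressed in percent.

        import numpy as np

        def drot(stec, dt=30.0, win=41):
            rot = np.diff(stec) / dt                          # rate of TEC
            pad = win // 2
            kernel = np.ones(win) / win                       # moving-average trend
            trend = np.convolve(np.pad(rot, pad, mode='edge'), kernel, mode='valid')
            return 100.0 * np.linalg.norm(rot - trend) / max(np.linalg.norm(rot), 1e-12)

        t = np.arange(0.0, 7200.0, 30.0)                      # 2 h of 30 s samples
        quiet = 20 + 2 * np.sin(2*np.pi*t/7200)               # smooth STEC, in TECU
        storm = quiet + 0.8 * np.sin(2*np.pi*t/300) * (t > 3600)  # 5-min oscillation
        print(drot(quiet), drot(storm))                       # low vs. high DRoT score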

  16. Simulations of emissivity in passive microwave remote sensing with three-dimensional numerical solutions of Maxwell equations and fast algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Lin

    In the first part of the work, we developed code for large-scale computation to solve the 3-dimensional microwave scattering problem. Maxwell integral equations are solved by using MoM with RWG basis functions in conjunction with fast computation algorithms. A cost-effective parallel and distributed simulation was implemented on a low cost PC cluster, which consists of 32 processors connected to a fast Ethernet switch. More than a million surface current unknowns were solved at unprecedented speeds. Accurate simulations of emissivities and bistatic coefficients from ocean and soil were achieved. An exponential correlation function and an ocean spectrum are implemented for generating soil and ocean surfaces; these have fine scale features with large rms slope. The results were justified by comparison with numerical results from the original code, which is based on pulse basis functions, with analytic methods like SPM, and with experiments. In the second part of the work, fully polarimetric microwave emissions from wind-generated foam-covered ocean surfaces were investigated. The foam is treated as densely packed air bubbles coated with a thin seawater layer. The absorption, scattering and extinction coefficients were calculated by Monte Carlo simulations of solutions of Maxwell equations for a collection of coated particles. The effects of boundary roughness of ocean surfaces were included by using the second-order small perturbation method (SPM) describing the reflection coefficients between foam and ocean. An empirical wave-number spectrum was used to represent the small-scale wind-generated sea surfaces. The theoretical results of the four Stokes brightness temperatures with typical parameters of foam in passive remote sensing at 10.8 GHz, 19.0 GHz and 36.5 GHz were illustrated. The azimuth variations of polarimetric brightness temperature were calculated. Emission with various wind speeds and foam layer thicknesses was studied. The results were also compared

  17. A finite rate of innovation algorithm for fast and accurate spike detection from two-photon calcium imaging

    NASA Astrophysics Data System (ADS)

    Oñativia, Jon; Schultz, Simon R.; Dragotti, Pier Luigi

    2013-08-01

    Objective. Inferring the times of sequences of action potentials (APs) (spike trains) from neurophysiological data is a key problem in computational neuroscience. The detection of APs from two-photon imaging of calcium signals offers certain advantages over traditional electrophysiological approaches, as up to thousands of spatially and immunohistochemically defined neurons can be recorded simultaneously. However, due to noise, dye buffering and the limited sampling rates in common microscopy configurations, accurate detection of APs from calcium time series has proved to be a difficult problem. Approach. Here we introduce a novel approach to the problem making use of finite rate of innovation (FRI) theory (Vetterli et al 2002 IEEE Trans. Signal Process. 50 1417-28). For calcium transients well fit by a single exponential, the problem is reduced to reconstructing a stream of decaying exponentials. Signals made of a combination of exponentially decaying functions with different onset times are a subclass of FRI signals, for which much theory has recently been developed by the signal processing community. Main results. We demonstrate for the first time the use of FRI theory to retrieve the timing of APs from calcium transient time series. The final algorithm is fast, non-iterative and parallelizable. Spike inference can be performed in real-time for a population of neurons and does not require any training phase or learning to initialize parameters. Significance. The algorithm has been tested with both real data (obtained by simultaneous electrophysiology and multiphoton imaging of calcium signals in cerebellar Purkinje cell dendrites), and surrogate data, and outperforms several recently proposed methods for spike train inference from calcium imaging data.
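
    The single-exponential case reduces to a first-order annihilation step: if the calcium trace obeys c[n] = alpha*c[n-1] + s[n], removing the AR(1) part exposes the spike train. A toy version follows (our simplification; real data needs the paper's full FRI machinery):

        import numpy as np

        rng = np.random.default_rng(0)
        n, alpha = 600, 0.95                     # samples, per-sample decay factor
        spikes = np.zeros(n)
        spikes[rng.choice(np.arange(20, n), size=8, replace=False)] = 1.0

        c = np.zeros(n)                          # synthesize the fluorescence trace
        for t in range(1, n):
            c[t] = alpha * c[t-1] + spikes[t]
        c += 0.02 * rng.normal(size=n)           # measurement noise

        # Estimate the decay by least squares on the AR(1) structure, then
        # annihilate it: the residual is (approximately) the spike train.
        alpha_hat = np.dot(c[1:], c[:-1]) / np.dot(c[:-1], c[:-1])
        residual = c[1:] - alpha_hat * c[:-1]
        detected = np.flatnonzero(residual > 0.5) + 1
        print(detected, np.flatnonzero(spikes))  # recovered vs. true spike times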

  18. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    DOE PAGES

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since the analytical models are only applicable for special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm in a simulation code for space charge dominated photoemission processes.

  19. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    SciTech Connect

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most important collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since the analytical models are only applicable for special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm in a simulation code for space charge dominated photoemission processes.

  20. A finite rate of innovation algorithm for fast and accurate spike detection from two-photon calcium imaging

    PubMed Central

    Oñativia, Jon; Schultz, Simon R; Dragotti, Pier Luigi

    2014-01-01

    Objective Inferring the times of sequences of action potentials (APs) (spike trains) from neurophysiological data is a key problem in computational neuroscience. The detection of APs from two-photon imaging of calcium signals offers certain advantages over traditional electrophysiological approaches, as up to thousands of spatially and immunohistochemically defined neurons can be recorded simultaneously. However, due to noise, dye buffering and the limited sampling rates in common microscopy configurations, accurate detection of APs from calcium time series has proved to be a difficult problem. Approach Here we introduce a novel approach to the problem making use of finite rate of innovation (FRI) theory (Vetterli et al 2002 IEEE Trans. Signal Process. 50 1417–28). For calcium transients well fit by a single exponential, the problem is reduced to reconstructing a stream of decaying exponentials. Signals made of a combination of exponentially decaying functions with different onset times are a subclass of FRI signals, for which much theory has recently been developed by the signal processing community. Main results We demonstrate for the first time the use of FRI theory to retrieve the timing of APs from calcium transient time series. The final algorithm is fast, non-iterative and parallelizable. Spike inference can be performed in real-time for a population of neurons and does not require any training phase or learning to initialize parameters. Significance The algorithm has been tested with both real data (obtained by simultaneous electrophysiology and multiphoton imaging of calcium signals in cerebellar Purkinje cell dendrites), and surrogate data, and outperforms several recently proposed methods for spike train inference from calcium imaging data. PMID:23860257

  1. A Fetal Electrocardiogram Signal Extraction Algorithm Based on Fast One-Unit Independent Component Analysis with Reference

    PubMed Central

    2016-01-01

    Fetal electrocardiogram (FECG) extraction is a very important procedure for fetal health assessment. In this article, we propose a fast one-unit independent component analysis with reference (ICA-R) that is suitable for extracting the FECG. Most previous ICA-R algorithms only focused on how to optimize the cost function of the ICA-R and paid little attention to improving the cost function itself. They did not fully take advantage of the prior information about the desired signal to improve the ICA-R. In this paper, we first use the kurtosis information of the desired FECG signal to simplify the non-Gaussian measurement function and then construct a new cost function by directly using a nonquadratic function of the extracted signal to measure its non-Gaussianity. The new cost function does not involve the computation of the difference between the function of the Gaussian random vector and that of the extracted signal, which is time consuming. Centering and whitening are also used to preprocess the observed signal to further reduce the computational complexity. While the proposed method has the same error performance as other improved one-unit ICA-R methods, it actually has lower computational complexity than those other methods. Simulations are performed separately on artificial and real-world electrocardiogram signals. PMID:27703492
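
    A generic one-unit fixed-point ICA iteration (kurtosis nonlinearity) on a toy two-channel mixture gives the flavor of the extraction step; the reference-signal constraint and the authors' new cost function are not reproduced here.

        import numpy as np

        def one_unit_ica(X, iters=200, seed=0):
            """X: (channels, samples), already centered and whitened."""
            rng = np.random.default_rng(seed)
            w = rng.normal(size=X.shape[0])
            w /= np.linalg.norm(w)
            for _ in range(iters):
                y = w @ X
                w_new = (X * y**3).mean(axis=1) - 3.0 * w   # kurtosis fixed point
                w_new /= np.linalg.norm(w_new)
                if abs(abs(w_new @ w) - 1.0) < 1e-10:       # converged
                    return w_new @ X
                w = w_new
            return w @ X

        # toy mixture: a spiky "fetal" source buried in a sinusoidal "maternal" one
        t = np.linspace(0.0, 1.0, 2000)
        s1 = np.sign(np.sin(2*np.pi*25*t)) * np.sin(2*np.pi*25*t)**8
        s2 = np.sin(2*np.pi*3*t)
        X = np.array([[0.7, 0.5], [0.4, 0.9]]) @ np.vstack([s1, s2])
        X -= X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(np.cov(X))                    # whitening transform
        X = (E / np.sqrt(d)) @ E.T @ X
        y = one_unit_ica(X)
        print(abs(np.corrcoef(y, s1)[0, 1]), abs(np.corrcoef(y, s2)[0, 1]))  # one is ~1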

  2. Fast mode decision algorithm in MPEG-2 to H.264/AVC transcoding including group of picture structure conversion

    NASA Astrophysics Data System (ADS)

    Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang

    2009-05-01

    The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications, such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.

  3. fast-matmul

    SciTech Connect

    Grey Ballard, Austin Benson

    2014-11-26

    This software provides implementations of fast matrix multiplication algorithms. These algorithms perform fewer floating point operations than the classical cubic algorithm. The software uses code generation to automatically implement the fast algorithms based on high-level descriptions. The code serves two general purposes. The first is to demonstrate that these fast algorithms can out-perform vendor matrix multiplication algorithms for modest problem sizes on a single machine. The second is to rapidly prototype many variations of fast matrix multiplication algorithms to encourage future research in this area. The implementations target sequential and shared memory parallel execution.
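
    As a concrete example of the algorithm family the package implements, here is Strassen's method in Python (our illustration, not the package's generated code): seven recursive block products instead of eight, falling back to the classical product at small sizes.

        import numpy as np

        def strassen(A, B, leaf=64):
            n = A.shape[0]                       # assumes square, power-of-two sizes
            if n <= leaf:
                return A @ B                     # classical product at the leaves
            h = n // 2
            A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
            B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
            M1 = strassen(A11 + A22, B11 + B22, leaf)
            M2 = strassen(A21 + A22, B11, leaf)
            M3 = strassen(A11, B12 - B22, leaf)
            M4 = strassen(A22, B21 - B11, leaf)
            M5 = strassen(A11 + A12, B22, leaf)
            M6 = strassen(A21 - A11, B11 + B12, leaf)
            M7 = strassen(A12 - A22, B21 + B22, leaf)
            return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                             [M2 + M4, M1 + M3 - M2 + M6]])

        A, B = np.random.rand(256, 256), np.random.rand(256, 256)
        print(np.allclose(strassen(A, B), A @ B))        # True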

  4. Optimization of the operational parameters in a fast axial flow CW CO2 laser using artificial neural networks and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Adineh, V. R.; Aghanajafi, C.; Dehghan, G. H.; Jelvani, S.

    2008-11-01

    This paper presents an artificial intelligence approach for optimization of operational parameters such as the gas pressure ratio and discharge current in a fast-axial-flow CW CO2 laser by coupling artificial neural networks and a genetic algorithm. First, a series of experiments was used as the learning data for the artificial neural networks. The best-trained network was connected to the genetic algorithm as a fitness function to find the optimum parameters. After the optimization, the calculated laser power increases by 33% and the measured value increases by 21% in an experiment, as compared to a non-optimized case.

  5. Assessment of visual quality and spatial accuracy of fast anisotropic diffusion and scan conversion algorithms for real-time three-dimensional spherical ultrasound

    NASA Astrophysics Data System (ADS)

    Duan, Qi; Angelini, Elsa D.; Laine, Andrew

    2004-04-01

    Three-dimensional ultrasound machines based on matrix phased-array transducers are gaining predominance for real-time dynamic screening in cardiac and obstetric practice. These transducer arrays acquire three-dimensional data in spherical coordinates along lines tiled in azimuth and elevation angles at incremental depth. This study aims at evaluating fast filtering and scan conversion algorithms, applied in the spherical domain prior to visualization in Cartesian coordinates, for visual quality and spatial measurement accuracy. Fast 3D scan conversion algorithms were implemented with interpolation kernels of different order. Downsizing and smoothing of sampling artifacts were integrated into the scan conversion process. In addition, a denoising scheme for spherical coordinate data with 3D anisotropic diffusion was implemented and applied prior to scan conversion to improve image quality. Reconstruction results under different parameter settings, such as different interpolation kernels, scaling factors, smoothing options, and denoising, are reported. Image quality was evaluated on several data sets via visual inspection and measurements of cylinder object dimensions. Error measurements of the cylinder's radius, reported in this paper, show that the proposed fast scan conversion algorithm can correctly reconstruct three-dimensional ultrasound in Cartesian coordinates under tuned parameter settings. Denoising via three-dimensional anisotropic diffusion was able to greatly improve the quality of resampled data without affecting the accuracy of spatial information, after the introduction of a variable gradient threshold parameter.

  6. A Fast and Portable Reimplementation of Piskunov and Valenti's Optimal-Extraction Algorithm with Improved Cosmic-Ray Removal and Optimal Sky Subtraction

    NASA Astrophysics Data System (ADS)

    Ritter, A.; Hyde, E. A.; Parker, Q. A.

    2014-02-01

    We present a fast and portable reimplementation of Piskunov and Valenti's optimal-extraction algorithm (Piskunov & Valenti 2002) in C/C++ together with full uncertainty propagation, improved cosmic-ray removal, and an optimal background-subtraction algorithm. This reimplementation can be used with IRAF and most existing data-reduction packages and leads to signal-to-noise ratios close to the Poisson limit. The algorithm is very stable, operates on spectra from a wide range of instruments (slit spectra and fibre feeds), and has been extensively tested for VLT/UVES, ESO/CES, ESO/FEROS, NTT/EMMI, NOT/ALFOSC, STELLA/SES, SSO/WiFeS, and finally, P60/SEDM-IFU data.

  7. fast-matmul

    2014-11-26

    This software provides implementations of fast matrix multiplication algorithms. These algorithms perform fewer floating point operations than the classical cubic algorithm. The software uses code generation to automatically implement the fast algorithms based on high-level descriptions. The code serves two general purposes. The first is to demonstrate that these fast algorithms can out-perform vendor matrix multiplication algorithms for modest problem sizes on a single machine. The second is to rapidly prototype many variations of fast matrix multiplication algorithms to encourage future research in this area. The implementations target sequential and shared memory parallel execution.

  8. Photometric selection of quasars in large astronomical data sets with a fast and accurate machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2014-03-01

    Future astronomical surveys will produce data on ~10^8 objects per night. In order to characterize and classify these sources, we will require algorithms that scale linearly with the size of the data, that can be easily parallelized, and where the speedup of the parallel algorithm is linear in the number of processing cores. In this paper, we present such an algorithm and apply it to the question of colour selection of quasars. We use non-parametric Bayesian classification and a binning algorithm implemented with hash tables (BASH tables). We show that this algorithm's run time scales linearly with the number of test set objects and is independent of the number of training set objects. We also show that it has the same classification accuracy as other algorithms. For current data set sizes, it is up to three orders of magnitude faster than commonly used naive kernel-density-estimation techniques, and it is estimated to be about eight times faster than the current fastest algorithm using dual kd-trees for kernel density estimation. The BASH table algorithm scales linearly with the size of the test set data only, and so for future larger data sets it will be even faster compared to other algorithms, which all depend on the size of both the test set and the training set. Since it uses linear data structures, it is easier to parallelize than tree-based algorithms, and its speedup is linear in the number of cores, unlike tree-based algorithms whose speedup plateaus after a certain number of cores. Moreover, due to the use of hash tables to implement the binning, the memory usage is very small. While our analysis is for the specific problem of selection of quasars, the ideas are general and the BASH table algorithm can be applied to any density-estimation problem involving sparse high-dimensional data sets. Since sparse high-dimensional data sets are a common type of scientific data set, this method has the potential to be useful in a broad range of
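
    The core of the binning idea can be sketched with an ordinary Python dictionary standing in for the BASH tables. Building the bins is a single linear pass over the training set, and each test object then costs one hash lookup, independent of the training set size. The class names, bin width, and smoothing below are illustrative assumptions, not the paper's choices.

        from collections import defaultdict

        def build_bins(X, y, width):
            """Hash-table binning: a per-class histogram over fixed-width bins.
            A simplified sketch of the idea, not the paper's BASH-table code."""
            counts = defaultdict(lambda: defaultdict(int))
            for x, label in zip(X, y):
                key = tuple(int(v // width) for v in x)  # integer bin coordinates
                counts[key][label] += 1
            return counts

        def classify(counts, x, width, prior=1):
            key = tuple(int(v // width) for v in x)
            binc = counts.get(key, {})
            # density comparison with additive smoothing; two classes assumed
            q = binc.get("quasar", 0) + prior
            s = binc.get("star", 0) + prior
            return "quasar" if q > s else "star"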

  9. Fast adaptive OFDM-PON over single fiber loopback transmission using dynamic rate adaptation-based algorithm for channel performance improvement

    NASA Astrophysics Data System (ADS)

    Kartiwa, Iwa; Jung, Sang-Min; Hong, Moon-Ki; Han, Sang-Kook

    2014-03-01

    In this paper, we propose a novel fast adaptive approach, applied to an OFDM-PON 20-km single-fiber loopback transmission system, to improve channel performance in terms of a stabilized BER below 2 × 10^-3 and a throughput beyond 10 Gb/s. The upstream transmission is performed through light-source-seeded modulation using a 1-GHz RSOA at the ONU. Experimental results indicate that the dynamic rate adaptation algorithm based on greedy Levin-Campello can be an effective solution to mitigate the channel instability and data rate degradation caused by the Rayleigh backscattering effect and inefficient subcarrier resource allocation.

  10. Fast and optimal multiframe blind deconvolution algorithm for high-resolution ground-based imaging of space objects.

    PubMed

    Matson, Charles L; Borelli, Kathy; Jefferies, Stuart; Beckner, Charles C; Hege, E Keith; Lloyd-Hart, Michael

    2009-01-01

    We report a multiframe blind deconvolution algorithm that we have developed for imaging through the atmosphere. The algorithm has been parallelized to a significant degree for execution on high-performance computers, with an emphasis on distributed-memory systems so that it can be hosted on commodity clusters. As a result, image restorations can be obtained in seconds to minutes. We have compared and quantified the quality of its image restorations relative to the associated Cramér-Rao lower bounds (when they can be calculated). We describe the algorithm and its parallelization in detail, demonstrate the scalability of its parallelization across distributed-memory computer nodes, discuss the results of comparing sample variances of its output to the associated Cramér-Rao lower bounds, and present image restorations obtained by using data collected with ground-based telescopes.

  11. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  12. Fast exact algorithms for the closest string and substring problems with application to the planted (L, d)-motif model.

    PubMed

    Chen, Zhi-Zhong; Wang, Lusheng

    2011-01-01

    We present two parameterized algorithms for the closest string problem. The first runs in O(nL + nd · 17.97^d) time for DNA strings and in O(nL + nd · 61.86^d) time for protein strings, where n is the number of input strings, L is the length of each input string, and d is the given upper bound on the number of mismatches between the center string and each input string. The second runs in O(nL + nd · 13.92^d) time for DNA strings and in O(nL + nd · 47.21^d) time for protein strings. We then extend the first algorithm to a new parameterized algorithm for the closest substring problem that runs in O((n - 1)m^2(L + d · 17.97^d · m^⌈log2(d+1)⌉)) time for DNA strings and in O((n - 1)m^2(L + d · 61.86^d · m^⌈log2(d+1)⌉)) time for protein strings, where n is the number of input strings, L is the length of the center substring, L - 1 + m is the maximum length of a single input string, and d is the given upper bound on the number of mismatches between the center substring and at least one substring of each input string. All the algorithms significantly improve the previous bests. To verify experimentally the theoretical improvements in the time complexity, we implement our algorithm in C and apply the resulting program to the planted (L, d)-motif problem proposed by Pevzner and Sze in 2000. We compare our program with the previously best exact program for the problem, namely PMSPrune (designed by Davila et al. in 2007). Our experimental data show that our program runs faster for practical cases and also for several challenging cases. Our algorithm uses less memory too.
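
    For orientation, the sketch below implements the basic bounded-search scheme that parameterized closest-string algorithms of this kind refine: take the first string as a candidate center and, whenever some input string is more than d mismatches away, branch on at most d+1 of those positions, with the recursion depth bounded by d. It is not the authors' improved algorithm, and equal-length strings are assumed.

        def closest_string(strings, d):
            """Bounded-depth search for a center within Hamming distance d of
            every input string; exponential in d, polynomial in n and L."""
            def mismatches(a, b):
                return [i for i, (x, y) in enumerate(zip(a, b)) if x != y]

            def search(center, budget):
                if budget < 0:
                    return None
                for s in strings:
                    pos = mismatches(center, s)
                    if len(pos) > d:
                        if len(pos) > d + budget:     # cannot be repaired in time
                            return None
                        for i in pos[:d + 1]:         # branch on d+1 positions
                            hit = search(center[:i] + s[i] + center[i + 1:],
                                         budget - 1)
                            if hit is not None:
                                return hit
                        return None
                return center

            return search(strings[0], d)

    The function returns a valid center string, or None when no string within distance d of every input exists.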

  13. Proof of uniform sampling of binary matrices with fixed row sums and column sums for the fast Curveball algorithm

    NASA Astrophysics Data System (ADS)

    Carstens, C. J.

    2015-04-01

    Randomization of binary matrices has become one of the most important quantitative tools in modern computational biology. The equivalent problem of generating random directed networks with fixed degree sequences has also attracted a lot of attention. However, it is very challenging to generate truly unbiased random matrices with fixed row and column sums. Strona et al. [Nat. Commun. 5, 4114 (2014), 10.1038/ncomms5114] introduce the innovative Curveball algorithm and give numerical support for the proposition that it generates truly random matrices. In this paper, we present a rigorous proof of convergence to the uniform distribution. Furthermore, we show the Curveball algorithm must include certain failed trades to ensure uniform sampling.
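
    A single Curveball step is easy to state in code: pick two rows, pool the columns they do not share, and redistribute the pool at random while preserving the row sums. The sketch below (rows held as Python sets of column indices) also makes the paper's point concrete: a step in which the two rows have nothing to swap is a "failed trade", yet it must still count as a step of the chain for the stationary distribution to be uniform.

        import random

        def curveball_step(rows):
            """One Curveball trade on a binary matrix given as a list of
            column-index sets, one per row; row and column sums are preserved.
            An empty pool is a 'failed trade' and leaves the matrix unchanged."""
            i, j = random.sample(range(len(rows)), 2)
            a, b = rows[i], rows[j]
            shared = a & b
            pool = list((a | b) - shared)     # columns that can be traded
            random.shuffle(pool)
            k = len(a) - len(shared)          # row i keeps this many of the pool
            rows[i] = shared | set(pool[:k])
            rows[j] = shared | set(pool[k:])

    Repeating the step many times mixes the chain; the paper's proof concerns exactly this chain's convergence to the uniform distribution.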

  14. Base calling for high-throughput short-read sequencing: dynamic programming solutions

    PubMed Central

    2013-01-01

    Background Next-generation DNA sequencing platforms are capable of generating millions of reads in a matter of days at rapidly reducing costs. Despite its proliferation and technological improvements, the performance of next-generation sequencing remains adversely affected by the imperfections in the underlying biochemical and signal acquisition procedures. To this end, various techniques, including statistical methods, are used to improve read lengths and accuracy of these systems. Development of high performing base calling algorithms that are computationally efficient and scalable is an ongoing challenge. Results We develop model-based statistical methods for fast and accurate base calling in Illumina’s next-generation sequencing platforms. In particular, we propose a computationally tractable parametric model which enables dynamic programming formulation of the base calling problem. Forward-backward and soft-output Viterbi algorithms are developed, and their performance and complexity are investigated and compared with the existing state-of-the-art base calling methods for this platform. A C code implementation of our algorithm named Softy can be downloaded from https://sourceforge.net/projects/dynamicprog. Conclusion We demonstrate high accuracy and speed of the proposed methods on reads obtained using Illumina’s Genome Analyzer II and HiSeq2000. In addition to performing reliable and fast base calling, the developed algorithms enable incorporation of prior knowledge which can be utilized for parameter estimation and is potentially beneficial in various downstream applications. PMID:23586484
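
    As a generic illustration of the dynamic programming the paper builds on, here is a log-domain Viterbi decoder. The parametric model that produces the initial, transition, and emission scores for base calling is the paper's contribution and is not reproduced here; the interface below is an assumption.

        import numpy as np

        def viterbi(log_pi, log_A, log_emit):
            """Generic Viterbi decoding: log_pi[s] initial scores, log_A[s, t]
            transition scores, log_emit[k, s] per-cycle emission scores."""
            K, S = log_emit.shape
            score = log_pi + log_emit[0]
            back = np.zeros((K, S), dtype=int)
            for k in range(1, K):
                cand = score[:, None] + log_A          # all previous-state paths
                back[k] = np.argmax(cand, axis=0)
                score = cand[back[k], np.arange(S)] + log_emit[k]
            path = [int(np.argmax(score))]             # trace back the best path
            for k in range(K - 1, 0, -1):
                path.append(back[k][path[-1]])
            return path[::-1]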

  15. Application of a fast and efficient algorithm to assess landslide-prone areas in sensitive clays in Sweden

    NASA Astrophysics Data System (ADS)

    Melchiorre, C.; Tryggvason, A.

    2015-12-01

    We refine and test an algorithm for landslide susceptibility assessment in areas with sensitive clays. The algorithm uses soil data and digital elevation models to identify areas which may be prone to landslides, and has been applied in Sweden for several years. The algorithm is very computationally efficient and includes an intelligent filtering procedure for identifying and removing small-scale artifacts in the hazard maps produced. Where information on bedrock depth is available, this can be included in the analysis, as can several soil-type-based cross-sectional angle thresholds for slip. We evaluate how processing choices, such as the filtering parameters, local cross-sectional angle thresholds, and the inclusion of bedrock depth information, affect model performance. The specific cross-sectional angle thresholds used were derived by analyzing the relationship between landslide scarps and the quick-clay susceptibility index (QCSI). We tested the algorithm in the Göta River valley. Several different verification measures were used to compare results with observed landslides and thereby identify the optimal algorithm parameters. Our results show that even though a relationship between the cross-sectional angle threshold and the QCSI could be established, no significant improvement in overall modeling performance could be achieved by using these geographically specific, soil-based thresholds. Our results indicate that lowering the cross-sectional angle threshold from 1:10 (the general value used in Sweden) to 1:13 improves results slightly. We also show that applying the automatic filtering procedure, which removes areas initially classified as prone to landslides, not only removes artifacts and makes the maps visually more appealing, but also improves the model performance.
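
    A toy version of the cross-sectional angle criterion can be written directly against a DEM: flag a cell if its height drop over some downslope distance exceeds the threshold ratio (1:10 below). The scanning pattern, reach, and edge handling are simplifications for illustration; the actual algorithm adds soil data, bedrock depth, and the filtering step discussed above.

        import numpy as np

        def prone_mask(dem, cell, threshold=0.1, reach=50):
            """Flag DEM cells whose drop over any distance up to `reach` cells,
            along the four axis directions, exceeds the height/length threshold.
            np.roll wraps at the borders; a real implementation masks edges."""
            m, n = dem.shape
            mask = np.zeros((m, n), bool)
            for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                for k in range(1, reach + 1):
                    shifted = np.roll(dem, (-di * k, -dj * k), axis=(0, 1))
                    mask |= (dem - shifted) / (k * cell) > threshold
            return mask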

  16. Application of a fast and efficient algorithm to detect areas with prerequisites for landslide in sensitive clays, Göta Älv, Sweden

    NASA Astrophysics Data System (ADS)

    Melchiorre, Caterina; Tryggvason, Ari

    2014-05-01

    In Sweden, landslide stability maps are based on the recognition of topographical and soil conditions. The topographical criterion is based on the ratio between the height of a slope and its length. The calculation of this cross-sectional angle is straightforward in one dimension, but slightly more complicated in two dimensions and very computationally expensive in a GIS environment. We present an application of a fast and efficient computer algorithm based on slope and soil criteria in Göta Älv, southwest Sweden. The algorithm, compared to other software implementations of the cross-sectional angle criterion, guarantees fast execution, the possibility to insert several threshold values for the cross-sectional angle, and the use of information on bedrock elevation. As input maps we used a 1:50000 Quaternary soil map, a DEM at 2x2 m pixel resolution, and a bedrock elevation map. We used two sets of cross-sectional angle thresholds, the first one derived from stability calculations and the second one assessed through the relationship between the QCSI (i.e., an estimated value of the sensitivity) and the cross-sectional angle calculated from the landslide scar database. A comparison between the results of the algorithm with and without the bedrock information was also performed. The produced maps were validated using the landslide scar database and a hazard map. The results show that the use of bedrock information decreases the calculated areas with prerequisites for landslides without decreasing the performance of the algorithm. The maps produced using the two different sets of cross-sectional angle thresholds are very similar and show similar results in the validation. This means that it would be possible to extend this methodology to areas without geotechnical information by using less expensive data such as the QCSI. Moreover, the use of several cross-sectional angle thresholds is not possible in other software implementations available at the moment. This means that

  17. A fast algorithm for non-Newtonian flow. An enhanced particle-tracking finite element code for solving boundary-value problems in viscoelastic flow

    NASA Astrophysics Data System (ADS)

    Malkus, David S.

    1989-01-01

    This project concerned the development of a new fast finite element algorithm to solve flow problems of non-Newtonian fluids such as solutions or melts of polymers. Many constitutive theories for such materials involve single integrals over the deformation history of the particle at the stress evaluation point; examples are the Doi-Edwards and Curtiss-Bird molecular theories and the BKZ family derived from continuum arguments. These theories are believed to be among the most accurate in describing non-Newtonian effects important to polymer process design, effects such as stress relaxation, shear thinning, and normal stress effects. This research developed an optimized version of the algorithm that runs a factor of two faster than the pilot algorithm on scalar machines and can take full advantage of vectorization on vector machines. Significant progress was made in code vectorization; code enhancement and streamlining; adaptive memory quadrature; model problems for the High Weissenberg Number Problem; exactly incompressible projection; development of multimesh extrapolation procedures; and solution of problems of physical interest. A portable version of the code is in the final stages of benchmarking and testing. It interfaces with the widely used FIDAP fluid dynamics package.

  18. Fast imputation using medium- or low-coverage sequence data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Direct imputation from raw sequence reads can be more accurate than calling genotypes first and then imputing, especially if read depth is low or error rates high, but different imputation strategies are required than those used for data from genotyping chips. A fast algorithm to impute from lower t...

  19. Genetic algorithm based fast alignment method for strap-down inertial navigation system with large azimuth misalignment.

    PubMed

    He, Hongyang; Xu, Jiangning; Qin, Fangjun; Li, Feng

    2015-11-01

    In order to shorten the alignment time and eliminate the small initial misalignment limit for compass alignment of strap-down inertial navigation system (SINS), which is sometimes not easy to satisfy when the ship is moored or anchored, an optimal model based time-varying parameter compass alignment algorithm is proposed in this paper. The contributions of the work presented here are twofold. First, the optimization of compass alignment parameters, which involves a lot of trial-and-error traditionally, is achieved based on genetic algorithm. On this basis, second, the optimal parameter varying model is established by least-square polynomial fitting. Experiments are performed with a navigational grade fiber optical gyroscope SINS, which validate the efficiency of the proposed method. PMID:26628165

  20. Genetic algorithm based fast alignment method for strap-down inertial navigation system with large azimuth misalignment

    NASA Astrophysics Data System (ADS)

    He, Hongyang; Xu, Jiangning; Qin, Fangjun; Li, Feng

    2015-11-01

    In order to shorten the alignment time and eliminate the small initial misalignment limit for compass alignment of strap-down inertial navigation system (SINS), which is sometimes not easy to satisfy when the ship is moored or anchored, an optimal model based time-varying parameter compass alignment algorithm is proposed in this paper. The contributions of the work presented here are twofold. First, the optimization of compass alignment parameters, which involves a lot of trial-and-error traditionally, is achieved based on genetic algorithm. On this basis, second, the optimal parameter varying model is established by least-square polynomial fitting. Experiments are performed with a navigational grade fiber optical gyroscope SINS, which validate the efficiency of the proposed method.

  1. BRIDES: A New Fast Algorithm and Software for Characterizing Evolving Similarity Networks Using Breakthroughs, Roadblocks, Impasses, Detours, Equals and Shortcuts

    PubMed Central

    Lord, Etienne; Le Cam, Margaux; Bapteste, Éric; Méheust, Raphaël; Makarenkov, Vladimir; Lapointe, François-Joseph

    2016-01-01

    Various types of genome and gene similarity networks along with their characteristics have been increasingly used for retracing different kinds of evolutionary and ecological relationships. Here, we present a new polynomial time algorithm and the corresponding software (BRIDES) to provide characterization of different types of paths existing in evolving (or augmented) similarity networks under the constraint that such paths contain at least one node that was not present in the original network. These different paths are denoted as Breakthroughs, Roadblocks, Impasses, Detours, Equal paths, and Shortcuts. The analysis of their distribution can allow discriminating among different evolutionary hypotheses concerning genomes or genes at hand. Our approach is based on an original application of the popular shortest path Dijkstra’s and Yen’s algorithms. The C++ and R versions of the BRIDES program are freely available at: https://github.com/etiennelord/BRIDES. PMID:27580188

  2. BRIDES: A New Fast Algorithm and Software for Characterizing Evolving Similarity Networks Using Breakthroughs, Roadblocks, Impasses, Detours, Equals and Shortcuts.

    PubMed

    Lord, Etienne; Le Cam, Margaux; Bapteste, Éric; Méheust, Raphaël; Makarenkov, Vladimir; Lapointe, François-Joseph

    2016-01-01

    Various types of genome and gene similarity networks along with their characteristics have been increasingly used for retracing different kinds of evolutionary and ecological relationships. Here, we present a new polynomial time algorithm and the corresponding software (BRIDES) to provide characterization of different types of paths existing in evolving (or augmented) similarity networks under the constraint that such paths contain at least one node that was not present in the original network. These different paths are denoted as Breakthroughs, Roadblocks, Impasses, Detours, Equal paths, and Shortcuts. The analysis of their distribution can allow discriminating among different evolutionary hypotheses concerning genomes or genes at hand. Our approach is based on an original application of the popular shortest path Dijkstra's and Yen's algorithms. The C++ and R versions of the BRIDES program are freely available at: https://github.com/etiennelord/BRIDES. PMID:27580188

  3. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path.

    PubMed

    Herráez, Miguel Arevallilo; Burton, David R; Lalor, Michael J; Gdeisat, Munther A

    2002-12-10

    We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples. PMID:12502301
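
    The essence of the method, namely sorting pixel-joining edges by reliability and merging groups of already-unwrapped pixels in that order with no continuous path, fits in a short sketch. The reliability measure below is a simplified second-difference score (the published one also uses diagonal neighbours), and the group bookkeeping is deliberately naive.

        import numpy as np

        def unwrap2d(psi):
            """Reliability-ordered 2-D phase unwrapping in the spirit of
            Herraez et al.: process edges from most to least reliable, merging
            pixel groups and shifting each merged group by a multiple of 2*pi."""
            two_pi = 2 * np.pi
            wrap = lambda x: (x + np.pi) % two_pi - np.pi
            m, n = psi.shape
            D = np.full((m, n), 1e6)          # unreliability; border is worst
            D[1:-1, 1:-1] = (
                np.abs(wrap(psi[:-2, 1:-1] - psi[1:-1, 1:-1])
                       - wrap(psi[1:-1, 1:-1] - psi[2:, 1:-1]))
                + np.abs(wrap(psi[1:-1, :-2] - psi[1:-1, 1:-1])
                         - wrap(psi[1:-1, 1:-1] - psi[1:-1, 2:])))
            edges = [(D[i, j] + D[i, j + 1], (i, j), (i, j + 1))
                     for i in range(m) for j in range(n - 1)]
            edges += [(D[i, j] + D[i + 1, j], (i, j), (i + 1, j))
                      for i in range(m - 1) for j in range(n)]
            edges.sort(key=lambda e: e[0])    # most reliable edges first
            out = psi.astype(float).copy()
            group, members = {}, {}
            for gid, p in enumerate((i, j) for i in range(m) for j in range(n)):
                group[p] = gid
                members[gid] = [p]
            for _, p, q in edges:
                gp, gq = group[p], group[q]
                if gp == gq:
                    continue                  # already mutually consistent
                if len(members[gp]) < len(members[gq]):
                    gp, gq, p, q = gq, gp, q, p
                k = np.round((out[p] - out[q]) / two_pi)
                for r in members[gq]:         # rigidly shift the smaller group
                    out[r] += k * two_pi
                    group[r] = gp
                members[gp] += members[gq]
                del members[gq]
            return out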

  4. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path.

    PubMed

    Herráez, Miguel Arevallilo; Burton, David R; Lalor, Michael J; Gdeisat, Munther A

    2002-12-10

    We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples.

  5. Fast Parallel Molecular Algorithms for DNA-Based Computation: Solving the Elliptic Curve Discrete Logarithm Problem over GF(2^n)

    PubMed Central

    Li, Kenli; Zou, Shuting; Xv, Jin

    2008-01-01

    Elliptic curve cryptographic algorithms convert input data into unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty of solving the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z^+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation, and this paper demonstrates that in principle this is possible. Three DNA-based algorithms, a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n), are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations. PMID:18431451

  6. NMAGIC: a fast parallel implementation of a χ²-made-to-measure algorithm for modelling observational data

    NASA Astrophysics Data System (ADS)

    de Lorenzi, Flavio; Debattista, Victor P.; Gerhard, Ortwin; Sambhus, Niranjan

    2007-03-01

    We describe a made-to-measure (M2M) algorithm for constructing N-particle models of stellar systems from observational data (χ²M2M), extending earlier ideas by Syer & Tremaine. The algorithm properly accounts for observational errors, is flexible, and can be applied to various systems and geometries. We implement this algorithm in the parallel code NMAGIC and carry out a sequence of tests to illustrate its power and performance. (i) We reconstruct an isotropic Hernquist model from density moments and projected kinematics and recover the correct differential energy distribution and intrinsic kinematics. (ii) We build a self-consistent oblate three-integral maximum rotator model and compare how the distribution function is recovered from integral field and slit kinematic data. (iii) We create a non-rotating and a figure-rotating triaxial stellar particle model, reproduce the projected kinematics of the figure-rotating system with a non-rotating system of the same intrinsic shape, and illustrate the signature of pattern rotation in this model. From these tests, we comment on the dependence of the results from χ²M2M on the initial model, the geometry, and the amount of available data.

  7. A fast and explicit algorithm for simulating the dynamics of small dust grains with smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Price, Daniel J.; Laibe, Guillaume

    2015-07-01

    We describe a simple method for simulating the dynamics of small grains in a dusty gas, relevant to micron-sized grains in the interstellar medium and grains of centimetre size and smaller in protoplanetary discs. The method involves solving one extra diffusion equation for the dust fraction in addition to the usual equations of hydrodynamics. This 'diffusion approximation for dust' is valid when the dust stopping time is smaller than the computational timestep. We present a numerical implementation using smoothed particle hydrodynamics that is conservative, accurate and fast. It does not require any implicit timestepping and can be straightforwardly ported into existing 3D codes.
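
    Stripped of the SPH machinery, "one extra diffusion equation" means an update of the following shape, shown here as an explicit 1-D finite-difference analogue. The diffusion coefficient array D stands in for the physical combination of dust fraction, stopping time, and pressure terms, whose exact form is not reproduced here.

        import numpy as np

        def diffuse_dust(eps, D, dx, dt, steps):
            """Explicit conservative update of d(eps)/dt = d/dx(D * d(eps)/dx).
            This explicit form needs dt <= dx**2 / (2 * max(D)) for stability;
            boundary values of eps are held fixed for simplicity."""
            eps = eps.copy()
            for _ in range(steps):
                Dface = 0.5 * (D[1:] + D[:-1])        # coefficient at cell faces
                flux = Dface * np.diff(eps) / dx      # F = D * d(eps)/dx
                eps[1:-1] += dt / dx * np.diff(flux)  # conservative divergence
            return eps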

  8. A Fast Parallel Algorithm for Selected Inversion of Structured Sparse Matrices with Application to 2D Electronic Structure Calculations

    SciTech Connect

    Lin, Lin; Yang, Chao; Lu, Jiangfeng; Ying, Lexing; E, Weinan

    2009-09-25

    We present an efficient parallel algorithm and its implementation for computing the diagonal of $H^{-1}$, where $H$ is a 2D Kohn-Sham Hamiltonian discretized on a rectangular domain using a standard second-order finite difference scheme. This type of calculation can be used to obtain an accurate approximation to the diagonal of a Fermi-Dirac function of $H$ through a recently developed pole-expansion technique [LinLuYingE2009]. The diagonal elements are needed in electronic structure calculations for quantum mechanical systems [HohenbergKohn1964, KohnSham1965, DreizlerGross1990]. We show how an elimination tree is used to organize the parallel computation and how synchronization overhead is reduced by passing data level by level along this tree using the technique of local buffers and relative indices. We analyze the performance of our implementation by examining its load balance and communication overhead. We show that our implementation exhibits excellent weak scaling on a large-scale high-performance distributed parallel machine. When compared with the standard approach for evaluating the diagonal of a Fermi-Dirac function of the Kohn-Sham Hamiltonian associated with a 2D electron quantum dot, the new pole-expansion technique, which uses our algorithm to compute the diagonal of $(H - z_i I)^{-1}$ for a small number of poles $z_i$, is much faster, especially when the quantum dot contains many electrons.

  9. Fast characterization of moment magnitude and focal mechanism in the context of tsunami warning in the NEAM region : W-phase and PDFM2 algorithms.

    NASA Astrophysics Data System (ADS)

    Schindelé, François; Roch, Julien; Duperray, Pierre; Reymond, Dominique

    2016-04-01

    Over past centuries, several large earthquakes (Mw ≥ 7.5) have been reported in the North East Atlantic and Mediterranean Sea (NEAM) region. Most of the potential tsunamigenic seismic sources in the NEAM region, however, are in a magnitude range of 6.5 ≤ Mw ≤ 7.5 (e.g., the tsunami triggered by the 2003 Boumerdes earthquake of Mw = 6.9). The CENALT (CENtre d'ALerte aux Tsunamis), in operation since 2012 as the French National Tsunami Warning Centre (NTWC) and Candidate Tsunami Service Provider (CTSP), has to issue warning messages within 15 minutes of the earthquake origin time. The warning level is currently based on a decision matrix depending on the magnitude and the location of the hypocenter. Two seismic source inversion methods are implemented at CENALT: the W-phase algorithm, based on the so-called W phase, and the PDFM2 algorithm, based on surface waves and first P-wave motions. They give accurate moment magnitudes and focal mechanisms within 10 min and 20 min, respectively. The resulting Mw magnitude, focal depth, and type of fault (reverse, normal, strike-slip) are the most relevant parameters used to issue tsunami warnings. In this context, we assess the W-phase and PDFM2 methods with 29 events of magnitude Mw ≥ 5.8 for the period 2010-2015 in the NEAM region. Results within 10 and 20 min for the W-phase algorithm and within 20 and 30 min for the PDFM2 algorithm are compared to the Global Centroid Moment Tensor catalog. This work is funded by project ASTARTE -- Assessment, Strategy And Risk Reduction for Tsunamis in Europe - FP7-ENV2013 6.4-3, Grant 603839

  10. Fast chromatographic method for the determination of dyes in beverages by using high performance liquid chromatography--diode array detection data and second order algorithms.

    PubMed

    Culzoni, María J; Schenone, Agustina V; Llamas, Natalia E; Garrido, Mariano; Di Nezio, Maria S; Band, Beatriz S Fernández; Goicoechea, Héctor C

    2009-10-16

    A fast chromatographic methodology is presented for the analysis of three synthetic dyes in non-alcoholic beverages: amaranth (E123), sunset yellow FCF (E110) and tartrazine (E102). Seven soft drinks (purchased from a local supermarket) were homogenized, filtered and injected into the chromatographic system. Second-order data were obtained by a rapid LC separation and DAD detection. A comparative study of the performance of two second-order algorithms (MCR-ALS and U-PLS/RBL) applied to model the data is presented. Interestingly, the data present time shifts between different chromatograms that cannot be conveniently corrected to determine the above-mentioned dyes in beverage samples. This originates a lack of trilinearity that cannot be removed by pre-processing and can hardly be modelled using the U-PLS/RBL algorithm. On the contrary, MCR-ALS has shown to be an excellent tool for modelling this kind of data, allowing acceptable figures of merit to be reached. Recovery values that ranged between 97% and 105% when analyzing artificial and real samples were indicative of the good performance of the method. In contrast to the complete separation, which consumes 10 mL of methanol and 3 mL of 0.08 mol L(-1) ammonium acetate, the proposed fast chromatographic method requires only 0.46 mL of methanol and 1.54 mL of 0.08 mol L(-1) ammonium acetate. Consequently, analysis time could be reduced to 14.2% of the time necessary to perform the complete separation, saving both solvents and time and thereby reducing both the cost per analysis and the environmental impact.
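
    The alternation at the heart of MCR-ALS is compact enough to sketch. Given a data matrix D (elution times × wavelengths) and initial spectral estimates, each pass solves one least-squares problem per factor and re-imposes non-negativity; real implementations add closure and other constraints, which are omitted here.

        import numpy as np

        def mcr_als(D, S0, n_iter=50):
            """Bare-bones MCR-ALS: factor D as C @ S.T with non-negativity on
            both factors. S0 holds initial spectral estimates (wavelengths x
            components); a sketch of the alternation, not a full toolbox."""
            S = S0.copy()
            for _ in range(n_iter):
                C = D @ np.linalg.pinv(S.T)     # least-squares concentrations
                C = np.clip(C, 0, None)         # non-negativity constraint
                S = (np.linalg.pinv(C) @ D).T   # least-squares spectra
                S = np.clip(S, 0, None)
            return C, S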

  11. Fast chromatographic method for the determination of dyes in beverages by using high performance liquid chromatography--diode array detection data and second order algorithms.

    PubMed

    Culzoni, María J; Schenone, Agustina V; Llamas, Natalia E; Garrido, Mariano; Di Nezio, Maria S; Band, Beatriz S Fernández; Goicoechea, Héctor C

    2009-10-16

    A fast chromatographic methodology is presented for the analysis of three synthetic dyes in non-alcoholic beverages: amaranth (E123), sunset yellow FCF (E110) and tartrazine (E102). Seven soft drinks (purchased from a local supermarket) were homogenized, filtered and injected into the chromatographic system. Second-order data were obtained by a rapid LC separation and DAD detection. A comparative study of the performance of two second-order algorithms (MCR-ALS and U-PLS/RBL) applied to model the data is presented. Interestingly, the data present time shifts between different chromatograms that cannot be conveniently corrected to determine the above-mentioned dyes in beverage samples. This originates a lack of trilinearity that cannot be removed by pre-processing and can hardly be modelled using the U-PLS/RBL algorithm. On the contrary, MCR-ALS has shown to be an excellent tool for modelling this kind of data, allowing acceptable figures of merit to be reached. Recovery values that ranged between 97% and 105% when analyzing artificial and real samples were indicative of the good performance of the method. In contrast to the complete separation, which consumes 10 mL of methanol and 3 mL of 0.08 mol L(-1) ammonium acetate, the proposed fast chromatographic method requires only 0.46 mL of methanol and 1.54 mL of 0.08 mol L(-1) ammonium acetate. Consequently, analysis time could be reduced to 14.2% of the time necessary to perform the complete separation, saving both solvents and time and thereby reducing both the cost per analysis and the environmental impact. PMID:19748097

  12. Comment on "Replica-exchange-with-tunneling for fast exploration of protein landscapes" [J. Chem. Phys. 143, 224102 (2015)].

    PubMed

    Sakuraba, Shun

    2016-08-01

    In "Replica-exchange-with-tunneling for fast exploration of protein landscapes" [F. Yaşar et al., J. Chem. Phys. 143, 224102 (2015)], a novel sampling algorithm called "Replica Exchange with Tunneling" was proposed. However, due to its violation of the detailed balance, the algorithm fails to sample from the correct canonical ensemble.

  13. Comment on "Replica-exchange-with-tunneling for fast exploration of protein landscapes" [J. Chem. Phys. 143, 224102 (2015)

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun

    2016-08-01

    In "Replica-exchange-with-tunneling for fast exploration of protein landscapes" [F. Yaşar et al., J. Chem. Phys. 143, 224102 (2015)], a novel sampling algorithm called "Replica Exchange with Tunneling" was proposed. However, due to its violation of the detailed balance, the algorithm fails to sample from the correct canonical ensemble.

  14. Comment on "Replica-exchange-with-tunneling for fast exploration of protein landscapes" [J. Chem. Phys. 143, 224102 (2015)].

    PubMed

    Sakuraba, Shun

    2016-08-01

    In "Replica-exchange-with-tunneling for fast exploration of protein landscapes" [F. Yaşar et al., J. Chem. Phys. 143, 224102 (2015)], a novel sampling algorithm called "Replica Exchange with Tunneling" was proposed. However, due to its violation of the detailed balance, the algorithm fails to sample from the correct canonical ensemble. PMID:27497579

  15. Solving the chemical master equation by a fast adaptive finite state projection based on the stochastic simulation algorithm.

    PubMed

    Sidje, R B; Vo, H D

    2015-11-01

    The mathematical framework of the chemical master equation (CME) uses a Markov chain to model the biochemical reactions that are taking place within a biological cell. Computing the transient probability distribution of this Markov chain allows us to track the composition of molecules inside the cell over time, with important practical applications in a number of areas such as molecular biology or medicine. However, the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. We present a novel way of using the stochastic simulation algorithm (SSA) to reduce the size of the finite state projection (FSP) method. Numerical experiments that demonstrate the effectiveness of the reduction are included.
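
    For context, the stochastic simulation algorithm referred to is Gillespie's: draw an exponential waiting time from the total propensity, pick a reaction in proportion to its propensity, and update the state. A minimal sketch follows, in which the set of visited states could seed a reduced state-space projection along the lines the paper proposes; the interface is an assumption.

        import random

        def ssa(x, reactions, propensity, t_end):
            """Gillespie SSA: x is the molecule-count state, reactions a list
            of stoichiometry vectors, propensity(j, x) the rate of reaction j."""
            t, visited = 0.0, {tuple(x)}
            while t < t_end:
                a = [propensity(j, x) for j in range(len(reactions))]
                a0 = sum(a)
                if a0 == 0:
                    break                              # absorbing state reached
                t += random.expovariate(a0)            # time to the next event
                r, j, acc = random.uniform(0, a0), 0, a[0]
                while acc < r:                         # categorical choice of j
                    j += 1
                    acc += a[j]
                x = [xi + vi for xi, vi in zip(x, reactions[j])]
                visited.add(tuple(x))
            return x, visited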

  16. Fast computation of minimal cut sets in metabolic networks with a Berge algorithm that utilizes binary bit pattern trees.

    PubMed

    Jungreuthmayer, Christian; Beurton-Aimar, Marie; Zanghellini, Jürgen

    2013-01-01

    Minimal cut sets are a valuable tool for analyzing metabolic networks and for identifying optimal gene intervention strategies by eliminating unwanted metabolic functions and keeping desired functionality. Minimal cut sets rely on the concept of elementary flux modes, which are sets of indivisible metabolic pathways under steady-state condition. However, the computation of minimal cut sets is nontrivial, as even medium-sized metabolic networks with just 100 reactions easily have several hundred million elementary flux modes. We developed a minimal cut set tool that implements the well-known Berge algorithm and utilizes a novel approach to significantly reduce the program run time by using binary bit pattern trees. By using the introduced tree approach, the size of metabolic models that can be analyzed and optimized by minimal cut sets is pushed to new and considerably higher limits.
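
    The Berge algorithm itself is short; what makes it expensive is the minimality (superset) test over huge intermediate candidate sets, which is exactly what the paper's binary bit pattern trees accelerate. In the sketch below, plain Python integers serve as bit masks over reactions and the superset test is done by brute force, standing in for the tree.

        def berge_minimal_hitting_sets(efms):
            """Berge's algorithm for minimal hitting sets (minimal cut sets of
            the target elementary flux modes). Each efm is an int bit mask of
            the reactions it uses; brute-force minimality filtering stands in
            for the paper's bit pattern trees."""
            mcs = [0]                                  # start from the empty set
            for efm in efms:
                hit = [c for c in mcs if c & efm]      # already hit this mode
                grown = [c | (1 << r)
                         for c in mcs if not (c & efm)
                         for r in range(efm.bit_length()) if efm >> r & 1]
                cand = list(set(hit + grown))
                # keep only candidates with no other candidate as a subset
                mcs = [c for c in cand
                       if not any(o != c and o & c == o for o in cand)]
            return mcs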

  17. General adaptive guidance using nonlinear programming constraint solving methods (FAST)

    NASA Astrophysics Data System (ADS)

    Skalecki, Lisa; Martin, Marc

    An adaptive, general-purpose, constraint-solving guidance algorithm called FAST (Flight Algorithm to Solve Trajectories) has been developed by the authors in response to the requirements of the Advanced Launch System (ALS). The FAST algorithm can be used for all mission phases for a wide range of Space Transportation Vehicles without code modification, because of the general formulation of the nonlinear programming (NLP) problem and the general trajectory simulation used to predict constraint values. The approach allows on-board re-targeting for severe weather and changes in payload or mission parameters, increasing flight reliability and dependability while reducing the amount of pre-flight analysis that must be performed. The algorithm is described in general in this paper. Three-degree-of-freedom simulation results are presented for application of the algorithm to the ascent and reentry phases of an ALS mission, and to Mars aerobraking. Flight processor CPU requirement data are also shown.

  18. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    PubMed

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
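
    The ELM training scheme this work builds on reduces to two lines of linear algebra: draw the input weights at random and solve a least-squares problem for the output weights. The sketch below omits the receptive-field sparsification and backpropagation fine-tuning described above; T is assumed to be a one-hot label matrix.

        import numpy as np

        def elm_train(X, T, n_hidden, seed=0):
            """Single-hidden-layer classifier trained the ELM way: random input
            weights W, b; least-squares output weights beta."""
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))
            b = rng.standard_normal(n_hidden)
            H = np.tanh(X @ W + b)                          # hidden activations
            beta = np.linalg.lstsq(H, T, rcond=None)[0]     # output weights
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)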

  19. Fast separable nonlocal means

    NASA Astrophysics Data System (ADS)

    Ghosh, Sanjay; Chaudhury, Kunal N.

    2016-03-01

    We propose a simple and fast algorithm called PatchLift for computing distances between patches (contiguous blocks of samples) extracted from a given one-dimensional signal. PatchLift is based on the observation that the patch distances can be efficiently computed from a matrix that is derived from the one-dimensional signal using lifting; importantly, the number of operations required to compute the patch distances using this approach does not scale with the patch length. We next demonstrate how PatchLift can be used for patch-based denoising of images corrupted with Gaussian noise. In particular, we propose a separable formulation of the classical nonlocal means (NLM) algorithm that can be implemented using PatchLift. We demonstrate that the PatchLift-based implementation of separable NLM is a few orders of magnitude faster than standard NLM and is competitive with existing fast implementations of NLM. Moreover, its denoising performance is shown to be consistently superior to that of NLM and some of its variants, both in terms of peak signal-to-noise ratio/structural similarity index and visual quality.
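
    PatchLift itself is defined through a lifted matrix, but the property it delivers, patch distances whose per-distance cost does not grow with the patch length, can be illustrated with the classical cumulative-sum trick for a fixed patch offset. The sketch below shows that trick, not PatchLift.

        import numpy as np

        def patch_distances(x, t, half):
            """Distances between every patch x[i-half : i+half+1] and its
            counterpart shifted by t, via one cumulative sum; out-of-range
            shifted samples are treated as contributing zero."""
            e = np.zeros(len(x))
            lo, hi = max(0, -t), min(len(x), len(x) - t)
            e[lo:hi] = (x[lo:hi] - x[lo + t:hi + t]) ** 2   # squared differences
            c = np.concatenate(([0.0], np.cumsum(e)))
            i = np.arange(half, len(x) - half)
            return c[i + half + 1] - c[i - half]            # windowed sums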

  20. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
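
    The baseline being improved on is plain steepest ascent on the log-likelihood of a pairwise Ising model, with Gibbs sampling supplying the model expectations. A minimal sketch follows, assuming data is an array of ±1 spins; the sample counts, learning rate, and step counts are arbitrary, and the paper's rectification of the parameter space is intentionally not included.

        import numpy as np

        def fit_ising(data, n_samples=2000, lr=0.05, steps=300, seed=0):
            """Steepest ascent for fields h and couplings J of a pairwise Ising
            model: match model means/correlations to the data, with one Gibbs
            sweep per gradient step to estimate the model side."""
            rng = np.random.default_rng(seed)
            n = data.shape[1]
            h, J = np.zeros(n), np.zeros((n, n))
            m_data = data.mean(0)
            C_data = data.T @ data / len(data)
            s = rng.choice([-1.0, 1.0], size=(n_samples, n))
            for _ in range(steps):
                for i in range(n):                      # one Gibbs sweep
                    field = h[i] + s @ J[i]
                    p = 1.0 / (1.0 + np.exp(-2.0 * field))
                    s[:, i] = np.where(rng.random(n_samples) < p, 1.0, -1.0)
                h += lr * (m_data - s.mean(0))          # likelihood gradient
                J += lr * (C_data - s.T @ s / n_samples)
                np.fill_diagonal(J, 0)
            return h, J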

  1. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution.

    PubMed

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. PMID:27627406

  2. Learning Maximal Entropy Models from finite size datasets: a fast Data-Driven algorithm allows to sample from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of an observables set. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" Data-Driven algorithm that is fast and by sampling from the parameters posterior avoids both under- and over-fitting along all the directions of the parameters space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).

  3. Fast Simulation of 3-D Surface Flanging and Prediction of the Flanging Lines Based On One-Step Inverse Forming Algorithm

    SciTech Connect

    Bao Yidong; Hu Sibo; Lang Zhikui; Hu Ping

    2005-08-05

    A fast simulation scheme for 3D curved binder flanging and blank shape prediction of sheet metal based on a one-step inverse finite element method is proposed, in which total plasticity theory and the proportional loading assumption are used. The scheme can actually be used to simulate 3D flanging with a complex curved binder shape, and is suitable for simulating any type of flanging model by numerically determining the flanging height and flanging lines. Compared with other methods, such as analytic algorithms and the blank sheet-cut return method, the prominent advantage of the present scheme is that it can directly predict the location of the 3D flanging lines when simulating the flanging process. Therefore, the prediction time for the flanging lines is obviously decreased. Two typical 3D curved binder flanging cases, featuring stretch and shrink characteristics, are simulated simultaneously using the present scheme and an incremental FE non-inverse algorithm based on incremental plasticity theory, which shows the validity and high efficiency of the present scheme.

  4. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
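
    Schematically, the retrieval amounts to augmenting a PCA basis fitted on SO2-free radiances with the SO2 Jacobian and reading the column amount off a joint linear fit. The sketch below assumes array shapes and unit conventions for illustration only; it is not the operational OMI code.

        import numpy as np

        def pca_retrieval(bg_spectra, measured, jacobian, n_pc=20):
            """Fit a measured spectrum with PCs of SO2-free radiances plus the
            SO2 Jacobian; the coefficient on the Jacobian is the column."""
            mean = bg_spectra.mean(0)
            _, _, V = np.linalg.svd(bg_spectra - mean, full_matrices=False)
            basis = np.column_stack([V[:n_pc].T, jacobian])  # PCs + signature
            coef, *_ = np.linalg.lstsq(basis, measured - mean, rcond=None)
            return coef[-1]                                  # SO2 column amount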

  5. A fast algorithm for parabolic PDE-based inverse problems based on Laplace transforms and flexible Krylov solvers

    SciTech Connect

    Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.

    2015-10-15

    We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equations is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.

  6. A fast algorithm for Direct Numerical Simulation of natural convection flows in arbitrarily-shaped periodic domains

    NASA Astrophysics Data System (ADS)

    Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.

    2015-11-01

    A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The simultaneous solution of the velocity and pressure fields is achieved by means of a projection method. The numerical solution of the set of linear equations resulting from the discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.
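
    The projection step mentioned above can be illustrated on a doubly periodic 2-D grid, where the pressure Poisson equation diagonalizes under the FFT. The article instead applies direct solvers to its finite-volume discretization, so the spectral version below is only a schematic of the principle.

        import numpy as np

        def project(u, v, dx):
            """Project a periodic 2-D velocity field onto its divergence-free
            part: solve laplacian(phi) = div(u, v) in Fourier space and
            subtract grad(phi)."""
            uh, vh = np.fft.fft2(u), np.fft.fft2(v)
            kx = 2 * np.pi * np.fft.fftfreq(u.shape[0], dx)
            ky = 2 * np.pi * np.fft.fftfreq(u.shape[1], dx)
            KX, KY = np.meshgrid(kx, ky, indexing="ij")
            k2 = KX**2 + KY**2
            k2[0, 0] = 1.0                               # leave the mean mode
            div = 1j * KX * uh + 1j * KY * vh            # divergence in k-space
            phi = div / (-k2)                            # pressure-like field
            u_new = np.real(np.fft.ifft2(uh - 1j * KX * phi))
            v_new = np.real(np.fft.ifft2(vh - 1j * KY * phi))
            return u_new, v_new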

  7. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper, degraded video with blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm we first estimate the clean data and blur function using Newton's optimization method, and then the estimation is improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet transform domain and then improve our estimation using a maximum a posteriori (MAP) estimator and a local Laplace prior. This procedure (initial estimation and improvement of the estimation by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment, so it is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-files. The MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the code written in MATLAB is sectioned, the elapsed time for each section is measured, and the slow sections (which account for 60% of the total running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high volume of information in images and of processed data in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The code written in MATLAB for our video deblurring algorithm contains eight "for" loops. These eight "for" loops consume 60% of the total execution time of the entire program and so the runtime should be

  8. Evaluation of the influence of tumor location and size on the difference of dose calculation between Ray Tracing algorithm and Fast Monte Carlo algorithm in stereotactic body radiotherapy of non-small cell lung cancer using CyberKnife.

    PubMed

    Wu, Vincent W C; Tam, Kwok-wah; Tong, Shun-ming

    2013-09-06

    This study evaluated the extent of improvement in dose prediction accuracy achieved by the fast Monte Carlo algorithm (MC) compared to the Ray Tracing algorithm (RAT) in stereotactic body radiotherapy (SBRT) of non-small cell lung cancer (NSCLC), and how their differences were influenced by tumor site and size. Thirty-three NSCLC patients treated with SBRT by CyberKnife in 2011 were recruited. They were divided into the central target group (n = 17) and peripheral target group (n = 16) according to the RTOG 0236 guidelines. Each group was further divided into large and small target subgroups. After the computation of treatment plans using RAT, an MC plan was generated using the same patient data and treatment parameters. Apart from the target reference point dose measurements, various dose parameters for the planning target volume (PTV) and organs at risk (OARs) were assessed. In addition, the "Fractional Deviation" (FDev), defined as the ratio of the RAT and MC values, was also calculated for comparison. For peripheral lung cases, RAT produced significantly higher dose values at all reference points than MC. The FDev of all reference point doses and dose parameters was greater in the small target subgroup than in the large target subgroup. For central lung cases, there were no significant reference point or OAR dose differences between RAT and MC. When comparing the small and large target subgroups, the FDev values of all the dose parameters and reference point doses did not show significant differences. Despite the shorter computation time, RAT was inferior to MC, in that the target dose was usually overestimated. RAT would not be recommended for SBRT of peripheral lung tumors regardless of target size. However, it could be considered for large central lung tumors because its performance was comparable to MC.

  9. Fast decision algorithms in low-power embedded processors for quality-of-service based connectivity of mobile sensors in heterogeneous wireless sensor networks.

    PubMed

    Jaraíz-Simón, María D; Gómez-Pulido, Juan A; Vega-Rodríguez, Miguel A; Sánchez-Pérez, Juan M

    2012-01-01

    When a mobile wireless sensor is moving along heterogeneous wireless sensor networks, it can be under the coverage of more than one network many times. In these situations the Vertical Handoff process can happen, in which the mobile sensor decides to change its connection from one network to the best of the available networks according to their quality of service characteristics. A fitness function, which it is desirable to minimize, is used for the handoff decision. This is an optimization problem that consists of the adjustment of a set of weights for the quality of service. Solving this problem efficiently is relevant to heterogeneous wireless sensor networks in many advanced applications. Numerous works can be found in the literature dealing with the vertical handoff decision, although they all suffer from the same shortfall: a non-comparable efficiency. Therefore, the aim of this work is twofold: first, to develop a fast decision algorithm that explores the entire space of possible combinations of weights, searching for the one that minimizes the fitness function; and second, to design and implement a system-on-chip architecture based on reconfigurable hardware and embedded processors to achieve several goals necessary for competitive mobile terminals: good performance, low power consumption, low economic cost, and small area integration. PMID:22438728
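
    A brute-force sketch of such an exhaustive weight search, assuming for illustration three QoS attributes and a fitness function that is a weighted sum of normalized costs; the actual attribute set and fitness function used in the paper may differ.

        #include <array>
        #include <cstdio>
        #include <limits>

        // Hypothetical fitness: weighted sum of normalized QoS costs for one
        // candidate network (the paper's real fitness function may differ).
        static double fitness(const std::array<double, 3>& w,
                              const std::array<double, 3>& qosCost)
        {
            double f = 0.0;
            for (int i = 0; i < 3; ++i) f += w[i] * qosCost[i];
            return f;
        }

        int main()
        {
            // Illustrative normalized costs, e.g. delay, jitter, monetary cost.
            const std::array<double, 3> qosCost = {0.4, 0.7, 0.2};
            const int steps = 100;   // weight grid resolution: multiples of 0.01
            double best = std::numeric_limits<double>::max();
            std::array<double, 3> bestW = {0, 0, 0};

            // Exhaustively enumerate all weight triples that sum to 1.
            for (int i = 0; i <= steps; ++i)
                for (int j = 0; j <= steps - i; ++j) {
                    const int k = steps - i - j;
                    const std::array<double, 3> w = {i / (double)steps,
                                                     j / (double)steps,
                                                     k / (double)steps};
                    const double f = fitness(w, qosCost);
                    if (f < best) { best = f; bestW = w; }
                }
            std::printf("minimum fitness %.3f at weights (%.2f, %.2f, %.2f)\n",
                        best, bestW[0], bestW[1], bestW[2]);
            return 0;
        }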

  10. A call center primer.

    PubMed

    Durr, W

    1998-01-01

    Call centers are strategically and tactically important to many industries, including the healthcare industry. Call centers play a key role in acquiring and retaining customers. The ability to deliver high-quality and timely customer service without much expense is the basis for the proliferation and expansion of call centers. Call centers are unique blends of people and technology, where performance depends on combining appropriate technology tools with sound management practices built on key operational data. While the technology is fascinating, the people working in call centers and the skill of the management team ultimately make a difference to their companies. PMID:10182518

  11. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method called kriging have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest-neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
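
    A toy version of the underlying computation, assuming a zero-mean field (simple kriging) and a hypothetical Gaussian covariance model: assemble the covariance system C w = c0 for a query point and solve it, here by naive Gaussian elimination rather than the sparse iterative solver of the actual work.

        #include <cmath>
        #include <cstdio>
        #include <vector>

        // Hypothetical Gaussian covariance model with unit sill and range 1.
        static double cov(double dx, double dy)
        {
            return std::exp(-(dx * dx + dy * dy));
        }

        int main()
        {
            // Known sample points (x, y, value).
            const double xs[] = {0.0, 1.0, 0.0}, ys[] = {0.0, 0.0, 1.0}, zs[] = {1.0, 2.0, 3.0};
            const int n = 3;
            const double qx = 0.5, qy = 0.5;  // query location

            // Assemble the augmented system [C | c0].
            std::vector<std::vector<double>> A(n, std::vector<double>(n + 1));
            for (int i = 0; i < n; ++i) {
                for (int j = 0; j < n; ++j)
                    A[i][j] = cov(xs[i] - xs[j], ys[i] - ys[j]);
                A[i][n] = cov(xs[i] - qx, ys[i] - qy);
            }

            // Gaussian elimination with back substitution (fine for tiny n).
            for (int p = 0; p < n; ++p)
                for (int r = p + 1; r < n; ++r) {
                    const double f = A[r][p] / A[p][p];
                    for (int c = p; c <= n; ++c) A[r][c] -= f * A[p][c];
                }
            std::vector<double> w(n);
            for (int p = n - 1; p >= 0; --p) {
                w[p] = A[p][n];
                for (int c = p + 1; c < n; ++c) w[p] -= A[p][c] * w[c];
                w[p] /= A[p][p];
            }

            // Prediction is the weighted combination of the sample values.
            double pred = 0.0;
            for (int i = 0; i < n; ++i) pred += w[i] * zs[i];
            std::printf("kriging estimate at (%.1f, %.1f) = %.4f\n", qx, qy, pred);
            return 0;
        }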

  12. Calling All Trainers.

    ERIC Educational Resources Information Center

    Carolan, Mary D.; Doyle, John C.

    1998-01-01

    Describes how to establish and operate a call center that handles customer service, telemarketing, collections, and other customer-focused areas. Discusses the advantages of a call center, the new opportunities that will arise as a result of emerging technologies, and the challenges of recruiting, training, and retaining personnel. (JOW)

  13. Spectrographic phase-retrieval algorithm for femtosecond and attosecond pulses with frequency gaps

    NASA Astrophysics Data System (ADS)

    Seifert, B.; Wallentowitz, S.; Volkmann, U.; Hause, A.; Sperlich, K.; Stolz, H.

    2014-10-01

    We present a phase-reconstruction algorithm for a self-referenced spectrographic pulse characterization technique called “very advanced method for phase and intensity retrieval of e-fields” (VAMPIRE). This technique permits a spectral phase reconstruction of pulses with separated frequency components. The algorithm uses the particular characteristics of VAMPIRE spectrograms. It is a locally structured algorithm that is fast and robust and allows stagnation problems to be overcome. The algorithm is tested using both simulated and measured data.

  14. A scalable and practical one-pass clustering algorithm for recommender system

    NASA Astrophysics Data System (ADS)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    KMeans clustering-based recommendation algorithms have been proposed with the claim that they increase the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. From this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
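
    The record does not spell out the algorithm itself, so the sketch below shows a generic single-scan clustering scheme under that assumption: each incoming point joins the nearest existing cluster if one lies within a fixed radius, and otherwise seeds a new cluster, with centroids updated incrementally.

        #include <cmath>
        #include <cstdio>
        #include <vector>

        struct Cluster { double cx, cy; int count; };

        // Process one point in a single pass: join the nearest cluster within
        // `radius`, or start a new one.
        static void onePassAdd(std::vector<Cluster>& clusters, double x, double y, double radius)
        {
            int best = -1;
            double bestDist = radius;
            for (size_t i = 0; i < clusters.size(); ++i) {
                const double d = std::hypot(x - clusters[i].cx, y - clusters[i].cy);
                if (d < bestDist) { bestDist = d; best = (int)i; }
            }
            if (best < 0) {
                clusters.push_back({x, y, 1});            // start a new cluster
            } else {                                      // incremental centroid update
                Cluster& c = clusters[best];
                ++c.count;
                c.cx += (x - c.cx) / c.count;
                c.cy += (y - c.cy) / c.count;
            }
        }

        int main()
        {
            std::vector<Cluster> clusters;
            const double pts[][2] = {{0, 0}, {0.1, 0.2}, {5, 5}, {5.1, 4.9}, {0.2, 0.1}};
            for (const auto& p : pts) onePassAdd(clusters, p[0], p[1], /*radius=*/1.0);
            for (const auto& c : clusters)
                std::printf("centroid (%.2f, %.2f), %d points\n", c.cx, c.cy, c.count);
            return 0;
        }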

  15. Multiple One-Dimensional Search (MODS) algorithm for fast optimization of laser-matter interaction by phase-only fs-laser pulse shaping

    NASA Astrophysics Data System (ADS)

    Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Solis, J.

    2014-09-01

    In this work, we have developed and implemented a powerful search strategy for optimization of nonlinear optical effects by means of femtosecond pulse shaping, based on topological concepts derived from quantum control theory. Our algorithm [Multiple One-Dimensional Search (MODS)] is based on deterministic optimization of a single solution rather than pseudo-random optimization of entire populations as done by commonly used evolutionary algorithms. We have tested MODS against a genetic algorithm on a nontrivial problem consisting of optimizing the Kerr gating signal (self-interaction) of a shaped laser pulse in a detuned Michelson interferometer configuration. The results show that our search method (MODS) strongly outperforms the genetic algorithm in terms of both convergence speed and quality of the solution. These findings demonstrate the applicability of concepts of quantum control theory to nonlinear laser-matter interaction problems, even in the presence of significant experimental noise.
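
    A generic sketch of the "multiple one-dimensional search" idea, deterministically optimizing one parameter at a time over a coarse grid; the objective used here is a hypothetical stand-in for the measured Kerr signal, and the real MODS strategy is more sophisticated.

        #include <cmath>
        #include <cstdio>
        #include <vector>

        const double PI = 3.14159265358979323846;

        // Hypothetical objective over N spectral-phase values (to be maximized):
        // rewards smooth phase profiles; a stand-in for the measured signal.
        static double signal(const std::vector<double>& phase)
        {
            double s = 0.0;
            for (size_t i = 0; i + 1 < phase.size(); ++i)
                s += std::cos(phase[i + 1] - phase[i]);
            return s;
        }

        int main()
        {
            std::vector<double> phase(8, 0.5);   // a single deterministic solution
            const int sweeps = 20, grid = 64;

            for (int sweep = 0; sweep < sweeps; ++sweep)
                for (size_t i = 0; i < phase.size(); ++i) {   // one dimension at a time
                    double bestVal = -1e300, bestPhi = phase[i];
                    for (int g = 0; g < grid; ++g) {
                        phase[i] = 2.0 * PI * g / grid;       // candidate phase value
                        const double v = signal(phase);
                        if (v > bestVal) { bestVal = v; bestPhi = phase[i]; }
                    }
                    phase[i] = bestPhi;                       // keep the 1-D optimum
                }
            std::printf("final objective: %.4f\n", signal(phase));
            return 0;
        }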

  16. Optimized Seizure Detection Algorithm: A Fast Approach for Onset of Epileptic in EEG Signals Using GT Discriminant Analysis and K-NN Classifier

    PubMed Central

    Rezaee, Kh.; Azizi, E.; Haddadnia, J.

    2016-01-01

    Background Epilepsy is a severe disorder of the central nervous system that predisposes the person to recurrent seizures. Fifty million people worldwide suffer from epilepsy; after Alzheimer’s and stroke, it is the third most widespread nervous disorder. Objective In this paper, an algorithm to detect the onset of epileptic seizures based on the analysis of brain electrical signals (EEG) is proposed. 844 hours of EEG were recorded from 23 pediatric patients consecutively, with 163 occurrences of seizures. Signals had been collected from Children’s Hospital Boston with a sampling frequency of 256 Hz through 18 channels in order to assess epilepsy surgery. By selecting effective features from seizure and non-seizure signals of each individual and putting them into two categories, the proposed algorithm detects the onset of seizures quickly and with high sensitivity. Method In this algorithm, L-sec epochs of signals are represented as a third-order tensor in the spatial, spectral and temporal domains by applying the wavelet transform. Then, after applying general tensor discriminant analysis (GTDA) to the tensors and calculating the mapping matrix, feature vectors are extracted. GTDA increases the sensitivity of the algorithm by retaining data rather than discarding them. Finally, K-nearest neighbors (KNN) is used to classify the selected features. Results Simulations on the standard dataset show that the algorithm is capable of detecting 98 percent of seizures with an average delay of 4.7 seconds and an average detection error rate of three errors in 24 hours. Conclusion Today, the lack of an automated system to detect or predict seizure onset is strongly felt.
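
    A self-contained k-nearest-neighbor classification sketch of the final stage, with synthetic two-dimensional features standing in for the GTDA feature vectors of the paper.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct Sample { std::vector<double> f; int label; };   // 1 = seizure, 0 = non-seizure

        static int knnClassify(const std::vector<Sample>& train,
                               const std::vector<double>& query, int k)
        {
            std::vector<std::pair<double, int>> dist;          // (squared distance, label)
            for (const Sample& s : train) {
                double d = 0.0;
                for (size_t i = 0; i < query.size(); ++i)
                    d += (s.f[i] - query[i]) * (s.f[i] - query[i]);
                dist.push_back({d, s.label});
            }
            std::partial_sort(dist.begin(), dist.begin() + k, dist.end());
            int votes = 0;
            for (int i = 0; i < k; ++i) votes += dist[i].second;
            return votes * 2 > k ? 1 : 0;                      // majority vote
        }

        int main()
        {
            const std::vector<Sample> train = {
                {{0.1, 0.2}, 0}, {{0.2, 0.1}, 0}, {{0.9, 0.8}, 1}, {{0.8, 0.9}, 1}, {{0.85, 0.7}, 1}};
            std::printf("predicted class: %d\n", knnClassify(train, {0.8, 0.8}, 3));
            return 0;
        }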

  17. Optimized Seizure Detection Algorithm: A Fast Approach for Onset of Epileptic in EEG Signals Using GT Discriminant Analysis and K-NN Classifier

    PubMed Central

    Rezaee, Kh.; Azizi, E.; Haddadnia, J.

    2016-01-01

    Background Epilepsy is a severe disorder of the central nervous system that predisposes the person to recurrent seizures. Fifty million people worldwide suffer from epilepsy; after Alzheimer’s and stroke, it is the third most widespread nervous disorder. Objective In this paper, an algorithm to detect the onset of epileptic seizures based on the analysis of brain electrical signals (EEG) is proposed. 844 hours of EEG were recorded from 23 pediatric patients consecutively, with 163 occurrences of seizures. Signals had been collected from Children’s Hospital Boston with a sampling frequency of 256 Hz through 18 channels in order to assess epilepsy surgery. By selecting effective features from seizure and non-seizure signals of each individual and putting them into two categories, the proposed algorithm detects the onset of seizures quickly and with high sensitivity. Method In this algorithm, L-sec epochs of signals are represented as a third-order tensor in the spatial, spectral and temporal domains by applying the wavelet transform. Then, after applying general tensor discriminant analysis (GTDA) to the tensors and calculating the mapping matrix, feature vectors are extracted. GTDA increases the sensitivity of the algorithm by retaining data rather than discarding them. Finally, K-nearest neighbors (KNN) is used to classify the selected features. Results Simulations on the standard dataset show that the algorithm is capable of detecting 98 percent of seizures with an average delay of 4.7 seconds and an average detection error rate of three errors in 24 hours. Conclusion Today, the lack of an automated system to detect or predict seizure onset is strongly felt. PMID:27672628

  18. Weighted MinMax Algorithm for Color Image Quantization

    NASA Technical Reports Server (NTRS)

    Reitan, Paula J.

    1999-01-01

    The maximum intercluster distance and the maximum quantization error that are minimized by the MinMax algorithm are shown to be inappropriate error measures for color image quantization. A fast and effective method for generalizing activity weighting to any histogram-based color quantization algorithm, which also improves image quality, is presented. A new non-hierarchical color quantization technique called weighted MinMax, a hybrid between the MinMax and Linde-Buzo-Gray (LBG) algorithms, is also described. The weighted MinMax algorithm incorporates activity weighting and seeks to minimize WRMSE, thereby obtaining high-quality quantized images with significantly less visual distortion than the MinMax algorithm.
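
    A sketch of the greedy farthest-point selection behind MinMax-style palette construction, with a per-color activity weight scaling each distance; the colors and weights here are illustrative, and the paper's exact weighting scheme may differ.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct Color { double r, g, b; double weight; };   // weight = activity measure

        static double dist2(const Color& a, const Color& b)
        {
            return (a.r - b.r) * (a.r - b.r) + (a.g - b.g) * (a.g - b.g)
                 + (a.b - b.b) * (a.b - b.b);
        }

        int main()
        {
            const std::vector<Color> hist = {
                {10, 10, 10, 1.0}, {200, 30, 40, 0.5}, {30, 220, 50, 2.0},
                {240, 240, 240, 1.0}, {100, 100, 100, 0.2}};
            const int paletteSize = 3;

            std::vector<int> palette = {0};               // seed with the first color
            while ((int)palette.size() < paletteSize) {
                int farthest = -1;
                double farDist = -1.0;
                for (int i = 0; i < (int)hist.size(); ++i) {
                    double dmin = 1e300;                  // distance to nearest palette entry
                    for (int p : palette) dmin = std::min(dmin, dist2(hist[i], hist[p]));
                    dmin *= hist[i].weight;               // activity weighting
                    if (dmin > farDist) { farDist = dmin; farthest = i; }
                }
                palette.push_back(farthest);              // add the farthest color
            }
            for (int p : palette)
                std::printf("palette color (%.0f, %.0f, %.0f)\n", hist[p].r, hist[p].g, hist[p].b);
            return 0;
        }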

  19. CALL on Mac.

    ERIC Educational Resources Information Center

    Fukuzawa, Jeannette L.; Lubin, Jan M.

    Five computer programs for the Macintosh that are geared for Computer-Assisted Language Learning (CALL) are described. All five programs allow the teacher to input material. The first program allows entry of new vocabulary lists including definition, a sentence in which the exact word is used, a fill-in-the-blank exercise, and the word's phonetics…

  20. When Crises Call

    ERIC Educational Resources Information Center

    Kisch, Marian

    2012-01-01

    Natural disasters, as well as crises of the man-made variety, call on leaders of school districts to manage scenarios impossible to predict and for which no amount of training can adequately prepare. One thing all major crises hold in common is their far-reaching effects, which can run the gamut from personal safety and mental well-being to the…

  1. Artificial Intelligence and CALL.

    ERIC Educational Resources Information Center

    Underwood, John H.

    The potential application of artificial intelligence (AI) to computer-assisted language learning (CALL) is explored. Two areas of AI that hold particular interest to those who deal with language meaning--knowledge representation and expert systems, and natural-language processing--are described and examples of each are presented. AI contribution…

  2. Wake-Up Call.

    ERIC Educational Resources Information Center

    Sartorius, Tara Cady

    2002-01-01

    Focuses on the artist, Laquita Thomson, whose inspiration are the stars and space. Discusses her series called, "Celestial Happenings: Stars Fell on Alabama." Describes one event that inspired an art work when a meteor crashed into an Alabama home. Includes lessons for various subject areas. (CMK)

  3. A lattice-free concept lattice update algorithm

    NASA Astrophysics Data System (ADS)

    Outrata, Jan

    2016-02-01

    Upon a change of input data, one usually wants an update of the output computed from the data rather than recomputing the whole output over again. In Formal Concept Analysis, the concept lattice of input data can be updated when new objects are introduced to the data by any of the so-called incremental algorithms for computing concept lattices. The algorithms use and update the lattice while introducing new objects to the input data one by one. The computation thus requires the present concept lattice of the input data without the new objects. However, the lattice can be large and may not fit into memory. In this paper, we propose an efficient algorithm for updating the lattice from the present and new objects only, without requiring the possibly large concept lattice of the present objects. The algorithm is obtained as a modification of the Close-by-One algorithm for computing the set of all formal concepts, or of its variants such as Fast Close-by-One, Parallel Close-by-One or Parallel Fast Close-by-One, to compute only the new and modified formal concepts and the changes of the lattice order relation. The algorithm can be used not only for updating the lattice when new objects are introduced but also when some existing objects are removed from the input data or the attributes of objects are changed. We describe the algorithm, discuss efficiency issues, and present an experimental evaluation of its performance and a comparison with the AddIntent incremental algorithm for computing concept lattices.
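
    At the core of Close-by-One and its variants is the closure operator of the formal context: the closure of an attribute set B is the set of attributes shared by all objects that possess every attribute in B. A self-contained sketch over a tiny hypothetical 4 x 4 binary context:

        #include <bitset>
        #include <cstdio>
        #include <vector>

        const int NOBJ = 4, NATTR = 4;
        // context[i] = attribute set of object i (rightmost character is attribute 0).
        const std::vector<std::bitset<NATTR>> context = {
            std::bitset<NATTR>("0011"), std::bitset<NATTR>("0111"),
            std::bitset<NATTR>("1100"), std::bitset<NATTR>("1110")};

        // Closure of B: intersect the attribute sets of all objects containing B.
        std::bitset<NATTR> closure(const std::bitset<NATTR>& B)
        {
            std::bitset<NATTR> intent;
            intent.set();                                   // start with all attributes
            for (int i = 0; i < NOBJ; ++i)
                if ((context[i] & B) == B) intent &= context[i];
            return intent;
        }

        int main()
        {
            std::bitset<NATTR> B;
            B.set(0);                                       // B = {attribute 0}
            std::printf("closure of %s is %s\n",
                        B.to_string().c_str(), closure(B).to_string().c_str());
            return 0;
        }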

  4. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data

    PubMed Central

    Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and will thus introduce a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 µm when evaluated on OCT images of the same gauge block step-height reference. The method may be suitable for industrial applications requiring rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018

  5. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data.

    PubMed

    Ekberg, Peter; Su, Rong; Chang, Ernest W; Yun, Seok Hyun; Mattsson, Lars

    2014-02-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and will thus introduce a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 μm when evaluated on OCT images of the same gauge block step-height reference. The method may be suitable for industrial applications requiring rapid inspection of manufactured samples with high accuracy and robustness.

  6. A Fast Hermite Transform★

    PubMed Central

    Leibon, Gregory; Rockmore, Daniel N.; Park, Wooram; Taintor, Robert; Chirikjian, Gregory S.

    2008-01-01

    We present algorithms for fast and stable approximation of the Hermite transform of a compactly supported function on the real line, attainable via an application of a fast algebraic algorithm for computing sums associated with a three-term relation. Trade-offs between approximation in bandlimit (in the Hermite sense) and size of the support region are addressed. Numerical experiments are presented that show the feasibility and utility of our approach. Generalizations to any family of orthogonal polynomials are outlined. Applications to various problems in tomographic reconstruction, including the determination of protein structure, are discussed. PMID:20027202
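
    The three-term relation in question is, for physicists' Hermite polynomials, H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x); the snippet below evaluates it directly, illustrating the kind of recurrence such fast algebraic algorithms exploit.

        #include <cstdio>

        // Evaluate H_n(x) by the three-term recurrence.
        static double hermiteH(int n, double x)
        {
            if (n == 0) return 1.0;
            double hPrev = 1.0, h = 2.0 * x;               // H_0, H_1
            for (int k = 1; k < n; ++k) {
                const double hNext = 2.0 * x * h - 2.0 * k * hPrev;
                hPrev = h;
                h = hNext;
            }
            return h;
        }

        int main()
        {
            // H_3(x) = 8x^3 - 12x; check at x = 2: 64 - 24 = 40.
            std::printf("H_3(2) = %.1f\n", hermiteH(3, 2.0));
            return 0;
        }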

  7. TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization

    NASA Astrophysics Data System (ADS)

    Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi

    2011-12-01

    We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC, which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have proved that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.

  8. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  9. Just call it "treatment".

    PubMed

    Friedmann, Peter D; Schwartz, Robert P

    2012-01-01

    Although many in the addiction treatment field use the term "medication-assisted treatment" to describe a combination of pharmacotherapy and counseling to address substance dependence, research has demonstrated that opioid agonist treatment alone is effective in patients with opioid dependence, regardless of whether they receive counseling. The time has come to call pharmacotherapy for such patients just "treatment", an explicit acknowledgment that medication is an essential first-line component in the successful management of opioid dependence. PMID:23186149

  10. Automated call tracking systems

    SciTech Connect

    Hardesty, C.

    1993-03-01

    User Services groups are on the front line with user support. We are the first to hear about problems. The speed, accuracy, and intelligence with which we respond determine the user's perception of our effectiveness and our commitment to quality and service. To keep pace with the complex changes at our sites, we must have tools to help build a knowledge base of solutions, a history base of our users, and a record of every problem encountered. Recently, I completed a survey of twenty sites similar to the National Energy Research Supercomputer Center (NERSC). This informal survey reveals that 27% of the sites use a paper system to log calls, 60% employ homegrown automated call tracking systems, and 13% use a vendor-supplied system. Fifty-four percent of those using homegrown systems are exploring the merits of switching to a vendor-supplied system. The purpose of this paper is to provide guidelines for evaluating a call tracking system. In addition, insights are provided to assist User Services groups in selecting a system that fits their needs.

  11. RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark improves further.
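
    A tiny permutation-GA sketch of the join-ordering idea (not the actual RCQ-GA operators): binary tournament selection plus swap mutation over join orders, with a hypothetical cost function standing in for a real query-plan cost model.

        #include <algorithm>
        #include <cstdio>
        #include <numeric>
        #include <random>
        #include <vector>

        const int NJOINS = 8, POP = 30, GENS = 200;
        std::mt19937 rng(42);

        // Hypothetical plan cost: weights later positions more heavily (a
        // stand-in for a real RDF chain-query cost model).
        static double cost(const std::vector<int>& order)
        {
            double c = 0.0;
            for (int i = 0; i < NJOINS; ++i) c += (i + 1.0) * order[i];
            return c;
        }

        int main()
        {
            // Initial population of random join orders.
            std::vector<std::vector<int>> pop(POP, std::vector<int>(NJOINS));
            for (auto& ind : pop) {
                std::iota(ind.begin(), ind.end(), 0);
                std::shuffle(ind.begin(), ind.end(), rng);
            }

            std::uniform_int_distribution<int> pick(0, POP - 1), gene(0, NJOINS - 1);
            for (int g = 0; g < GENS; ++g)
                for (int i = 0; i < POP; ++i) {
                    // Binary tournament: copy the better of two random parents...
                    const auto& a = pop[pick(rng)];
                    const auto& b = pop[pick(rng)];
                    std::vector<int> child = (cost(a) < cost(b)) ? a : b;
                    std::swap(child[gene(rng)], child[gene(rng)]);   // ...then swap-mutate
                    if (cost(child) < cost(pop[i])) pop[i] = child;  // elitist replacement
                }

            const auto best = *std::min_element(pop.begin(), pop.end(),
                [](const std::vector<int>& x, const std::vector<int>& y)
                { return cost(x) < cost(y); });
            std::printf("best cost %.0f\n", cost(best));
            return 0;
        }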

  12. A fast, space-efficient average-case algorithm for the 'Greedy' Triangulation of a point set, and a proof that the Greedy Triangulation is not approximately optimal

    NASA Technical Reports Server (NTRS)

    Manacher, G. K.; Zobrist, A. L.

    1979-01-01

    The paper addresses the problem of how to find the Greedy Triangulation (GT) efficiently in the average case. It is noted that whether there exists an efficient approximation algorithm for the Optimum Triangulation remains an open problem. It is first shown how, in the worst case, the GT may be obtained in time O(n³) and space O(n). Attention is then given to how the algorithm may be slightly modified to produce a time O(n²), space O(n) solution in the average case. Finally, it is mentioned that Gilbert has found a worst-case solution using totally different techniques that requires space O(n²) and time O(n² log n).

  13. Interior segment regrowth configurational-bias algorithm for the efficient sampling and fast relaxation of coarse-grained polyethylene and polyoxyethylene melts on a high coordination lattice

    NASA Astrophysics Data System (ADS)

    Rane, Sagar S.; Mattice, Wayne L.

    2005-06-01

    We demonstrate the application of a modified form of the configurational-bias algorithm for the simulation of chain molecules on the second-nearest-neighbor-diamond lattice. Using polyethylene and poly(ethylene-oxide) as model systems, we show that the present configurational-bias algorithm can increase the speed of equilibration by a factor of 2-3 or more compared to the previous method of using a combination of single-bead and pivot moves along with the Metropolis sampling scheme [N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, J. Chem. Phys. 21, 1087 (1953)]. The increase in the speed of equilibration is found to depend on the interactions (i.e., the polymer being simulated) and the molecular weight of the chains. In addition, other factors not considered here, such as the density, would also have a significant effect. The algorithm is an extension of the conventional configurational-bias method adapted to the regrowth of interior segments of chain molecules. Appropriate biasing probabilities for the trial moves, as outlined by Jain and de Pablo for the configurational-bias scheme of chain ends and suitably modified for interior segments, are utilized [T. S. Jain and J. J. de Pablo, in Simulation Methods for Polymers, edited by M. Kotelyanskii and D. N. Theodorou (Marcel Dekker, New York, 2004), pp. 223-255]. The biasing scheme satisfies the condition of detailed balance and produces efficient sampling with the correct equilibrium probability distribution of states. The method of interior regrowth overcomes the limitations of the original configurational-bias scheme and allows for the simulation of higher-molecular-weight linear chains and of ring polymers, which lack chain ends.

  14. Fast-Polynomial-Transform Program

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Chu, Y. F.

    1987-01-01

    Computer program uses fast-polynomial-transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional cyclic convolutions converted to one-dimensional convolutions in polynomial rings. Program decomposes cyclic polynomials into polynomial convolutions of same length. Only FPT's and fast Fourier transforms of same length required. Modular approach saves computational resources. Program written in C.
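
    For reference, the operation being accelerated: a naive O(n²) cyclic convolution, which also serves as a correctness check for any fast transform implementation.

        #include <cstdio>
        #include <vector>

        // c[k] = sum over i+j = k (mod n) of a[i] * b[j].
        std::vector<double> cyclicConvolve(const std::vector<double>& a,
                                           const std::vector<double>& b)
        {
            const size_t n = a.size();
            std::vector<double> c(n, 0.0);
            for (size_t i = 0; i < n; ++i)
                for (size_t j = 0; j < n; ++j)
                    c[(i + j) % n] += a[i] * b[j];        // indices wrap modulo n
            return c;
        }

        int main()
        {
            const std::vector<double> a = {1, 2, 3, 4}, b = {1, 0, 0, 1};
            for (double v : cyclicConvolve(a, b)) std::printf("%.0f ", v);
            std::printf("\n");                            // expected: 3 5 7 5
            return 0;
        }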

  15. Compression and fast retrieval of SNP data

    PubMed Central

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-01-01

    Motivation: The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. Results: We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Availability and implementation: Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. Contact: sambofra@dei.unipd.it or cobelli@dei.unipd.it. PMID:25064564
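
    A minimal sketch of idea (i), storing a SNP as its differences from a reference SNP; the genotype coding and data are illustrative, and the actual file format is considerably richer.

        #include <cstdio>
        #include <utility>
        #include <vector>

        struct DiffSnp { std::vector<std::pair<int, char>> diffs; };   // (index, genotype)

        // Encode a SNP as the positions where it differs from the reference.
        static DiffSnp encode(const std::vector<char>& ref, const std::vector<char>& snp)
        {
            DiffSnp d;
            for (size_t i = 0; i < ref.size(); ++i)
                if (snp[i] != ref[i]) d.diffs.push_back({(int)i, snp[i]});
            return d;
        }

        static std::vector<char> decode(const std::vector<char>& ref, const DiffSnp& d)
        {
            std::vector<char> snp = ref;                  // start from the reference
            for (const auto& p : d.diffs) snp[p.first] = p.second;
            return snp;
        }

        int main()
        {
            const std::vector<char> ref = {0, 1, 2, 0, 0, 1};   // genotypes coded 0/1/2
            const std::vector<char> snp = {0, 1, 1, 0, 0, 1};   // differs at one position
            const DiffSnp d = encode(ref, snp);
            std::printf("stored %zu difference(s); round-trip ok: %d\n",
                        d.diffs.size(), decode(ref, d) == snp);
            return 0;
        }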

  16. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063

  17. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching.
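
    A short usage sketch of FLANN through the cv::flann wrapper shipped with OpenCV (assumes OpenCV 3/4 is installed; the descriptors here are random placeholders): build randomized k-d trees over float vectors and run a k-nearest-neighbor query.

        #include <opencv2/core.hpp>
        #include <opencv2/flann.hpp>
        #include <cstdio>

        int main()
        {
            // 1000 random 64-dimensional float descriptors as the search set.
            cv::Mat features(1000, 64, CV_32F);
            cv::randu(features, 0.0, 1.0);

            // Index of 4 randomized k-d trees, a common choice for high dimensions.
            cv::flann::Index index(features, cv::flann::KDTreeIndexParams(4));

            cv::Mat query(1, 64, CV_32F);
            cv::randu(query, 0.0, 1.0);

            const int k = 3;
            cv::Mat indices(1, k, CV_32S), dists(1, k, CV_32F);
            index.knnSearch(query, indices, dists, k, cv::flann::SearchParams(32));

            for (int i = 0; i < k; ++i)
                std::printf("neighbor %d: row %d, dist %f\n",
                            i, indices.at<int>(0, i), dists.at<float>(0, i));
            return 0;
        }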

  18. Fast and practical parallel polynomial interpolation

    SciTech Connect

    Egecioglu, O.; Gallopoulos, E.; Koc, C.K.

    1987-01-01

    We present fast and practical parallel algorithms for the computation and evaluation of interpolating polynomials. The algorithms make use of fast parallel prefix techniques for the calculation of divided differences in the Newton representation of the interpolating polynomial. For n + 1 given input pairs, the proposed interpolation algorithm requires 2⌈log(n + 1)⌉ + 2 parallel arithmetic steps and circuit size O(n²). The algorithms are numerically stable and their floating-point implementation results in error accumulation similar to that of the widely used serial algorithms. This is in contrast to other fast serial and parallel interpolation algorithms, which are subject to much larger roundoff. We demonstrate that in a distributed memory environment, a cube-connected system is very suitable for the algorithms' implementation, exhibiting very small communication cost. As further advantages we note that our techniques do not require equidistant points, preconditioning, or use of the Fast Fourier Transform. 21 refs., 4 figs.
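
    The serial computation being parallelized, for reference: build the divided-difference coefficients of the Newton form in place, then evaluate by a Horner-like scheme.

        #include <cstdio>
        #include <vector>

        int main()
        {
            const std::vector<double> x = {0.0, 1.0, 2.0, 3.0};
            const std::vector<double> f = {1.0, 2.0, 5.0, 10.0};   // samples of 1 + x^2
            const int n = (int)x.size();

            // In-place divided differences: after pass j, coef[i] holds
            // f[x_{i-j}, ..., x_i].
            std::vector<double> coef = f;
            for (int j = 1; j < n; ++j)
                for (int i = n - 1; i >= j; --i)
                    coef[i] = (coef[i] - coef[i - 1]) / (x[i] - x[i - j]);

            // Horner-style evaluation of the Newton form at t = 1.5.
            const double t = 1.5;
            double p = coef[n - 1];
            for (int i = n - 2; i >= 0; --i)
                p = p * (t - x[i]) + coef[i];
            std::printf("p(%.1f) = %.4f (exact %.4f)\n", t, p, 1.0 + t * t);
            return 0;
        }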

  19. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687

  20. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values. PMID:25685472

  1. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies.

    PubMed

    Essa, Khalid S

    2014-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values.
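
    A sketch of the root-finding step for f(q) = 0 using bisection; the residual f used here is a monotone stand-in, since the paper's actual f is built from anomaly values at the origin and at profile points.

        #include <cmath>
        #include <cstdio>

        // Hypothetical monotone residual in q; root at q = log2(1.7) ≈ 0.766.
        static double f(double q)
        {
            return std::pow(2.0, q) - 1.7;
        }

        int main()
        {
            double lo = 0.0, hi = 2.0;       // bracket for the shape factor
            for (int it = 0; it < 60; ++it) {
                const double mid = 0.5 * (lo + hi);
                // Keep the half-interval that still brackets the sign change.
                if (f(lo) * f(mid) <= 0.0) hi = mid; else lo = mid;
            }
            std::printf("estimated q = %.6f\n", 0.5 * (lo + hi));
            return 0;
        }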

  2. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    PubMed Central

    Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao

    2013-01-01

    In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth order cumulants multiple signal classification method (TFOC-MUSIC) algorithm is proposed through combining a fast MUSIC-like algorithm termed the modified fourth order cumulants MUSIC (MFOC-MUSIC) algorithm and Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of the virtual array elements. That is, the effective array aperture of a physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error of the reduced-rank FOC matrix and thus the capacity of DOA estimation degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, just like the ideal matrix, whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and the simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields an excellent performance in both spatially white and spatially colored noise environments. PMID:23296331
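
    Toeplitz approximation itself is commonly performed by diagonal averaging, replacing every entry with the mean of its diagonal; the sketch below does this for a real matrix (actual cumulant matrices are complex-valued, where the Hermitian Toeplitz analogue applies).

        #include <cstdio>

        const int N = 4;

        // Replace each entry a[i][j] with the mean of its diagonal (j - i = const).
        void toeplitzify(double a[N][N])
        {
            for (int d = -(N - 1); d <= N - 1; ++d) {     // d = j - i indexes a diagonal
                double sum = 0.0;
                int count = 0;
                for (int i = 0; i < N; ++i) {
                    const int j = i + d;
                    if (j >= 0 && j < N) { sum += a[i][j]; ++count; }
                }
                const double mean = sum / count;
                for (int i = 0; i < N; ++i) {
                    const int j = i + d;
                    if (j >= 0 && j < N) a[i][j] = mean;
                }
            }
        }

        int main()
        {
            double a[N][N] = {{4, 1, 0, 0}, {1.2, 4.1, 0.9, 0}, {0, 1.1, 3.9, 1}, {0, 0, 1, 4}};
            toeplitzify(a);
            for (int i = 0; i < N; ++i, std::printf("\n"))
                for (int j = 0; j < N; ++j) std::printf("%5.2f ", a[i][j]);
            return 0;
        }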

  3. Eruptions on the fast track: application of Particle Tracking Velocimetry algorithms to visual and thermal high-speed videos of Strombolian explosions

    NASA Astrophysics Data System (ADS)

    Gaudin, Damien; Monica, Moroni; Jacopo, Taddeucci; Luca, Shindler; Piergiorgio, Scarlato

    2013-04-01

    Strombolian eruptions are characterized by mild, frequent explosions that eject gas and ash- to bomb-sized pyroclasts into the atmosphere. Studying these explosions is crucial, both for direct hazard assessment and for understanding eruption dynamics. Conventional thermal and optical imaging already allows several eruptive processes to be characterized, but the quantification of key parameters linked to magma properties and conduit processes requires acquiring images at higher frequency. For example, high-speed imaging has already demonstrated how the size and pressure of the gas bubble are linked to the decay of the ejection velocity of the particles, and how the origin of the bombs, either fresh or recycled material, can be linked to their thermal evolution. However, manual processing of the images is time consuming. Consequently, it allows neither routine monitoring nor averaged statistics, since only a few relevant particles - usually the fastest - of a few explosions can be taken into account. To understand the dynamics of Strombolian eruptions, and particularly their cyclic behavior, quantification of the total mass, heat and energy discharge is a crucial point. In this study, we use a Particle Tracking Velocimetry (PTV) algorithm jointly with traditional image processing to automatically extract the above parameters from visible and thermal high-speed videos of individual Strombolian explosions. PTV is an analysis technique in which each single particle is detected and tracked through a series of images. Velocity, acceleration, and temperature can then be deduced and time-averaged to obtain an extensive overview of each explosion. The suitability of PTV and its potential limitations in terms of detection and representativeness are investigated for various explosions of the Stromboli (Italy), Yasur (Vanuatu) and Fuego (Guatemala) volcanoes. In most events, multiple sub-explosions are visible. In each sub-explosion, trends are noticeable: (1) the ejection

  4. FAST Conformational Searches by Balancing Exploration/Exploitation Trade-Offs.

    PubMed

    Zimmerman, Maxwell I; Bowman, Gregory R

    2015-12-01

    Molecular dynamics simulations are a powerful means of understanding conformational changes. However, it is still difficult to simulate biologically relevant time scales without the use of specialized supercomputers. Here, we introduce a goal-oriented sampling method, called fluctuation amplification of specific traits (FAST), for extending the capabilities of commodity hardware. This algorithm rapidly searches conformational space for structures with desired properties by balancing trade-offs between focused searches around promising solutions (exploitation) and trying novel solutions (exploration). FAST was inspired by the hypothesis that many physical properties have an overall gradient in conformational space, akin to the energetic gradients that are known to guide proteins to their folded states. For example, we expect that transitioning from a conformation with a small solvent-accessible surface area to one with a large surface area will require passing through a series of conformations with steadily increasing surface areas. We demonstrate that such gradients are common through retrospective analysis of existing Markov state models (MSMs). Then we design the FAST algorithm to exploit these gradients to find structures with desired properties by (1) recognizing and amplifying structural fluctuations along gradients that optimize a selected physical property whenever possible, (2) overcoming barriers that interrupt these overall gradients, and (3) rerouting to discover alternative paths when faced with insurmountable barriers. To test FAST, we compare its performance to other methods for three common types of problems: (1) identifying unexpected binding pockets, (2) discovering the preferred paths between specific structures, and (3) folding proteins. Our conservative estimate is that FAST outperforms conventional simulations and an adaptive sampling algorithm by at least an order of magnitude. Furthermore, FAST yields both the proper thermodynamics and

  5. FAST Conformational Searches by Balancing Exploration/Exploitation Trade-Offs.

    PubMed

    Zimmerman, Maxwell I; Bowman, Gregory R

    2015-12-01

    Molecular dynamics simulations are a powerful means of understanding conformational changes. However, it is still difficult to simulate biologically relevant time scales without the use of specialized supercomputers. Here, we introduce a goal-oriented sampling method, called fluctuation amplification of specific traits (FAST), for extending the capabilities of commodity hardware. This algorithm rapidly searches conformational space for structures with desired properties by balancing trade-offs between focused searches around promising solutions (exploitation) and trying novel solutions (exploration). FAST was inspired by the hypothesis that many physical properties have an overall gradient in conformational space, akin to the energetic gradients that are known to guide proteins to their folded states. For example, we expect that transitioning from a conformation with a small solvent-accessible surface area to one with a large surface area will require passing through a series of conformations with steadily increasing surface areas. We demonstrate that such gradients are common through retrospective analysis of existing Markov state models (MSMs). Then we design the FAST algorithm to exploit these gradients to find structures with desired properties by (1) recognizing and amplifying structural fluctuations along gradients that optimize a selected physical property whenever possible, (2) overcoming barriers that interrupt these overall gradients, and (3) rerouting to discover alternative paths when faced with insurmountable barriers. To test FAST, we compare its performance to other methods for three common types of problems: (1) identifying unexpected binding pockets, (2) discovering the preferred paths between specific structures, and (3) folding proteins. Our conservative estimate is that FAST outperforms conventional simulations and an adaptive sampling algorithm by at least an order of magnitude. Furthermore, FAST yields both the proper thermodynamics and

  6. Easy system call tracing for Plan 9.

    SciTech Connect

    Minnich, Ronald G.

    2010-09-01

    Tracing system calls makes debugging easy and fast. On Plan 9, traditionally, system call tracing has been implemented with acid. New systems do not always implement all the capabilities needed for Acid, particularly the ability to rewrite the process code space to insert breakpoints. Architecture support libraries are not always available for Acid, or may not work even on a supported architecture. The requirement that Acid's libraries be available can be a problem on systems with a very small memory footprint, such as High Performance Computing systems where every Kbyte counts. Finally, Acid tracing is inconvenient in the presence of forks, which means tracing shell pipelines is particularly troublesome. The strace program available on most Unix systems is far more convenient to use and more capable than Acid for system call tracing. A similar system on Plan 9 can simplify troubleshooting. We have built a system call tracing capability into the Plan 9 kernel. It has proven to require less programming effort than strace: one can write a shell script to implement tracing, and the C code to implement an strace equivalent is several orders of magnitude smaller.

  7. Genetic algorithms for multicriteria shape optimization of induction furnace

    NASA Astrophysics Data System (ADS)

    Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo

    2012-09-01

    In this contribution we deal with a multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of one optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization, one using a nonlinear conjugate gradient method and the second using a variation of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. The direct problem (a coupled problem consisting of the magnetic and heat fields) is solved using our own code Agros2D. It uses finite elements of higher order, leading to a fast and accurate solution of a relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and simply incorporate various types of optimization algorithms.
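
    For the multi-criteria aspect, membership in the Pareto front reduces to a dominance test; a minimal sketch for two minimized objectives (the numbers are illustrative):

        #include <cstdio>
        #include <vector>

        struct Design { double f1, f2; };   // the two cost criteria (minimized)

        // a dominates b: no worse in both objectives, strictly better in one.
        static bool dominates(const Design& a, const Design& b)
        {
            return a.f1 <= b.f1 && a.f2 <= b.f2 && (a.f1 < b.f1 || a.f2 < b.f2);
        }

        int main()
        {
            const std::vector<Design> pop = {{1, 5}, {2, 2}, {3, 3}, {5, 1}, {4, 4}};
            for (const Design& d : pop) {
                bool onFront = true;
                for (const Design& other : pop)
                    if (dominates(other, d)) { onFront = false; break; }
                if (onFront) std::printf("Pareto design: (%.0f, %.0f)\n", d.f1, d.f2);
            }
            return 0;
        }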

  8. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
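
    For contrast with the paper's algorithm, the classical greedy 1/2-approximation: scan edges in order of decreasing weight and take an edge whenever both endpoints are still free.

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct Edge { int u, v; double w; };

        int main()
        {
            std::vector<Edge> edges = {{0, 1, 5.0}, {1, 2, 6.0}, {2, 3, 5.0}, {0, 3, 1.0}};
            std::sort(edges.begin(), edges.end(),
                      [](const Edge& a, const Edge& b) { return a.w > b.w; });

            std::vector<bool> matched(4, false);
            double total = 0.0;
            for (const Edge& e : edges)
                if (!matched[e.u] && !matched[e.v]) {      // both endpoints free
                    matched[e.u] = matched[e.v] = true;
                    total += e.w;
                    std::printf("take edge (%d,%d) w=%.0f\n", e.u, e.v, e.w);
                }
            // Here greedy gets 7 while the optimum is 10, within the 1/2 bound.
            std::printf("matching weight %.0f\n", total);
            return 0;
        }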

  9. I. Thermal evolution of Ganymede and implications for surface features. II. Magnetohydrodynamic constraints on deep zonal flow in the giant planets. III. A fast finite-element algorithm for two-dimensional photoclinometry

    SciTech Connect

    Kirk, R.L.

    1987-01-01

    Thermal evolution of Ganymede from a hot start is modeled. On cooling ice I forms above the liquid H/sub 2/O and dense ices at higher entropy below it. A novel diapiric instability is proposed to occur if the ocean thins enough, mixing these layers and perhaps leading to resurfacing and groove formation. Rising warm-ice diapirs may cause a dramatic heat pulse and fracturing at the surface, and provide material for surface flows. Timing of the pulse depends on ice rheology but could agree with crater-density dates for resurfacing. Origins of the Ganymede-Callisto dichotomy in light of the model are discussed. Based on estimates of the conductivity of H/sub 2/ (Jupiter, Saturn) and H/sub 2/O (Uranus, Neptune), the zonal winds of the giant planets will, if they penetrate below the visible atmosphere, interact with the magnetic field well outside the metallic core. The scaling argument is supported by a model with zonal velocity constant on concentric cylinders, the Lorentz torque on each balanced by viscous stresses. The problem of two-dimensional photoclinometry, i.e. reconstruction of a surface from its image, is formulated in terms of finite elements and a fast algorithm using Newton-SOR iteration accelerated by multigridding is presented.

  10. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in the measurement of the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multispectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, while the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is subsequently compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known or can be determined using e.g. a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It has been determined that, while the standard version of the VCA algorithm can attain sensitivity similar to that of the PCA-ICA approach with more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally render improvements in detection sensitivity in experimental optoacoustic measurements.

  11. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve the integration of the local function models. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining. PMID:26751200

  12. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve the integration of the local function models. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining.

  13. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction

    PubMed Central

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve the integration of the local function models. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time-consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining. PMID:26751200

  14. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801
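
    A minimal sketch of the block-diagonalization idea behind steerable PCA, assuming NumPy and using a plain polar-grid resampling in place of the paper's Fourier–Bessel expansion with the nonuniform FFT (all function names here are illustrative): in polar coordinates an in-plane rotation is a shift along the angle axis, so after an FFT along that axis the covariance separates by angular frequency and PCA reduces to one small eigenproblem per frequency.

      import numpy as np

      def polar_resample(img, n_r=32, n_t=64):
          # Bilinearly sample a square image on an (n_r x n_t) polar grid.
          L = img.shape[0]
          c = (L - 1) / 2.0
          r = np.linspace(0, c, n_r)
          t = np.linspace(0, 2 * np.pi, n_t, endpoint=False)
          x = c + r[:, None] * np.cos(t)[None, :]
          y = c + r[:, None] * np.sin(t)[None, :]
          x0 = np.clip(np.floor(x).astype(int), 0, L - 2)
          y0 = np.clip(np.floor(y).astype(int), 0, L - 2)
          fx, fy = x - x0, y - y0
          return (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
                  + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)

      def steerable_pca(images, n_r=32, n_t=64, k=5):
          # FFT along theta: rotation becomes a phase shift, so the covariance
          # block-diagonalizes by angular frequency (uncentered PCA for brevity).
          coeffs = np.array([np.fft.fft(polar_resample(im, n_r, n_t), axis=1)
                             for im in images])            # shape (n, n_r, n_t)
          basis = {}
          for m in range(n_t):                             # one block per frequency
              block = coeffs[:, :, m]                      # (n, n_r), complex
              cov = block.conj().T @ block / len(images)
              _, v = np.linalg.eigh(cov)
              basis[m] = v[:, -k:]                         # top-k radial components
          return basis

      basis = steerable_pca(np.random.rand(50, 64, 64))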

  15. Fast polyhedral cell sorting for interactive rendering of unstructured grids

    SciTech Connect

    Combra, J; Klosowski, J T; Max, N; Silva, C T; Williams, P L

    1998-10-30

    Direct volume rendering based on projective methods works by projecting, in visibility order, the polyhedral cells of a mesh onto the image plane, and incrementally compositing the cell's color and opacity into the final image. Crucial to this method is the computation of a visibility ordering of the cells. If the mesh is ''well-behaved'' (acyclic and convex), then the MPVO method of Williams provides a very fast sorting algorithm; however, this method only computes an approximate ordering in general datasets, resulting in visual artifacts when rendered. A recent method of Silva et al. removed the assumption that the mesh is convex, by means of a sweep algorithm used in conjunction with the MPVO method; their algorithm is substantially faster than previous exact methods for general meshes. In this paper we propose a new technique, which we call BSP-XMPVO, which is based on a fast and simple way of using binary space partitions on the boundary elements of the mesh to augment the ordering produced by MPVO. Our results are shown to be orders of magnitude better than previous exact methods of sorting cells.

  16. An Intrusion Detection Algorithm Based On NFPA

    NASA Astrophysics Data System (ADS)

    Anming, Zhong

    A process-oriented intrusion detection algorithm based on a Probabilistic Automaton with No Final probabilities (NFPA) is introduced; the system call sequence of a process is used as the source data. By using information from the system call sequences of both normal and anomalous processes, anomaly detection and misuse detection are efficiently combined. Experiments show better performance of our algorithm compared to the classical algorithms in this field.
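
    The abstract gives no equations for the automaton; as a rough stand-in, a first-order Markov model over system-call transitions (a special case of a probabilistic automaton) already illustrates how a process trace can be scored for anomaly detection. The traces and vocabulary size below are invented for the example.

      from collections import defaultdict
      import math

      def train(sequences):
          # Transition counts of a first-order Markov model over call names.
          counts = defaultdict(lambda: defaultdict(int))
          for seq in sequences:
              for a, b in zip(seq, seq[1:]):
                  counts[a][b] += 1
          return counts

      def score(counts, seq, vocab_size):
          # Average log-probability per transition, with add-one smoothing;
          # low scores flag anomalous call sequences.
          ll = 0.0
          for a, b in zip(seq, seq[1:]):
              total = sum(counts[a].values())
              ll += math.log((counts[a][b] + 1) / (total + vocab_size))
          return ll / max(len(seq) - 1, 1)

      normal = [["open", "read", "write", "close"]] * 50
      model = train(normal)
      print(score(model, ["open", "read", "write", "close"], 5))   # high
      print(score(model, ["open", "exec", "socket", "close"], 5))  # low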

  17. The fast Hartley transform

    NASA Astrophysics Data System (ADS)

    Mar, Mark H.

    1990-11-01

    The purpose of this paper is to report the results of testing the fast Hartley transform (FHT) and comparing it with the fast Fourier transform (FFT). All definitions and equations in this paper are quoted from the cited references. The author of this report developed a FORTRAN program which computes the Hartley transform, tested it with a generalized electromagnetic pulse waveform, and verified the results against known values. Fourier analysis is an essential tool for obtaining frequency domain information from transient time domain signals. The FFT is a popular tool for processing many of today's audio and electromagnetic signals. System frequency response, digital filtering of signals, and signal power spectra are the most practical applications of the FFT. However, the Fourier integral transform of the FFT requires computer resources appropriate for complex arithmetic operations. The FHT, on the other hand, can accomplish the same results faster and requires fewer computer resources. The FHT is twice as fast as the FFT, uses only half the computer resources, and so could be more useful than the FFT in typical applications such as spectral analysis, signal processing, and convolution. This paper presents a FORTRAN computer program for the FHT algorithm along with a brief description, and compares the results and performance of the FHT and FFT algorithms.
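
    The FHT's relationship to the FFT is compact enough to state in a few lines: with cas(t) = cos(t) + sin(t), the Hartley transform of a real sequence equals the real part minus the imaginary part of its Fourier transform. A short NumPy check (illustrative; not the paper's FORTRAN program):

      import numpy as np

      def fht(x):
          # H[k] = sum_n x[n] * cas(2*pi*k*n/N).  Since the FFT computes
          # sum_n x[n] * (cos - i*sin), we have H = Re(FFT) - Im(FFT).
          F = np.fft.fft(x)
          return F.real - F.imag

      x = np.random.rand(1024)
      H = fht(x)
      # The DHT is its own inverse up to scaling: fht(fht(x)) == N * x.
      assert np.allclose(fht(H) / len(x), x)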

  18. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  19. New stereo matching algorithm

    NASA Astrophysics Data System (ADS)

    Ahmed, Yasser A.; Afifi, Hossam; Rubino, Gerardo

    1999-05-01

    This paper presents a new algorithm for stereo matching. The main idea is to decompose the original problem into independent, hierarchical, and more elementary problems that can be solved faster without any complicated mathematics, using BBD. To achieve that, we use a new image feature called the 'continuity feature' instead of classical noise. This feature can be extracted from any kind of image by a simple process and without using a searching technique. A new matching technique is proposed to match the continuity feature. The new algorithm resolves the main disadvantages of feature-based stereo matching algorithms.

  20. Fast valve

    DOEpatents

    Van Dyke, William J.

    1992-01-01

    A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring, with the moving parts of the valve designed to be of very light weight and the valve gate being wedge shaped with O-ring sealed faces to provide sealing contact without metal-to-metal contact. The combination of the O-ring seal and an air cushion creates a soft final movement of the valve closure to prevent the fast-acting air valve from having a harsh closing.

  1. Fast valve

    DOEpatents

    Van Dyke, W.J.

    1992-04-07

    A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring, with the moving parts of the valve designed to be of very light weight and the valve gate being wedge shaped with O-ring sealed faces to provide sealing contact without metal-to-metal contact. The combination of the O-ring seal and an air cushion creates a soft final movement of the valve closure to prevent the fast-acting air valve from having a harsh closing. 4 figs.

  2. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*.

  3. A new algorithm for attitude-independent magnetometer calibration

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Shuster, Malcolm D.

    1994-01-01

    A new algorithm is developed for in-flight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with a correct treatment of the statistics, without discarding data. The algorithm's performance is examined using simulated data and compared with previous algorithms.
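
    A sketch of why attitude never enters, assuming NumPy and plain least squares (this is only the linearized first step, not the paper's statistically correct estimator): rotations preserve the magnitude of the reference field H_k, so the scalar equation |B_k - b|^2 = |H_k|^2 becomes linear in the bias b once c = |b|^2 is carried as an extra unknown.

      import numpy as np

      def magnetometer_bias(B_meas, H_mag):
          # |B_k|^2 - |H_k|^2 = 2 B_k . b - c, with c = |b|^2: linear in (b, c).
          z = np.sum(B_meas**2, axis=1) - H_mag**2
          A = np.hstack([2 * B_meas, -np.ones((len(z), 1))])
          sol, *_ = np.linalg.lstsq(A, z, rcond=None)
          return sol[:3]                     # the bias vector b

      # Synthetic check: a known bias is recovered from noisy measurements.
      rng = np.random.default_rng(0)
      b_true = np.array([10.0, -20.0, 5.0])
      H = rng.normal(size=(500, 3)) * 300.0
      B = H + b_true + rng.normal(scale=2.0, size=H.shape)
      print(magnetometer_bias(B, np.linalg.norm(H, axis=1)))  # ~ b_true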

  4. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  5. Quantum gate decomposition algorithms.

    SciTech Connect

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed ''quantum gates'', applied to quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general ''quantum gates'' operating on n qubits, as composed of a sequence of generic elementary ''gates''.

  6. Acoustic signal detection of manatee calls

    NASA Astrophysics Data System (ADS)

    Niezrecki, Christopher; Phillips, Richard; Meyer, Michael; Beusse, Diedrich O.

    2003-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of a growing number of collisions with boats. A system that can signal to boaters that manatees are present in the immediate vicinity could potentially reduce these boat collisions. In order to identify the presence of manatees, acoustic methods are employed. Within this paper, three different detection algorithms are used to detect the calls of the West Indian manatee. The detection systems are tested in the laboratory using simulated manatee vocalizations from an audio compact disc. The detection method that provides the best overall performance is able to correctly identify ~=96% of the manatee vocalizations; however, the system also results in a false positive rate of ~=16%. The results of this work may ultimately lead to the development of a manatee warning system that can warn boaters of the presence of manatees.

  7. Learning as Calling and Responding

    ERIC Educational Resources Information Center

    Jons, Lotta

    2014-01-01

    According to Martin Buber's philosophy of dialogue, our being-in-the-world is to be conceived of as an existential dialogue. Elsewhere, I have conceptualized the teacher-student relation accordingly (see Jons 2008), as a matter of calling and responding. The conceptualization rests on a secularised notion of vocation, paving the way for…

  8. An Evaluation Framework for CALL

    ERIC Educational Resources Information Center

    McMurry, Benjamin L.; Williams, David Dwayne; Rich, Peter J.; Hartshorn, K. James

    2016-01-01

    Searching prestigious Computer-assisted Language Learning (CALL) journals for references to key publications and authors in the field of evaluation yields a short list. The "American Journal of Evaluation"--the flagship journal of the American Evaluation Association--is only cited once in both the "CALICO Journal and Language…

  9. Ebola: a call to action.

    PubMed

    2014-09-01

    The size, speed and potential reach of the 2014 Ebola virus outbreak in West Africa presents a wake-up call to the research and pharmaceutical communities - and to federal governments - of the continuing need to invest resources in the study and cure of emerging infectious diseases.

  10. Formative Considerations Using Integrative CALL.

    ERIC Educational Resources Information Center

    Callahan, Philip; Shaver, Peter

    2001-01-01

    Addresses technical and learning issues relating to a formative implementation of a computer assisted language learning (CALL) browser-based intermediate Russian program. Instruction took place through a distance education implementation and in a grouped classroom using a local-area network. Learners indicated the software was clear, motivating,…

  11. Leveraging Call Center Logs for Customer Behavior Prediction

    NASA Astrophysics Data System (ADS)

    Parvathy, Anju G.; Vasudevan, Bintu G.; Kumar, Abhishek; Balakrishnan, Rajesh

    Most major businesses use business process outsourcing for performing a process or a part of a process, including financial services like mortgage processing, loan origination, finance and accounting, and transaction processing. Call centers are used for the purpose of receiving and transmitting a large volume of requests through outbound and inbound calls to customers on behalf of a business. In this paper we deal specifically with call center notes from banks. Banks, as financial institutions, provide loans to non-financial businesses and individuals. Their call centers act as the nuclei of their client service operations and log the transactions between the customer and the bank. This crucial information can be exploited to predict a customer’s behavior, which will in turn help these businesses decide on the next action to be taken. Thus the banks save considerable time and effort in tracking delinquent customers to ensure a minimum of subsequent defaulters. Most of the time the call center notes are very concise and brief, and often the notes are misspelled and use many domain-specific acronyms. In this paper we introduce a novel domain-specific spelling correction algorithm which corrects the misspelled words in the call center logs to meaningful ones. We also discuss a procedure that builds behavioral history sequences for the customers by categorizing the logs into one of the predefined behavioral states. We then describe a pattern-based predictive algorithm that uses temporal behavioral patterns mined from these sequences to predict the customer’s next behavioral state.
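
    As a hedged illustration of the correction step only (the paper's actual algorithm is not reproduced here), matching each token against a domain lexicon by Levenshtein distance captures the basic mechanism; the lexicon and threshold below are invented for the example.

      def edit_distance(a, b):
          # Classic dynamic-programming Levenshtein distance.
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
              prev = cur
          return prev[-1]

      def correct(token, lexicon, max_dist=2):
          # Map a possibly misspelled token to the nearest domain term,
          # leaving it unchanged when nothing is close enough.
          best = min(lexicon, key=lambda w: edit_distance(token, w))
          return best if edit_distance(token, best) <= max_dist else token

      lexicon = {"mortgage", "delinquent", "payment", "foreclosure"}
      print(correct("morgage", lexicon))    # -> mortgage
      print(correct("dlnqnt", lexicon))     # unchanged: acronyms need domain rules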

  12. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
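
    For orientation, a minimal generational GA in Python (illustrative of the basic concepts the report introduces, not of its software tool): tournament selection, one-point crossover, and bit-flip mutation on a binary string.

      import random

      def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                            p_cross=0.9, p_mut=0.01):
          pop = [[random.randint(0, 1) for _ in range(n_bits)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              def pick():                            # tournament of size 2
                  a, b = random.sample(pop, 2)
                  return a if fitness(a) >= fitness(b) else b
              nxt = []
              while len(nxt) < pop_size:
                  p1, p2 = pick(), pick()
                  if random.random() < p_cross:      # one-point crossover
                      cut = random.randrange(1, n_bits)
                      p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                  # bit-flip mutation (int ^ bool flips the bit)
                  nxt += [[b ^ (random.random() < p_mut) for b in c]
                          for c in (p1, p2)]
              pop = nxt[:pop_size]
          return max(pop, key=fitness)

      print(genetic_algorithm(sum))   # one-max problem: all ones is optimal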

  13. A novel algorithm with differential evolution and coral reef optimization for extreme learning machine training.

    PubMed

    Yang, Zhiyong; Zhang, Taohong; Zhang, Dezheng

    2016-02-01

    Extreme learning machine (ELM) is a novel and fast learning method to train single layer feed-forward networks. However, due to the demand for a larger number of hidden neurons, the prediction speed of ELM is not fast enough. An evolutionary ELM with differential evolution (DE) has been proposed to reduce the prediction time of the original ELM, but it may still get stuck at local optima. In this paper, a novel algorithm hybridizing DE and metaheuristic coral reef optimization (CRO), called differential evolution coral reef optimization (DECRO), is proposed to balance explorative power and exploitive power to reach better performance. The design and implementation of the DECRO algorithm are discussed in detail in this article. DE, CRO and DECRO are applied to ELM training respectively. Experimental results show that DECRO-ELM can reduce the prediction time of the original ELM and obtain better performance for training ELM than both DE and CRO.
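
    The ELM core that the evolutionary variants wrap is compact: random hidden-layer weights plus a closed-form least-squares readout. A NumPy sketch under that reading (the DE/CRO search over hidden parameters is omitted; names are illustrative):

      import numpy as np

      def elm_train(X, y, n_hidden=100, seed=0):
          # Random input weights stay fixed; only the output weights are fit.
          rng = np.random.default_rng(seed)
          W = rng.normal(size=(X.shape[1], n_hidden))
          b = rng.normal(size=n_hidden)
          H = np.tanh(X @ W + b)                 # hidden activations
          beta = np.linalg.pinv(H) @ y           # Moore-Penrose solution
          return W, b, beta

      def elm_predict(X, W, b, beta):
          return np.tanh(X @ W + b) @ beta

      X = np.random.rand(200, 3)
      y = np.sin(X.sum(axis=1))
      W, b, beta = elm_train(X, y)
      print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))   # small MSE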

  14. Call for improving air quality

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    2013-01-01

    The European Environmental Bureau (EEB), a federation of citizen organizations, has called for stricter policies in Europe to protect human health and the environment. "Air pollution emanates from sources all around us, be they cars, industrial plants, shipping, agriculture, or waste. The [European Union] must propose ambitious legislation to address all of these sources if it is to tackle the grave public health consequences of air pollution," EEB secretary general Jeremy Wates said on 8 January.

  15. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We investigate first the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  16. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.

    PubMed

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces a possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screening tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.

  17. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU

    PubMed Central

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces a possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screening tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis. PMID:23840507

  18. F2Dock: Fast Fourier Protein-Protein Docking

    PubMed Central

    Bajaj, Chandrajit; Chowdhury, Rezaul; Siddavanahalli, Vinay

    2009-01-01

    The function of proteins is often realized through their mutual interactions. Determining a relative transformation for a pair of proteins and their conformations which form a stable complex, reproducible in nature, is known as docking. It is an important step in drug design, structure determination, and understanding function and structure relationships. In this paper we extend our non-uniform fast Fourier transform docking algorithm to include an adaptive search phase (both translational and rotational) and thereby speed up its execution. We have also implemented a multithreaded version of the adaptive docking algorithm for even faster execution on multicore machines. We call this protein-protein docking code F2Dock (F2 = Fast Fourier). We have calibrated F2Dock based on an extensive experimental study on a list of benchmark complexes and conclude that F2Dock works very well in practice. Though all docking results reported in this paper use shape complementarity and Coulombic potential based scores only, F2Dock is structured to incorporate Lennard-Jones potential and re-ranking of docking solutions based on desolvation energy. PMID:21071796
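
    The translational search at the heart of FFT docking is ordinary correlation via the convolution theorem. A toy NumPy sketch that scores shape overlap for every relative translation of two voxel grids (the rotational search, surface/interior weighting, and electrostatic terms of the paper are omitted):

      import numpy as np

      def docking_scores(receptor, ligand):
          # corr = IFFT( FFT(receptor) * conj(FFT(ligand)) ) scores all shifts.
          F_r = np.fft.fftn(receptor)
          F_l = np.fft.fftn(ligand, s=receptor.shape)   # zero-pad the ligand
          return np.real(np.fft.ifftn(F_r * np.conj(F_l)))

      # Toy grids: 1 inside the molecule, 0 outside.
      receptor = np.zeros((32, 32, 32)); receptor[8:20, 8:20, 8:20] = 1.0
      ligand = np.zeros((32, 32, 32)); ligand[:6, :6, :6] = 1.0
      scores = docking_scores(receptor, ligand)
      print(np.unravel_index(np.argmax(scores), scores.shape))  # best shift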

  19. Fast and flexible interpolation via PUM with applications in population dynamics

    NASA Astrophysics Data System (ADS)

    Cavoretto, Roberto; De Rossi, Alessandra; Perracchione, Emma

    2016-06-01

    In this paper a new fast and flexible interpolation tool is presented. The Partition of Unity Method (PUM) is performed using Radial Basis Functions (RBFs) as local approximants. In particular, we present a new space-partitioning data structure that is extremely useful in applications because of its independence from the problem geometry. An application of this algorithm, in the context of wild herbivores in forests, shows that the ecosystem of the considered natural park is in a very delicate situation, in which the animal population could become extinct. The determination of the so-called sensitivity surfaces, obtained with the new versatile partitioning structure, indicates some possible preventive measures to the park administrators.
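
    A one-dimensional sketch of the PUM construction, assuming NumPy, Gaussian RBFs, and evenly spaced overlapping patches (the paper's space-partitioning structure and 2-D setting are not reproduced): solve a small RBF system on each patch and blend the local interpolants with compactly supported weights.

      import numpy as np

      def wendland(r):
          # Compactly supported weight on [0, 1].
          r = np.clip(r, 0.0, 1.0)
          return (1 - r) ** 4 * (4 * r + 1)

      def rbf_pum(x, y, x_eval, n_patches=8, eps=4.0):
          lo, hi = x.min(), x.max()
          centers = np.linspace(lo, hi, n_patches)
          radius = 1.5 * (hi - lo) / (n_patches - 1)     # overlapping patches
          num = np.zeros_like(x_eval); den = np.zeros_like(x_eval)
          for c in centers:
              mask = np.abs(x - c) <= radius
              if mask.sum() < 2:
                  continue
              xk, yk = x[mask], y[mask]
              A = np.exp(-(eps * (xk[:, None] - xk[None, :])) ** 2)
              coef = np.linalg.solve(A + 1e-10 * np.eye(len(xk)), yk)
              s = np.exp(-(eps * (x_eval[:, None] - xk[None, :])) ** 2) @ coef
              w = wendland(np.abs(x_eval - c) / radius)  # partition-of-unity weight
              num += w * s; den += w
          return num / np.maximum(den, 1e-15)

      x = np.linspace(0, 1, 60); xe = np.linspace(0, 1, 200)
      print(np.max(np.abs(rbf_pum(x, np.sin(2 * np.pi * x), xe)
                          - np.sin(2 * np.pi * xe))))    # small error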

  20. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second order accurate on a smooth flow and preserves ∇·B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
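
    A 1-D sketch of the splitting, under simplifying assumptions (scalar advection, linear interpolation, first-order upwind; nothing of the paper's MHD machinery): the mean orbital motion is applied as an exact integer cell shift plus an interpolated fractional shift, so only the peculiar velocity enters the Courant condition.

      import numpy as np

      def fargo_advect(q, v_mean, v_pec, dx, dt):
          shift = v_mean * dt / dx
          n = int(np.floor(shift))
          frac = shift - n
          q = np.roll(q, n)                          # exact integer shift
          q = (1 - frac) * q + frac * np.roll(q, 1)  # fractional shift by interpolation
          # Peculiar advection (upwind); stable for |v_pec| dt / dx <= 1
          # no matter how large v_mean is.
          c = v_pec * dt / dx
          return q - c * (q - np.roll(q, 1))

      x = np.linspace(0, 1, 200, endpoint=False)
      q = np.exp(-200 * (x - 0.3) ** 2)
      for _ in range(100):   # mean flow crosses ~40 cells per step, yet stable
          q = fargo_advect(q, v_mean=50.0, v_pec=0.5, dx=x[1] - x[0], dt=0.004)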

  1. Fast Randomized STDMA Link Scheduling

    NASA Astrophysics Data System (ADS)

    Gomez, Sergio; Gras, Oriol; Friderikos, Vasilis

    In this paper a fast randomized parallel link swap based packing (RSP) algorithm for timeslot allocation in a spatial time division multiple access (STDMA) wireless mesh network is presented. The proposed randomized algorithm extends several greedy scheduling algorithms that utilize the physical interference model by applying a local search that leads to a substantial improvement in spatial timeslot reuse. Numerical simulations reveal that, compared to previously proposed scheduling schemes, the randomized algorithm can achieve a performance gain of up to 11%. A significant benefit of the proposed scheme is that the computations can be parallelized and therefore can efficiently utilize commoditized and emerging multi-core and/or multi-CPU processors.

  2. Parallel-access memory management using fast-fits

    SciTech Connect

    Johnson, T.

    1994-12-01

    The two most common approaches to managing shared-access memory, free lists and buddy systems, have significant drawbacks. Free list algorithms have poor memory access characteristics, and buddy systems utilize their space inefficiently. In this paper, we present an alternative approach to parallel-access memory management based on the fast-fits algorithm. A fast-fits memory manager stores free blocks in a tree structure, providing fast access and efficient space use. Since the fast-fits algorithm accesses fewer blocks than a free list algorithm, it reduces the amount of cache invalidation overhead due to the memory manager. Our performance experiments show that the parallel-access fast-fits memory manager allows significantly greater access rates than a serial-access fast-fits memory manager does. We note that shared-memory multiprocessor systems need efficient dynamic storage allocators, both for system purposes and to support parallel programs.

  3. Improving the efficiency of deconvolution algorithms for sound source localization.

    PubMed

    Lylloff, Oliver; Fernández-Grande, Efrén; Agerkvist, Finn; Hald, Jørgen; Roig, Elisabet Tiana; Andersen, Martin S

    2015-07-01

    The localization of sound sources with delay-and-sum (DAS) beamforming is limited by poor spatial resolution, particularly at low frequencies. Various methods based on deconvolution are examined to improve the resolution of the beamforming map, which can be modeled by a convolution of the unknown acoustic source distribution and the beamformer's response to a point source, i.e., the point-spread function. A significant limitation of deconvolution is, however, the additional computational effort compared to beamforming. In this paper, computationally efficient deconvolution algorithms are examined with computer simulations and experimental data. Specifically, the deconvolution problem is solved with a fast gradient projection method called the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA), and compared with a Fourier-based non-negative least squares algorithm. The results indicate that FISTA tends to provide an improved spatial resolution and is up to 30% faster and more robust to noise. In the spirit of reproducible research, the source code is available online. PMID:26233017
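
    A compact NumPy sketch of FISTA for this non-negative deconvolution setting, assuming a shift-invariant point-spread function so that the forward operator is a circular convolution (a simplification of the measured-PSF setup in the paper):

      import numpy as np

      def fista_deconvolve(b, psf, n_iter=200):
          # Minimize (1/2)||A x - b||^2 subject to x >= 0, A = conv with psf.
          P = np.fft.fft2(psf)
          A = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * P))
          At = lambda r: np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(P)))
          L = np.max(np.abs(P)) ** 2          # Lipschitz constant of the gradient
          x = np.zeros_like(b); y = x.copy(); t = 1.0
          for _ in range(n_iter):
              x_new = np.maximum(y - At(A(y) - b) / L, 0.0)   # projected step
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              y = x_new + (t - 1) / t_new * (x_new - x)        # momentum
              x, t = x_new, t_new
          return x

      # Toy usage: blur two point sources with a Gaussian PSF, then deconvolve.
      n = 64
      x_true = np.zeros((n, n)); x_true[20, 20] = 1.0; x_true[40, 45] = 0.5
      g = np.fft.fftfreq(n) * n
      psf = np.exp(-(g[:, None] ** 2 + g[None, :] ** 2) / 8.0); psf /= psf.sum()
      b = np.real(np.fft.ifft2(np.fft.fft2(x_true) * np.fft.fft2(psf)))
      x_hat = fista_deconvolve(b, psf)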

  4. ALGORITHM FOR THE EVALUATION OF REDUCED WIGNER MATRICES

    SciTech Connect

    Prezeau, G.; Reinecke, M.

    2010-10-15

    Algorithms for the fast and exact computation of Wigner matrices are described and their application to a fast and massively parallel 4π convolution code between a beam and a sky is also presented.

  5. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log (N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when an explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.

  6. Fast Fuzzy Arithmetic Operations

    NASA Technical Reports Server (NTRS)

    Hampton, Michael; Kosheleva, Olga

    1997-01-01

    In engineering applications of fuzzy logic, the main goal is not to simulate the way the experts really think, but to come up with a good engineering solution that would (ideally) be better than the expert's control. In such applications, it makes perfect sense to restrict ourselves to simplified approximate expressions for membership functions. If we need to perform arithmetic operations with the resulting fuzzy numbers, then we can use simple and fast algorithms that are known for operations with simple membership functions. In other applications, especially the ones that are related to humanities, simulating experts is one of the main goals. In such applications, we must use membership functions that capture every nuance of the expert's opinion; these functions are therefore complicated, and fuzzy arithmetic operations with the corresponding fuzzy numbers become a computational problem. In this paper, we design a new algorithm for performing such operations. This algorithm is applicable in the case when the negative logarithms -log(u(x)) of the membership functions u(x) are convex, and reduces computation time from O(n^2) to O(n log(n)) (where n is the number of points x at which we know the membership functions u(x)).

  7. Call to Restore Mesopotamian Marshlands

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    When the current military conflict in Iraq has concluded, a rehabilitation of that country should include a full assessment and action plan for restoring the marshlands of Mesopotamia, the United Nations Environment Programme said on 22 March. The marshlands, also known as the Fertile Crescent, could disappear within three to five years, according to UNEP. UNEP Executive Director Klaus Toepfer said the loss of the marshlands ``is an environmental catastrophe for this region and underscores the huge pressures facing wetlands and freshwater ecosystems across the world.''

  8. Report calls for riparian protection

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    A 22 March report by The (U.S.) National Academies calls for the protection and restoration of riparian areas in the United States. However, it concedes that key difficulties in this endeavor include the lack of basic information about the extent and ecological health of these areas, and even a precise ecological definition of what a riparian area is.The report, “Riparian Areas: Functions and Strategies for Management” prepared by the Water Science and Technology Board of the National Research Council, states that “restoration of riparian functions along America's water bodies should be a national goal.”

  9. Multipole Algorithms for Molecular Dynamics Simulation on High Performance Computers.

    NASA Astrophysics Data System (ADS)

    Elliott, William Dewey

    1995-01-01

    A fundamental problem in modeling large molecular systems with molecular dynamics (MD) simulations is the underlying N-body problem of computing the interactions between all pairs of N atoms. The simplest algorithm to compute pair-wise atomic interactions scales in runtime O(N^2), making it impractical for interesting biomolecular systems, which can contain millions of atoms. Recently, several algorithms have become available that solve the N-body problem by computing the effects of all pair-wise interactions while scaling in runtime less than O(N^2). One algorithm, which scales O(N) for a uniform distribution of particles, is called the Greengard-Rokhlin Fast Multipole Algorithm (FMA). This work describes an FMA-like algorithm called the Molecular Dynamics Multipole Algorithm (MDMA). The algorithm contains several features that are new to N-body algorithms. MDMA uses new, efficient series expansion equations to compute general 1/r^n potentials to arbitrary accuracy. In particular, the 1/r Coulomb potential and the 1/r^6 portion of the Lennard-Jones potential are implemented. The new equations are based on multivariate Taylor series expansions. In addition, MDMA uses a cell-to-cell interaction region of cells that is closely tied to worst case error bounds. The worst case error bounds for MDMA are derived in this work also. These bounds apply to other multipole algorithms as well. Several implementation enhancements are described which apply to MDMA as well as other N-body algorithms such as FMA and tree codes. The mathematics of the cell-to-cell interactions are converted to the Fourier domain for reduced operation count and faster computation. A relative indexing scheme was devised to locate cells in the interaction region which allows efficient pre-computation of redundant information and prestorage of much of the cell-to-cell interaction. Also, MDMA was integrated into the MD program SIgMA to demonstrate the performance of the program over

  10. Measurement of fibrin concentration by fast field-cycling NMR.

    PubMed

    Broche, Lionel M; Ismail, Saadiya R; Booth, Nuala A; Lurie, David J

    2012-05-01

    The relaxation of (1)H nuclei due to their interaction with quadrupolar (14)N nuclei in gel structures is measured using fast field-cycling NMR. This phenomenon, called quadrupolar dips, has been reported in various (1)H-(14)N bond-rich species. In this study, we examined quadrupolar dips in fibrin, an insoluble protein that is the core matrix of thrombi. Fibrin was formed by the addition of thrombin to fibrinogen in 0.2% agarose gel. T(1)-dispersion curves were measured using fast field-cycling NMR relaxometry over the field range of 1.5-3.5 MHz (proton Larmor frequency) and were analyzed using a curve-fitting algorithm. A linear increase of signal amplitude with increasing fibrin concentration was observed. This agrees with the current theory, which predicts a linear relationship of signal amplitude with the concentration of contributing (14)N spins in the sample. Interestingly, fibrin formation gave rise to the signal regardless of crosslinking induced by the transglutaminase factor XIIIa. To investigate the effect of proteins that might be trapped in thrombi in vivo, the plasma protein albumin was added to the fibrin gel, and an increase in the quadrupolar signal amplitude was observed. This approach can potentially be useful for thrombus classification by fast field-cycling MRI techniques.

  11. A method for calling copy number polymorphism using haplotypes

    PubMed Central

    Ho Jang, Gun; Christie, Jason D.; Feng, Rui

    2013-01-01

    Single nucleotide polymorphism (SNP) and copy number variation (CNV) are both widespread characteristics of the human genome, but are often called separately on common genotyping platforms. To capture integrated SNP and CNV information, methods have been developed for calling allele-specific copy numbers, or so-called copy number polymorphism (CNP), using limited inter-marker correlation. In this paper, we proposed a haplotype-based maximum likelihood method to call CNP, which takes advantage of the valuable multi-locus linkage disequilibrium (LD) information in the population. We also developed a computationally efficient algorithm to estimate haplotype frequencies and optimize individual CNP calls iteratively, even in the presence of missing data. Through simulations, we demonstrated that our model is more sensitive and accurate in detecting various CNV regions than commonly-used CNV calling methods including PennCNV, another hidden Markov model (HMM) using CNP, a scan statistic, segCNV, and cnvHap. Our method tends to perform better in regions with higher LD, in longer CNV regions, and for common CNVs. We implemented our method on the genotypes of 90 HapMap CEU samples and 23 patients with acute lung injury (ALI). For each ALI patient the genotyping was performed twice. The CNPs from our method show good consistency and accuracy comparable to others. PMID:24069028

  13. Function of loud calls in wild bonobos.

    PubMed

    White, Frances; Waller, Michel; Boose, Klaree; Merrill, Michelle; Wood, Kimberley

    2015-07-20

    Under the social origins hypothesis, human language is thought to have evolved within the framework of non-human primate social contexts and relationships. Our two closest relatives, chimpanzees and bonobos, however, have very different social relationships and this may be reflected in their use of loud calls. Much of loud calling in the male-bonded and aggressive chimpanzee functions for male alliance formation and intercommunity aggression. Bonobos, however, are female bonded and less aggressive and little is known on the use and function of their loud calls. Data on frequencies, context, and locations of vocalizations were collected for wild bonobos, Pan paniscus, at the Lomako Forest study site in the Democratic Republic of the Congo from 1983 to 2009. Both males and females participated in loud calls used for inter-party communication. Calling and response rates by both males and females were higher during party fusion than party fission and were common at evening nesting. The distribution of loud calls within the community range of loud calls was not random with males calling significantly more towards the periphery of the range and females calling significantly more in central areas. Calling and party fission were common at food patches. Responses were more frequent for female calls than for male calls. Calling, followed by fusion, was more frequent when a small party called from a large patch. We conclude that bonobo females and males loud calls can function in inter-party communication to call others to large food patches. Females call to attract potential allies and males call to attract potential mates. Our results support the social hypothesis of the origin of language because differences in the function and use of loud calls reflect the differing social systems of chimpanzees and bonobos. Bonobo loud calls are important for female communication and function in party coordination and, unlike chimpanzees, are less important in male cooperative aggression.

  14. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
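
    Activity selection, one of the examples mentioned above, shows the dominance idea concisely: among the remaining compatible activities, the one that finishes earliest dominates every alternative, because choosing it leaves a superset of the options any other choice would leave.

      def activity_selection(intervals):
          # Greedy: repeatedly take the compatible activity finishing first.
          chosen, last_end = [], float("-inf")
          for start, end in sorted(intervals, key=lambda iv: iv[1]):
              if start >= last_end:
                  chosen.append((start, end))
                  last_end = end
          return chosen

      print(activity_selection([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9),
                                (5, 9), (6, 10), (8, 11), (8, 12), (2, 14),
                                (12, 16)]))
      # -> [(1, 4), (5, 7), (8, 11), (12, 16)]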

  15. Implementation and parallelization of fast matrix multiplication for a fast Legendre transform

    SciTech Connect

    Chen, Wentao

    1993-09-01

    An algorithm was presented by Alpert and Rokhlin for the rapid evaluation of Legendre transforms. The fast algorithm can be expressed as a matrix-vector product followed by a fast cosine transform. Using the Chebyshev expansion to approximate the entries of the matrix and exchanging the order of summations reduces the time complexity of computation from O(n^2) to O(n log n), where n is the size of the input vector. Our work has been focused on the implementation and the parallelization of the fast algorithm for the matrix-vector product. Results have shown the expected performance of the algorithm. Precision problems which arise as n becomes large can be resolved by doubling the precision of the calculation.

  16. Multitask Coupled Logistic Regression and its Fast Implementation for Large Multitask Datasets.

    PubMed

    Gu, Xin; Chung, Fu-Lai; Ishibuchi, Hisao; Wang, Shitong

    2015-09-01

    When facing multitask-learning problems, it is desirable that the learning method find the correct input-output features, share commonality among multiple domains, and scale up to large multitask datasets. We introduce the multitask coupled logistic regression (LR) framework, called the LR-based multitask classification learning algorithm (MTC-LR), which is a new method for generating a classifier for each task that is capable of sharing the commonality among multitask domains. The basic idea of MTC-LR is to use individual LR-based classifiers, one appropriate for each task domain, but, in contrast to other support vector machine (SVM)-based proposals, to learn the parameter vectors of all individual classifiers globally with the conjugate gradient method, without the use of the kernel trick, so that the method is easily extended into a scaled version. We theoretically show that the addition of a new term in the cost function of the set of LRs (one that penalizes the diversity among multiple tasks) produces a coupling of multiple tasks that allows MTC-LR to improve the learning performance in an LR way. This finding allows us to integrate MTC-LR with a state-of-the-art fast LR algorithm called the dual coordinate descent method (CDdual) to develop a fast version, MTC-LR-CDdual, for large multitask datasets. The proposed algorithm MTC-LR-CDdual is also theoretically analyzed. Our experimental results on artificial and real datasets indicate the effectiveness of the proposed algorithm MTC-LR-CDdual in classification accuracy, speed, and robustness. PMID:25423663
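
    A rough sketch of the coupled objective, assuming NumPy and plain gradient descent in place of the conjugate-gradient and CDdual solvers used in the paper: each task keeps its own logistic loss, and the term lam * sum over task pairs of ||w_s - w_t||^2 is the diversity penalty that couples them.

      import numpy as np

      def mtc_lr(tasks, lam=0.1, lr=0.1, n_iter=500):
          # tasks: list of (X, y) with labels y in {0, 1}.
          d = tasks[0][0].shape[1]
          T = len(tasks)
          W = np.zeros((T, d))
          sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
          for _ in range(n_iter):
              G = np.zeros_like(W)
              for t, (X, y) in enumerate(tasks):     # per-task logistic gradients
                  G[t] = X.T @ (sigmoid(X @ W[t]) - y) / len(y)
              # Gradient of lam * sum_{s<t} ||w_s - w_t||^2 w.r.t. each row:
              G += 2 * lam * (T * W - W.sum(axis=0))
              W -= lr * G
          return W

      rng = np.random.default_rng(1)
      w_shared = rng.normal(size=5)
      tasks = []
      for _ in range(3):                             # three related tasks
          X = rng.normal(size=(100, 5))
          y = (X @ (w_shared + 0.1 * rng.normal(size=5)) > 0).astype(float)
          tasks.append((X, y))
      W = mtc_lr(tasks)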

  17. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator. PMID:26986320
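
    The contact-geometry equations themselves are not given in the abstract; for orientation, here is the closely related Nosé-Hoover thermostat applied to the same prototypical target, a 1-D harmonic oscillator at temperature T (crude explicit Euler for brevity; the well-known ergodicity problems of Nosé-Hoover on exactly this example are part of the motivation for alternatives such as contact density dynamics):

      import numpy as np

      def nose_hoover_oscillator(T=1.0, Q=1.0, dt=0.001, n_steps=500_000):
          # dq/dt = p, dp/dt = -q - z*p, dz/dt = (p^2 - T)/Q, targeting the
          # Gibbs canonical density exp(-(p^2/2 + q^2/2)/T) (m = k = 1).
          q, p, z = 1.0, 0.0, 0.0
          samples = np.empty(n_steps)
          for i in range(n_steps):
              q += dt * p
              p += dt * (-q - z * p)
              z += dt * (p * p - T) / Q
              samples[i] = q
          return samples

      qs = nose_hoover_oscillator()
      # For a truly ergodic thermostat, Var(q) would approach T.
      print(qs.mean(), qs.var())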

  19. Thermostat algorithm for generating target ensembles

    NASA Astrophysics Data System (ADS)

    Bravetti, A.; Tapias, D.

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.

  20. Call for Papers: Photonics in Switching

    NASA Astrophysics Data System (ADS)

    Wosinska, Lena; Glick, Madeleine

    2006-04-01

    Call for Papers: Photonics in Switching

    Guest Editors:

    Lena Wosinska, Royal Institute of Technology (KTH) / ICT Sweden Madeleine Glick, Intel Research, Cambridge, UK

    Technologies based on DWDM systems allow data transmission with bit rates of Tbit/s on a single fiber. To facilitate this enormous transmission volume, high-capacity and high-speed network nodes become inevitable in the optical network. Wideband switching, WDM switching, optical burst switching (OBS), and optical packet switching (OPS) are promising technologies for harnessing the bandwidth of WDM optical fiber networks in a highly flexible and efficient manner. As a number of key optical component technologies approach maturity, photonics in switching is becoming an increasingly attractive and practical solution for the next generation of optical networks. The scope of this special issue is focused on the technology and architecture of optical switching nodes, including the architectural and algorithmic aspects of high-speed optical networks.

    Scope of Submission

    The scope of the papers includes, but is not limited to, the following topics:
    • WDM node architectures
    • Novel device technologies enabling photonics in switching, such as optical switch fabrics, optical memory, and wavelength conversion
    • Routing protocols
    • WDM switching and routing
    • Quality of service
    • Performance measurement and evaluation
    • Next-generation optical networks: architecture, signaling, and control
    • Traffic measurement and field trials
    • Optical burst and packet switching
    • OBS/OPS node architectures
    • Burst/Packet scheduling and routing algorithms
    • Contention resolution/avoidance strategies
    • Services and applications for OBS/OPS (e.g., grid networks, storage-area networks, etc.)
    • Burst assembly and ingress traffic shaping

  1. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on the Viterbi and stack algorithms using a systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. Issues dealing with composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computation overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift registers, and ripple registers are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
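
    A plain software Viterbi decoder for the standard rate-1/2, constraint-length-3 code (generator polynomials 7 and 5, octal) shows the add-compare-select recursion that a systolic array would parallelize; the systolic mapping itself is not shown.

      def conv_encode(msg, K=3, polys=(0o7, 0o5)):
          # Rate-1/2 convolutional encoder, flushed with K-1 zeros.
          state, out = 0, []
          for b in msg + [0] * (K - 1):
              reg = (b << (K - 1)) | state
              out += [bin(reg & p).count("1") & 1 for p in polys]
              state = reg >> 1
          return out

      def viterbi_decode(bits, K=3, polys=(0o7, 0o5)):
          n_states = 1 << (K - 1)
          def outputs(state, bit):
              reg = (bit << (K - 1)) | state
              return [bin(reg & p).count("1") & 1 for p in polys]
          INF = float("inf")
          metric = [0.0] + [INF] * (n_states - 1)       # start in state 0
          paths = [[] for _ in range(n_states)]
          for i in range(0, len(bits), 2):
              rx = bits[i:i + 2]
              new_metric = [INF] * n_states
              new_paths = [None] * n_states
              for s in range(n_states):
                  if metric[s] == INF:
                      continue
                  for b in (0, 1):                      # add-compare-select
                      ns = ((b << (K - 1)) | s) >> 1
                      dist = sum(o != r for o, r in zip(outputs(s, b), rx))
                      if metric[s] + dist < new_metric[ns]:
                          new_metric[ns] = metric[s] + dist
                          new_paths[ns] = paths[s] + [b]
              metric, paths = new_metric, new_paths
          return paths[min(range(n_states), key=lambda s: metric[s])]

      msg = [1, 0, 1, 1, 0, 0, 1]
      decoded = viterbi_decode(conv_encode(msg))
      print(decoded[:len(msg)] == msg)   # True; trailing bits are the flush zeros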

  2. Fast Lossless Compression of Multispectral-Image Data

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew

    2006-01-01

    An algorithm that effects fast lossless compression of multispectral-image data is based on low-complexity, proven adaptive-filtering algorithms. This algorithm is intended for use in compressing multispectral-image data aboard spacecraft for transmission to Earth stations. Variants of this algorithm could be useful for lossless compression of three-dimensional medical imagery and, perhaps, for compressing image data in general.
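
    A sketch of the adaptive-filtering idea under simplifying assumptions (a scalar LMS predictor; a real lossless coder would predict from neighboring pixels and bands, round the prediction, and entropy-code the integer residuals): the adaptive predictor tracks the signal, so the residuals concentrate near zero and compress well.

      import numpy as np

      def lms_residuals(samples, order=4, mu=0.01):
          # Adaptive linear prediction; returns the (compressible) residuals.
          w = np.zeros(order)
          hist = np.zeros(order)
          res = np.empty(len(samples))
          for i, s in enumerate(samples):
              pred = w @ hist
              res[i] = s - pred
              w += mu * res[i] * hist          # LMS weight update
              hist = np.roll(hist, 1); hist[0] = s
          return res

      t = np.arange(5000)
      signal = np.sin(0.05 * t) + 0.01 * np.random.randn(len(t))
      r = lms_residuals(signal)
      print(signal.var(), r.var())             # residual variance is much smaller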

  3. FAST GYROSYNCHROTRON CODES

    SciTech Connect

    Fleishman, Gregory D.; Kuznetsov, Alexey A.

    2010-10-01

    Radiation produced by charged particles gyrating in a magnetic field is highly significant in the astrophysics context. Persistently increasing resolution of astrophysical observations calls for corresponding three-dimensional modeling of the radiation. However, available exact equations are prohibitively slow in computing a comprehensive table of high-resolution models required for many practical applications. To remedy this situation, we develop approximate gyrosynchrotron (GS) codes capable of quickly calculating the GS emission (in the non-quantum regime) from both isotropic and anisotropic electron distributions in non-relativistic, mildly relativistic, and ultrarelativistic energy domains, applicable throughout a broad range of source parameters including dense or tenuous plasmas and weak or strong magnetic fields. The computation time is reduced by several orders of magnitude compared with the exact GS algorithm. The performance of the new algorithm can be gradually adjusted to the user's needs depending on whether precision or computation speed is to be optimized for a given model. The codes are made available for users as a supplement to this paper.

  4. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies, and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
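
    A toy rendition of the step-size self-adaptation described above (illustrative only, not the paper's EPSA): mutate along coordinate pattern directions with step sigma and contract sigma whenever a generation yields no improvement, which doubles as a stopping signal near a stationary point.

      import random

      def epsa(f, x0, sigma=1.0, n_iter=500, pop=20):
          best, fbest = list(x0), f(x0)
          d = len(x0)
          for _ in range(n_iter):
              improved = False
              for _ in range(pop):
                  i = random.randrange(d)
                  cand = list(best)
                  cand[i] += sigma * random.choice((-1, 1))  # pattern direction
                  fc = f(cand)
                  if fc < fbest:
                      best, fbest, improved = cand, fc, True
              if not improved:
                  sigma *= 0.5          # contract the mutation step size
              if sigma < 1e-8:          # stopping rule near a stationary point
                  break
          return best, fbest

      sphere = lambda x: sum(v * v for v in x)
      print(epsa(sphere, [3.0, -2.0, 1.0]))   # converges toward the origin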

  5. FAST: FAST Analysis of Sequences Toolbox

    PubMed Central

    Lawrence, Travis J.; Kauffman, Kyle T.; Amrine, Katherine C. H.; Carper, Dana L.; Lee, Raymond S.; Becich, Peter J.; Canales, Claudia J.; Ardell, David H.

    2015-01-01

    FAST (FAST Analysis of Sequences Toolbox) provides simple, powerful open source command-line tools to filter, transform, annotate and analyze biological sequence data. Modeled after the GNU (GNU's Not Unix) Textutils such as grep, cut, and tr, FAST tools such as fasgrep, fascut, and fastr make it easy to rapidly prototype expressive bioinformatic workflows in a compact and generic command vocabulary. Compact combinatorial encoding of data workflows with FAST commands can simplify the documentation and reproducibility of bioinformatic protocols, supporting better transparency in biological data science. Interface self-consistency and conformity with conventions of GNU, Matlab, Perl, BioPerl, R, and GenBank help make FAST easy and rewarding to learn. FAST automates numerical, taxonomic, and text-based sorting, selection and transformation of sequence records and alignment sites based on content, index ranges, descriptive tags, annotated features, and in-line calculated analytics, including composition and codon usage. Automated content- and feature-based extraction of sites and support for molecular population genetic statistics make FAST useful for molecular evolutionary analysis. FAST is portable, easy to install and secure thanks to the relative maturity of its Perl and BioPerl foundations, with stable releases posted to CPAN. Development as well as a publicly accessible Cookbook and Wiki are available on the FAST GitHub repository at https://github.com/tlawrence3/FAST. The default data exchange format in FAST is Multi-FastA (specifically, a restriction of BioPerl FastA format). Sanger and Illumina 1.8+ FastQ formatted files are also supported. FAST makes it easier for non-programmer biologists to interactively investigate and control biological data at the speed of thought. PMID:26042145
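
    The tools themselves are Perl/BioPerl command-line programs; purely to illustrate the grep-style record filtering that a tool like fasgrep performs, here is a hypothetical Python analogue over Multi-FastA input (the function and file names are invented for the example).

```python
import re

def fasta_records(handle):
    """Minimal Multi-FastA reader yielding (description, sequence) pairs."""
    header, seq = None, []
    for line in handle:
        line = line.rstrip()
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:], []
        elif line:
            seq.append(line)
    if header is not None:
        yield header, "".join(seq)

def fasgrep_like(pattern, handle):
    """Print only the records whose description matches the regex."""
    rx = re.compile(pattern)
    for header, seq in fasta_records(handle):
        if rx.search(header):
            print(f">{header}\n{seq}")

# e.g.: fasgrep_like(r"tRNA", open("seqs.fa"))
```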

  6. 47 CFR 2.302 - Call signs.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... citations affecting § 2.302, see the List of CFR Sections Affected in the Finding Aids section of this... U.S. call sign allocations listed below, call sign blocks AAA through AEZ and ALA through ALZ...

  7. Potential Paradigms and Possible Problems for CALL.

    ERIC Educational Resources Information Center

    Phillips, Martin

    1987-01-01

    Describes three models of CALL (computer assisted language learning) activity--games, the expert system, and the prosthetic approaches. A case is made for CALL development within a more instrumental view of the role of computers. (Author/CB)

  8. 76 FR 17934 - Infrastructure Protection Data Call

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-31

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HOMELAND SECURITY Infrastructure Protection Data Call AGENCY: National Protection and Programs Directorate, DHS...: Infrastructure Protection Data Call. OMB Number: 1670-NEW. Frequency: On occasion. Affected Public:...

  9. Adaptive call admission control and resource allocation in multi server wireless/cellular network

    NASA Astrophysics Data System (ADS)

    Jain, Madhu; Mittal, Ragini

    2016-11-01

    The ever-increasing demand of subscribers has put pressure on the capacity of wireless networks around the world. To utilize the scarce resources, in the present paper we propose an optimal allocation scheme for an integrated wireless/cellular model with handoff priority and handoff guarantee services. The suggested algorithm optimally allocates the resources in each cell and dynamically adjusts the threshold to control admission. To give priority to handoff calls over new calls, the provision of guard channels and a subrating scheme is taken into consideration. Handoff voice calls may balk and renege from the system while waiting in the buffer. An iterative algorithm is implemented to generate the arrival rate of the handoff calls in each cell. Various performance indices are established in terms of steady-state probabilities. A sensitivity analysis has also been carried out to examine the tractability of the algorithms and to explore the effects of system descriptors on the performance indices.
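
    A minimal sketch of the guard-channel (cutoff-priority) admission rule referred to above, with hypothetical parameter names; the paper's scheme additionally adapts the threshold and subrates channels, which this sketch does not attempt to reproduce.

```python
def admit(call_type, busy, capacity, guard):
    """Cutoff-priority admission: new calls are blocked once occupancy reaches
    capacity - guard, reserving the last `guard` channels for handoff calls."""
    if call_type == "handoff":
        return busy < capacity
    return busy < capacity - guard

# With one free channel left, a handoff call is admitted but a new call is not:
assert admit("handoff", busy=9, capacity=10, guard=2)
assert not admit("new", busy=9, capacity=10, guard=2)
```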

  10. FastME 2.0: A Comprehensive, Accurate, and Fast Distance-Based Phylogeny Inference Program.

    PubMed

    Lefort, Vincent; Desper, Richard; Gascuel, Olivier

    2015-10-01

    FastME provides distance algorithms to infer phylogenies. FastME is based on balanced minimum evolution, which is the very principle of Neighbor Joining (NJ). FastME improves over NJ by performing topological moves using fast, sophisticated algorithms. The first version of FastME only included Nearest Neighbor Interchange. The new 2.0 version also includes Subtree Pruning and Regrafting, while remaining as fast as NJ and providing a number of facilities: Distance estimation for DNA and proteins with various models and options, bootstrapping, and parallel computations. FastME is available using several interfaces: Command-line (to be integrated in pipelines), PHYLIP-like, and a Web server (http://www.atgc-montpellier.fr/fastme/).
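
    For context, the snippet below shows the classical NJ selection criterion that balanced minimum evolution refines (a textbook NJ step, not FastME's code): build the Q matrix from the distance matrix and join the pair that minimizes it.

```python
import numpy as np

def nj_pick_pair(D):
    """One Neighbor-Joining step: form the Q matrix and select the pair to join."""
    n = D.shape[0]
    r = D.sum(axis=1)                            # net divergence of each taxon
    Q = (n - 2) * D - r[:, None] - r[None, :]
    np.fill_diagonal(Q, np.inf)
    return np.unravel_index(np.argmin(Q), Q.shape)

# Classic additive five-taxon example: taxa 0 and 1 are joined first.
D = np.array([[0,  5,  9,  9, 8],
              [5,  0, 10, 10, 9],
              [9, 10,  0,  8, 7],
              [9, 10,  8,  0, 3],
              [8,  9,  7,  3, 0]], dtype=float)
print(nj_pick_pair(D))                           # (0, 1)
```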

  11. Misty Mountain clustering: application to fast unsupervised flow cytometry gating

    PubMed Central

    2010-01-01

    Background: There are many important clustering questions in computational biology for which no satisfactory method exists. Automated clustering algorithms, when applied to large, multidimensional datasets, such as flow cytometry data, prove unsatisfactory in terms of speed, problems with local minima, or cluster shape bias. Model-based approaches are restricted by the assumptions of the fitting functions. Furthermore, model-based clustering requires serial clustering for all cluster numbers within a user-defined interval; the final cluster number is then selected by various criteria. These supervised serial clustering methods are time consuming, and frequently different criteria result in different optimal cluster numbers. Various unsupervised heuristic approaches that have been developed, such as affinity propagation, are too expensive to be applied to datasets on the order of 10^6 points that are often generated by high-throughput experiments. Results: To circumvent these limitations, we developed a new unsupervised density contour clustering algorithm, called Misty Mountain, that is based on percolation theory and efficiently analyzes large data sets. The approach can be envisioned as a progressive top-down removal of clouds covering a data histogram relief map to identify clusters by the appearance of statistically distinct peaks and ridges. This is a parallel clustering method that finds every cluster after analyzing the cross sections of the histogram only once. The overall run time for the composite steps of the algorithm increases linearly with the number of data points. The clustering of 10^6 data points in a 2D data space takes about 15 seconds on a standard laptop PC. Comparison of the performance of this algorithm with other state-of-the-art automated flow cytometry gating methods indicates that Misty Mountain provides substantial improvements in both run time and accuracy of cluster assignment. Conclusions: Misty Mountain is fast, unbiased
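
    To make the "progressive top-down removal of clouds" picture concrete, here is a small sketch of the same idea (my construction, using scipy.ndimage.label in place of the authors' percolation machinery): scan histogram levels from the top down and record a new cluster whenever a connected component appears that contains no previously seen peak.

```python
import numpy as np
from scipy import ndimage

def density_contour_clusters(points, bins=64, min_count=10):
    """Top-down threshold scan of a 2-D histogram: each newly emerging
    connected component of above-threshold bins seeds one cluster."""
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=bins)
    seeds, seen = [], np.zeros_like(H, dtype=bool)
    for level in np.unique(H)[::-1]:             # descend the histogram relief map
        if level < min_count:
            break
        labels, n = ndimage.label(H >= level)
        for k in range(1, n + 1):
            comp = labels == k
            if not (comp & seen).any():          # a brand-new peak has emerged
                seeds.append(tuple(np.argwhere(comp)[0]))
            seen |= comp
    return seeds

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 1, (3000, 2)), rng.normal(6, 1, (3000, 2))])
print(len(density_contour_clusters(pts)))       # two well-separated modes -> 2
```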

  12. 47 CFR 2.302 - Call signs.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL FREQUENCY ALLOCATIONS AND RADIO TREATY MATTERS; GENERAL.... The table which follows indicates the composition and blocks of international call signs available for... U.S. call sign allocations listed below, call sign blocks AAA through AEZ and ALA through ALZ...

  13. Bird calls: their potential for behavioral neurobiology.

    PubMed

    Marler, Peter

    2004-06-01

    Birdsongs are always part of a larger set of sound signals. Every bird uses a repertoire of calls for communication. Calls are shorter and simpler than songs, with a much larger range of functions. Whereas songs are specialized for application in reproduction and territoriality, calls also serve such functions as signaling about food, maintaining social cohesion, contact calls, synchronizing and coordinating flight, and the resolution of aggressive and sexual conflicts. Alarm calls of various kinds are a major component, including distress, mobbing, and hawk alarm calls. Call repertoires vary greatly in size, up to 20 or so distinct call types. Rough estimates for songbirds range between 5 and 10, but some birds, especially galliforms, may have twice as many. Call usage is often sexually dimorphic and commonly varies seasonally and with physiological state. Most calls appear to be innate, but more and more examples of developmental plasticity in bird calls are emerging. Some display well-defined local dialects. A case is made for the value to avian behavioral neurobiology of including bird calls in studies of the psychophysics and sensory physiology of signal perception. They may also help to extend the range of neurobiological investigations of the song system to include circuitry controlling such functionally related behaviors as aggression and reproduction.

  14. 47 CFR 2.302 - Call signs.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... citations affecting § 2.302, see the List of CFR Sections Affected in the Finding Aids section of this... 47 Telecommunication 1 2011-10-01 2011-10-01 false Call signs. 2.302 Section 2.302... RULES AND REGULATIONS Call Signs and Other Forms of Identifying Radio Transmissions § 2.302 Call...

  15. 47 CFR 2.302 - Call signs.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... citations affecting § 2.302, see the List of CFR Sections Affected, which appears in the Finding Aids... 47 Telecommunication 1 2013-10-01 2013-10-01 false Call signs. 2.302 Section 2.302... RULES AND REGULATIONS Call Signs and Other Forms of Identifying Radio Transmissions § 2.302 Call...

  16. 47 CFR 2.302 - Call signs.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... citations affecting § 2.302, see the List of CFR Sections Affected, which appears in the Finding Aids... 47 Telecommunication 1 2014-10-01 2014-10-01 false Call signs. 2.302 Section 2.302... RULES AND REGULATIONS Call Signs and Other Forms of Identifying Radio Transmissions § 2.302 Call...

  1. Calling in Work: Secular or Sacred?

    ERIC Educational Resources Information Center

    Steger, Michael F.; Pickering, N. K.; Shin, J. Y.; Dik, B. J.

    2010-01-01

    Recent scholarship indicates that people who view their work as a calling are more satisfied with their work and their lives. Historically, calling has been regarded as a religious experience, although modern researchers frequently have adopted a more expansive and secular conceptualization of calling, emphasizing meaning and personal fulfillment…

  2. Best practices for evaluating single nucleotide variant calling methods for microbial genomics

    PubMed Central

    Olson, Nathan D.; Lund, Steven P.; Colman, Rebecca E.; Foster, Jeffrey T.; Sahl, Jason W.; Schupp, James M.; Keim, Paul; Morrow, Jayne B.; Salit, Marc L.; Zook, Justin M.

    2015-01-01

    Innovations in sequencing technologies have allowed biologists to make incredible advances in understanding biological systems. As experience grows, researchers increasingly recognize that analyzing the wealth of data provided by these new sequencing platforms requires careful attention to detail for robust results. Thus far, much of the scientific community's focus in bacterial genomics has been on evaluating genome assembly algorithms and rigorously validating assembly program performance. Missing, however, is a focus on critical evaluation of variant callers for these genomes. Variant calling is essential for comparative genomics as it yields insights into nucleotide-level organismal differences. Variant calling is a multistep process with a host of potential error sources that may lead to incorrect variant calls. Identifying and resolving these incorrect calls is critical for bacterial genomics to advance. The goal of this review is to provide guidance on validating algorithms and pipelines used in variant calling for bacterial genomics. First, we will provide an overview of the variant calling procedures and the potential sources of error associated with the methods. We will then identify appropriate datasets for use in evaluating algorithms and describe statistical methods for evaluating algorithm performance. As variant calling moves from basic research to the applied setting, standardized methods for performance evaluation and reporting are required; it is our hope that this review provides the groundwork for the development of these standards. PMID:26217378
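
    At its core, the evaluation the review advocates reduces to comparing a call set against a truth set; a minimal sketch with made-up variant positions:

```python
def evaluate_calls(called, truth):
    """Compare called SNVs against a truth set, both given as
    (position, alt_base) pairs, and report standard performance metrics."""
    called, truth = set(called), set(truth)
    tp = len(called & truth)
    fp = len(called - truth)
    fn = len(truth - called)
    precision = tp / (tp + fp) if called else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return {"TP": tp, "FP": fp, "FN": fn,
            "precision": precision, "recall": recall, "F1": f1}

truth = {(1042, "A"), (5310, "T"), (9001, "G")}
calls = {(1042, "A"), (5310, "C"), (7777, "T")}
print(evaluate_calls(calls, truth))   # 1 TP, 2 FP, 2 FN
```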

  3. CAVITY CONTROL ALGORITHM

    SciTech Connect

    Tomasz Plawski, J. Hovater

    2010-09-01

    A digital low-level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs sufficient feedback gain. In this paper we investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase, and digital SEL (Self-Exciting Loop), using the example of the Jefferson Lab 12 GeV cavity field control system.
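
    A schematic of the I&Q approach only (an illustrative sketch, not the Jefferson Lab implementation): digitally down-convert the sampled cavity signal to a complex baseband field estimate, then close a complex PI loop on it; the sample rate, IF, and gains below are invented for the example.

```python
import numpy as np

def iq_detect(samples, f_if, fs):
    """Digital down-conversion: mix the sampled IF signal to a complex
    baseband field estimate I + jQ (mean taken over whole IF cycles)."""
    t = np.arange(len(samples)) / fs
    return 2.0 * np.mean(samples * np.exp(-2j * np.pi * f_if * t))

def pi_step(setpoint, measured, integrator, kp=0.5, ki=0.05):
    """One complex PI update, acting on I and Q simultaneously."""
    err = setpoint - measured
    integrator += ki * err
    return kp * err + integrator, integrator

fs, f_if = 1e6, 50e3                     # illustrative sample rate and IF
t = np.arange(200) / fs                  # exactly ten IF cycles
cavity = 0.8 * np.cos(2 * np.pi * f_if * t + 0.3)    # low, phase-shifted field
field = iq_detect(cavity, f_if, fs)      # approximately 0.8 * exp(0.3j)
drive, integ = pi_step(1.0 + 0.0j, field, 0.0 + 0.0j)
```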

  4. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    PubMed

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  5. Fast and Efficient Approach in Surface Wave Analysis

    NASA Astrophysics Data System (ADS)

    Kanli, A. I.

    2010-12-01

    A two-step surface wave analysis method is proposed, combining the MASW (Multi-channel Analysis of Surface Waves) and microtremor-based techniques. In this integrated approach, MASW survey data are gathered to obtain shear-wave velocity-depth information down to at least 30 meters, using a special type of active seismic source called the SR-II, or Kangaroo. In the second step, microtremor data, based on surface waves from seismic noise at each site, are used to determine the shear-wave velocity-depth profiles, with the MASW results given as constraints in the microtremor inversion process. The proposed algorithm allows shear-wave velocity-depth information, together with all geotechnical parameters, to be calculated from the near surface down to bedrock depths quickly and efficiently.

  6. A Fast and Efficient Version of the TwO-Moment Aerosol Sectional (TOMAS) Global Aerosol Microphysics Model

    NASA Technical Reports Server (NTRS)

    Lee, Yunha; Adams, P. J.

    2012-01-01

    This study develops more computationally efficient versions of the TwO-Moment Aerosol Sectional (TOMAS) microphysics algorithms, collectively called Fast TOMAS. Several methods for speeding up the algorithm were attempted, but only reducing the number of size sections was adopted. Fast TOMAS models, coupled to the GISS GCM II-prime, require a new coagulation algorithm with less restrictive size resolution assumptions but only minor changes in other processes. Fast TOMAS models have been evaluated in a box model against analytical solutions of coagulation and condensation and in a 3-D model against the original TOMAS (TOMAS-30) model. Condensation and coagulation in the Fast TOMAS models agree well with the analytical solution but show slightly more bias than the TOMAS-30 box model. In the 3-D model, errors resulting from decreased size resolution in each process (i.e., emissions, cloud processing/wet deposition, microphysics) are quantified in a series of model sensitivity simulations. Errors resulting from lower size resolution in condensation and coagulation, defined as the microphysics error, affect number and mass concentrations by only a few percent. The microphysics error in CN70/CN100 (number concentrations of particles larger than 70/100 nm in diameter), proxies for cloud condensation nuclei, ranges from -5 to 5% in most regions. The largest errors are associated with decreasing the size resolution in the cloud processing/wet deposition calculations, defined as the cloud-processing error, and range from -20 to 15% in most regions for CN70/CN100 concentrations. Overall, the Fast TOMAS models increase the computational speed by a factor of 2 to 3 with only small numerical errors stemming from the condensation and coagulation calculations when compared to TOMAS-30. The faster versions of the TOMAS model allow for the longer, multi-year simulations required to assess aerosol effects on cloud lifetime and precipitation.

  7. Bonobos Extract Meaning from Call Sequences

    PubMed Central

    Clay, Zanna; Zuberbühler, Klaus

    2011-01-01

    Studies on language-trained bonobos have revealed their remarkable abilities in representational and communication tasks. Surprisingly, however, corresponding research into their natural communication has largely been neglected. We address this issue with a first playback study on the natural vocal behaviour of bonobos. Bonobos produce five acoustically distinct call types when finding food, which they regularly mix together into longer call sequences. We found that individual call types were relatively poor indicators of food quality, while context specificity was much greater at the call sequence level. We therefore investigated whether receivers could extract meaning about the quality of food encountered by the caller by integrating across different call sequences. We first trained four captive individuals to find two types of foods, kiwi (preferred) and apples (less preferred) at two different locations. We then conducted naturalistic playback experiments during which we broadcasted sequences of four calls, originally produced by a familiar individual responding to either kiwi or apples. All sequences contained the same number of calls but varied in the composition of call types. Following playbacks, we found that subjects devoted significantly more search effort to the field indicated by the call sequence. Rather than attending to individual calls, bonobos attended to the entire sequences to make inferences about the food encountered by a caller. These results provide the first empirical evidence that bonobos are able to extract information about external events by attending to vocal sequences of other individuals and highlight the importance of call combinations in their natural communication system. PMID:21556149

  8. Integrated literature review of postdischarge telephone calls.

    PubMed

    Bahr, Sarah J; Solverson, Susan; Schlidt, Andrea; Hack, Deborah; Smith, Jeri Lynn; Ryan, Polly

    2014-01-01

    This systematic review of the literature assessed the impact of a postdischarge telephone call on patient outcomes. Nineteen articles met inclusion criteria. Data were extracted and an evidence table was developed. The content, timing, and professional placing the call varied across studies. Study strength was low and findings were inconsistent. Measures varied across studies, many sample sizes were small, and studies differed by patient population. Evidence is inconclusive for use of phone calls to decrease readmission, emergency department use, patient satisfaction, scheduled and unscheduled follow-up, and physical and emotional well-being. Among these studies, there was limited support for medication-focused calls by pharmacists but no support for decreasing readmission. Health care providers benefited from feedback but did not need to place the call to realize this benefit. Inpatient nurses were unable to manage the volume of calls. There was no standardized approach to the call, training, or documentation requirements.

  9. Social Calls Predict Foraging Success in Big Brown Bats

    PubMed Central

    Wright, Genevieve Spanjer; Chiu, Chen; Xian, Wei; Wilkinson, Gerald S.; Moss, Cynthia F.

    2014-01-01

    Animals foraging in the dark are simultaneously engaged in prey pursuit, collision avoidance and interactions with conspecifics, making efficient, non-visual communication essential. A variety of birds and mammals emit food-associated calls that inform, attract, or repel conspecifics [e.g., 1]. Big brown bats (Eptesicus fuscus) are insectivorous aerial hawkers that may forage near conspecifics and are known to emit social calls [e.g., 2, 3, 4, 5]. Calls recorded in a foraging setting might attract [e.g., 6] or repel conspecifics [7] and could denote territoriality or food-claiming. Here, we provide evidence that a social call emitted only by male bats, exclusively in a foraging context [5], the “frequency-modulated bout” (FMB), is used to claim food and is individually distinct. Bats were studied individually and in pairs in a flight room equipped with synchronized high-speed stereo video and audio recording equipment, while sex and experience with a foraging task were experimentally manipulated. Male bats emitting the FMB showed greater success in capturing prey. Following FMB emission, inter-bat distance, diverging flight, and the other bat’s distance to the prey each increased. These findings highlight the importance and utility of vocal communication for a nocturnal animal mediating interactions with conspecifics in a fast-paced foraging setting. PMID:24684936

  10. Social calls predict foraging success in big brown bats.

    PubMed

    Wright, Genevieve Spanjer; Chiu, Chen; Xian, Wei; Wilkinson, Gerald S; Moss, Cynthia F

    2014-04-14

    Animals foraging in the dark are engaged simultaneously in prey pursuit, collision avoidance, and interactions with conspecifics, making efficient nonvisual communication essential. A variety of birds and mammals emit food-associated calls that inform, attract, or repel conspecifics (e.g.,). Big brown bats (Eptesicus fuscus) are insectivorous aerial hawkers that may forage near conspecifics and are known to emit social calls (e.g.,). Calls recorded in a foraging setting might attract (e.g.,) or repel conspecifics and could denote territoriality or food claiming. Here, we provide evidence that the "frequency-modulated bout" (FMB), a social call emitted only by male bats (exclusively in a foraging context), is used to claim food and is individually distinct. Bats were studied individually and in pairs in a flight room equipped with synchronized high-speed stereo video and audio recording equipment while sex and experience with a foraging task were experimentally manipulated. Male bats emitting the FMB showed greater success in capturing prey. Following FMB emission, interbat distance, diverging flight, and the other bat's distance to the prey each increased. These findings highlight the importance and utility of vocal communication for a nocturnal animal mediating interactions with conspecifics in a fast-paced foraging setting. PMID:24684936

  11. Linear Bregman algorithm implemented in parallel GPU

    NASA Astrophysics Data System (ADS)

    Li, Pengyan; Ke, Jue; Sui, Dong; Wei, Ping

    2015-08-01

    At present, most compressed sensing (CS) algorithms converge slowly and are thus difficult to run on a PC. To deal with this issue, we use a parallel GPU to implement a broadly used compressed sensing algorithm, the linear Bregman algorithm. The linear iterative Bregman algorithm is a reconstruction algorithm proposed by Osher and Cai. Compared with other CS reconstruction algorithms, the linear Bregman algorithm involves only vector and matrix multiplication and a thresholding operation, and is simpler and more efficient to program. We use C as the development language and adopt CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman algorithm with a traditional CPU-based implementation of the Bregman algorithm. In addition, we also compare the parallel Bregman algorithm with other CS reconstruction algorithms, such as the OMP and TwIST algorithms. Compared with these two algorithms, the results show that the parallel Bregman algorithm needs less time and is thus more convenient for real-time object reconstruction, which is important given the fast-growing demand for information technology.
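
    The abstract's point that the method needs only matrix-vector products and a threshold is easy to see in a serial NumPy sketch of the linearized Bregman iteration (step size, mu, and iteration count here are illustrative; a GPU port parallelizes exactly these operations):

```python
import numpy as np

def shrink(v, mu):
    """Soft-thresholding (the only nonlinearity in the iteration)."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, f, mu=5.0, iters=3000):
    """Linearized Bregman iteration for sparse recovery (min ||u||_1 s.t. Au = f):
    one mat-vec, one transpose mat-vec, and one shrink per step."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2      # safe step from the spectral norm
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        u = delta * shrink(v, mu)
        v += A.T @ (f - A @ u)
    return u

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x = np.zeros(200)
x[[5, 50, 120]] = [2.0, -1.5, 3.0]
u = linearized_bregman(A, A @ x)
print(np.round(u[[5, 50, 120]], 2))              # approximately recovers 2, -1.5, 3
```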

  12. Object-Oriented Fast Multipole Simulation: Magnetic Colloids

    NASA Astrophysics Data System (ADS)

    Visscher, Pieter; Günal, Yüksel

    1997-08-01

    In simulating a system of N particles, if the interaction is long-ranged all pair interactions must be calculated, requiring CPU time of order N^2. Recently developed "fast multipole" methods (FMM) can reduce this time to order N, at the cost of considerable programming complexity. We have developed an object-oriented approach which uses similar ideas but is conceptually much simpler. The system is represented by a hierarchical tree whose root is the entire system and whose lowest nodes are the particles. The entire calculation of the particle interactions consists of a single call to a recursive function CalculateInteractions(A,B) with A=B=root, which uses a simple opening-angle criterion to choose between multipole expansion and calling itself (subdividing A and B). The resulting algorithm is essentially equivalent to the FMM, but the choice of when to subdivide (which is laboriously hard-wired in FMM) is made automatically. We will discuss the implementation of periodic BCs and the application of the method to continuum systems (cylindrical magnetic particles).
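
    A one-dimensional, monopole-only toy of the recursive scheme described above (my sketch, not the authors' code): a single function, first called as calculate_interactions(root, root), either accepts a far-field approximation via the opening-angle test or subdivides the larger cell and recurses.

```python
class Node:
    """Tree cell: aggregate charge q, centre of charge x, extent size, children."""
    def __init__(self, q, x, size, children=()):
        self.q, self.x, self.size, self.children = q, x, size, children

def leaves(node):
    return [node] if not node.children else [l for c in node.children for l in leaves(c)]

def build(particles, lo, hi):
    """Binary spatial tree over 1-D particles given as (charge, position) pairs."""
    if len(particles) == 1:
        q, x = particles[0]
        return Node(q, x, 0.0)
    mid = 0.5 * (lo + hi)
    halves = [[p for p in particles if p[1] < mid], [p for p in particles if p[1] >= mid]]
    kids = [build(s, l, h) for s, (l, h) in zip(halves, ((lo, mid), (mid, hi))) if s]
    q = sum(c.q for c in kids)
    x = sum(c.q * c.x for c in kids) / q
    return Node(q, x, hi - lo, kids)

potential = {}

def calculate_interactions(A, B, theta=0.5):
    """Single recursive entry point, as in the abstract."""
    if not A.children and not B.children:
        if A is not B:                           # exact leaf-leaf term
            potential[A] = potential.get(A, 0.0) + B.q / abs(A.x - B.x)
        return
    r = abs(A.x - B.x)
    if r > 0 and (A.size + B.size) / r < theta:  # opening-angle criterion
        for leaf in leaves(A):                   # accept the monopole approximation
            potential[leaf] = potential.get(leaf, 0.0) + B.q / abs(leaf.x - B.x)
        return
    if A.children and (A.size >= B.size or not B.children):
        for a in A.children:                     # subdivide the larger cell
            calculate_interactions(a, B, theta)
    else:
        for b in B.children:
            calculate_interactions(A, b, theta)

root = build([(1.0, 0.1), (1.0, 0.35), (1.0, 0.6), (1.0, 0.9)], 0.0, 1.0)
calculate_interactions(root, root)               # fills `potential` for every leaf
```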

  13. Improved algorithm for calculating the Chandrasekhar function

    NASA Astrophysics Data System (ADS)

    Jablonski, A.

    2013-02-01

    algorithms by selecting ranges of the argument omega in which the performance is the fastest. Reasons for the new version: Some of the theoretical models describing electron transport in condensed matter need a source of the Chandrasekhar H function values with an accuracy of at least 10 decimal places. Additionally, calculations of this function should be as fast as possible since frequent calls to a subroutine providing this function are made (e.g., numerical evaluation of a double integral with a complicated integrand containing the H function). Both conditions were satisfied in the algorithm previously published [1]. However, it has been found that a proper selection of the quadrature in an integral representation of the Chandrasekhar function may considerably decrease the running time. By suitable selection of the number of abscissas in Gauss-Legendre quadrature, the execution time was decreased by a factor of more than 20. Simultaneously, the accuracy of results has not been affected. Summary of revisions: (1) As in previous work [1], two integral representations of the Chandrasekhar function, H(x,omega), were considered: the expression published by Dudarev and Whelan [2] and the expression published by Davidović et al. [3]. The algorithms implementing these representations were designated A and B, respectively. All integrals in these implementations were previously calculated using Romberg quadrature. It has been found, however, that the use of Gauss-Legendre quadrature considerably improved the performance of both algorithms. Two conditions have to be satisfied. (i) The number of abscissas, N, has to be rather large, and (ii) the abscissas and corresponding weights should be determined with accuracy as high as possible. The abscissas and weights are available for N=16, 20, 24, 32, 40, 48, 64, 80, and 96 with accuracy of 20 decimal places [4], and all these values were introduced into a new procedure GAUSS replacing procedure ROMBERG. Due to the fact that the
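
    For orientation, here is a compact version of the kind of Gauss-Legendre evaluation discussed above (a standard fixed-point iteration for the isotropic-scattering H function, written from the well-known identity 1/H(mu) = sqrt(1 - omega) + (omega/2) * integral_0^1 t H(t)/(mu + t) dt; it is not the authors' Fortran and makes no claim to their 10-decimal performance tuning).

```python
import numpy as np

def chandrasekhar_H(mu, omega, n=64, tol=1e-12):
    """Chandrasekhar H function for single-scattering albedo omega < 1,
    iterated on Gauss-Legendre nodes mapped to [0, 1]."""
    x, w = np.polynomial.legendre.leggauss(n)
    t, wt = 0.5 * (x + 1.0), 0.5 * w
    H = np.ones(n)
    while True:
        integral = ((wt * t * H) / (t[:, None] + t[None, :])).sum(axis=1)
        Hn = 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * integral)
        if np.max(np.abs(Hn - H)) < tol:
            break
        H = Hn
    # evaluate at the requested argument using the converged nodal values
    return 1.0 / (np.sqrt(1.0 - omega) + 0.5 * omega * np.sum(wt * t * H / (mu + t)))

print(chandrasekhar_H(1.0, 0.9))   # check against published H-function tables
```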

  14. Problem solving with genetic algorithms and Splicer

    NASA Technical Reports Server (NTRS)

    Bayer, Steven E.; Wang, Lui

    1991-01-01

    Genetic algorithms are highly parallel, adaptive search procedures (i.e., problem-solving methods) loosely based on the processes of population genetics and Darwinian survival of the fittest. Genetic algorithms have proven useful in domains where other optimization techniques perform poorly. The main purpose of the paper is to discuss a NASA-sponsored software development project to develop a general-purpose tool for using genetic algorithms. The tool, called Splicer, can be used to solve a wide variety of optimization problems and is currently available from NASA and COSMIC. This discussion is preceded by an introduction to basic genetic algorithm concepts and a discussion of genetic algorithm applications.
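
    For readers new to the concepts, a bare-bones generational GA (a generic sketch, not Splicer itself): tournament selection, one-point crossover, and bit-flip mutation, here maximizing the number of one-bits.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                      p_cross=0.8, p_mut=0.02):
    """Generational GA with binary tournament selection, one-point
    crossover, and per-bit mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = random.sample(pop, 2)          # binary tournament
            return max(a, b, key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if random.random() < p_cross:         # one-point crossover
                cut = random.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                nxt.append([b ^ (random.random() < p_mut) for b in child])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(sum)                     # "OneMax": fitness = number of ones
print(sum(best), best)
```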

  15. Predictive model for determining the quality of a call

    NASA Astrophysics Data System (ADS)

    Voznak, M.; Rozhon, J.; Partila, P.; Safarik, J.; Mikulec, M.; Mehic, M.

    2014-05-01

    In this paper, a predictive model for speech quality estimation is described. The model allows its user to obtain information about speech quality in VoIP networks without performing an actual call and the consequent time-consuming evaluation of the recorded sound files. This greatly increases the usability of speech quality measurement, especially in highly loaded networks where processing all calls is difficult or even impossible. The model achieves results highly conformant with the PESQ algorithm based only on network-state parameters that are easily obtainable with commonly used software tools. Experiments were carried out to investigate whether different languages (English, Czech) have an effect on perceived voice quality under the same network conditions, and the language factor was incorporated directly into the model.
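
    As an illustration of mapping easily measured network-state parameters to a speech-quality score, here is a simplified E-model-style calculation in the spirit of the widely used Cole-Rosenbluth approximation; the coefficients come from that generic simplification, not from the trained model of this paper.

```python
import math

def mos_estimate(loss_frac, jitter_ms, delay_ms):
    """Rough MOS estimate from packet loss (fraction), jitter, and delay."""
    d = delay_ms + 2.0 * jitter_ms                   # fold jitter into effective delay
    i_d = 0.024 * d + 0.11 * max(d - 177.3, 0.0)     # delay impairment
    i_e = 30.0 * math.log(1.0 + 15.0 * loss_frac)    # loss impairment (G.711-like)
    r = max(0.0, min(100.0, 94.2 - i_d - i_e))       # transmission rating factor
    return 1.0 + 0.035 * r + 7e-6 * r * (r - 60.0) * (100.0 - r)

print(round(mos_estimate(0.01, 10.0, 80.0), 2))      # mild impairments: MOS around 4
```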

  16. Advancing-Front Algorithm For Delaunay Triangulation

    NASA Technical Reports Server (NTRS)

    Merriam, Marshal L.

    1993-01-01

    An efficient algorithm performs Delaunay triangulation to generate unstructured grids for use in computing two-dimensional flows. Once the grid is generated, one can optionally call an additional subalgorithm that removes diagonal lines from nearly rectangular quadrilateral cells. The resulting approximately rectangular grid reduces the cost per iteration of the flow-computing algorithm.
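
    The report's own advancing-front implementation is not reproduced here; as a readily available stand-in, scipy.spatial.Delaunay produces the same kind of unstructured triangular grid from a point cloud:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(42)
pts = rng.random((100, 2))              # random nodes in the unit square
tri = Delaunay(pts)
print(tri.simplices.shape)              # (n_triangles, 3): vertex indices per cell
```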

  17. Adaptive line enhancers for fast acquisition

    NASA Technical Reports Server (NTRS)

    Yeh, H.-G.; Nguyen, T. M.

    1994-01-01

    Three adaptive line enhancer (ALE) algorithms and architectures - namely, conventional ALE, ALE with double filtering, and ALE with coherent accumulation - are investigated for fast carrier acquisition in the time domain. The advantages of these algorithms are their simplicity, flexibility, robustness, and applicability to general situations including the Earth-to-space uplink carrier acquisition and tracking of the spacecraft. In the acquisition mode, these algorithms act as bandpass filters; hence, the carrier-to-noise ratio (CNR) is improved for fast acquisition. In the tracking mode, these algorithms simply act as lowpass filters to improve signal-to-noise ratio; hence, better tracking performance is obtained. It is not necessary to have a priori knowledge of the received signal parameters, such as CNR, Doppler, and carrier sweeping rate. The implementation of these algorithms is in the time domain (as opposed to the frequency domain, such as the fast Fourier transform (FFT)). The carrier frequency estimation can be updated in real time at each time sample (as opposed to the batch processing of the FFT). The carrier frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary, and colored.
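
    A sketch of the conventional ALE from the list above (generic LMS form with illustrative parameters): the adaptive filter predicts the current sample from delayed samples, so the narrowband carrier, which stays correlated across the delay, passes to the output while broadband noise is rejected.

```python
import numpy as np

def adaptive_line_enhancer(x, order=32, delay=8, mu=0.002):
    """Conventional ALE: LMS prediction of x[n] from samples delayed by `delay`."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    for n in range(order + delay, len(x)):
        ref = x[n - delay - order:n - delay][::-1]   # delayed reference vector
        y[n] = w @ ref                               # enhanced (narrowband) output
        w += mu * (x[n] - y[n]) * ref                # LMS weight update
    return y

fs = 8000.0
n = np.arange(4000)
carrier = np.cos(2 * np.pi * 700.0 * n / fs)
rng = np.random.default_rng(0)
enhanced = adaptive_line_enhancer(carrier + rng.normal(0.0, 1.0, n.size))
```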

  18. Classification and automatic transcription of primate calls.

    PubMed

    Versteegh, Maarten; Kuhn, Jeremy; Synnaeve, Gabriel; Ravaux, Lucie; Chemla, Emmanuel; Cäsar, Cristiane; Fuller, James; Murphy, Derek; Schel, Anne; Dunbar, Ewan

    2016-07-01

    This paper reports on an automated and openly available tool for automatic acoustic analysis and transcription of primate calls, which takes raw field recordings and outputs call labels time-aligned with the audio. The system's output predicts a majority of the start times of calls accurately within 200 milliseconds. The tools do not require any manual acoustic analysis or selection of spectral features by the researcher.
