Science.gov

Sample records for algorithm called fast

  1. Data-adaptive algorithms for calling alleles in repeat polymorphisms.

    PubMed

    Stoughton, R; Bumgarner, R; Frederick, W J; McIndoe, R A

    1997-01-01

    Data-adaptive algorithms are presented for separating overlapping signatures of heterozygotic allele pairs in electrophoresis data. Application is demonstrated for human microsatellite CA-repeat polymorphisms in LiCor 4000 and ABI 373 data. The algorithms allow overlapping alleles to be called correctly in almost every case where a trained observer could do so, and provide a fast, automated, and objective alternative to human reading of the gels. The algorithm also supplies an indication of confidence level, which can be used to flag marginal cases for verification by eye, or as input to later stages of statistical analysis. PMID:9059812

  2. Automated DNA Base Pair Calling Algorithm

    1999-07-07

    The procedure solves the problem of calling the DNA base pair sequence from two channel electropherogram separations in an automated fashion. The core of the program involves a peak picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal to noise sequence patterns, frequency vs ratio of the two channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.
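
    As an illustration of the derivative-based peak picking idea, here is a minimal numpy sketch that flags local maxima where the first derivative crosses zero and the curvature is negative; the function name and the min_height parameter are illustrative, not from the record.

        import numpy as np

        def pick_peaks(signal, min_height=0.0):
            # First and second derivatives of the electropherogram trace.
            d1 = np.gradient(signal)
            d2 = np.gradient(d1)
            # A +/- sign change in d1 marks a candidate local maximum.
            crossings = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0]
            # Keep candidates with negative curvature and sufficient height.
            return [i for i in crossings if d2[i] < 0 and signal[i] >= min_height]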

  3. A fast meteor detection algorithm

    NASA Astrophysics Data System (ADS)

    Gural, P.

    2016-01-01

    A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
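
    A minimal numpy sketch of the MTP idea, assuming the frame block is available as a 3-D array (the function and variable names are illustrative): each pixel keeps its brightest value over the block, plus the frame index at which it occurred.

        import numpy as np

        def mtp_compress(frames):
            # frames: array of shape (num_frames, rows, cols)
            frames = np.asarray(frames)
            maxpixel = frames.max(axis=0)     # brightest value seen per pixel
            maxframe = frames.argmax(axis=0)  # frame index of that value (timing)
            return maxpixel, maxframe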

  4. A hybrid fast Hankel transform algorithm for electromagnetic modeling

    USGS Publications Warehouse

    Anderson, W.L.

    1989-01-01

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram (called HYBFHT) written in standard Fortran-77 provides a simple user interface to call either subalgorithm. The hybrid approach is an attempt to combine the best features of the two subalgorithms to minimize the user's coding requirements and to provide fast execution and good accuracy for a large class of electromagnetic problems involving various related Hankel transform sets with multiple arguments. Special cases of Hankel transforms of double-order and double-argument are discussed, where use of HYBFHT is shown to be advantageous for oscillatory kernel functions.

  5. Fast decoding algorithms for coded aperture systems

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

    2014-08-01

    Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques.

  6. HYBRID FAST HANKEL TRANSFORM ALGORITHM FOR ELECTROMAGNETIC MODELING

    EPA Science Inventory

    A hybrid fast Hankel transform algorithm has been developed that uses several complementary features of two existing algorithms: Anderson's digital filtering or fast Hankel transform (FHT) algorithm and Chave's quadrature and continued fraction algorithm. A hybrid FHT subprogram ...

  7. A Fast Implementation of the ISOCLUS Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2003-01-01

    Unsupervised clustering is a fundamental building block in numerous image processing applications. One of the most popular and widely used clustering schemes for remote sensing applications is the ISOCLUS algorithm, which is based on the ISODATA method. The algorithm is given a set of n data points in d-dimensional space, an integer k indicating the initial number of clusters, and a number of additional parameters. The general goal is to compute the coordinates of a set of cluster centers in d-space, such that those centers minimize the mean squared distance from each data point to its nearest center. This clustering algorithm is similar to another well-known clustering method, called k-means. One significant feature of ISOCLUS over k-means is that the actual number of clusters reported might be fewer or more than the number supplied as part of the input. The algorithm uses different heuristics to determine whether to merge or split clusters. As ISOCLUS can run very slowly, particularly on large data sets, there has been a growing interest in the remote sensing community in computing it efficiently. We have developed a faster implementation of the ISOCLUS algorithm. Our improvement is based on a recent acceleration to the k-means algorithm of Kanungo et al. They showed that, by using a kd-tree data structure for storing the data, it is possible to reduce the running time of k-means. We have adapted this method for the ISOCLUS algorithm, and we show that it is possible to achieve essentially the same results as ISOCLUS on large data sets, but with significantly lower running times. This adaptation involves computing a number of cluster statistics that are needed for ISOCLUS but not for k-means. Both the k-means and ISOCLUS algorithms are based on iterative schemes, in which nearest neighbors are calculated until some convergence criterion is satisfied. Each iteration requires that the nearest center for each data point be computed. Naively, this requires O
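
    The speedup rests on replacing the naive nearest-center scan with a kd-tree. The sketch below is a simplified illustration using scipy (it builds the tree over the centers rather than implementing the Kanungo et al. filtering scheme over the data points); names are illustrative.

        import numpy as np
        from scipy.spatial import cKDTree

        def assign_to_centers(points, centers):
            # One assignment step shared by k-means and ISOCLUS: a kd-tree
            # over the k centers replaces scanning all centers per point.
            tree = cKDTree(centers)
            dist, labels = tree.query(points)      # nearest center per point
            return labels, float((dist ** 2).mean())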

  8. Fast algorithms for transport models

    SciTech Connect

    Manteuffel, T.A.

    1992-12-01

    The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory (LANL).

  9. Fast ordering algorithm for exact histogram specification.

    PubMed

    Nikolova, Mila; Steidl, Gabriele

    2014-12-01

    This paper provides a fast algorithm to order in a meaningful, strict way the integer gray values in digital (quantized) images. It can be used in any exact histogram specification-based application. Our algorithm relies on an ordering procedure based on a specialized variational approach. This variational method was shown to be superior to all other state-of-the-art ordering algorithms in terms of faithful total strict ordering, but not in speed. Indeed, the relevant functionals are in general difficult to minimize because their gradient is nearly flat over vast regions. In this paper, we propose a simple and fast fixed point algorithm to minimize these functionals. The fast convergence of our algorithm results from known analytical properties of the model. Our algorithm is equivalent to an iterative nonlinear filtering. Furthermore, we show that a particular form of the variational model gives rise to much faster convergence than other alternative forms. We demonstrate that only a few iterations of this filter yield almost the same pixel ordering as the minimizer. Thus, we apply only a few iteration steps to obtain images whose pixels can be ordered in a strict and faithful way. Numerical experiments confirm that our algorithm outperforms by far its main competitors. PMID:25347881

  10. Fast Fourier Transform algorithm design and tradeoffs

    NASA Technical Reports Server (NTRS)

    Kamin, Ray A., III; Adams, George B., III

    1988-01-01

    The Fast Fourier Transform (FFT) is a mainstay of certain numerical techniques for solving fluid dynamics problems. The Connection Machine CM-2 is the target for an investigation into the design of multidimensional Single Instruction Stream/Multiple Data (SIMD) parallel FFT algorithms for high performance. Critical algorithm design issues are discussed, necessary machine performance measurements are identified and made, and the performance of the developed FFT programs is measured. The Fast Fourier Transform programs are compared to the currently best Cray-2 FFT program.
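
    For reference, the divide-and-conquer recursion that gives the FFT its O(n log n) cost, as a textbook radix-2 sketch in Python (not the SIMD decomposition studied in the record):

        import numpy as np

        def fft_radix2(x):
            # Cooley-Tukey radix-2 FFT; len(x) must be a power of two.
            x = np.asarray(x, dtype=complex)
            n = len(x)
            if n == 1:
                return x
            even = fft_radix2(x[0::2])         # DFT of even-indexed samples
            odd = fft_radix2(x[1::2])          # DFT of odd-indexed samples
            twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
            return np.concatenate([even + twiddle * odd,
                                   even - twiddle * odd])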

  11. Call admission algorithms in multiservice and multiclass ATM network

    NASA Astrophysics Data System (ADS)

    Hamma, Salima; Hebuterne, Gerard

    2004-09-01

    The introduction of new ATM service categories increases the benefits of ATM, making the technology suitable for a virtually unlimited range of applications. Connection Admission Control (CAC) is defined as the set of actions taken by the network during the call (virtual connection) set-up phase, or during the call re-negotiation phase, to determine whether a connection request can be accepted or rejected. Network resources (port bandwidth and buffer space) are reserved for the incoming connection at each switching element traversed, if so required by the service category. The major focus of this paper is call admission in the context of multi-service, multi-class ATM networks. Several strategies suggesting rules on bandwidth sharing are found in the literature. This study investigates in particular the Complete Sharing approach. Two service categories are concerned, namely, Constant Bit Rate/Deterministic Bit Rate (CBR/DBR) and Variable Bit Rate/Statistical Bit Rate (VBR/SBR). Each service category is represented by a set of call classes corresponding to different bandwidth needs. We propose two algorithms to solve the underlying Markovian system: product-form and recursive solutions. A performance study based on the latter algorithm is implemented. We analyze the results of this sharing strategy and establish the limits within which its use is beneficial.

  12. MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    Tensors (also known as multidimensional arrays or N-way arrays) are used in a variety of applications ranging from chemometrics to psychometrics. We describe four MATLAB classes for tensor manipulations that can be used for fast algorithm prototyping. The tensor class extends the functionality of MATLAB's multidimensional arrays by supporting additional operations such as tensor multiplication. The tensor_as_matrix class supports the 'matricization' of a tensor, i.e., the conversion of a tensor to a matrix (and vice versa), a commonly used operation in many algorithms. Two additional classes represent tensors stored in decomposed formats: cp_tensor and tucker_tensor. We describe all of these classes and then demonstrate their use by showing how to implement several tensor algorithms that have appeared in the literature.
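
    To make the 'matricization' operation concrete, here is a numpy sketch of mode-n unfolding and folding (column-ordering conventions vary between papers; this one simply moves mode n to the front):

        import numpy as np

        def unfold(tensor, mode):
            # Arrange the mode-n fibers of the tensor as matrix columns,
            # e.g. a (3, 4, 5) tensor unfolds along mode 0 to shape (3, 20).
            return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

        def fold(matrix, mode, shape):
            # Inverse of unfold: restore the original tensor shape.
            rest = [s for i, s in enumerate(shape) if i != mode]
            return np.moveaxis(matrix.reshape([shape[mode]] + rest), 0, mode)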

  13. Fast Algorithms for Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Barrett, Anthony; Vatan, Farrokh; Mackey, Ryan

    2005-01-01

    Two improved new methods for automated diagnosis of complex engineering systems involve the use of novel algorithms that are more efficient than prior algorithms used for the same purpose. Both the recently developed algorithms and the prior algorithms in question are instances of model-based diagnosis, which is based on exploring the logical inconsistency between an observation and a description of a system to be diagnosed. As engineering systems grow more complex and increasingly autonomous in their functions, the need for automated diagnosis increases concomitantly. In model-based diagnosis, the function of each component and the interconnections among all the components of the system to be diagnosed are represented as a logical system, called the system description (SD). Hence, the expected behavior of the system is the set of logical consequences of the SD. Faulty components lead to inconsistency between the observed behaviors of the system and the SD. The task of finding the faulty components (diagnosis) reduces to finding the components, the abnormalities of which could explain all the inconsistencies. Of course, the meaningful solution should be a minimal set of faulty components (called a minimal diagnosis), because the trivial solution, in which all components are assumed to be faulty, always explains all inconsistencies. Although the prior algorithms in question implement powerful methods of diagnosis, they are not practical because they essentially require exhaustive searches among all possible combinations of faulty components and therefore entail amounts of computation that grow exponentially with the number of components of the system.

  14. Fast computation algorithms for speckle pattern simulation

    SciTech Connect

    Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru

    2013-11-13

    We present our development of a series of efficient computation algorithms, generally usable to calculate light diffraction and particularly for speckle pattern simulation. We use mainly the scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They are able to evaluate the diffraction formula much faster than direct computation, and we have circumvented the restrictions regarding the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be shifted off-axis.
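
    A minimal sketch of the FFT-convolution idea for the Fresnel case, assuming equal-size input and output grids with pitch dx (so without the tilt, shift, and size generalizations the authors develop); the constant phase factor exp(ikz) is dropped:

        import numpy as np

        def fresnel_propagate(field, wavelength, z, dx):
            # Multiply the field's spectrum by the Fresnel transfer function
            # and transform back: a convolution evaluated via the FFT.
            n = field.shape[0]                      # square complex array
            f = np.fft.fftfreq(n, d=dx)
            fx, fy = np.meshgrid(f, f)
            h = np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
            return np.fft.ifft2(np.fft.fft2(field) * h)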

  15. A fast DFT algorithm using complex integer transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    Winograd's algorithm for computing the discrete Fourier transform is extended considerably for certain large transform lengths. This is accomplished by performing the cyclic convolution, required by Winograd's method, by a fast transform over certain complex integer fields. This algorithm requires fewer multiplications than either the standard fast Fourier transform or Winograd's more conventional algorithms.

  16. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    PubMed

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method, designing and realizing a fast Euclidean transform algorithm, and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the whole performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406

  17. An enhanced fast scanning algorithm for image segmentation

    NASA Astrophysics Data System (ADS)

    Ismael, Ahmed Naser; Yusof, Yuhanis binti

    2015-12-01

    Segmentation is an essential and important process that separates an image into regions that have similar characteristics or features. This transforms the image for better image analysis and evaluation. An important benefit of segmentation is the identification of regions of interest in a particular image. Various algorithms have been proposed for image segmentation, and these include the Fast Scanning algorithm, which has been employed on food, sport and medical images. It scans all pixels in the image and clusters each pixel according to the upper and left neighbor pixels. The clustering process in the Fast Scanning algorithm is performed by merging pixels with similar neighbors based on an identified threshold. Such an approach leads to weak reliability and poor shape matching of the produced segments. This paper proposes an adaptive threshold function to be used in the clustering process of the Fast Scanning algorithm. This function uses the gray values of the image's pixels and their variance; pixel levels above the threshold are converted into intensity values between 0 and 1, while other values are set to an intensity of zero. The proposed enhanced Fast Scanning algorithm is evaluated on images of public and private transportation in Iraq. An evaluation is then made by comparing the images produced by the proposed algorithm and the standard Fast Scanning algorithm. The results show that the proposed algorithm is faster than standard Fast Scanning in terms of processing time.

  18. Implementation and analysis of a fast backprojection algorithm

    NASA Astrophysics Data System (ADS)

    Gorham, LeRoy A.; Majumder, Uttam K.; Buxa, Peter; Backues, Mark J.; Lindgren, Andrew C.

    2006-05-01

    The convolution backprojection algorithm is an accurate synthetic aperture radar imaging technique, but it has seen limited use in the radar community due to its high computational costs. Therefore, significant research has been conducted for a fast backprojection algorithm, which surrenders some image quality for increased computational efficiency. This paper describes an implementation of both a standard convolution backprojection algorithm and a fast backprojection algorithm optimized for use on a Linux cluster and a field-programmable gate array (FPGA) based processing system. The performance of the different implementations is compared using synthetic ideal point targets and the SPIE XPatch Backhoe dataset.

  19. Fast algorithm for peptide sequencing by mass spectroscopy.

    PubMed

    Bartels, C

    1990-01-01

    An automatic algorithm for sequencing polypeptides from fast atom bombardment tandem mass spectra is presented. Based on graph theory considerations, it finds the most probable sequences, even if the amino acid composition is unknown, by scoring mass differences. The algorithm is fast, as the computing time increases by less than the square of the number of amino acids. Pairs of two or three amino acids are proposed to explain the gap if peaks are missing. PMID:24730078

  20. Fast algorithms for computing isogenies between elliptic curves

    NASA Astrophysics Data System (ADS)

    Bostan, A.; Morain, F.; Salvy, B.; Schost, E.

    2008-09-01

    We survey algorithms for computing isogenies between elliptic curves defined over a field of characteristic either 0 or a large prime. We introduce a new algorithm that computes an isogeny of degree ℓ (ℓ different from the characteristic) in time quasi-linear with respect to ℓ. This is based in particular on fast algorithms for power series expansion of the Weierstrass ℘-function and related functions.

  1. Fast algorithms for transport models. Final report

    SciTech Connect

    Manteuffel, T.A.

    1994-10-01

    This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space, which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state of the art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. At first glance, the shifted transport sweep has limited parallelism. Once the right-hand side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODEs. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M)).

  2. Application of fast BLMS algorithm in acoustic echo cancellation

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Li, Nian Q.

    2013-03-01

    The acoustic echo path is usually very long, ranging from several hundred to a few thousand taps. Frequency-domain adaptive filtering provides a solution to acoustic echo cancellation by yielding a significant reduction in the computational burden. In this paper, a fast BLMS (Block Least-Mean-Square) algorithm in the frequency domain is realized using the FFT. The adaptation of filter parameters is actually performed in the frequency domain. The proposed algorithm can ensure convergence with high speed and reduce computational complexity. Simulation results indicate that the algorithm demonstrates good performance for acoustic echo cancellation in communication systems.

  3. A fast SEQUEST cross correlation algorithm.

    PubMed

    Eng, Jimmy K; Fischer, Bernd; Grossmann, Jonas; Maccoss, Michael J

    2008-10-01

    The SEQUEST program was the first and remains one of the most widely used tools for assigning a peptide sequence within a database to a tandem mass spectrum. The cross correlation score is the primary score function implemented within SEQUEST and it is this score that makes the tool particularly sensitive. Unfortunately, this score is computationally expensive to calculate, and thus, to make the score manageable, SEQUEST uses a less sensitive but fast preliminary score and restricts the cross correlation to just the top 500 peptides returned by the preliminary score. Classically, the cross correlation score has been calculated using Fast Fourier Transforms (FFT) to generate the full correlation function. We describe an alternate method of calculating the cross correlation score that does not require FFTs and can be computed efficiently in a fraction of the time. The fast calculation allows all candidate peptides to be scored by the cross correlation function, potentially mitigating the need for the preliminary score, and enables an E-value significance calculation based on the cross correlation score distribution calculated on all candidate peptide sequences obtained from a sequence database. PMID:18774840
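
    The trick is that the SEQUEST xcorr (the zero-offset correlation minus the mean correlation over offsets of +/-75 bins) can be folded into a preprocessing of the observed spectrum, after which each candidate costs one dot product. A numpy sketch under those assumptions (binning and normalization details omitted; names are illustrative):

        import numpy as np

        def preprocess_observed(spectrum, window=75):
            # Subtract from each bin the mean of the surrounding bins over
            # offsets -window..window, folding the background correction
            # into the spectrum itself.
            width = 2 * window + 1
            background = np.convolve(spectrum, np.ones(width), mode='same') / width
            return spectrum - background

        def fast_xcorr(theoretical, observed_preprocessed):
            # One dot product per candidate peptide replaces a full FFT
            # correlation; the preprocessing runs once per spectrum.
            return float(np.dot(theoretical, observed_preprocessed))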

  4. Fast search algorithms for computational protein design.

    PubMed

    Traoré, Seydou; Roberts, Kyle E; Allouche, David; Donald, Bruce R; André, Isabelle; Schiex, Thomas; Barbe, Sophie

    2016-05-01

    One of the main challenges in computational protein design (CPD) is the huge size of the protein sequence and conformational space that has to be computationally explored. Recently, we showed that state-of-the-art combinatorial optimization technologies based on Cost Function Network (CFN) processing allow speeding up provable rigid backbone protein design methods by several orders of magnitude. Building on this, we improved and injected CFN technology into the well-established CPD package Osprey to allow all Osprey CPD algorithms to benefit from associated speedups. Because Osprey fundamentally relies on the ability of A* to produce conformations in increasing order of energy, we defined new A* strategies combining CFN lower bounds with a new side-chain positioning-based branching scheme. Beyond the speedups obtained in the new A*-CFN combination, this novel branching scheme enables a much faster enumeration of suboptimal sequences, far beyond what is reachable without it. Together with the immediate and important speedups provided by CFN technology, these developments directly benefit all the algorithms that previously relied on the DEE/A* combination inside Osprey and make it possible to solve larger CPD problems with provable algorithms. PMID:26833706

  5. Two fast algorithms of image inpainting

    NASA Astrophysics Data System (ADS)

    He, Yuqing; Hou, Zhengxin; Wang, Chengyou

    2008-03-01

    Digital image inpainting has been an interesting research topic in multimedia computing and image processing since 2000. This talk covers the most recent contributions in digital image inpainting and image completion, as well as concepts in video inpainting. Image inpainting refers to reconstructing corrupted regions where the data are completely destroyed. A primary class of techniques is to build up a Partial Differential Equation (PDE), consider it as a boundary problem, and solve it by some iterative method. The most representative and creative of the inpainting algorithms is the Bertalmio-Sapiro-Caselles-Bellester (BSCB) model. After summarizing the development of image inpainting techniques, this paper directs the research at improving the BSCB model and proposes two algorithms to address two drawbacks of this model. The first is selective adaptive interpolation, which develops the traditional adaptive interpolation algorithm by introducing a priority value. Besides being much faster than the BSCB model, it can improve the inpainting results. The second takes selective adaptive interpolation as a preprocessing step, reducing the operation time and improving the inpainting quality further.

  6. Quantization Effects and Stabilization of the Fast-Kalman Algorithm

    NASA Astrophysics Data System (ADS)

    Papaodysseus, Constantin; Alexiou, Constantin; Roussopoulos, George; Panagopoulos, Athanasios

    2001-12-01

    The exact and actual cause of the failure of the fast-Kalman algorithm due to the generation and propagation of finite-precision or quantization error is presented. It is demonstrated that, out of all the formulas that constitute this fast Recursive Least Squares (RLS) scheme, only three generate an amount of finite-precision error that consistently propagates in the subsequent iterations and eventually makes the algorithm fail after a certain number of recursions. Moreover, it is shown that there is a very limited number of specific formulas that transmit the generated finite-precision error, while there is another class of formulas that lift or "relax" this error. In addition, a number of general propositions are presented that allow for the calculation of the exact number of erroneous digits with which the various quantities of the fast-Kalman scheme are computed, including the filter coefficients. On the basis of the previous analysis, a method of stabilization of the fast-Kalman algorithm is developed and presented here, one that allows the fast-Kalman algorithm to follow very difficult signals such as music, speech, environmental noise, and other nonstationary ones. Finally, a general methodology is pointed out that allows for the development of new algorithms which, intrinsically, suffer far less from finite-precision problems.

  7. A fast algorithm for numerical solutions to Fortet's equation

    NASA Astrophysics Data System (ADS)

    Brumen, Gorazd

    2008-10-01

    A fast algorithm for computation of default times of multiple firms in a structural model is presented. The algorithm uses a multivariate extension of Fortet's equation and the structure of Toeplitz matrices to significantly improve the computation time. In a financial market consisting of M firms (M not ≫ 1) and N discretization points in every dimension, the algorithm uses O(n log n·M·M!·N^(M(M-1)/2)) operations, where n is the number of discretization points in the time domain. The algorithm is applied to firm survival probability computation and zero coupon bond pricing.

  8. Fast, Parallel and Secure Cryptography Algorithm Using Lorenz's Attractor

    NASA Astrophysics Data System (ADS)

    Marco, Anderson Gonçalves; Martinez, Alexandre Souto; Bruno, Odemir Martinez

    A novel cryptography method based on the Lorenz's attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, message and a chaotic system. It ensures that the algorithm yields a secure codification, even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow and the other, parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which is not assured by traditional algorithms) and consequently its authenticity. Numerical experiments are presented, discussed and show the behavior of the method in terms of security and performance. The fast version of the algorithm has a performance comparable to AES, a popular cryptography program used commercially nowadays, but it is more secure, which makes it immediately suitable for general purpose cryptography applications. An internet page has been set up, which enables the readers to test the algorithm and also to try to break the cipher.

  9. A fast portable implementation of the Secure Hash Algorithm, III.

    SciTech Connect

    McCurley, Kevin S.

    1992-10-01

    In 1992, NIST announced a proposed standard for a collision-free hash function. The algorithm for producing the hash value is known as the Secure Hash Algorithm (SHA), and the standard using the algorithm is known as the Secure Hash Standard (SHS). Later, an announcement was made that a scientist at NSA had discovered a weakness in the original algorithm. A revision to this standard was then announced as FIPS 180-1, and includes a slight change to the algorithm that eliminates the weakness. This new algorithm is called SHA-1. In this report we describe a portable and efficient implementation of SHA-1 in the C language. Performance information is given, as well as tips for porting the code to other architectures. We conclude with some observations on the efficiency of the algorithm, and a discussion of how the efficiency of SHA might be improved.

  10. Some important observations on fast decoupled load flow algorithm

    SciTech Connect

    Nanda, J.; Kothari, D.P.; Srivastava, S.C.

    1987-05-01

    This letter brings out clearly, for the first time, the relative importance and weight of some of the assumptions made by B. Stott and O. Alsac in their fast decoupled load flow (FDLF) algorithm with respect to its convergence properties. Results have been obtained for two sample IEEE test systems. The conclusions of this work are envisaged to be of immense practical relevance when developing a fast decoupled load flow program.

  11. Fast prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Mein, Stephen James; Varley, Martin Roy; Ait-Boudaoud, Djamel

    2013-03-01

    The H.264/multiview video coding (MVC) standard has been developed to enable efficient coding for three-dimensional and multiple viewpoint video sequences. The inter-view statistical dependencies are utilized and an inter-view prediction is employed to provide more efficient coding; however, this increases the overall encoding complexity. Motion homogeneity is exploited here to selectively enable inter-view prediction, and to reduce complexity in the motion estimation (ME) and the mode selection processes. This has been accomplished by defining situations that relate macro-blocks' motion characteristics to the mode selection and the inter-view prediction processes. When comparing the proposed algorithm to the H.264/MVC reference software and other recent work, the experimental results demonstrate a significant reduction in ME time while maintaining similar rate-distortion performance.

  12. Computer program for fast Karhunen Loeve transform algorithm

    NASA Technical Reports Server (NTRS)

    Jain, A. K.

    1976-01-01

    The fast KL transform algorithm was applied for data compression of a set of four ERTS multispectral images and its performance was compared with other techniques previously studied on the same image data. The performance criteria used here are mean square error and signal-to-noise ratio. The results obtained show a superior performance of the fast KL transform coding algorithm on the data set used with respect to the above stated performance criteria. A summary of the results is given in Chapter I and details of comparisons and discussion of conclusions are given in Chapter IV.
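
    For readers unfamiliar with KL transform coding, a small numpy sketch of the basic idea (an orthogonal basis learned from the block covariance, with only the top eigenvectors kept); the block size, the number of retained components, and the function name are illustrative:

        import numpy as np

        def klt_compress(image, block=8, keep=16):
            # Cut the image into non-overlapping block x block patches.
            h, w = (s - s % block for s in image.shape)
            blocks = (image[:h, :w].astype(float)
                      .reshape(h // block, block, w // block, block)
                      .swapaxes(1, 2).reshape(-1, block * block))
            mean = blocks.mean(axis=0)
            # Eigenvectors of the block covariance form the KL basis.
            eigval, eigvec = np.linalg.eigh(np.cov(blocks - mean, rowvar=False))
            basis = eigvec[:, ::-1][:, :keep]         # top 'keep' eigenvectors
            coeff = (blocks - mean) @ basis           # compressed coefficients
            recon = coeff @ basis.T + mean            # reconstructed blocks
            mse = float(((blocks - recon) ** 2).mean())
            return coeff, basis, mse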

  13. Fast sampling algorithm for Lie-Trotter products.

    PubMed

    Predescu, Cristian

    2005-04-01

    A fast algorithm for path sampling in path-integral Monte Carlo simulations is proposed. The algorithm utilizes the Lévy-Ciesielski implementation of Lie-Trotter products to achieve a mathematically proven computational cost of n log2(n) with the number of time slices n, despite the fact that each path variable is updated separately, for reasons of optimality. In this respect, we demonstrate that updating a group of random variables simultaneously results in loss of efficiency. PMID:15903719

  14. Fast algorithm for integrating inconsistent gradient fields.

    PubMed

    Rivera, M; Marroquin, J L; Servin, M; Rodriguez-Vera, R

    1997-11-10

    A discrete Fourier transform (DFT) based algorithm for solving a quadratic cost functional is proposed; this regularized functional allows one to obtain a consistent gradient field from an inconsistent one. The calculated consistent gradient may then be integrated by use of simple methods. The technique is presented in the context of the phase-unwrapping problem; however, it may be applied to other problems, such as shape from shading (a robot-vision technique), when inconsistent gradient fields with irregular domains are obtained. The regularized functional introduced here has advantages over existing techniques; in particular, it is able to manage complex irregular domains and to interpolate over regions with invalid data without any smoothness assumptions over the rest of the lattice, so that the estimation error is reduced. Furthermore, there are no free parameters to adjust. The DFT is used to compute a preconditioner because there is highly efficient hardware to perform the calculations and also because it may be computed by optical means. PMID:18264380
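
    To show the core least-squares idea in its simplest (unweighted, periodic, full-domain) form, here is a numpy sketch that integrates an inconsistent gradient field with a DFT-domain Poisson solve; it omits the irregular-domain and invalid-data handling that the paper's regularized functional provides:

        import numpy as np

        def integrate_gradient(gx, gy):
            # Least-squares solution of grad(phi) ~ (gx, gy) in the Fourier
            # basis: phi_hat = (conj(wx)*gx_hat + conj(wy)*gy_hat) / (|wx|^2 + |wy|^2).
            rows, cols = gx.shape
            wx = 2j * np.pi * np.fft.fftfreq(cols)[None, :]
            wy = 2j * np.pi * np.fft.fftfreq(rows)[:, None]
            denom = np.abs(wx) ** 2 + np.abs(wy) ** 2
            denom[0, 0] = 1.0                 # avoid division by zero at DC
            phi_hat = (np.conj(wx) * np.fft.fft2(gx)
                       + np.conj(wy) * np.fft.fft2(gy)) / denom
            phi_hat[0, 0] = 0.0               # the additive constant is free
            return np.fft.ifft2(phi_hat).real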

  15. Fast image matching algorithm based on projection characteristics

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Yue, Xiaobo; Zhou, Lijun

    2011-06-01

    Based on an analysis of the traditional template matching algorithm, this paper identifies the key factors restricting matching speed and puts forward a new fast matching algorithm based on projection. By projecting the grayscale image, this algorithm converts the two-dimensional information of the image into one-dimensional information, and then matches and identifies through one-dimensional correlation; moreover, because normalization is performed, it can still match correctly when the image brightness or signal amplitude increases proportionally. Experimental results show that the projection-based image registration method proposed in this article greatly improves matching speed while ensuring matching accuracy.
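
    A small numpy sketch of the projection idea, assuming an upright template whose brightness may be scaled: the 2-D search is replaced by two 1-D normalized correlations over row and column projections. Function names are illustrative.

        import numpy as np

        def ncc_1d(signal, template):
            # Normalized cross correlation of a 1-D template slid over a
            # signal; normalization gives invariance to brightness scaling.
            t = (template - template.mean()) / (template.std() + 1e-12)
            m = len(template)
            scores = []
            for i in range(len(signal) - m + 1):
                w = signal[i:i + m]
                scores.append(np.dot((w - w.mean()) / (w.std() + 1e-12), t) / m)
            return np.array(scores)

        def match_by_projection(image, template):
            # Project both images onto their axes, then run two 1-D searches.
            row = int(ncc_1d(image.sum(axis=1), template.sum(axis=1)).argmax())
            col = int(ncc_1d(image.sum(axis=0), template.sum(axis=0)).argmax())
            return row, col                    # top-left corner of best match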

  16. A fast directional algorithm for high-frequency electromagnetic scattering

    SciTech Connect

    Tsuji, Paul; Ying, Lexing

    2011-06-20

    This paper is concerned with the fast solution of high-frequency electromagnetic scattering problems using the boundary integral formulation. We extend the O(N log N) directional multilevel algorithm previously proposed for the acoustic scattering case to the vector electromagnetic case. We also detail how to incorporate the curl operator of the magnetic field integral equation into the algorithm. When combined with a standard iterative method, this results in an almost linear complexity solver for the combined field integral equations. In addition, the butterfly algorithm is utilized to compute the far field pattern and radar cross section with O(N log N) complexity.

  17. Fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1986-01-01

    A new least squares algorithm is proposed and investigated for fast frequency and phase acquisition of sinusoids in the presence of noise. This algorithm is a special case of more general, adaptive parameter-estimation techniques. The advantages of the algorithms are their conceptual simplicity, flexibility and applicability to general situations. For example, the frequency to be acquired can be time varying, and the noise can be non-Gaussian, nonstationary and colored. As the proposed algorithm can be made recursive in the number of observations, it is not necessary to have a priori knowledge of the received signal-to-noise ratio or to specify the measurement time. This would be required for batch processing techniques, such as the fast Fourier transform (FFT). The proposed algorithm improves the frequency estimate on a recursive basis as more and more observations are obtained. When the algorithm is applied in real time, it has the extra advantage that the observations need not be stored. The algorithm also yields a real time confidence measure as to the accuracy of the estimator.

  18. MATLAB tensor classes for fast algorithm prototyping : source code.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-10-01

    We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.

  19. A fast algorithm for sparse matrix computations related to inversion

    NASA Astrophysics Data System (ADS)

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors

  20. Fast parallel algorithm for slicing STL based on pipeline

    NASA Astrophysics Data System (ADS)

    Ma, Xulong; Lin, Feng; Yao, Bo

    2016-04-01

    In the Additive Manufacturing field, current research on data processing mainly focuses on the slicing of large STL files or complicated CAD models. To improve efficiency and reduce slicing time, a parallel algorithm has great advantages. However, traditional algorithms can't make full use of multi-core CPU hardware resources. In this paper, a fast parallel algorithm is presented to speed up data processing. A pipeline mode is adopted to design the parallel algorithm, and the complexity of the pipeline algorithm is analyzed theoretically. To evaluate the performance of the new algorithm, the effects of thread count and layer count are investigated by a series of experiments. The experimental results show that the thread count and layer count are two remarkable factors for the speedup ratio. The tendency of speedup versus thread count reveals a positive relationship that agrees well with Amdahl's law, and the tendency of speedup versus layer count also keeps a positive relationship, agreeing with Gustafson's law. The new algorithm uses topological information to compute contours with a parallel method of speedup. Another parallel algorithm based on data parallelism is used in the experiments to show that the pipeline parallel mode is more efficient. A final case study demonstrates the performance of the new parallel algorithm. Compared with the serial slicing algorithm, the new pipeline parallel algorithm can make full use of multi-core CPU hardware and accelerate the slicing process; compared with the data-parallel slicing algorithm, the new algorithm adopts a pipeline parallel model and achieves a much higher speedup ratio and efficiency.
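
    A toy illustration of the pipeline idea in Python threads (the record does not specify its stage split; here an assumed stage one bins each triangle by the layers it spans, and stage two would compute the plane intersections, with a bounded queue connecting the stages):

        import threading, queue

        def pipeline_slice(triangles, layer_heights):
            handoff = queue.Queue(maxsize=64)   # bounded buffer between stages

            def classify():                     # stage 1: bin triangles by layer
                for tri in triangles:           # tri: three (x, y, z) vertices
                    zmin = min(v[2] for v in tri)
                    zmax = max(v[2] for v in tri)
                    spans = [z for z in layer_heights if zmin <= z <= zmax]
                    handoff.put((tri, spans))
                handoff.put(None)               # end-of-stream marker

            contours = []
            worker = threading.Thread(target=classify)
            worker.start()
            while (item := handoff.get()) is not None:   # stage 2: intersect
                tri, spans = item
                for z in spans:
                    contours.append((z, tri))   # placeholder for segment math
            worker.join()
            return contours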


  1. A Matrix Computation View of the FastMap and RobustMap Dimension Reduction Algorithms

    SciTech Connect

    Ostrouchov, George

    2009-01-01

    Given a set of pairwise object distances and a dimension $k$, FastMap and RobustMap algorithms compute a set of $k$-dimensional coordinates for the objects. These metric space embedding methods implicitly assume a higher-dimensional coordinate representation and are a sequence of translations and orthogonal projections based on a sequence of object pair selections (called pivot pairs). We develop a matrix computation viewpoint of these algorithms that operates on the coordinate representation explicitly using Householder reflections. The resulting Coordinate Mapping Algorithm (CMA) is a fast approximate alternative to truncated principal component analysis (PCA) and it brings the FastMap and RobustMap algorithms into the mainstream of numerical computation where standard BLAS building blocks are used. Motivated by the geometric nature of the embedding methods, we further show that truncated PCA can be computed with CMA by specific pivot pair selections. Describing FastMap, RobustMap, and PCA as CMA computations with different pivot pair choices unifies the methods along a pivot pair selection spectrum. We also sketch connections to the semi-discrete decomposition and the QLP decomposition.
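
    For concreteness, a plain-Python sketch of the classic FastMap recursion (pivot selection by the usual farthest-point heuristic; dist is any user-supplied metric, and all names are illustrative):

        import numpy as np

        def fastmap(objects, dist, k):
            # Each coordinate projects every object onto the line through a
            # pivot pair (a, b); later coordinates use residual distances.
            n = len(objects)
            coords = np.zeros((n, k))

            def d2(i, j, depth):
                # squared distance with the first `depth` coordinates removed
                return (dist(objects[i], objects[j]) ** 2
                        - np.sum((coords[i, :depth] - coords[j, :depth]) ** 2))

            for depth in range(k):
                a = 0                                             # pivot heuristic:
                b = max(range(n), key=lambda j: d2(a, j, depth))  # farthest from a,
                a = max(range(n), key=lambda j: d2(b, j, depth))  # then farthest from b
                dab = d2(a, b, depth)
                if dab <= 0:
                    break                       # remaining distances are ~ zero
                for i in range(n):
                    coords[i, depth] = (d2(a, i, depth) + dab
                                        - d2(b, i, depth)) / (2 * dab ** 0.5)
            return coords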

  2. Fast pixel shifting phase unwrapping algorithm in quantitative interferometric microscopy

    NASA Astrophysics Data System (ADS)

    Xu, Mingfei; Shan, Yanke; Yan, Keding; Xue, Liang; Wang, Shouyu; Liu, Fei

    2014-11-01

    Quantitative interferometric microscopy is an important method for observing biological samples such as cells and tissues. In order to obtain the continuous phase distribution of the sample from the interferogram, both phase extraction and phase unwrapping are needed in quantitative interferometric microscopy. Phase extraction methods include the fast Fourier transform method and the Hilbert transform method, etc., and almost all of them are rapid. However, traditional unwrapping methods such as the least squares algorithm, the minimum network flow method, etc. are time-consuming in locating the phase discontinuities, which leads to low processing efficiency. Other proposed high-speed phase unwrapping methods always need at least two interferograms to recover the final phase distribution, and so cannot realize real-time processing. Therefore, a high-speed phase unwrapping algorithm for a single interferogram is required to improve calculation efficiency. Here, we propose a fast phase unwrapping algorithm to realize high-speed quantitative interferometric microscopy: by shifting the mod-2π wrapped phase map by one pixel and then multiplying the original phase map by the shifted one, the locations of the phase discontinuities can be easily determined. Both numerical simulation and experiments confirm that the algorithm is fast, precise and reliable.
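
    A closely related numpy sketch of the one-pixel-shift idea (this version flags a wrap wherever the difference between a pixel and its shifted neighbor exceeds π, rather than forming the product map described in the record; the wrapped-around border line should be ignored):

        import numpy as np

        def locate_discontinuities(wrapped, axis=1):
            # Compare each pixel of the mod-2*pi phase map with its
            # one-pixel-shifted neighbor; a genuine wrap appears as a
            # jump of magnitude greater than pi.
            shifted = np.roll(wrapped, 1, axis=axis)
            return np.abs(wrapped - shifted) > np.pi   # boolean wrap map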

  3. CLAMMS: a scalable algorithm for calling common and rare copy number variants from exome sequencing data

    PubMed Central

    Packer, Jonathan S.; Maxwell, Evan K.; O’Dushlaine, Colm; Lopez, Alexander E.; Dewey, Frederick E.; Chernomorsky, Rostislav; Baras, Aris; Overton, John D.; Habegger, Lukas; Reid, Jeffrey G.

    2016-01-01

    Motivation: Several algorithms exist for detecting copy number variants (CNVs) from human exome sequencing read depth, but previous tools have not been well suited for large population studies on the order of tens or hundreds of thousands of exomes. Their limitations include being difficult to integrate into automated variant-calling pipelines and being ill-suited for detecting common variants. To address these issues, we developed a new algorithm—Copy number estimation using Lattice-Aligned Mixture Models (CLAMMS)—which is highly scalable and suitable for detecting CNVs across the whole allele frequency spectrum. Results: In this note, we summarize the methods and intended use-case of CLAMMS, compare it to previous algorithms and briefly describe results of validation experiments. We evaluate the adherence of CNV calls from CLAMMS and four other algorithms to Mendelian inheritance patterns on a pedigree; we compare calls from CLAMMS and other algorithms to calls from SNP genotyping arrays for a set of 3164 samples; and we use TaqMan quantitative polymerase chain reaction to validate CNVs predicted by CLAMMS at 39 loci (95% of rare variants validate; across 19 common variant loci, the mean precision and recall are 99% and 94%, respectively). In the Supplementary Materials (available at the CLAMMS Github repository), we present our methods and validation results in greater detail. Availability and implementation: https://github.com/rgcgithub/clamms (implemented in C). Contact: jeffrey.reid@regeneron.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26382196

  4. Improved genetic algorithm for fast path planning of USV

    NASA Astrophysics Data System (ADS)

    Cao, Lu

    2015-12-01

    Due to the complex constraints, many uncertain factors and the critical real-time demands of path planning for USVs (Unmanned Surface Vehicles), an approach for fast path planning based on the Voronoi diagram and an improved Genetic Algorithm is proposed, which makes use of the principle of hierarchical path planning. First the Voronoi diagram is utilized to generate the initial paths, and then the optimal path is searched for using the improved Genetic Algorithm, which applies multiprocessor parallel computing techniques to improve the traditional genetic algorithm. Simulation results verify that the planning time is greatly reduced and that path planning based on the Voronoi diagram and the improved Genetic Algorithm is more favorable for real-time operation.

  5. The Empirical Mode Decomposition algorithm via Fast Fourier Transform

    NASA Astrophysics Data System (ADS)

    Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Artemyev, Dmitry N.; Khramov, Alexander G.

    2014-09-01

    In this paper we consider the problem of implementing a fast algorithm for the Empirical Mode Decomposition (EMD). EMD is one of the newest methods for the decomposition of non-linear and non-stationary signals. The basis of EMD is formed "on-the-fly", i.e. it depends on the distribution of the signal and is not given a priori, in contrast to the Fourier Transform (FT) or Wavelet Transform (WT). The EMD requires interpolating the local extrema sets of a signal to find the upper and lower envelopes. Data interpolation on an irregular lattice is a very low-performance procedure. The classical description of EMD by Huang suggests doing this through splines, i.e. through solving a system of equations. The existence of a fast algorithm is the main advantage of the FT. A simple description of an algorithm in terms of the Fast Fourier Transform (FFT) is standard practice to reduce the operation count. We offer a fast implementation of EMD (FEMD) through the FFT and some other cost-efficient algorithms. The basic two-stage interpolation algorithm for EMD is composed of an upscale procedure through the FFT and a downscale procedure that selects the signal's points. First we consider the set of local maxima (or minima) without reference to the OX axis, i.e. on a regular lattice. The upscale through the FFT changes the signal's length to the Least Common Multiple (LCM) of all distances between neighboring extrema on the OX axis. If the LCM value is too large, it is necessary to limit the local set of extrema; in this case it is an analog of spline interpolation. A demo of FEMD in a noise reduction task for OCT is shown.

  6. Fast wavelet based algorithms for linear evolution equations

    NASA Technical Reports Server (NTRS)

    Engquist, Bjorn; Osher, Stanley; Zhong, Sifen

    1992-01-01

    A class of fast wavelet-based algorithms was devised for linear evolution equations whose coefficients are time independent. The method draws on the work of Beylkin, Coifman, and Rokhlin, which they applied to general Calderon-Zygmund type integral operators. A modification of their idea is applied to linear hyperbolic and parabolic equations with spatially varying coefficients. A significant speedup over standard methods is obtained when applied to hyperbolic equations in one space dimension and parabolic equations in multidimensions.

  7. SMG: Fast scalable greedy algorithm for influence maximization in social networks

    NASA Astrophysics Data System (ADS)

    Heidari, Mehdi; Asadpour, Masoud; Faili, Hesham

    2015-02-01

    Influence maximization is the problem of finding the k most influential nodes in a social network. Much work has been done in two different categories: greedy approaches and heuristic approaches. The greedy approaches have better influence spread but lower scalability on large networks. The heuristic approaches are scalable and fast, but not for all types of networks. Improving the scalability of the greedy approach is still an open and active issue. In this work we present a fast greedy algorithm called State Machine Greedy that improves on existing algorithms by reducing calculations in two parts: (1) counting the traversing nodes in the estimate-propagation procedure; (2) Monte-Carlo graph construction in the simulation of diffusion. The results show that our method makes a huge improvement in speed over the existing greedy approaches.
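
    For context, a compact Python sketch of the baseline Monte-Carlo greedy procedure that such work accelerates (independent cascade model with an assumed uniform propagation probability p; this illustrates the bookkeeping SMG reduces, not the state-machine technique itself):

        import random

        def simulate_ic(graph, seeds, p=0.1):
            # One Monte-Carlo run of the independent cascade model.
            # graph: dict mapping node -> list of neighbor nodes.
            active, frontier = set(seeds), list(seeds)
            while frontier:
                node = frontier.pop()
                for nb in graph.get(node, []):
                    if nb not in active and random.random() < p:
                        active.add(nb)
                        frontier.append(nb)
            return len(active)

        def greedy_influence(graph, k, runs=200):
            # Repeatedly add the node with the largest estimated marginal
            # gain in expected spread (the expensive step SMG speeds up).
            seeds = []
            for _ in range(k):
                def gain(v):
                    return sum(simulate_ic(graph, seeds + [v])
                               for _ in range(runs)) / runs
                seeds.append(max((v for v in graph if v not in seeds), key=gain))
            return seeds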

  8. Fast algorithm of low power image reformation for OLED display

    NASA Astrophysics Data System (ADS)

    Lee, Myungwoo; Kim, Taewhan

    2014-04-01

    We propose a fast algorithm of low-power image reformation for organic light-emitting diode (OLED) displays. The proposed algorithm scales the image histogram in a way that reduces power consumption in an OLED display by remapping the gray levels of the pixels, based on a fast analysis of the histogram of the input image, while maintaining the contrast of the image. The key idea is that a large number of gray levels are never used in an image, and these gray levels can be effectively exploited to reduce power consumption. On the other hand, to maintain the image contrast, the gray-level remapping is performed by taking into account the size of the objects in the image to which each gray level is applied, that is, altering the gray levels in large objects only slightly. Through experiments with 24 Kodak images, it is shown that our proposed algorithm is able to reduce power consumption by 10% even with 9% contrast enhancement. Our algorithm runs in linear time, so it can be applied to moving pictures at high resolution.
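
    A toy numpy sketch of the unused-gray-level idea (it only squeezes out absent levels and rescales the used ones in rank order by an assumed power-saving factor; the record's object-size-aware contrast preservation is not modeled, and all names are illustrative):

        import numpy as np

        def remap_gray_levels(image, save=0.10):
            # image: 2-D uint8 array. Levels absent from the histogram are
            # squeezed out; used levels get rank-preserving darker targets.
            hist = np.bincount(image.ravel(), minlength=256)
            used = np.flatnonzero(hist)
            targets = np.round(np.linspace(0, 255 * (1 - save), len(used)))
            lut = np.zeros(256, dtype=np.uint8)
            lut[used] = targets.astype(np.uint8)
            return lut[image]                  # linear-time lookup remap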

  9. A Fast and Exact Algorithm for the Exemplar Breakpoint Distance.

    PubMed

    Shao, Mingfu; Moret, Bernard M E

    2016-05-01

    A fundamental problem in comparative genomics is to compute the distance between two genomes. For two genomes without duplicate genes, we can easily compute a variety of distance measures in linear time, but the problem is NP-hard under most models when genomes contain duplicate genes. Sankoff proposed the use of exemplars to tackle the problem of duplicate genes and gene families: each gene family is represented by a single gene (the exemplar for that family), chosen so as to optimize some metric. Unfortunately, choosing exemplars is itself an NP-hard problem. In this article, we propose a very fast and exact algorithm to compute the exemplar breakpoint distance, based on new insights in the underlying structure of genome rearrangements and exemplars. We evaluate the performance of our algorithm on simulation data and compare its performance to the best effort to date (a divide-and-conquer approach), showing that our algorithm runs much faster and scales much better. We also devise a new algorithm for the intermediate breakpoint distance problem, which can then be applied to assign orthologs. We compare our algorithm with the state-of-the-art method MSOAR by assigning orthologs among five well annotated mammalian genomes, showing that our algorithm runs much faster and is slightly more accurate than MSOAR. PMID:26953781
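
    For genomes without duplicate genes the breakpoint distance is indeed a linear-time computation, as the following toy sketch shows; the exemplar selection for duplicated genes, the NP-hard part the paper addresses, is deliberately left out.

```python
# Unsigned breakpoint distance between two gene orders without duplicates:
# the number of pairs adjacent in genome A that are not adjacent in genome B.
# A toy linear-time sketch; exemplar selection for duplicate genes (the
# NP-hard part addressed in the paper) is not handled here.
def breakpoint_distance(a, b):
    pos = {gene: i for i, gene in enumerate(b)}
    breakpoints = 0
    for x, y in zip(a, a[1:]):
        if abs(pos[x] - pos[y]) != 1:   # x and y are not neighbors in b
            breakpoints += 1
    return breakpoints

print(breakpoint_distance([1, 2, 3, 4, 5], [1, 3, 2, 4, 5]))  # -> 2
```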

  11. A fast algorithm for sparse matrix computations related to inversion

    SciTech Connect

    Li, S.; Wu, W.; Darve, E.

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green’s functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices, up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round

  12. Fast algorithm for calculating chemical kinetics in turbulent reacting flow

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.; Pratt, D. T.

    1986-01-01

    This paper addresses the need for a fast batch chemistry solver to perform the kinetics part of a split operator formulation of turbulent reacting flows, with special attention focused on the solution of the ordinary differential equations governing a homogeneous gas-phase chemical reaction. For this purpose, a two-part predictor-corrector algorithm which incorporates an exponentially fitted trapezoidal method was developed. The algorithm performs filtering of ill-posed initial conditions, automatic step-size selection, and automatic selection of Jacobi-Newton or Newton-Raphson iteration for convergence to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm, termed CREK1D (combustion reaction kinetics, one-dimensional), compared favorably with the code LSODE when tested on two representative problems drawn from combustion kinetics, and is faster than LSODE.
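
    The corrector at the heart of such schemes is a Newton iteration on the implicit trapezoidal rule. The sketch below shows a plain (not exponentially fitted) trapezoidal step for a scalar model reaction, without the filtering and step-size control of CREK1D; the rate constant and step size are arbitrary.

```python
# Implicit (trapezoidal-rule) step with Newton iteration for y' = f(y).
# Plain trapezoidal sketch of the predictor-corrector idea, without the
# exponential fitting or adaptive step-size selection used in CREK1D.
def trapezoidal_step(f, dfdy, y0, h, tol=1e-12, max_iter=20):
    y = y0 + h * f(y0)                      # predictor: explicit Euler
    for _ in range(max_iter):               # corrector: Newton-Raphson
        g = y - y0 - 0.5 * h * (f(y0) + f(y))
        dg = 1.0 - 0.5 * h * dfdy(y)
        step = g / dg
        y -= step
        if abs(step) < tol:
            break
    return y

# Toy first-order decay A -> B with rate k (stiff when k*h is large).
k = 50.0
f = lambda y: -k * y
dfdy = lambda y: -k
y, h = 1.0, 0.01
for _ in range(100):
    y = trapezoidal_step(f, dfdy, y, h)
print(y)  # same order of magnitude as exp(-k * 1.0)
```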

  13. A fast image encryption algorithm based on chaotic map

    NASA Astrophysics Data System (ADS)

    Liu, Wenhao; Sun, Kehui; Zhu, Congxu

    2016-09-01

    Derived from the Sine map and the iterative chaotic map with infinite collapse (ICMIC), a new two-dimensional Sine ICMIC modulation map (2D-SIMM) is proposed based on a closed-loop modulation coupling (CMC) model, and its chaotic performance is analyzed by means of its phase diagram, Lyapunov exponent spectrum, and complexity. The analysis shows that this map has good ergodicity, hyperchaotic behavior, a large maximum Lyapunov exponent, and high complexity. Based on this map, a fast image encryption algorithm is proposed, in which the confusion and diffusion processes are combined into a single stage. A chaotic shift transform (CST) is proposed to efficiently change the image pixel positions, and row and column substitutions are applied to scramble the pixel values simultaneously. The simulation and analysis results show that this algorithm has high security, low time complexity, and the ability to resist statistical, differential, brute-force, known-plaintext, and chosen-plaintext attacks.
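
    As a rough illustration of the confusion/diffusion structure (not of 2D-SIMM itself, whose equations are not reproduced here), the sketch below drives a position permutation and an XOR keystream from a generic logistic map; it is not a secure cipher.

```python
# Generic chaotic confusion/diffusion sketch using a logistic map in place of
# the paper's 2D-SIMM. Illustrative only; not a secure or faithful
# implementation of the proposed cipher.
import numpy as np

def logistic_stream(x0, n, r=3.99):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, key=0.4123):
    flat = img.ravel().astype(np.uint8)
    n = flat.size
    stream = logistic_stream(key, 2 * n)
    perm = np.argsort(stream[:n])             # confusion: permute positions
    ks = (stream[n:] * 256).astype(np.uint8)  # diffusion: XOR keystream
    return (flat[perm] ^ ks).reshape(img.shape), perm

img = (np.random.rand(8, 8) * 255).astype(np.uint8)
cipher, perm = encrypt(img)
```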

  14. A Fast Conformal Mapping Algorithm with No FFT

    NASA Astrophysics Data System (ADS)

    Luchini, P.; Manzo, F.

    1992-08-01

    An algorithm is presented for the computation of a conformal mapping discretized on a non-uniformly spaced point set, useful for the numerical solution of many problems of fluid dynamics. Most existing iterative techniques, both those having a linear and those having a quadratic type of convergence, rely on the fast Fourier transform (FFT) algorithm for calculating a convolution integral which represents the most time-consuming phase of the computation. The FFT, however, definitely cannot be applied to a non-uniform spacing. The algorithm presented in this paper has been made possible by the construction of a calculation method for convolution integrals which, despite not using an FFT, maintains a computation time of the same order as that of the FFT. The new technique is successfully applied to the problem of conformally mapping a closely spaced cascade of airfoils onto a circle, which requires an exceedingly large number of points if it is solved with uniform spacing.

  15. A non-parametric peak calling algorithm for DamID-Seq.

    PubMed

    Li, Renhua; Hempel, Leonie U; Jiang, Tingbo

    2015-01-01

    Protein-DNA interactions play a significant role in gene regulation and expression. In order to identify transcription factor binding sites (TFBS) of doublesex (DSX), an important transcription factor in sex determination, we applied DNA adenine methylation identification (DamID) technology to the fat body tissue of Drosophila, followed by deep sequencing (DamID-Seq). One feature of DamID-Seq data is that the induced adenine methylation signals are not assured to be symmetrically distributed at TFBS, which renders existing peak calling algorithms for ChIP-Seq, including SPP and MACS, inappropriate for DamID-Seq data. This challenged us to develop a new peak calling algorithm. A challenge in peak calling based on sequence data is estimating the average behavior of background signals. We applied a bootstrap resampling method to short sequence reads in the control (Dam only). After data quality checks and mapping reads to a reference genome, the peak calling procedure comprises the following steps: 1) read resampling; 2) read scaling (normalization) and computing signal-to-noise fold changes; 3) filtering; and 4) calling peaks based on a statistically significant threshold. This is a non-parametric method for peak calling (NPPC). We also used irreproducible discovery rate (IDR) analysis, as well as ChIP-Seq data, to evaluate the peaks called by the NPPC. We identified approximately 6,000 peaks for DSX, which point to 1,225 genes related to the fat body tissue difference between female and male Drosophila. Statistical evidence from IDR analysis indicated that these peaks are reproducible across biological replicates. In addition, these peaks are comparable to those identified by ChIP-Seq on S2 cells in terms of peak number, location, and peak width. PMID:25785608
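
    The bootstrap step can be sketched as follows: resample the control track to estimate the background level, then threshold signal-to-noise fold changes. Bin counts, the significance quantile, and all parameter values are hypothetical, not the paper's.

```python
# Bootstrap estimate of background read-count behavior in a control track,
# as a sketch of NPPC's resampling idea (bin size, thresholds and names are
# hypothetical, not the paper's parameters).
import numpy as np

rng = np.random.default_rng(0)
control = rng.poisson(3.0, size=10_000)      # toy per-bin control read counts
signal = control + (rng.random(10_000) < 0.01) * rng.poisson(20.0, 10_000)

# Resample the control to get the null distribution of per-bin counts.
boot_means = np.array([rng.choice(control, control.size, replace=True).mean()
                       for _ in range(200)])
background = boot_means.mean()

fold = (signal + 1) / (background + 1)        # signal-to-noise fold change
threshold = np.quantile((control + 1) / (background + 1), 0.999)
peaks = np.flatnonzero(fold > threshold)      # candidate peak bins
print(len(peaks))
```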

  16. 3DRISM Multigrid Algorithm for Fast Solvation Free Energy Calculations.

    PubMed

    Sergiievskyi, Volodymyr P; Fedorov, Maxim V

    2012-06-12

    In this paper we present a fast and accurate method for modeling solvation properties of organic molecules in water, with a main focus on predicting the solvation (hydration) free energies of small organic compounds. The method is based on a combination of (i) a molecular theory, the three-dimensional reference interaction sites model (3DRISM); (ii) a fast multigrid algorithm for solving the high-dimensional 3DRISM integral equations; and (iii) a recently introduced universal correction (UC) of the 3DRISM solvation free energies by a properly scaled molecular partial volume (3DRISM-UC, Palmer et al., J. Phys.: Condens. Matter 2010, 22, 492101). A fast multigrid algorithm is the core of the method because it helps to reduce the high computational costs associated with solving the 3DRISM equations. To facilitate future applications of the method, we performed benchmarking of the algorithm on a set of several model solutes in order to find optimal grid parameters and to test the performance and accuracy of the algorithm. We have shown that the proposed new multigrid algorithm is on average 24 times faster than the simple Picard method and at least 3.5 times faster than the MDIIS method which is currently actively used by the 3DRISM community (e.g., the MDIIS method has recently been implemented in a new 3DRISM implicit solvent routine in the recent release of the AmberTools 1.4 molecular modeling package (Luchko et al., J. Chem. Theory Comput. 2010, 6, 607-624)). We then benchmarked the multigrid algorithm with the chosen optimal parameters on a set of 99 organic compounds. We show that the average computational time required for one 3DRISM calculation is 3.5 min per small organic molecule (10-20 atoms) on a standard personal computer. We also benchmarked the predicted solvation free energy values for all of the compounds in the set against the corresponding experimental data. We show that by using the proposed multigrid algorithm and the 3DRISM-UC model, it is possible to obtain good

  17. A novel fast median filter algorithm without sorting

    NASA Astrophysics Data System (ADS)

    Yang, Weiping; Zhang, Zhilong; Lu, Xinping; Li, Jicheng; Chen, Dong; Yang, Guopeng

    2016-04-01

    As one of the most widely applied nonlinear smoothing filters, the median filter is quite effective at removing salt-and-pepper and impulsive noise while preserving image edges without blurring boundaries, but its computational load is its main drawback in real-time processing systems. To address this issue, researchers have proposed many fast algorithms. However, most of them are based on sorting operations, which makes real-time implementation difficult. In this paper, exploiting the large-scale Boolean logic and convenient shift operations that are two of the advantages of FPGAs (Field Programmable Gate Arrays), we propose a novel median-finding algorithm without sorting, whose execution time remains nearly constant regardless of the filter radius. Based on this algorithm, a real-time median filter has been realized. Extensive tests demonstrate the validity and correctness of the proposed algorithm.
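
    One well-known way to find a median without sorting, and the style of bit-level logic that suits an FPGA, is majority voting over bit planes from the most significant bit down (in software terms, a radix select). A sketch of the general technique, not the paper's exact circuit:

```python
# Sorting-free median via bit-plane selection (MSB -> LSB). At each bit the
# candidate set is split by that bit and the rank of the median decides which
# half survives; after all bits only the median value remains.
def bitwise_median(window, bits=8):
    rank = len(window) // 2          # index of the median in sorted order
    candidates = list(window)
    for b in reversed(range(bits)):
        zeros = [v for v in candidates if not (v >> b) & 1]
        if rank < len(zeros):
            candidates = zeros       # the median's bit b is 0
        else:
            rank -= len(zeros)       # the median's bit b is 1
            candidates = [v for v in candidates if (v >> b) & 1]
    return candidates[0]

print(bitwise_median([7, 200, 13, 13, 90, 4, 255, 0, 42]))  # -> 13
```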

  18. A fast contour descriptor algorithm for supernova image classification

    SciTech Connect

    Aragon, Cecilia R.; Aragon, David Bradburn

    2006-07-16

    We describe a fast contour descriptor algorithm and its application to a distributed supernova detection system (the Nearby Supernova Factory) that processes 600,000 candidate objects in 80 GB of image data per night. Our shape-detection algorithm reduced the number of false positives generated by the supernova search pipeline by 41% while producing no measurable impact on running time. Fourier descriptors are an established method of numerically describing the shapes of object contours, but transform-based techniques are ordinarily avoided in this type of application due to their computational cost. We devised a fast contour descriptor implementation for supernova candidates that meets the tight processing budget of the application. Using the lowest-order descriptors (F₁ and F₋₁) and the total variance in the contour, we obtain one feature representing the eccentricity of the object and another denoting its irregularity. Because the number of Fourier terms to be calculated is fixed and small, the algorithm runs in linear time, rather than the O(n log n) time of an FFT. Constraints on object size allow further optimizations so that the total cost of producing the required contour descriptors is about 4n addition/subtraction operations, where n is the length of the contour.
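
    Because only F₁ and F₋₁ are needed, each descriptor is a single O(n) dot product with one complex exponential. The feature definitions in the sketch below (an elongation ratio and a residual-variance irregularity) are illustrative constructions, not necessarily the authors' exact formulas.

```python
# Direct O(n) computation of the two lowest-order Fourier descriptors of a
# closed contour, instead of a full FFT. Feature definitions are illustrative.
import numpy as np

def contour_features(z):
    """z: complex array of boundary points (x + iy), ordered along the contour."""
    z = np.asarray(z, dtype=complex)
    n = len(z)
    w = np.exp(-2j * np.pi * np.arange(n) / n)
    zc = z - z.mean()                      # remove the centroid (F_0)
    f1 = (zc * w).sum() / n                # F_1: one dot product
    fm1 = (zc * np.conj(w)).sum() / n      # F_{-1}: one dot product
    total_var = (np.abs(zc) ** 2).mean()   # total variance of the contour
    ecc = np.abs(fm1) / (np.abs(f1) + 1e-12)                       # elongation
    irregularity = 1.0 - (np.abs(f1) ** 2 + np.abs(fm1) ** 2) / total_var
    return ecc, irregularity

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ellipse = 3 * np.cos(theta) + 1j * np.sin(theta)   # toy elongated contour
print(contour_features(ellipse))                   # -> (0.5, ~0.0)
```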

  19. Fast independent component analysis algorithm for quaternion valued signals.

    PubMed

    Javidi, Soroush; Took, Clive Cheong; Mandic, Danilo P

    2011-12-01

    An extension of the fast independent component analysis algorithm is proposed for the blind separation of both Q-proper and Q-improper quaternion-valued signals. This is achieved by maximizing a negentropy-based cost function, and is derived rigorously using the recently developed HR calculus in order to implement Newton optimization in the augmented quaternion statistics framework. It is shown that the use of augmented statistics and the associated widely linear modeling provides theoretical and practical advantages when dealing with general quaternion signals with noncircular (rotation-dependent) distributions. Simulations using both benchmark and real-world quaternion-valued signals support the approach. PMID:22027374

  20. A fast direct sampling algorithm for equilateral closed polygons

    NASA Astrophysics Data System (ADS)

    Cantarella, Jason; Duplantier, Bertrand; Shonkwiler, Clayton; Uehara, Erica

    2016-07-01

    Sampling equilateral closed polygons is of interest in the statistical study of ring polymers. Over the past 30 years, previous authors have proposed a variety of simple Markov chain algorithms (but have not been able to show that they converge to the correct probability distribution) and complicated direct samplers (which require extended-precision arithmetic to evaluate numerically unstable polynomials). We present a simple direct sampler which is fast and numerically stable, and analyze its runtime using a new formula for the volume of equilateral polygon space as a Dirichlet-type integral.

  1. A fast-marching like algorithm for geometrical shock dynamics

    NASA Astrophysics Data System (ADS)

    Noumir, Y.; Le Guilcher, A.; Lardjane, N.; Monneau, R.; Sarrazin, A.

    2015-03-01

    We develop a new algorithm for the computation of the Geometrical Shock Dynamics (GSD) model. The method relies on the fast-marching paradigm and enables the discrete evaluation of the first arrival time of a shock wave and its local velocity on a Cartesian grid. The proposed algorithm is based on a first order upwind finite difference scheme and reduces to a local nonlinear system of two equations solved by an iterative procedure. Reference solutions are built for a smooth radial configuration and for the 2D Riemann problem. The link between the GSD model and p-systems is given. Numerical experiments demonstrate the efficiency of the scheme and its ability to handle singularities.

  2. Fast Particle Pair Detection Algorithms for Particle Simulations

    NASA Astrophysics Data System (ADS)

    Iwai, T.; Hong, C.-W.; Greil, P.

    New algorithms with O(N) complexity have been developed for fast particle-pair detection in particle simulations such as the discrete element method (DEM) and molecular dynamics (MD). They are robust against broad particle size distributions when compared with conventional boxing methods. Nearly constant calculation speeds are achieved for particle size distributions ranging from mono-sized to 1:10, whereas the linked-cell method becomes more than 20 times slower. The basic algorithm, level boxing, uses a variable search range for each particle. The advanced method, multi-level boxing, employs multiple cell layers to reduce the particle size discrepancy. Another method, indexed-level boxing, reduces the size of the cell arrays by introducing a hash procedure for accessing them, and is effective for sparse particle systems with a large number of particles.
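
    For contrast, the conventional linked-cell (boxing) search that these methods improve on can be sketched as follows; the 2-D toy data and cutoff radius are arbitrary.

```python
# Conventional linked-cell (boxing) pair search: bin particles into cells of
# one interaction radius, then test only the 3x3 neighborhood of each cell.
from collections import defaultdict
import random

def cell_list_pairs(points, radius):
    cells = defaultdict(list)
    for i, (x, y) in enumerate(points):
        cells[(int(x // radius), int(y // radius))].append(i)
    pairs = []
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i < j:   # each unordered pair is tested once
                            xi, yi = points[i]; xj, yj = points[j]
                            if (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                                pairs.append((i, j))
    return pairs

pts = [(random.random(), random.random()) for _ in range(500)]
print(len(cell_list_pairs(pts, 0.05)))
```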

  3. Fast clustering algorithm for codebook production in image vector quantization

    NASA Astrophysics Data System (ADS)

    Al-Otum, Hazem M.

    2001-04-01

    In this paper, a fast clustering algorithm (FCA) is proposed for vector quantization codebook production. The algorithm avoids iterative averaging of vectors and is based on collecting vectors with similar or closely similar characteristics to produce the corresponding clusters. FCA increases the peak signal-to-noise ratio (PSNR) by about 0.3-1.1 dB over the LBG algorithm and reduces the computational cost of codebook production by 10%-60% at different bit rates. Two FCA modifications are proposed: FCA with limited cluster size 1 and 2 (FCA-LCS1 and FCA-LCS2, respectively). FCA-LCS1 tends to subdivide large clusters into smaller ones, while FCA-LCS2 reduces a predetermined threshold step by step until the required cluster size is reached. FCA-LCS1 and FCA-LCS2 increase the PSNR by about 0.9-1.0 and 0.9-1.1 dB, respectively, over the basic FCA, at the expense of a 15%-25% and 18%-28% increase in output codebook size.

  4. Fast and fully automatic phalanx segmentation using a grayscale-histogram morphology algorithm

    NASA Astrophysics Data System (ADS)

    Hsieh, Chi-Wen; Liu, Tzu-Chiang; Jong, Tai-Lang; Chen, Chih-Yen; Tiu, Chui-Mei; Chan, Din-Yuen

    2011-08-01

    Bone age assessment is a common radiological examination used in pediatrics to diagnose the discrepancy between the skeletal and chronological age of a child; therefore, it is beneficial to develop a computer-based bone age assessment to help junior pediatricians estimate bone age easily. Unfortunately, the phalanx on radiograms is not easily separated from the background and soft tissue. Therefore, we proposed a new method, called the grayscale-histogram morphology algorithm, to segment the phalanges fast and precisely. The algorithm includes three parts: a tri-stage sieve algorithm used to eliminate the background of hand radiograms, a centroid-edge dual scanning algorithm to frame the phalanx region, and finally a segmentation algorithm based on disk traverse-subtraction filter to segment the phalanx. Moreover, two more segmentation methods: adaptive two-mean and adaptive two-mean clustering were performed, and their results were compared with the segmentation algorithm based on disk traverse-subtraction filter using five indices comprising misclassification error, relative foreground area error, modified Hausdorff distances, edge mismatch, and region nonuniformity. In addition, the CPU time of the three segmentation methods was discussed. The result showed that our method had a better performance than the other two methods. Furthermore, satisfactory segmentation results were obtained with a low standard error.

  5. Fast Field Calibration of MIMU Based on the Powell Algorithm

    PubMed Central

    Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang

    2014-01-01

    The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The calibration rests on two key facts: the norm of the accelerometer measurement vector equals the gravity magnitude, and the norm of the gyro measurement vector equals the applied rotational velocity. A mathematical error model of the calibration is established, and the Powell algorithm is applied to resolve the error parameters by driving the resulting nonlinear equations to convergence. All parameters can be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed method, which also requires considerably less time than the traditional calibration. PMID:25177801
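
    The norm conditions translate directly into a derivative-free least-squares fit. The sketch below calibrates per-axis scale and bias of a simulated accelerometer with SciPy's Powell method; the error model is deliberately simplified relative to the paper's.

```python
# Norm-based accelerometer calibration with Powell's derivative-free search;
# the error model (per-axis scale and bias only) and the data are hypothetical.
import numpy as np
from scipy.optimize import minimize

G = 9.80665
rng = np.random.default_rng(1)

# Toy static measurements at many orientations: gravity vectors distorted by
# scale errors and biases, plus noise.
true_scale = np.array([1.02, 0.98, 1.01])
true_bias = np.array([0.05, -0.03, 0.02])
dirs = rng.normal(size=(60, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
meas = dirs * G / true_scale + true_bias + rng.normal(0, 1e-3, (60, 3))

def cost(p):
    scale, bias = p[:3], p[3:]
    cal = (meas - bias) * scale
    # Key fact: every calibrated static measurement should have norm G.
    return np.sum((np.linalg.norm(cal, axis=1) - G) ** 2)

res = minimize(cost, x0=np.array([1, 1, 1, 0, 0, 0.0]), method="Powell")
print(res.x)   # recovered scale and bias parameters
```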

  6. A fast poly-energetic iterative FBP algorithm

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Samei, Ehsan

    2014-04-01

    The beam hardening (BH) effect can influence medical interpretations in two notable ways. First, high attenuation materials, such as bones, can induce strong artifacts, which severely deteriorate the image quality. Second, voxel values can significantly deviate from the real values, which can lead to unreliable quantitative evaluation results. Some iterative methods have been proposed to eliminate the BH effect, but they cannot be widely applied for clinical practice because of the slow computational speed. The purpose of this study was to develop a new fast and practical poly-energetic iterative filtered backward projection algorithm (piFBP). The piFBP is composed of a novel poly-energetic forward projection process and a robust FBP-type backward updating process. In the forward projection process, an adaptive base material decomposition method is presented, based on which diverse body tissues (e.g., lung, fat, breast, soft tissue, and bone) and metal implants can be incorporated to accurately evaluate poly-energetic forward projections. In the backward updating process, one robust and fast FBP-type backward updating equation with a smoothing kernel is introduced to avoid the noise accumulation in the iteration process and to improve the convergence properties. Two phantoms were designed to quantitatively validate our piFBP algorithm in terms of the beam hardening index (BIdx) and the noise index (NIdx). The simulation results showed that piFBP possessed fast convergence speed, as the images could be reconstructed within four iterations. The variation range of the BIdx's of various tissues across phantom size and spectrum were reduced from [-7.5, 17.5] for FBP to [-0.1, 0.1] for piFBP while the NIdx's were maintained in the same low level (about [0.3, 1.7]). When a metal implant presented in a complex phantom, piFBP still had excellent reconstruction performance, as the variation range of the BIdx's of body tissues were reduced from [-2.9, 15.9] for FBP to [-0

  7. Fast Dating Using Least-Squares Criteria and Algorithms

    PubMed Central

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley–Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley–Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to
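
    The simplest member of this least-squares family is the root-to-tip regression used as a baseline above: regress root-to-tip distance on sampling date and read the rate off the slope. A toy sketch with made-up numbers:

```python
# Root-to-tip regression, the classical fast baseline these least-squares
# dating algorithms are compared against. Data below are invented.
import numpy as np

dates = np.array([2000.1, 2003.5, 2007.9, 2012.3, 2015.8])   # tip dates
dists = np.array([0.012, 0.019, 0.031, 0.040, 0.048])        # root-to-tip

rate, intercept = np.polyfit(dates, dists, 1)   # least-squares line
t_mrca = -intercept / rate                      # date where distance = 0
print(f"rate = {rate:.4e} subst/site/year, tMRCA ~ {t_mrca:.1f}")
```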

  9. A fast sorting algorithm for a hypersonic rarefied flow particle simulation on the connection machine

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1989-01-01

    The data parallel implementation of a particle simulation for hypersonic rarefied flow described by Dagum associates a single parallel data element with each particle in the simulation. The simulated space is divided into discrete regions called cells containing a variable and constantly changing number of particles. The implementation requires a global sort of the parallel data elements so as to arrange them in an order that allows immediate access to the information associated with cells in the simulation. Described here is a very fast algorithm for performing the necessary ranking of the parallel data elements. The performance of the new algorithm is compared with that of the microcoded instruction for ranking on the Connection Machine.

  10. Fast imaging system and algorithm for monitoring microlymphatics

    NASA Astrophysics Data System (ADS)

    Akl, T.; Rahbar, E.; Zawieja, D.; Gashev, A.; Moore, J.; Coté, G.

    2010-02-01

    The lymphatic system is not well understood, and tools to quantify aspects of its behavior are needed. A technique that can monitor lymph velocity, and hence flow, the main determinant of transport, in near real time can be extremely valuable. We recently built a new system that measures lymph velocity, vessel diameter, and contractions using optical microscopy digital imaging with a high-speed camera (500 fps) and a complex processing algorithm. The processing time for a typical data period was significantly reduced to less than 3 minutes, in comparison to our previous system in which readings were available 30 minutes after the vessels were imaged. The processing was based on a correlation algorithm in the frequency domain, which, along with new triggering methods, reduced the processing and acquisition time significantly. In addition, the use of a new data filtering technique allowed us to acquire results from recordings that were irresolvable by the previous algorithm due to their high noise level. The algorithm was tested by measuring velocities and diameter changes in rat mesenteric micro-lymphatics. We recorded velocities of 0.25 mm/s on average in vessels of diameter ranging from 54 µm to 140 µm with phasic contraction strengths of about 6 to 40%. In the future, this system will be used to monitor acute effects that are too fast for previous systems and will also increase the statistical power when dealing with chronic changes. Furthermore, we plan on expanding its functionality to measure the propagation of the contractile activity.
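
    The frequency-domain correlation step can be sketched as follows: the cross-correlation of two profiles, computed with FFTs, peaks at their relative displacement, which divided by the frame interval gives velocity. The signals and shift below are synthetic.

```python
# Frequency-domain cross-correlation for displacement (hence velocity)
# estimation between two line profiles; illustrative, with synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n, true_shift = 512, 17                     # samples, pixels between frames
sig = rng.normal(size=n)
frame_a = sig
frame_b = np.roll(sig, true_shift) + 0.1 * rng.normal(size=n)

# Circular cross-correlation via FFT peaks at the relative shift.
xc = np.fft.ifft(np.fft.fft(frame_a) * np.conj(np.fft.fft(frame_b))).real
shift = np.argmax(xc)
shift = shift - n if shift > n // 2 else shift   # wrap to a signed shift
print(shift)   # -> -17: frame_b is frame_a delayed by 17 pixels
```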

  11. Fast Adapting Ensemble: A New Algorithm for Mining Data Streams with Concept Drift

    PubMed Central

    Ortíz Díaz, Agustín; Ramos-Jiménez, Gonzalo; Frías Blanco, Isvani; Caballero Mota, Yailé; Morales-Bueno, Rafael

    2015-01-01

    The treatment of large data streams in the presence of concept drifts is one of the main challenges in the field of data mining, particularly when the algorithms have to deal with concepts that disappear and then reappear. This paper presents a new algorithm, called Fast Adapting Ensemble (FAE), which adapts very quickly to both abrupt and gradual concept drifts, and has been specifically designed to deal with recurring concepts. FAE processes the learning examples in blocks of the same size, but it does not have to wait for the batch to be complete in order to adapt its base classification mechanism. FAE incorporates a drift detector to improve the handling of abrupt concept drifts and stores a set of inactive classifiers that represent old concepts, which are activated very quickly when these concepts reappear. We compare our new algorithm with various well-known learning algorithms on common benchmark datasets. The experiments show promising results for the proposed algorithm regarding accuracy and runtime when handling different types of concept drift. PMID:25879051

  12. A fast algorithm for the phonemic segmentation of continuous speech

    NASA Astrophysics Data System (ADS)

    Smidt, D.

    1986-04-01

    The method of differential learning (DL method) was applied to the fast phonemic classification of acoustic speech spectra. The method was also tested with a simple algorithm for continuous speech recognition. In every learning step of the DL method, only the single pattern component that deviates most from the reference value is used to form a new rule. Several rules of this type were connected conjunctively or disjunctively. Tests with a single speaker demonstrate good classification capability and very high speed. The automatic inclusion of additional features selected according to their relevance is discussed. It is shown that there is a correspondence between the processes of the DL method and pattern recognition in living beings, with their ability for generalization and differentiation.

  13. Fast Probabilistic Particle Identification algorithm using silicon strip detectors

    NASA Astrophysics Data System (ADS)

    Di Fino, L.; Zaconte, V.; Ciccotelli, A.; Larosa, M.; Narici, L.

    2012-08-01

    Active detectors used as radiation monitors in space are not usually able to perform Particle Identification (PID). Common techniques need energy loss spectra with high statistics to estimate ion abundances. The ALTEA-space detector system is a set of silicon strip particle telescopes monitoring the radiation environment on board the International Space Station since July 2006 with real-time telemetry capabilities. Its large geometrical factor due to the concurrent use of six detectors permits the acquisition of good energy loss spectra even in a short period of observation. In this paper we present a novel Fast Probabilistic Particle Identification (FPPI) algorithm developed for the ALTEA data analysis in order to perform nuclear identification with low statistics and, with some limitations, also in real time.

  14. Generation of fast neutron spectra using an adaptive Gauss-Kronrod Quadrature algorithm

    NASA Astrophysics Data System (ADS)

    Triplett, Brian Scott

    A lattice physics calculation is often the first step in analyzing a nuclear reactor. This calculation condenses regions of the reactor into average parameters (i.e., group constants) that can be used in coarser full-core, time-dependent calculations. This work presents a high-fidelity deterministic method for calculating the neutron energy spectrum in an infinite medium. The spectrum resulting from this calculation can be used to generate accurate group constants. This method includes a numerical algorithm based on Gauss-Kronrod Quadrature to determine the neutron transfer source to a given energy while controlling numerical error. This algorithm was implemented in a pointwise transport solver program called Pointwise Fast Spectrum Generator (PWFSG). PWFSG was benchmarked against the Monte Carlo program MCNP and another pointwise spectrum generation program, CENTRM, for a set of fast reactor infinite medium example cases. PWFSG showed good agreement with MCNP, yielding coefficients of determination above 98% for all example cases. In addition, PWFSG had 6 to 8 times lower flux estimation error than CENTRM in the cases examined. With run-times comparable to CENTRM, PWFSG represents a robust set of methods for generation of fast neutron spectra with increased accuracy without increased computational cost.
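
    The same adaptive Gauss-Kronrod machinery is available off the shelf: SciPy's quad wraps QUADPACK, which subdivides the interval and bounds the error with Gauss-Kronrod rule pairs. The sketch below integrates a hypothetical down-scattering kernel standing in for the neutron transfer source.

```python
# Adaptive quadrature sketch for a transfer-source-style integral. The kernel
# below is a made-up stand-in, not a real scattering cross section.
import numpy as np
from scipy.integrate import quad

def scatter_kernel(e_prime, e):
    """Toy down-scattering kernel from energy e' to target energy e."""
    return np.exp(-(e_prime - e)) if e_prime >= e else 0.0

e = 1.0  # target energy (hypothetical units)
source, err_est = quad(scatter_kernel, e, 20.0, args=(e,), epsrel=1e-8)
print(source, err_est)  # integral value and the Gauss-Kronrod error estimate
```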

  15. Fast volume rendering algorithm in a virtual endoscopy system

    NASA Astrophysics Data System (ADS)

    Kim, Sang H.; Kim, Jin K.; Ra, Jong Beom

    2002-05-01

    Recently, 3D virtual endoscopy has been used as an alternative noninvasive procedure for visualization of a hollow organ. In this paper, we propose a fast volume rendering scheme based on perspective ray casting for virtual endoscopy. As a pre-processing step, the algorithm divides a volume into hierarchical blocks and classifies them into opaque or transparent blocks. Then, the rendering procedure is as follows. In the first step, we perform ray casting only for sub-sampled pixels on the image plane, and determine their pixel values and depth information. In the second step, by reducing the sub-sampling factor by half, we repeat ray casting for newly added pixels, and their pixel values and depth information are determined. Here, the previously obtained depth information is utilized to reduce the processing time. This step is performed recursively until the full-size rendering image is acquired. Experiments conducted on a PC show that the proposed algorithm can reduce the rendering time by 70-80% for bronchus and colon endoscopy, compared with the brute-force ray casting scheme. Interactive rendering thereby becomes more realizable in a PC environment without any special hardware.

  16. Fast half-sibling population reconstruction: theory and algorithms

    PubMed Central

    2013-01-01

    Background Kinship inference is the task of identifying genealogically related individuals. Kinship information is important for determining mating structures, notably in endangered populations. Although many solutions exist for reconstructing full sibling relationships, few exist for half-siblings. Results We consider the problem of determining whether a proposed half-sibling population reconstruction is valid under Mendelian inheritance assumptions. We show that this problem is NP-complete and provide a 0/1 integer program that identifies the minimum number of individuals that must be removed from a population in order for the reconstruction to become valid. We also present SibJoin, a heuristic-based clustering approach based on Mendelian genetics, which is strikingly fast. The software is available at http://github.com/ddexter/SibJoin.git+. Conclusions Our SibJoin algorithm is reasonably accurate and thousands of times faster than existing algorithms. The heuristic is used to infer a half-sibling structure for a population which was, until recently, too large to evaluate. PMID:23849037

  17. Fast and accurate analysis of large-scale composite structures with the parallel multilevel fast multipole algorithm.

    PubMed

    Ergül, Özgür; Gürel, Levent

    2013-03-01

    Accurate electromagnetic modeling of complicated optical structures poses several challenges. Optical metamaterial and plasmonic structures are composed of multiple coexisting dielectric and/or conducting parts. Such composite structures may possess diverse values of conductivities and dielectric constants, including negative permittivity and permeability. Further challenges are the large sizes of the structures with respect to wavelength and the complexities of the geometries. In order to overcome these challenges and to achieve rigorous and efficient electromagnetic modeling of three-dimensional optical composite structures, we have developed a parallel implementation of the multilevel fast multipole algorithm (MLFMA). Precise formulation of composite structures is achieved with the so-called "electric and magnetic current combined-field integral equation." Surface integral equations are carefully discretized with piecewise linear basis functions, and the ensuing dense matrix equations are solved iteratively with parallel MLFMA. The hierarchical strategy is used for the efficient parallelization of MLFMA on distributed-memory architectures. In this paper, fast and accurate solutions of large-scale canonical and complicated real-life problems, such as optical metamaterials, discretized with tens of millions of unknowns are presented in order to demonstrate the capabilities of the proposed electromagnetic solver. PMID:23456127

  18. A Fast Robot Identification and Mapping Algorithm Based on Kinect Sensor

    PubMed Central

    Zhang, Liang; Shen, Peiyi; Zhu, Guangming; Wei, Wei; Song, Houbing

    2015-01-01

    Internet of Things (IoT) is driving innovation in an ever-growing set of application domains such as intelligent processing for autonomous robots. For an autonomous robot, one grand challenge is how to sense its surrounding environment effectively. Simultaneous Localization and Mapping with an RGB-D Kinect camera sensor on a robot, called RGB-D SLAM, has been developed for this purpose, but some technical challenges must be addressed. Firstly, the efficiency of the algorithm cannot satisfy real-time requirements; secondly, the accuracy of the algorithm is unacceptable. In order to address these challenges, this paper proposes a set of novel improvement methods as follows. Firstly, the ORiented Brief (ORB) method is used in feature detection and descriptor extraction. Secondly, a bidirectional Fast Library for Approximate Nearest Neighbors (FLANN) k-Nearest Neighbor (KNN) algorithm is applied to feature matching. Then, the improved RANdom SAmple Consensus (RANSAC) estimation method is adopted in the motion transformation. In the meantime, high-precision General Iterative Closest Points (GICP) is utilized to register a point cloud in the motion transformation optimization. To improve the accuracy of SLAM, the reduced dynamic covariance scaling (DCS) algorithm is formulated as a global optimization problem under the G2O framework. The effectiveness of the improved algorithm has been verified by testing on standard data and comparing with the ground truth obtained on Freiburg University's datasets. The Dr Robot X80 equipped with a Kinect camera is also applied in a building corridor to verify the correctness of the improved RGB-D SLAM algorithm. With the above experiments, it can be seen that the proposed algorithm achieves higher processing speed and better accuracy. PMID:26287198

  20. Base-Calling Algorithm with Vocabulary (BCV) Method for Analyzing Population Sequencing Chromatograms

    PubMed Central

    Fantin, Yuri S.; Neverov, Alexey D.; Favorov, Alexander V.; Alvarez-Figueroa, Maria V.; Braslavskaya, Svetlana I.; Gordukova, Maria A.; Karandashova, Inga V.; Kuleshov, Konstantin V.; Myznikova, Anna I.; Polishchuk, Maya S.; Reshetov, Denis A.; Voiciehovskaya, Yana A.; Mironov, Andrei A.; Chulanov, Vladimir P.

    2013-01-01

    Sanger sequencing is a common method of reading DNA sequences. It is less expensive than high-throughput methods, and it is appropriate for numerous applications including molecular diagnostics. However, sequencing mixtures of similar DNA of pathogens with this method is challenging. This is important because most clinical samples contain such mixtures, rather than pure single strains. The traditional solution is to sequence selected clones of PCR products, a complicated, time-consuming, and expensive procedure. Here, we propose the base-calling with vocabulary (BCV) method that computationally deciphers Sanger chromatograms obtained from mixed DNA samples. The inputs to the BCV algorithm are a chromatogram and a dictionary of sequences that are similar to those we expect to obtain. We apply the base-calling function on a test dataset of chromatograms without ambiguous positions, as well as one with 3–14% sequence degeneracy. Furthermore, we use BCV to assemble a consensus sequence for an HIV genome fragment in a sample containing a mixture of viral DNA variants and to determine the positions of the indels. Finally, we detect drug-resistant Mycobacterium tuberculosis strains carrying frameshift mutations mixed with wild-type bacteria in the pncA gene, and roughly characterize bacterial communities in clinical samples by direct 16S rRNA sequencing. PMID:23382983

  1. Biased Randomized Algorithm for Fast Model-Based Diagnosis

    NASA Technical Reports Server (NTRS)

    Williams, Colin; Vartan, Farrokh

    2005-01-01

    A biased randomized algorithm has been developed to enable the rapid computational solution of a propositional- satisfiability (SAT) problem equivalent to a diagnosis problem. The closest competing methods of automated diagnosis are described in the preceding article "Fast Algorithms for Model-Based Diagnosis" and "Two Methods of Efficient Solution of the Hitting-Set Problem" (NPO-30584), which appears elsewhere in this issue. It is necessary to recapitulate some of the information from the cited articles as a prerequisite to a description of the present method. As used here, "diagnosis" signifies, more precisely, a type of model-based diagnosis in which one explores any logical inconsistencies between the observed and expected behaviors of an engineering system. The function of each component and the interconnections among all the components of the engineering system are represented as a logical system. Hence, the expected behavior of the engineering system is represented as a set of logical consequences. Faulty components lead to inconsistency between the observed and expected behaviors of the system, represented by logical inconsistencies. Diagnosis - the task of finding the faulty components - reduces to finding the components, the abnormalities of which could explain all the logical inconsistencies. One seeks a minimal set of faulty components (denoted a minimal diagnosis), because the trivial solution, in which all components are deemed to be faulty, always explains all inconsistencies. In the methods of the cited articles, the minimal-diagnosis problem is treated as equivalent to a minimal-hitting-set problem, which is translated from a combinatorial to a computational problem by mapping it onto the Boolean-satisfiability and integer-programming problems. The integer-programming approach taken in one of the prior methods is complete (in the sense that it is guaranteed to find a solution if one exists) and slow and yields a lower bound on the size of the

  2. Distress Calls of a Fast-Flying Bat (Molossus molossus) Provoke Inspection Flights but Not Cooperative Mobbing.

    PubMed

    Carter, Gerald; Schoeppler, Diana; Manthey, Marie; Knörnschild, Mirjam; Denzinger, Annette

    2015-01-01

    Many birds and mammals produce distress calls when captured. Bats often approach speakers playing conspecific distress calls, which has led to the hypothesis that bat distress calls promote cooperative mobbing. An alternative explanation is that approaching bats are selfishly assessing predation risk. Previous playback studies on bat distress calls involved species with highly maneuverable flight, capable of making close passes and tight circles around speakers, which can look like mobbing. We broadcast distress calls recorded from the velvety free-tailed bat, Molossus molossus, a fast-flying aerial-hawker with relatively poor maneuverability. Based on their flight behavior, we predicted that, in response to distress call playbacks, M. molossus would make individual passing inspection flights but would not approach in groups or approach within a meter of the distress call source. By recording responses via ultrasonic recording and infrared video, we found that M. molossus, and to a lesser extent Saccopteryx bilineata, made more flight passes during distress call playbacks compared to noise. However, only the more maneuverable S. bilineata made close approaches to the speaker, and we found no evidence of mobbing in groups. Instead, our findings are consistent with the hypothesis that single bats approached distress calls simply to investigate the situation. These results suggest that approaches by bats to distress calls should not suffice as clear evidence for mobbing. PMID:26353118

  4. An algorithm for fast DNS cavitating flows simulations using homogeneous mixture approach

    NASA Astrophysics Data System (ADS)

    Žnidarčič, A.; Coutier-Delgosha, O.; Marquillie, M.; Dular, M.

    2015-12-01

    A new algorithm for fast DNS simulations of cavitating flows is developed. The algorithm is based on the projection method in the form of Kim and Moin. A homogeneous mixture approach with a transport equation for the vapour volume fraction is used to model cavitation, and various cavitation models can be used. An influence matrix and a matrix diagonalisation technique enable fast parallel computations.

  5. FastTagger: an efficient algorithm for genome-wide tag SNP selection using multi-marker linkage disequilibrium

    PubMed Central

    2010-01-01

    Background The human genome contains millions of common single nucleotide polymorphisms (SNPs), and these SNPs play an important role in understanding the association between genetic variations and human diseases. Many SNPs show correlated genotypes, or linkage disequilibrium (LD), so it is not necessary to genotype all SNPs for an association study. Many algorithms have been developed to find a small subset of SNPs called tag SNPs that are sufficient to infer all the other SNPs. Algorithms based on the r2 LD statistic have gained popularity because r2 is directly related to the statistical power to detect disease associations. Most existing r2-based algorithms use pairwise LD. Recent studies show that multi-marker LD can help further reduce the number of tag SNPs. However, existing tag SNP selection algorithms based on multi-marker LD are both time-consuming and memory-consuming, and cannot work on chromosomes containing more than 100 k SNPs using length-3 tagging rules. Results We propose an efficient algorithm called FastTagger to calculate multi-marker tagging rules and select tag SNPs based on multi-marker LD. FastTagger uses several techniques to reduce running time and memory consumption. Our experimental results show that FastTagger is several times faster than existing multi-marker based tag SNP selection algorithms, and it consumes much less memory at the same time. As a result, FastTagger can work on chromosomes containing more than 100 k SNPs using length-3 tagging rules. FastTagger also produces smaller sets of tag SNPs than existing multi-marker based algorithms, with reduction ratios of 3%-9% when length-3 tagging rules are used. The generated tagging rules can also be used for genotype imputation. We studied the prediction accuracy of individual rules, and the average accuracy is above 96% when r2 ≥ 0.9. Conclusions Generating multi-marker tagging rules is a computation intensive task, and it is the bottleneck of existing multi-marker based tag
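
    The r2 statistic underlying the tagging rules is, in its simplest pairwise (composite genotypic) form, just a squared genotype correlation, as the sketch below shows with simulated dosages; multi-marker rules generalize this to sets of predictor SNPs.

```python
# Pairwise r^2 linkage disequilibrium between two SNPs from 0/1/2 genotype
# dosages; a simple composite stand-in for haplotype-based LD estimates.
import numpy as np

def r_squared(g1, g2):
    """g1, g2: arrays of genotype dosages (0, 1, 2) for two SNPs."""
    r = np.corrcoef(g1, g2)[0, 1]
    return r * r

rng = np.random.default_rng(7)
snp_a = rng.integers(0, 3, 500)
snp_b = np.where(rng.random(500) < 0.9, snp_a, rng.integers(0, 3, 500))
print(r_squared(snp_a, snp_b))   # high r^2: snp_a tags snp_b
```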

  6. A fast weak motif-finding algorithm based on community detection in graphs

    PubMed Central

    2013-01-01

    Background Identification of transcription factor binding sites (also called ‘motif discovery’) in DNA sequences is a basic step in understanding genetic regulation. Although many successful programs have been developed, the problem is far from being solved on account of diversity in gene expression/regulation and the low specificity of binding sites. State-of-the-art algorithms have their own constraints (e.g., high time or space complexity for finding long motifs, low precision in identification of weak motifs, or the OOPS constraint: one occurrence of the motif instance per sequence) which limit their scope of application. Results In this paper, we present a novel and fast algorithm we call TFBSGroup. It is based on community detection from a graph and is used to discover long and weak (l,d) motifs under the ZOMOPS constraint (zero, one or multiple occurrence(s) of the motif instance(s) per sequence), where l is the length of a motif and d is the maximum number of mutations between a motif instance and the motif itself. Firstly, TFBSGroup transforms the (l, d) motif search in sequences to focus on the discovery of dense subgraphs within a graph. It identifies these subgraphs using a fast community detection method for obtaining coarse-grained candidate motifs. Next, it greedily refines these candidate motifs towards the true motif within their own communities. Empirical studies on synthetic (l, d) samples have shown that TFBSGroup is very efficient (e.g., it can find true (18, 6), (24, 8) motifs within 30 seconds). More importantly, the algorithm has succeeded in rapidly identifying motifs in a large data set of prokaryotic promoters generated from the Escherichia coli database RegulonDB. The algorithm has also accurately identified motifs in ChIP-seq data sets for 12 mouse transcription factors involved in ES cell pluripotency and self-renewal. Conclusions Our novel heuristic algorithm, TFBSGroup, is able to quickly identify nearly exact matches for long

  7. A fast image matching algorithm based on key points

    NASA Astrophysics Data System (ADS)

    Wang, Huilin; Wang, Ying; An, Ru; Yan, Peng

    2014-05-01

    Image matching is a very important technique in image processing. It has been widely used for object recognition and tracking, image retrieval, three-dimensional vision, change detection, aircraft position estimation, and multi-image registration. Based on the requirements of a matching algorithm for craft navigation, such as speed, accuracy, and adaptability, a fast key-point image matching method is investigated and developed. The main research tasks include: (1) developing an improved fast key-point detection approach using a self-adapting threshold for Features from Accelerated Segment Test (FAST). A method of calculating the self-adapting threshold was introduced for images with different contrast, and the Hessian matrix was adopted to eliminate unstable edge points in order to obtain key points with higher stability. This detection approach requires little computation and offers high positioning accuracy and strong noise resistance; (2) PCA-SIFT is utilized to describe the key points. A 128-dimensional vector is formed for each extracted key point based on the SIFT method. A low-dimensional feature space was established from the eigenvectors of all the key points, and each descriptor was projected onto this space to form a low-dimensional vector, so that after PCA the descriptor was reduced from the original 128 dimensions to 20. This reduces the dimensionality of the approximate nearest-neighbor search, thereby increasing overall speed; (3) the distance ratio between the nearest-neighbour and second-nearest-neighbour matches is used as the criterion for selecting initial matching points, from which the matched point pairs are obtained. Based on an analysis of the common methods for eliminating false matching pairs (e.g., RANSAC (random sample consensus) and Hough transform clustering), a heuristic local geometric restriction
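
    Stages (1) and (3) can be sketched with OpenCV's stock components, shown below with a contrast-scaled FAST threshold and Lowe's distance-ratio test; ORB descriptors stand in for PCA-SIFT, the adaptive-threshold rule is invented for illustration, and the image files are hypothetical.

```python
# FAST detection with a contrast-adaptive threshold plus the nearest /
# second-nearest distance-ratio test. Sketch only: ORB replaces PCA-SIFT,
# and the threshold rule and image files are hypothetical.
import cv2

img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

# Self-adapting threshold: scale with the image's own contrast (invented rule).
thresh = max(5, int(0.05 * img1.std()))
fast = cv2.FastFeatureDetector_create(threshold=thresh)
kp1 = fast.detect(img1, None)
kp2 = fast.detect(img2, None)

# Describe key points and match with the distance-ratio criterion.
orb = cv2.ORB_create()
kp1, des1 = orb.compute(img1, kp1)
kp2, des2 = orb.compute(img2, kp2)
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
good = []
for pair in bf.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(len(good))
```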

  8. A fast and memory-sparing probabilistic selection algorithm for the GPU

    SciTech Connect

    Monroe, Laura M; Wendelberger, Joanne; Michalak, Sarah

    2010-09-29

    A fast and memory-sparing probabilistic top-N selection algorithm is implemented on the GPU. This probabilistic algorithm gives a deterministic result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces both the memory requirements and the average time required for the algorithm. This algorithm is well-suited to more general parallel processors with multiple layers of memory hierarchy. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be especially useful for processors having a limited amount of fast memory available.
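
    The idea of using randomization only to shrink the data that needs heavy processing, while still returning a deterministic result, can be sketched as follows (a hedged CPU stand-in using NumPy; the sampling rate, oversampling factor and retry policy are assumptions, not the paper's implementation):

      import numpy as np

      def prob_top_n(x, n, sample=2048, rng=None):
          rng = rng or np.random.default_rng()
          s = rng.choice(x, size=min(sample, len(x)), replace=False)
          # oversample the estimated cutoff so the true top-N survives w.h.p.
          k = min(len(s) - 1, int(n / len(x) * len(s) * 4) + 1)
          cutoff = np.partition(s, -k)[-k]
          cand = x[x >= cutoff]          # usually a tiny candidate set
          if len(cand) < n:              # rare miss: fall back to the full data
              cand = x
          return np.sort(cand)[-n:]      # exact selection on the reduced set

      x = np.random.default_rng(1).normal(size=1_000_000)
      print(np.allclose(prob_top_n(x, 10), np.sort(x)[-10:]))   # always True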

  9. Fast three-step phase-shifting algorithm

    SciTech Connect

    Huang, Peisen S.; Zhang Song

    2006-07-20

    We propose a new three-step phase-shifting algorithm, which is much faster than the traditional three-step algorithm. We achieve the speed advantage by using a simple intensity ratio function to replace the arc tangent function in the traditional algorithm. The phase error caused by this new algorithm is compensated for by use of a lookup table. Our experimental results show that both the new algorithm and the traditional algorithm generate similar results, but the new algorithm is 3.4 times faster. By implementing this new algorithm in a high-resolution, real-time three-dimensional shape measurement system, we were able to achieve a measurement speed of 40 frames per second at a resolution of 532x500 pixels, all with an ordinary personal computer.
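
    For reference, the conventional computation that the intensity-ratio trick replaces is a single arc tangent over the three phase-shifted images. A hedged sketch (120-degree shifts and a synthetic fringe assumed; the paper's ratio-plus-lookup-table stage is not reproduced):

      import numpy as np

      def three_step_phase(i1, i2, i3):
          # wrapped phase in (-pi, pi] for 120-degree phase shifts
          return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

      x = np.linspace(0, 4 * np.pi, 500)                 # synthetic fringe phase
      i1, i2, i3 = (np.cos(x + k * 2 * np.pi / 3) for k in (-1, 0, 1))
      phi = three_step_phase(i1, i2, i3)
      print(float(phi.min()), float(phi.max()))          # wrapped into (-pi, pi]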

  10. Comparative Analysis of CNV Calling Algorithms: Literature Survey and a Case Study Using Bovine High-Density SNP Data

    PubMed Central

    Xu, Lingyang; Hou, Yali; Bickhart, Derek M.; Song, Jiuzhou; Liu, George E.

    2013-01-01

    Copy number variations (CNVs) are gains and losses of genomic sequence between two individuals of a species when compared to a reference genome. The data from single nucleotide polymorphism (SNP) microarrays are now routinely used for genotyping, but they also can be utilized for copy number detection. Substantial progress has been made in array design and CNV calling algorithms and at least 10 comparison studies in humans have been published to assess them. In this review, we first survey the literature on existing microarray platforms and CNV calling algorithms. We then examine a number of CNV calling tools to evaluate their impacts using bovine high-density SNP data. Large incongruities in the results from different CNV calling tools highlight the need for standardizing array data collection, quality assessment and experimental validation. Only after careful experimental design and rigorous data filtering can the impacts of CNVs on both normal phenotypic variability and disease susceptibility be fully revealed.

  11. Fast algorithm for probabilistic bone edge detection (FAPBED)

    NASA Astrophysics Data System (ADS)

    Scepanovic, Danilo; Kirshtein, Joshua; Jain, Ameet K.; Taylor, Russell H.

    2005-04-01

    The registration of preoperative CT to intra-operative reality systems is a crucial step in Computer Assisted Orthopedic Surgery (CAOS). The intra-operative sensors include 3D digitizers, fiducials, X-rays and Ultrasound (US). FAPBED is designed to process CT volumes for registration to tracked US data. Tracked US is advantageous because it is real time, noninvasive, and non-ionizing, but it is also known to have inherent inaccuracies which create the need to develop a framework that is robust to various uncertainties, and can be useful in US-CT registration. Furthermore, conventional registration methods depend on accurate and absolute segmentation. Our proposed probabilistic framework addresses the segmentation-registration duality, wherein exact segmentation is not a prerequisite to achieve accurate registration. In this paper, we develop a method for fast and automatic probabilistic bone surface (edge) detection in CT images. Various features that influence the likelihood of the surface at each spatial coordinate are combined using a simple probabilistic framework, which strikes a fair balance between a high-level understanding of features in an image and the low-level number crunching of standard image processing techniques. The algorithm evaluates different features for detecting the probability of a bone surface at each voxel, and compounds the results of these methods to yield a final, low-noise, probability map of bone surfaces in the volume. Such a probability map can then be used in conjunction with a similar map from tracked intra-operative US to achieve accurate registration. Eight sample pelvic CT scans were used to extract feature parameters and validate the final probability maps. An un-optimized fully automatic Matlab code runs in five minutes per CT volume on average, and was validated by comparison against hand-segmented gold standards. The mean probability assigned to nonzero surface points was 0.8, while nonzero non-surface points had a mean

  12. Fast algorithm for detecting community structure in networks

    NASA Astrophysics Data System (ADS)

    Newman, M. E.

    2004-06-01

    Many networks display community structure—groups of vertices within which connections are dense but between which they are sparser—and sensitive computer algorithms have in recent years been developed for detecting this structure. These algorithms, however, are computationally demanding, which limits their application to small networks. Here we describe an algorithm which gives excellent results when tested on both computer-generated and real-world networks and is much faster, typically thousands of times faster, than previous algorithms. We give several example applications, including one to a collaboration network of more than 50 000 physicists.
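
    This greedy modularity method was later refined into the Clauset-Newman-Moore algorithm, and NetworkX ships an implementation, so trying it takes a few lines (assuming networkx is installed; the karate-club graph is just a convenient built-in test network):

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      G = nx.karate_club_graph()                  # small benchmark network
      for i, c in enumerate(greedy_modularity_communities(G)):
          print(f"community {i}: {sorted(c)}")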

  13. Fast single-pass alignment and variant calling using sequencing data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Sequencing research requires efficient computation. Few programs use already known information about DNA variants when aligning sequence data to the reference map. New program findmap.f90 reads the previous variant list before aligning sequence, calling variant alleles, and summing the allele counts...

  14. A fast algorithm for reconstruction of spectrally sparse signals in super-resolution

    NASA Astrophysics Data System (ADS)

    Cai, Jian-Feng; Liu, Suhui; Xu, Weiyu

    2015-08-01

    We propose a fast algorithm to reconstruct spectrally sparse signals from a small number of randomly observed time domain samples. Different from conventional compressed sensing where frequencies are discretized, we consider the super-resolution case where the frequencies can be any values in the normalized continuous frequency domain [0, 1). We first convert our signal recovery problem into a low rank Hankel matrix completion problem, for which we then propose an efficient feasible point algorithm named the projected Wirtinger gradient algorithm (PWGA). The algorithm can be further accelerated by a scheme inspired by the fast iterative shrinkage-thresholding algorithm (FISTA). Numerical experiments are provided to illustrate the effectiveness of our proposed algorithm. Different from earlier approaches, our algorithm can solve problems of large scale efficiently.
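
    The key reformulation is that a signal composed of r complex sinusoids yields a Hankel matrix of rank r, so recovery from few samples becomes low-rank Hankel matrix completion. A hedged sketch demonstrating just this rank property (the PWGA solver itself is not reproduced; frequencies and sizes are arbitrary):

      import numpy as np
      from scipy.linalg import hankel

      n, freqs = 64, [0.11, 0.37, 0.62]          # 3 off-grid frequencies in [0, 1)
      t = np.arange(n)
      x = sum(np.exp(2j * np.pi * f * t) for f in freqs)
      H = hankel(x[: n // 2 + 1], x[n // 2:])    # H[i, j] = x[i + j]
      print(np.linalg.matrix_rank(H, tol=1e-6))  # -> 3, the number of sinusoids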

  15. Fast Optimal Load Balancing Algorithms for 1D Partitioning

    SciTech Connect

    Pinar, Ali; Aykanat, Cevdet

    2002-12-09

    One-dimensional decomposition of nonuniform workload arrays for optimal load balancing is investigated. The problem has been studied in the literature as the "chains-on-chains partitioning" problem. Despite extensive research efforts, heuristics are still used in the parallel computing community with the "hope" of good decompositions and the "myth" that exact algorithms are hard to implement and not runtime efficient. The main objective of this paper is to show that using exact algorithms instead of heuristics yields significant load balance improvements with a negligible increase in preprocessing time. We provide detailed pseudocodes of our algorithms so that our results can be easily reproduced. We start with a review of the literature on the chains-on-chains partitioning problem. We propose improvements on these algorithms as well as efficient implementation tips. We also introduce novel algorithms, which are asymptotically and runtime efficient. We experimented with data sets from two different applications: sparse matrix computations and direct volume rendering. Experiments showed that the proposed algorithms are 100 times faster than a single sparse-matrix vector multiplication for 64-way decompositions on average. Experiments also verify that load balance can be significantly improved by using exact algorithms instead of heuristics. These two findings show that exact algorithms with the efficient implementations discussed in this paper can effectively replace heuristics.
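
    The classic exact approach to chains-on-chains partitioning is easy to sketch: a probe asks whether the chain can be split into at most P consecutive parts of weight at most B, and a search over B finds the optimal bottleneck. The following hedged Python sketch illustrates the problem rather than the paper's more refined algorithms:

      def probe(w, P, B):
          # greedily fill parts; greedy is exact for contiguous partitions
          parts, load = 1, 0
          for x in w:
              if x > B:
                  return False
              if load + x > B:
                  parts += 1
                  load = 0
              load += x
          return parts <= P

      def optimal_bottleneck(w, P):
          lo, hi = max(w), sum(w)        # binary search on the bottleneck value
          while lo < hi:
              mid = (lo + hi) // 2
              if probe(w, P, mid):
                  hi = mid
              else:
                  lo = mid + 1
          return lo

      print(optimal_bottleneck([4, 1, 3, 8, 2, 5, 7, 2, 6, 1], P=3))   # -> 16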

  16. A fast and convergent stochastic MLP learning algorithm.

    PubMed

    Sakurai, A

    2001-12-01

    We propose a stochastic learning algorithm for multilayer perceptrons of linear-threshold function units, which theoretically converges with probability one and experimentally exhibits a 100% convergence rate and remarkable speed on parity and classification problems, with typical generalization accuracy. For learning the n-bit parity function with n hidden units, the algorithm converged on all the trials we tested (n=2 to 12) after 5.8 x 4.1^n presentations, taking 0.23 x 4.0^(n-6) seconds on a 533 MHz Alpha 21164A chip on average, which is five to ten times faster than the Levenberg-Marquardt algorithm with restarts. For a medium-size classification problem known as Thyroid in the UCI repository, the algorithm is faster than, and comparable in generalization accuracy to, the standard backpropagation and Levenberg-Marquardt algorithms. PMID:11852440

  17. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm improves our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. AIDW needs to find several nearest neighboring data points for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve our previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of a kNN search stage and a weighted interpolation stage. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm. PMID:27610308
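
    The two stages can be mocked up in a few lines of NumPy. In this hedged sketch the kNN search is brute force rather than grid-based, and the mapping from local density to the power parameter is an illustrative assumption (the paper derives its own):

      import numpy as np

      def aidw(query, pts, vals, k=8):
          d = np.linalg.norm(pts[None, :, :] - query[:, None, :], axis=2)
          nn = np.argsort(d, axis=1)[:, :k]            # kNN search stage
          nd = np.take_along_axis(d, nn, axis=1)
          density = nd.mean(axis=1)                    # sparser -> larger mean distance
          power = np.clip(1.0 + 2.0 * density / density.mean(), 1.0, 5.0)
          w = 1.0 / np.maximum(nd, 1e-12) ** power[:, None]
          return (w * vals[nn]).sum(axis=1) / w.sum(axis=1)   # weighted interpolation

      rng = np.random.default_rng(0)
      pts = rng.uniform(0, 1, size=(200, 2))
      vals = np.sin(pts[:, 0] * 6) + pts[:, 1]
      print(aidw(rng.uniform(0, 1, size=(5, 2)), pts, vals))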

  18. Low-Complexity Saliency Detection Algorithm for Fast Perceptual Video Coding

    PubMed Central

    Liu, Pengyu; Jia, Kebin

    2013-01-01

    A low-complexity saliency detection algorithm for perceptual video coding is proposed; low-level encoding information is adopted as the basis of the visual perception analysis. Firstly, the algorithm employs motion vectors (MVs) to extract the temporal saliency region through fast MV noise filtering and a translational MV checking procedure. Secondly, the spatial saliency region is detected based on optimal prediction mode distributions in I-frames and P-frames. Then, the spatiotemporal saliency detection results are combined to define the video region of interest (VROI). The simulation results validate that, compared with other existing algorithms, the proposed algorithm avoids a large amount of the computation required for visual perception analysis; it also has better performance in saliency detection for videos and can realize fast saliency detection. It can be used as part of a standard video codec at medium-to-low bit-rates or combined with other algorithms in fast video coding. PMID:24489495

  19. Development of Fast Algorithms Using Recursion, Nesting and Iterations for Computational Electromagnetics

    NASA Technical Reports Server (NTRS)

    Chew, W. C.; Song, J. M.; Lu, C. C.; Weedon, W. H.

    1995-01-01

    In the first phase of our work, we have concentrated on laying the foundation to develop fast algorithms, including the use of recursive structure like the recursive aggregate interaction matrix algorithm (RAIMA), the nested equivalence principle algorithm (NEPAL), the ray-propagation fast multipole algorithm (RPFMA), and the multi-level fast multipole algorithm (MLFMA). We have also investigated the use of curvilinear patches to build a basic method of moments code where these acceleration techniques can be used later. In the second phase, which is mainly reported on here, we have concentrated on implementing three-dimensional NEPAL on a massively parallel machine, the Connection Machine CM-5, and have been able to obtain some 3D scattering results. In order to understand the parallelization of codes on the Connection Machine, we have also studied the parallelization of 3D finite-difference time-domain (FDTD) code with PML material absorbing boundary condition (ABC). We found that simple algorithms like the FDTD with material ABC can be parallelized very well allowing us to solve within a minute a problem of over a million nodes. In addition, we have studied the use of the fast multipole method and the ray-propagation fast multipole algorithm to expedite matrix-vector multiplication in a conjugate-gradient solution to integral equations of scattering. We find that these methods are faster than LU decomposition for one incident angle, but are slower than LU decomposition when many incident angles are needed as in the monostatic RCS calculations.

  20. Vectorized Rebinning Algorithm for Fast Data Down-Sampling

    NASA Technical Reports Server (NTRS)

    Dean, Bruce; Aronstein, David; Smith, Jeffrey

    2013-01-01

    A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
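
    For the 2D case the whole procedure is two reshape-and-sum steps, one over rows and one over columns, with no per-pixel loop. A hedged NumPy sketch for 2x2 binning (the factor and the averaging convention are illustrative):

      import numpy as np

      def rebin2x2(img):
          h, w = img.shape
          assert h % 2 == 0 and w % 2 == 0, "dimensions must be even for 2x2 bins"
          rows = img.reshape(h // 2, 2, w).sum(axis=1)              # one pass over rows
          return rows.reshape(h // 2, w // 2, 2).sum(axis=2) / 4.0  # one pass over columns

      img = np.arange(16, dtype=float).reshape(4, 4)
      print(rebin2x2(img))   # each output pixel is the mean of a 2x2 block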

  1. A fast recursive algorithm for molecular dynamics simulation

    NASA Technical Reports Server (NTRS)

    Jain, A.; Vaidehi, N.; Rodriguez, G.

    1993-01-01

    The present recursive algorithm for solving the dynamical equations of motion of molecular systems employs internal-variable models that reduce the computation time of such simulations by an order of magnitude relative to Cartesian models. Extensive use is made of spatial operator methods recently developed for the analysis and simulation of the dynamics of multibody systems. A factor-of-450 speedup over the conventional O(N-cubed) algorithm is demonstrated for the case of a polypeptide molecule with 400 residues.

  2. Fast algorithm for automatically computing Strahler stream order

    USGS Publications Warehouse

    Lanfear, Kenneth J.

    1990-01-01

    An efficient algorithm was developed to determine Strahler stream order for segments of stream networks represented in a Geographic Information System (GIS). The algorithm correctly assigns Strahler stream order in topologically complex situations such as braided streams and multiple drainage outlets. Execution time varies nearly linearly with the number of stream segments in the network. This technique is expected to be particularly useful for studying the topology of dense stream networks derived from digital elevation model data.
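
    The ordering rule itself is simple on a tree-shaped network: sources get order 1, and a segment's order exceeds the maximum order of its tributaries only when that maximum is attained by two or more of them. A hedged sketch of the rule (the paper's extra logic for braided streams and multiple outlets is not reproduced):

      def strahler(children, seg):
          orders = [strahler(children, c) for c in children.get(seg, [])]
          if not orders:
              return 1                      # source segment
          top = max(orders)
          return top + 1 if orders.count(top) >= 2 else top

      # tiny network: a and b join into d, c feeds e, d and e join into f
      children = {"f": ["d", "e"], "d": ["a", "b"], "e": ["c"]}
      print(strahler(children, "f"))        # -> 2 (d is order 2, e is order 1)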

  3. Fast parallel algorithms for short-range molecular dynamics

    SciTech Connect

    Plimpton, S.

    1993-05-01

    Three parallel algorithms for classical molecular dynamics are presented. The first assigns each processor a subset of atoms; the second assigns each a subset of inter-atomic forces to compute; the third assigns each a fixed spatial region. The algorithms are suitable for molecular dynamics models which can be difficult to parallelize efficiently -- those with short-range forces where the neighbors of each atom change rapidly. They can be implemented on any distributed-memory parallel machine which allows for message-passing of data between independently executing processors. The algorithms are tested on a standard Lennard-Jones benchmark problem for system sizes ranging from 500 to 10,000,000 atoms on three parallel supercomputers, the nCUBE 2, Intel iPSC/860, and Intel Delta. Comparing the results to the fastest reported vectorized Cray Y-MP and C90 algorithm shows that the current generation of parallel machines is competitive with conventional vector supercomputers even for small problems. For large problems, the spatial algorithm achieves parallel efficiencies of 90% and the Intel Delta performs about 30 times faster than a single Y-MP processor and 12 times faster than a single C90 processor. Trade-offs between the three algorithms and guidelines for adapting them to more complex molecular dynamics simulations are also discussed.

  4. A fast neural-network algorithm for VLSI cell placement.

    PubMed

    Aykanat, Cevdet; Bultan, Tevfik; Haritaoğlu, Ismail

    1998-12-01

    Cell placement is an important phase of current VLSI circuit design styles such as standard cell, gate array, and Field Programmable Gate Array (FPGA). Although nondeterministic algorithms such as Simulated Annealing (SA) were successful in solving this problem, they are known to be slow. In this paper, a neural network algorithm is proposed that produces solutions as good as SA in substantially less time. This algorithm is based on Mean Field Annealing (MFA) technique, which was successfully applied to various combinatorial optimization problems. A MFA formulation for the cell placement problem is derived which can easily be applied to all VLSI design styles. To demonstrate that the proposed algorithm is applicable in practice, a detailed formulation for the FPGA design style is derived, and the layouts of several benchmark circuits are generated. The performance of the proposed cell placement algorithm is evaluated in comparison with commercial automated circuit design software Xilinx Automatic Place and Route (APR) which uses SA technique. Performance evaluation is conducted using ACM/SIGDA Design Automation benchmark circuits. Experimental results indicate that the proposed MFA algorithm produces comparable results with APR. However, MFA is almost 20 times faster than APR on the average. PMID:12662737

  5. [Fast segmentation algorithm of high resolution remote sensing image based on multiscale mean shift].

    PubMed

    Wang, Lei-Guang; Zheng, Chen; Lin, Li-Yu; Chen, Rong-Yuan; Mei, Tian-Can

    2011-01-01

    The Mean Shift algorithm is a robust approach to feature space analysis and has been used widely for natural scene image and medical image segmentation. However, the high computational complexity of the algorithm has constrained its application to remote sensing images, which carry massive amounts of information. A fast image segmentation algorithm is presented by extending the traditional mean shift method to the wavelet domain. In order to evaluate the effectiveness of the proposed algorithm, a multispectral remote sensing image and a synthetic image are utilized. The results show that the proposed algorithm can improve the speed 5-7 times compared to the traditional MS method while preserving segmentation quality. PMID:21428083
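
    The core mean shift iteration, independent of the wavelet-domain speedup proposed here, moves each point to the mean of its neighbours until points pile up at density modes. A hedged 1D sketch with a flat kernel (bandwidth and data are illustrative):

      import numpy as np

      def mean_shift(x, bandwidth=0.5, iters=20):
          y = x.copy()
          for _ in range(iters):
              # average, for each point, all data points inside its window
              mask = np.abs(y[:, None] - x[None, :]) <= bandwidth
              y = (mask * x[None, :]).sum(axis=1) / mask.sum(axis=1)
          return y

      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(0, 0.2, 50), rng.normal(3, 0.2, 50)])
      print(np.unique(np.round(mean_shift(x), 2)))   # values near the modes 0 and 3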

  6. A new hybrid algorithm for computing a fast discrete Fourier transform

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1979-01-01

    In this paper for certain long transform lengths, Winograd's algorithm for computing the discrete Fourier transform (DFT) is extended considerably. This is accomplished by performing the cyclic convolution, required by Winograd's method, with the Mersenne prime number-theoretic transform developed originally by Rader. This new algorithm requires fewer multiplications than either the standard fast Fourier transform (FFT) or Winograd's more conventional algorithm. However, more additions are required.

  7. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    DOEpatents

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  8. Simple, fast codebook training algorithm by entropy sequence for vector quantization

    NASA Astrophysics Data System (ADS)

    Pang, Chao-yang; Yao, Shaowen; Qi, Zhang; Sun, Shi-xin; Liu, Jingde

    2001-09-01

    Traditional training algorithms for vector quantization, such as the LBG algorithm, use the convergence of the distortion sequence as the termination condition. We present a novel training algorithm for vector quantization in this paper, in which the convergence of the entropy sequence of each region sequence is employed as the termination condition. Compared with the famous LBG algorithm, it is simple, fast and easy to comprehend and control. We test the performance of the algorithm on the standard test images Lena and Barb. The results show that the PSNR difference between the algorithm and LBG is less than 0.1 dB, while its running time is only a fraction of that of LBG.
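
    The proposed stopping rule drops into an LBG-style loop directly: instead of watching the distortion sequence, watch the entropy of the partition of training vectors over code regions. A hedged sketch (initialization, tolerance and data are illustrative, not the paper's code):

      import numpy as np

      def train_codebook(data, k=4, tol=1e-4, iters=100):
          rng = np.random.default_rng(0)
          code = data[rng.choice(len(data), k, replace=False)]
          prev_h = np.inf
          for _ in range(iters):
              d = np.linalg.norm(data[:, None] - code[None, :], axis=2)
              region = d.argmin(axis=1)                    # nearest-code partition
              p = np.bincount(region, minlength=k) / len(data)
              h = -np.sum(p[p > 0] * np.log2(p[p > 0]))    # entropy of the partition
              if abs(prev_h - h) < tol:                    # entropy-sequence stopping
                  break
              prev_h = h
              for j in range(k):                           # move codes to region means
                  if np.any(region == j):
                      code[j] = data[region == j].mean(axis=0)
          return code

      print(train_codebook(np.random.default_rng(1).normal(size=(500, 2))))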

  9. metilene: fast and sensitive calling of differentially methylated regions from bisulfite sequencing data.

    PubMed

    Jühling, Frank; Kretzmer, Helene; Bernhart, Stephan H; Otto, Christian; Stadler, Peter F; Hoffmann, Steve

    2016-02-01

    The detection of differentially methylated regions (DMRs) is a necessary prerequisite for characterizing different epigenetic states. We present a novel program, metilene, to identify DMRs within whole-genome and targeted data with unrivaled specificity and sensitivity. A binary segmentation algorithm combined with a two-dimensional statistical test allows the detection of DMRs in large methylation experiments with multiple groups of samples in minutes rather than days using off-the-shelf hardware. metilene outperforms other state-of-the-art tools for low coverage data and can estimate missing data. Hence, metilene is a versatile tool to study the effect of epigenetic modifications in differentiation/development, tumorigenesis, and systems biology on a global, genome-wide level. Whether in the framework of international consortia with dozens of samples per group, or even without biological replicates, it produces highly significant and reliable results. PMID:26631489

  10. Outline of a fast hardware implementation of Winograd's DFT algorithm

    NASA Technical Reports Server (NTRS)

    Zohar, S.

    1980-01-01

    The main characteristic of the discrete Fourier transform (DFT) algorithm considered by Winograd (1976) is a significant reduction in the number of multiplications. Its primary disadvantage is a higher structural complexity. It is, therefore, difficult to translate the reduced number of multiplications into faster execution of the DFT by means of a software implementation of the algorithm. For this reason, a hardware implementation is considered in the current study, taking into account a design based on the algorithm prescription discussed by Zohar (1979). The hardware implementation of a FORTRAN subroutine is proposed, giving attention to a pipelining scheme in which 5 consecutive data batches are operated on simultaneously, each batch undergoing one of 5 processing phases.

  11. BFL: a node and edge betweenness based fast layout algorithm for large scale networks

    PubMed Central

    Hashimoto, Tatsunori B; Nagasaki, Masao; Kojima, Kaname; Miyano, Satoru

    2009-01-01

    Background Network visualization would serve as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization, biological node and graph attributes, and/or are not available for large scale networks, e.g. those with more than 10000 elements. Results To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm calculating betweenness to minimize the preprocessing cost. Using this metric, we also invent a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes at optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n^2) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimizations. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer. PMID:19146673

  12. A fast non-local image denoising algorithm

    NASA Astrophysics Data System (ADS)

    Dauwe, A.; Goossens, B.; Luong, H. Q.; Philips, W.

    2008-02-01

    In this paper we propose several improvements to the original non-local means algorithm introduced by Buades et al., which obtains state-of-the-art denoising results. The strength of this algorithm is to exploit the repetitive character of the image in order to denoise it, unlike conventional denoising algorithms, which typically operate in a local neighbourhood. Due to the enormous number of weight computations, the original algorithm has a high computational cost. An improvement in image quality over the original algorithm is obtained by ignoring the contributions from dissimilar windows: even though their weights are very small at first sight, the estimated pixel value can be severely biased by the many small contributions. This bad influence of dissimilar windows is eliminated by setting their corresponding weights to zero. Using a preclassification based on the first three statistical moments, only contributions from similar neighbourhoods are computed. To decide whether a window is similar or dissimilar, we derive thresholds for images corrupted with additive white Gaussian noise. Our accelerated approach is further optimized by taking advantage of the symmetry in the weights, which roughly halves the computation time, and by using a lookup table to speed up the weight computations. Compared to the original algorithm, our proposed method produces images with increased PSNR and better visual performance in less computation time. Our proposed method even outperforms state-of-the-art wavelet denoising techniques in both visual quality and PSNR for images containing a lot of repetitive structures, such as textures: the denoised images are much sharper and contain fewer artifacts. The proposed optimizations can also be applied in other image processing tasks which employ the concept of repetitive structures, such as intra-frame super-resolution or detection of digital image forgery.
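
    The preclassification step is the heart of the speedup and is easy to sketch for a single pixel: windows whose low-order moments differ too much from the reference window are skipped outright instead of contributing tiny, biasing weights. A hedged sketch (thresholds, window sizes and the filtering parameter are illustrative, and only the first two moments are used here):

      import numpy as np

      def nlm_pixel(img, y, x, f=3, search=10, h=0.15, t_mean=0.1, t_var=0.1):
          pad = np.pad(img, f, mode="reflect")
          ref = pad[y:y + 2*f + 1, x:x + 2*f + 1]
          num = den = 0.0
          for j in range(max(0, y - search), min(img.shape[0], y + search + 1)):
              for i in range(max(0, x - search), min(img.shape[1], x + search + 1)):
                  win = pad[j:j + 2*f + 1, i:i + 2*f + 1]
                  # preclassification: drop dissimilar windows before weighting
                  if abs(win.mean() - ref.mean()) > t_mean or abs(win.var() - ref.var()) > t_var:
                      continue
                  w = np.exp(-((win - ref) ** 2).mean() / h**2)
                  num += w * img[j, i]
                  den += w
          return num / den

      rng = np.random.default_rng(0)
      img = np.clip(np.tile(np.linspace(0, 1, 32), (32, 1))
                    + 0.05 * rng.normal(size=(32, 32)), 0, 1)
      print(nlm_pixel(img, 16, 16))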

  13. Gradient maintenance: A new algorithm for fast online replanning

    SciTech Connect

    Ahunbay, Ergun E. Li, X. Allen

    2015-06-15

    Purpose: Clinical use of online adaptive replanning has been hampered by the unpractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in daily anatomy can be maintained the same as that in the original plan, the intended plan quality of the original plan would be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and the daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired using an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: The adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by

  14. Fast algorithms for transport models. Technical progress report

    SciTech Connect

    Manteuffel, T.A.

    1992-12-01

    The objective of this project is the development of numerical solution techniques for deterministic models of the transport of neutral and charged particles and the demonstration of their effectiveness in both a production environment and on advanced architecture computers. The primary focus is on various versions of the linear Boltzmann equation. These equations are fundamental in many important applications. This project is an attempt to integrate the development of numerical algorithms with the process of developing production software. A major thrust of this project will be the implementation of these algorithms on advanced architecture machines that reside at the Advanced Computing Laboratory (ACL) at Los Alamos National Laboratory (LANL).

  15. A Simple and Fast Spline Filtering Algorithm for Surface Metrology

    PubMed Central

    Zhang, Hao; Ott, Daniel; Song, John; Tong, Mingsi; Chu, Wei

    2015-01-01

    Spline filters and their corresponding robust filters are commonly used filters recommended in ISO (the International Organization for Standardization) standards for surface evaluation. Generally, these linear and non-linear spline filters, composed of symmetric, positive-definite matrices, are solved in an iterative fashion based on a Cholesky decomposition. They have been demonstrated to be relatively efficient, but complicated and inconvenient to implement. A new spline-filter algorithm is proposed by means of the discrete cosine transform or the discrete Fourier transform. The algorithm is conceptually simple and very convenient to implement. PMID:26958443

  16. Fast Quantum Algorithms for Numerical Integrals and Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    We discuss quantum algorithms that calculate numerical integrals and descriptive statistics of stochastic processes. With either of two distinct approaches, one obtains an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo methods.

  17. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
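
    For context, the standard reference solution of Wahba's problem that such fast methods are measured against is the SVD construction A = U diag(1, 1, det(U)det(V)) V^T, with B the weighted sum of outer products of the observation pairs. A hedged sketch of that reference route (Markley's faster, SVD-free computation is not reproduced):

      import numpy as np

      def wahba_svd(b, r, w):
          # B = sum_i w_i * b_i r_i^T; the optimal A maximizes trace(A B^T)
          B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))
          U, _, Vt = np.linalg.svd(B)
          d = np.linalg.det(U) * np.linalg.det(Vt)
          return U @ np.diag([1.0, 1.0, d]) @ Vt

      rng = np.random.default_rng(0)
      A_true = np.array([[0., -1, 0], [1, 0, 0], [0, 0, 1]])   # 90 deg about z
      r = [v / np.linalg.norm(v) for v in rng.normal(size=(4, 3))]
      b = [A_true @ v for v in r]
      print(np.allclose(wahba_svd(b, r, [1, 1, 1, 1]), A_true))   # True (noise-free)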

  18. Fast algorithms for combustion kinetics calculations: A comparison

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1984-01-01

    To identify the fastest algorithm currently available for the numerical integration of chemical kinetic rate equations, several algorithms were examined. Findings to date are summarized. The algorithms examined include two general-purpose codes, EPISODE and LSODE, and three special-purpose (for chemical kinetic calculations) codes, CHEMEQ, CREK1D, and GCKP84. In addition, an explicit Runge-Kutta-Merson differential equation solver (IMSL Routine DASCRU) is used to illustrate the problems associated with integrating chemical kinetic rate equations by a classical method. The algorithms were applied to two test problems drawn from combustion kinetics. These problems included all three combustion regimes: induction, heat release and equilibration. Variations of the temperature and species mole fractions with time are given for test problems 1 and 2, respectively. Both test problems were integrated over a time interval of 1 ms in order to obtain near-equilibration of all species and temperature. Of the codes examined in this study, only CREK1D and GCKP84 were written explicitly for integrating exothermic, non-isothermal combustion rate equations. These therefore have built-in procedures for calculating the temperature.

  19. An Iterative CT Reconstruction Algorithm for Fast Fluid Flow Imaging.

    PubMed

    Van Eyndhoven, Geert; Batenburg, K Joost; Kazantsev, Daniil; Van Nieuwenhove, Vincent; Lee, Peter D; Dobson, Katherine J; Sijbers, Jan

    2015-11-01

    The study of fluid flow through solid matter by computed tomography (CT) imaging has many applications, ranging from petroleum and aquifer engineering to biomedical, manufacturing, and environmental research. To avoid motion artifacts, current experiments are often limited to slow fluid flow dynamics. This severely limits the applicability of the technique. In this paper, a new iterative CT reconstruction algorithm for improved temporal/spatial resolution in the imaging of fluid flow through solid matter is introduced. The proposed algorithm exploits prior knowledge in two ways. First, the time-varying object is assumed to consist of stationary (the solid matter) and dynamic regions (the fluid flow). Second, the attenuation curve of a particular voxel in the dynamic region is modeled by a piecewise constant function over time, which is in accordance with the actual advancing fluid/air boundary. Quantitative and qualitative results on different simulation experiments and a real neutron tomography data set show that, in comparison with the state-of-the-art algorithms, the proposed algorithm allows reconstruction from substantially fewer projections per rotation without image quality loss. Therefore, the temporal resolution can be substantially increased, and thus fluid flow experiments with faster dynamics can be performed. PMID:26259219

  20. A local fast marching-based diffusion tensor image registration algorithm by simultaneously considering spatial deformation and tensor orientation.

    PubMed

    Xue, Zhong; Li, Hai; Guo, Lei; Wong, Stephen T C

    2010-08-01

    Spatially aligning diffusion tensor images (DTI) is a key step in quantitatively comparing neural images obtained from different subjects or from the same subject at different timepoints. Unlike traditional scalar or multi-channel image registration methods, DTI registration must take tensor orientation into account. Recently, several DTI registration methods have been proposed in the literature, but their deformation fields depend purely on tensor features rather than on the whole tensor information. Other methods, such as the piece-wise affine transformation and the diffeomorphic non-linear registration algorithms, use analytical gradients of the registration objective functions while simultaneously considering the reorientation and deformation of tensors during the registration. However, only relatively local tensor information, such as voxel-wise tensor similarity, is utilized. This paper proposes a new DTI registration algorithm, called local fast marching (FM)-based simultaneous registration. The algorithm not only considers the orientation of tensors during registration but also utilizes the neighborhood tensor information of each voxel to drive the deformation; this neighborhood tensor information is extracted by a local fast marching algorithm around the voxels of interest. These local fast marching-based tensor features efficiently reflect the diffusion patterns around each voxel within a spherical neighborhood and can capture relatively distinctive features of the anatomical structures. Using simulated and real DTI human brain data, the experimental results show that the proposed algorithm is more accurate than FA-based registration and more efficient than its counterpart, the neighborhood tensor similarity-based registration. PMID:20382233

  1. A Fast MEANSHIFT Algorithm-Based Target Tracking System

    PubMed Central

    Sun, Jian

    2012-01-01

    Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme will be studied in this paper. A fast mean-shift-based Target Tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. The physical simulation shows that the image signal processing speed is >50 frame/s. PMID:22969397

  2. A Fast Implementation of the ISODATA Clustering Algorithm

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.; Netanyahu, Nathan S.; LeMoigne, Jacqueline

    2005-01-01

    Clustering is central to many image processing and remote sensing applications. ISODATA is one of the most popular and widely used clustering methods in geoscience applications, but it can run slowly, particularly with large data sets. We present a more efficient approach to ISODATA clustering, which achieves better running times by storing the points in a kd-tree and through a modification of the way in which the algorithm estimates the dispersion of each cluster. We also present an approximate version of the algorithm which allows the user to further improve the running time, at the expense of lower fidelity in computing the nearest cluster center to each point. We provide both theoretical and empirical justification that our modified approach produces clusterings that are very similar to those produced by the standard ISODATA approach. We also provide empirical studies on both synthetic data and remotely sensed Landsat and MODIS images that show that our approach has significantly lower running times.
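
    The expensive inner step of ISODATA is assigning every point to its nearest cluster centre, and that is where a kd-tree helps. In the hedged sketch below the tree indexes the centres for brevity; the paper instead stores the points in a kd-tree, and the split/merge logic of full ISODATA is omitted:

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(0)
      points = rng.uniform(0, 10, size=(10000, 2))
      centers = rng.uniform(0, 10, size=(16, 2))

      for _ in range(5):                              # a few assignment/update sweeps
          _, label = cKDTree(centers).query(points)   # nearest centre via kd-tree
          for j in range(len(centers)):
              if np.any(label == j):
                  centers[j] = points[label == j].mean(axis=0)
      print(centers[:3])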

  3. Compressed Sensing Photoacoustic Imaging Based on Fast Alternating Direction Algorithm

    PubMed Central

    Liu, Xueyan; Peng, Dong; Guo, Wei; Ma, Xibo; Yang, Xin; Tian, Jie

    2012-01-01

    Photoacoustic imaging (PAI) has been employed to reconstruct endogenous optical contrast present in tissues. At the cost of longer calculations, a compressive sensing reconstruction scheme can achieve artifact-free imaging with fewer measurements. In this paper, an effective acceleration framework using the alternating direction method (ADM) was proposed for recovering images from limited-view and noisy observations. Results of the simulation demonstrated that the proposed algorithm could perform favorably in comparison to two recently introduced algorithms in computational efficiency and data fidelity. In particular, it ran considerably faster than these two methods. PAI with ADM can improve convergence speed with fewer ultrasonic transducers, enabling a high-performance and cost-effective PAI system for biomedical applications. PMID:23365553

  4. Fast algorithms for improved speech coding and recognition

    NASA Astrophysics Data System (ADS)

    Turner, J. M.; Morf, M.; Stirling, W.; Shynk, J.; Huang, S. S.

    1983-12-01

    This research effort has studied estimation techniques for processes that contain Gaussian noise and jump components, and classification methods for transitional signals by using recursive estimation with vector quantization. The major accomplishments presented are an algorithm for joint estimation of excitation and vocal tract response, a pitch pulse location method using recursive least squares estimation, and a stop consonant recognition method using recursive estimation and vector quantization.

  5. Fast automatic algorithm for bifurcation detection in vascular CTA scans

    NASA Astrophysics Data System (ADS)

    Brozio, Matthias; Gorbunova, Vladlena; Godenschwager, Christian; Beck, Thomas; Bernhardt, Dominik

    2012-02-01

    Endovascular imaging aims at identifying vessels and their branches. Automatic vessel segmentation and bifurcation detection ease both clinical research and routine work. In this article a state-of-the-art bifurcation detection algorithm is developed and applied to vascular computed tomography angiography (CTA) scans to mark the common iliac artery and its branches, the internal and external iliacs. In contrast to other methods, our algorithm does not rely on a complete segmentation of a vessel in the 3D volume, but evaluates the cross-sections of the vessel slice by slice. Candidates for vessels are obtained by thresholding, followed by 2D connected component labeling and prefiltering by size and position. The remaining candidates are connected in a squared-distance-weighted graph, which is traversed with Dijkstra's algorithm to obtain candidates for the arteries. We use another set of features considering the length and shape of the paths to determine the best candidate and detect the bifurcation. The method was tested on 119 datasets acquired with different CT scanners and varying protocols. Both easy-to-evaluate datasets, with high resolution and no apparent clinical diseases, and difficult ones, with low resolution, major calcifications, stents, or poor contrast between the vessel and surrounding tissue, were included. The presented results are promising: in 75.7% of the cases the bifurcation was labeled correctly, and in 82.7% the common artery and one of its branches were assigned correctly. The computation time was on average 0.49 s +/- 0.28 s, close to human interaction time, which makes the algorithm applicable for time-critical applications.

  6. Fast motion prediction algorithm for multiview video coding

    NASA Astrophysics Data System (ADS)

    Abdelazim, Abdelrahman; Zhang, Guang Y.; Mein, Stephen J.; Varley, Martin R.; Ait-Boudaoud, Djamel

    2011-06-01

    Multiview Video Coding (MVC) is an extension of the H.264/MPEG-4 AVC video compression standard developed with joint efforts by MPEG/VCEG to enable efficient encoding of sequences captured simultaneously from multiple cameras using a single video stream. The design is therefore aimed at exploiting inter-view dependencies in addition to reducing temporal redundancies. However, this further increases the overall encoding complexity. In this paper, the high correlation between a macroblock and its enclosed partitions is utilised to estimate motion homogeneity, and based on the result inter-view prediction is selectively enabled or disabled. Moreover, if MVC motion prediction is divided into three layers (the first being the full- and sub-pixel motion search, the second the mode selection process, and the third the repetition of the first and second for inter-view prediction), the proposed algorithm significantly reduces the complexity in all three layers. To assess the proposed algorithm, a comprehensive set of experiments was conducted. The results show that the proposed algorithm significantly reduces the motion estimation time whilst maintaining similar rate-distortion performance, when compared to both the H.264/MVC reference software and recently reported work.

  7. Constant Modulus Algorithm with Reduced Complexity Employing DFT Domain Fast Filtering

    NASA Astrophysics Data System (ADS)

    Yang, Yoon Gi; Lee, Chang Su; Yang, Soo Mi

    In this paper, a novel CMA (constant modulus algorithm) employing fast convolution in the DFT (discrete Fourier transform) domain is proposed. We propose a non-linear adaptation algorithm that minimizes the CMA cost function in the DFT domain. The proposed algorithm is completely new compared to the recently introduced, similar DFT-domain CMA algorithm in that the original CMA cost function has not been changed in developing the DFT-domain algorithm, resulting in improved convergence properties. Using the proposed approach, we can reduce the number of multiplications to O(N log_2 N), whereas the conventional CMA has a computation order of O(N^2). Simulation results show that the proposed algorithm provides performance comparable to the conventional CMA.
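
    The O(N log N) cost comes from doing the filtering as block convolution in the DFT domain. A hedged sketch of the standard overlap-save building block on which such frequency-domain adaptive filters rest (the CMA weight update itself is not reproduced):

      import numpy as np

      def overlap_save(x, h, block=64):
          L = block - len(h) + 1                 # new samples consumed per block
          H = np.fft.rfft(h, block)
          y, buf = np.zeros(len(x)), np.zeros(block)
          for start in range(0, len(x), L):
              seg = x[start:start + L]
              buf = np.concatenate([buf[len(seg):], seg])   # slide input buffer
              out = np.fft.irfft(np.fft.rfft(buf) * H, block)
              y[start:start + len(seg)] = out[-len(seg):]   # keep the valid part
          return y

      rng = np.random.default_rng(0)
      x, h = rng.normal(size=1000), rng.normal(size=16)
      print(np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)]))   # True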

  8. Fast time-reversible algorithms for molecular dynamics of rigid-body systems.

    PubMed

    Kajima, Yasuhiro; Hiyama, Miyabi; Ogata, Shuji; Kobayashi, Ryo; Tamura, Tomoyuki

    2012-06-21

    In this paper, we present time-reversible simulation algorithms for rigid bodies in the quaternion representation. By advancing a time-reversible algorithm [Y. Kajima, M. Hiyama, S. Ogata, and T. Tamura, J. Phys. Soc. Jpn. 80, 114002 (2011)] that requires iterations in calculating the angular velocity at each time step, we propose two kinds of iteration-free fast time-reversible algorithms. They are easily implemented in codes. The codes are compared with those of existing algorithms through a demonstrative simulation of a nanometer-sized water droplet, assessing the stability of the total energy and the computation speeds. PMID:22779579

  9. A fast algorithm for the calculation of junction capacitance and its application for impurity profile determination.

    NASA Technical Reports Server (NTRS)

    De Man, H. J. J.

    1972-01-01

    A fast algorithm is described which calculates the space charge layer width and junction capacitance for an arbitrary impurity profile and for plane, cylindrical and spherical junctions. The algorithm is based on the abrupt space charge edge (ASCE) approximation. A method to use the algorithm for the determination of impurity profiles for two-sided junctions is presented. An expression is derived for the built-in voltage to be used for capacitance calculations with the ASCE approximation. Experimental evidence is given that the algorithm permits very accurate capacitance calculations and also predicts the exact temperature dependence of the junction capacitance.

  10. A fast algorithm for nonlinear finite element analysis using equivalent magnetization current

    NASA Astrophysics Data System (ADS)

    Lee, Joon-Ho; Park, Il-Han; Kim, Dong-Hun; Lee, Ki-Sik

    2002-05-01

    A fast algorithm for iterative nonlinear finite element analysis is presented in this paper. The algorithm replaces the updated permeability by an equivalent magnetization current and moves it to the source current term. Once the initial system matrix is decomposed in LU form, each iteration involves only the trivial step of back-substitution from the LU form. Consequently, the computation time for the nonlinear analysis is greatly reduced. A numerical model of a cylindrical conductor enclosed in saturable iron is tested to validate the proposed algorithm. Numerical results are compared with those obtained using the conventional Newton-Raphson algorithm with respect to accuracy and computational time.
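
    The computational pattern is: factor the system matrix once, then iterate by updating only the right-hand side and back-substituting. A hedged sketch with a toy nonlinearity standing in for the equivalent magnetization current (the finite element physics is not reproduced):

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      n = 200
      A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
           + np.diag(np.full(n - 1, -1.0), -1))
      b0 = np.ones(n)
      lu = lu_factor(A)                   # one O(n^3) factorization

      x = lu_solve(lu, b0)
      for _ in range(20):                 # nonlinear loop: back-substitution only
          b_mag = 0.1 * np.tanh(x)        # stand-in for the magnetization current
          x_new = lu_solve(lu, b0 + b_mag)
          if np.linalg.norm(x_new - x) < 1e-10:
              break
          x = x_new
      print(x[:3])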

  11. Fast and accurate image recognition algorithms for fresh produce food safety sensing

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin; Kang, Sukwon; Lefcourt, Alan M.

    2011-06-01

    This research developed and evaluated the multispectral algorithms derived from hyperspectral line-scan fluorescence imaging under violet LED excitation for detection of fecal contamination on Golden Delicious apples. The algorithms utilized the fluorescence intensities at four wavebands, 680 nm, 684 nm, 720 nm, and 780 nm, for computation of simple functions for effective detection of contamination spots created on the apple surfaces using four concentrations of aqueous fecal dilutions. The algorithms detected more than 99% of the fecal spots. The effective detection of feces showed that a simple multispectral fluorescence imaging algorithm based on violet LED excitation may be appropriate to detect fecal contamination on fast-speed apple processing lines.

  12. AMY-tree: an algorithm to use whole genome SNP calling for Y chromosomal phylogenetic applications

    PubMed Central

    2013-01-01

    Background Due to the rapid progress of next-generation sequencing (NGS) facilities, an explosion of human whole genome data will become available in the coming years. These data can be used to optimize and to increase the resolution of the phylogenetic Y chromosomal tree. Moreover, the exponential growth of known Y chromosomal lineages will require an automatic determination of the phylogenetic position of an individual based on whole genome SNP calling data and an up to date Y chromosomal tree. Results We present an automated approach, ‘AMY-tree’, which is able to determine the phylogenetic position of a Y chromosome using a whole genome SNP profile, independently from the NGS platform and SNP calling program, whereby mistakes in the SNP calling or phylogenetic Y chromosomal tree are taken into account. Moreover, AMY-tree indicates ambiguities within the present phylogenetic tree and points out new Y-SNPs which may be phylogenetically relevant. The AMY-tree software package was validated successfully on 118 whole genome SNP profiles of 109 males with different origins. Moreover, support was found for an unknown recurrent mutation, wrong reported mutation conversions and a large amount of new interesting Y-SNPs. Conclusions Therefore, AMY-tree is a useful tool to determine the Y lineage of a sample based on SNP calling, to identify Y-SNPs with yet unknown phylogenetic position and to optimize the Y chromosomal phylogenetic tree in the future. AMY-tree will not add lineages to the existing phylogenetic tree of the Y-chromosome but it is the first step to analyse whole genome SNP profiles in a phylogenetic framework. PMID:23405914

  13. Quadrant architecture for fast in-place algorithms

    SciTech Connect

    Besslich, P.W.; Kurowski, J.O.

    1983-10-01

    The architecture proposed is tailored to support Radix-2^k based in-place processing of pictorial data. The algorithms make use of signal-flow graphs to describe 2-dimensional in-place operations suitable for image processing. They may be executed on a general-purpose computer but may also be supported by a special parallel architecture. Major advantages of the scheme are in-place processing and parallel access to disjoint sections of memory only. A quadtree-like decomposition of the picture prevents blocking and queuing of private and common buses. 9 references.

  14. Fast Huffman encoding algorithms in MPEG-4 advanced audio coding

    NASA Astrophysics Data System (ADS)

    Brzuchalski, Grzegorz

    2014-11-01

    This paper addresses the optimisation problem of Huffman encoding in the MPEG-4 Advanced Audio Coding standard. First, the Huffman encoding problem and the need to encode two side-info parameters, the scale factor and the Huffman codebook, are presented. Next, the Two Loop Search, Maximum Noise Mask Ratio and Trellis Based bit-allocation algorithms are briefly described. Further, Huffman encoding optimisations are shown. The new methods try to check and change scale factor bands as little as possible when estimating the bitrate cost or its change. Finally, the complexity of the old and new methods is calculated and compared, and the measured encoding times are given.

  15. A fast algorithm for the simulation of arterial pulse waves

    NASA Astrophysics Data System (ADS)

    Du, Tao; Hu, Dan; Cai, David

    2016-06-01

    One-dimensional models have been widely used in studies of the propagation of blood pulse waves in large arterial trees. Under a periodic driving of the heartbeat, traditional numerical methods, such as the Lax-Wendroff method, are employed to obtain asymptotic periodic solutions at large times. However, these methods are severely constrained by the CFL condition due to large pulse wave speed. In this work, we develop a new numerical algorithm to overcome this constraint. First, we reformulate the model system of pulse wave propagation using a set of Riemann variables and derive a new form of boundary conditions at the inlet, the outlets, and the bifurcation points of the arterial tree. The new form of the boundary conditions enables us to design a convergent iterative method to enforce the boundary conditions. Then, after exchanging the spatial and temporal coordinates of the model system, we apply the Lax-Wendroff method in the exchanged coordinate system, which turns the large pulse wave speed from a liability to a benefit, to solve the wave equation in each artery of the model arterial system. Our numerical studies show that our new algorithm is stable and can perform ∼15 times faster than the traditional implementation of the Lax-Wendroff method under the requirement that the relative numerical error of blood pressure be smaller than one percent, which is much smaller than the modeling error.
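
    For orientation, the baseline scheme being accelerated is the Lax-Wendroff step, here for the 1D advection equation u_t + c u_x = 0; its CFL restriction nu = c dt/dx <= 1 is exactly what large pulse wave speeds make painful and what the paper's coordinate exchange circumvents. A hedged sketch (periodic boundaries and parameters are illustrative):

      import numpy as np

      def lax_wendroff_step(u, nu):
          # nu = c*dt/dx is the Courant number; periodic boundaries via roll
          up, um = np.roll(u, -1), np.roll(u, 1)
          return u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2 * u + um)

      x = np.linspace(0, 1, 200, endpoint=False)
      u = np.exp(-200 * (x - 0.3) ** 2)      # initial pulse
      for _ in range(100):
          u = lax_wendroff_step(u, nu=0.8)
      print(float(u.max()))                  # pulse advected with little damping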

  16. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams Colin P.

    1999-01-01

    Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(sqrt(N)) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  17. Fast algorithms for glassy materials: methods and explorations

    NASA Astrophysics Data System (ADS)

    Middleton, A. Alan

    2014-03-01

    Glassy materials with frozen disorder, including random magnets such as spin glasses and interfaces in disordered materials, exhibit striking non-equilibrium behavior such as the ability to store a history of external parameters (memory). Precisely due to their glassy nature, direct simulation of models of these materials is very slow. In some fortunate cases, however, algorithms exist that exactly compute thermodynamic quantities. Such cases include spin glasses in two dimensions and interfaces and random field magnets in arbitrary dimensions at zero temperature. Using algorithms built using ideas developed by computer scientists and mathematicians, one can even directly sample equilibrium configurations in very large systems, as if one picked the configurations out of a ``hat'' of all configurations weighted by their Boltzmann factors. This talk will provide some of the background for these methods and discuss the connections between physics and computer science, as used by a number of groups. Recent applications of these methods to investigating phase transitions in glassy materials and to answering qualitative questions about the free energy landscape and memory effects will be discussed. This work was supported in part by NSF grant DMR-1006731. Creighton Thomas and David Huse also contributed to much of the work to be presented.

  18. Fast computation of Lagrangian coherent structures: algorithms and error analysis

    NASA Astrophysics Data System (ADS)

    Brunton, Steven; Rowley, Clarence

    2009-11-01

    This work investigates a number of efficient methods for computing finite time Lyapunov exponent (FTLE) fields in unsteady flows by approximating the particle flow map and eliminating redundant particle integrations in neighboring flow maps. Ridges of the FTLE fields are Lagrangian coherent structures (LCS) and provide an unsteady analogue of invariant manifolds from dynamical systems theory. The fast methods fall into two categories, unidirectional and bidirectional, depending on whether flow maps in one or both time directions are composed to form an approximate flow map. An error analysis is presented which shows that the unidirectional methods are accurate while the bidirectional methods have significant error which is aligned with the opposite time coherent structures. This relies on the fact that material from the positive time LCS attracts onto the negative time LCS near time-dependent saddle points.

  19. Compiling fast partial derivatives of functions given by algorithms

    SciTech Connect

    Speelpenning, B.

    1980-01-01

    If the gradient of the function y = f(x1, ..., xn) is desired, where f is given by an algorithm Af(x, n, y), most numerical analysts will use numerical differencing. This is a sampling scheme that approximates derivatives by the slope of secants through closely spaced points. Symbolic methods that make full use of the program text of Af should be able to come up with a better way to evaluate the gradient of f. The system Jake described here produces gradients significantly faster than numerical differencing. Jake can handle algorithms Af with arbitrary flow of control. Measurements performed on one particular machine suggest that Jake is faster than numerical differencing for n > 8. Somewhat weaker results were obtained for the problem of computing Jacobians of arbitrary shape.
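
    The contrast with numerical differencing can be made concrete with a small reverse-mode example: the entire gradient falls out of one backward sweep over the recorded computation, rather than n+1 function evaluations. The following toy Python sketch illustrates the principle only; it is not Jake's compiled implementation, and all names are illustrative:

    ```python
    class Var:
        """A value plus the recorded local derivatives of the op that made it."""
        def __init__(self, value, parents=()):
            self.value = value
            self.parents = parents       # pairs (Var, d(output)/d(parent))
            self.grad = 0.0

        def __add__(self, other):
            return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

        def __mul__(self, other):
            return Var(self.value * other.value,
                       ((self, other.value), (other, self.value)))

    def gradient(y):
        """One reverse sweep: propagate dy/dy = 1 in topological order."""
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(y)
        y.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += node.grad * local

    # Gradient of f(x1, x2) = x1*x2 + x2 from the program itself:
    x1, x2 = Var(3.0), Var(4.0)
    y = x1 * x2 + x2
    gradient(y)
    print(x1.grad, x2.grad)              # 4.0 4.0  (df/dx1 = x2, df/dx2 = x1 + 1)
    ```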

  20. Ultra-fast fluence optimization for beam angle selection algorithms

    NASA Astrophysics Data System (ADS)

    Bangert, M.; Ziegenhein, P.; Oelfke, U.

    2014-03-01

    Beam angle selection (BAS) including fluence optimization (FO) is among the most extensive computational tasks in radiotherapy. Precomputed dose influence data (DID) of all considered beam orientations (up to 100 GB for complex cases) has to be handled in the main memory and repeated FOs are required for different beam ensembles. In this paper, the authors describe concepts accelerating FO for BAS algorithms using off-the-shelf multiprocessor workstations. The FO runtime is not dominated by the arithmetic load of the CPUs but by the transportation of DID from the RAM to the CPUs. On multiprocessor workstations, however, the speed of data transportation from the main memory to the CPUs is non-uniform across the RAM; every CPU has a dedicated memory location (node) with minimum access time. We apply a thread node binding strategy to ensure that CPUs only access DID from their preferred node. Ideal load balancing for arbitrary beam ensembles is guaranteed by distributing the DID of every candidate beam equally to all nodes. Furthermore we use a custom sorting scheme of the DID to minimize the overall data transportation. The framework is implemented on an AMD Opteron workstation. One FO iteration comprising dose, objective function, and gradient calculation takes between 0.010 s (9 beams, skull, 0.23 GB DID) and 0.070 s (9 beams, abdomen, 1.50 GB DID). Our overall FO time is < 1 s for small cases, larger cases take ~ 4 s. BAS runs including FOs for 1000 different beam ensembles take ~ 15-70 min, depending on the treatment site. This enables an efficient clinical evaluation of different BAS algorithms.

  1. [An improved fast algorithm for ray casting volume rendering of medical images].

    PubMed

    Tao, Ling; Wang, Huina; Tian, Zhiliang

    2006-10-01

    The ray casting algorithm can obtain better-quality images in volume rendering; however, it demands substantial computing power and renders slowly. Therefore, a new fast algorithm for ray casting volume rendering is proposed in this paper. The algorithm reduces matrix computation by exploiting the matrix transformation characteristics of re-sampling points between the two coordinate systems, thereby accelerating the re-sampling computation. By extending the Bresenham algorithm to three dimensions and using a bounding-box technique, the algorithm avoids sampling empty voxels and greatly improves the efficiency of ray casting. The experimental results show that the improved acceleration algorithm produces images of the required quality while reducing the total number of operations remarkably, speeding up the volume rendering. PMID:17121341
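
    As an illustration of the bounding-box part of such an acceleration (a generic slab test, not the authors' implementation), a ray caster can clip each ray against the volume's bounding box and begin sampling at the entry point, skipping all empty space outside it:

    ```python
    import numpy as np

    def ray_box_entry_exit(origin, direction, box_min, box_max):
        """Slab-method intersection of a ray with an axis-aligned bounding box.

        Returns (t_near, t_far), the parametric entry and exit distances, or
        None if the ray misses the box. Sampling then starts at t_near instead
        of stepping through empty voxels outside the volume.
        """
        origin = np.asarray(origin, dtype=float)
        direction = np.asarray(direction, dtype=float)
        inv = 1.0 / direction            # assumes no zero components for brevity
        t0 = (np.asarray(box_min, dtype=float) - origin) * inv
        t1 = (np.asarray(box_max, dtype=float) - origin) * inv
        t_near = np.max(np.minimum(t0, t1))
        t_far = np.min(np.maximum(t0, t1))
        if t_near > t_far or t_far < 0.0:
            return None
        return max(t_near, 0.0), t_far
    ```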

  2. SIML: A Fast SIMD Algorithm for Calculating LINGO Chemical Similarities on GPUs and CPUs

    PubMed Central

    Haque, Imran S.; Walters, W. Patrick

    2010-01-01

    LINGOs are a holographic measure of chemical similarity based on text comparison of SMILES strings. We present a new algorithm for calculating LINGO similarities amenable to parallelization on SIMD architectures (such as GPUs and vector units of modern CPUs). We show that it is nearly 3 times as fast as existing algorithms on a CPU, and over 80 times faster than existing methods when run on a GPU. PMID:20218693

  3. A preliminary report on the development of MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We describe three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. We present a tensor class for manipulating tensors which allows for tensor multiplication and 'matricization.' We have further added two classes for representing tensors in decomposed format: cp{_}tensor and tucker{_}tensor. We demonstrate the use of these classes by implementing several algorithms that have appeared in the literature.
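
    For readers without MATLAB, the core "matricization" operation has a direct NumPy analogue. The sketch below is an illustration (one common column ordering), not the toolbox's code:

    ```python
    import numpy as np

    def matricize(tensor, mode):
        """Mode-n matricization (unfolding) of an N-way array.

        Rows are indexed by the chosen mode; columns by all remaining modes.
        """
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    X = np.arange(24).reshape(2, 3, 4)
    print(matricize(X, 1).shape)         # (3, 8)
    ```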

  4. Fast algorithm for calculation of the moving tsunami wave height

    NASA Astrophysics Data System (ADS)

    Krivorotko, Olga; Kabanikhin, Sergey

    2014-05-01

    One of the most urgent problems of mathematical tsunami modeling is the estimation of the tsunami wave height as the wave approaches the coastal zone. There are two methods for solving this problem: the Airy-Green formula in the one-dimensional case, $S(x) = S(0)\sqrt[4]{H(0)/H(x)}$, and the numerical solution of an initial-boundary value problem for the linear shallow water equations $$\eta_{tt} = \operatorname{div}(gH(x,y)\,\operatorname{grad}\eta),\quad (x,y,t)\in\Omega_T := \Omega\times(0,T);\qquad \eta|_{t=0} = q(x,y),\quad \eta_t|_{t=0} = 0,\quad (x,y)\in\Omega := (0,L_x)\times(0,L_y);\qquad \eta|_{\partial\Omega_T} = 0. \tag{1}$$ Here $\eta(x,y,t)$ is the vertical displacement of the free water surface, $H(x,y)$ is the depth at the point $(x,y)$, $q(x,y)$ is the initial amplitude of the tsunami wave, and $S(x)$ is the moving tsunami wave height at the point $x$. The main difficulty of tsunami modeling is the very large size of the computational domain $\Omega_T$: calculating the function $\eta(x,y,t)$ of three variables in $\Omega_T$ requires large computing resources. We construct a new algorithm for numerically determining the moving tsunami wave height, based on a kinematic-type approach and the analytical representation (2) below. The wave is supposed to be generated by a seismic fault of the bottom, $\eta(x,y,0) = g(y)\,\theta(x)$, where $\theta(x)$ is the Heaviside theta-function. Let $\tau(x,y)$ be a solution of the eikonal equation $\tau_x^2 + \tau_y^2 = 1/(gH(x,y))$ satisfying the initial conditions $\tau(0,y) = 0$ and $\tau_x(0,y) = (gH(0,y))^{-1/2}$. Introducing the new variables and functions $z = \tau(x,y)$, $u(z,y,t) = \eta_t(x,y,t)$, $b(z,y) = \sqrt{gH(x,y)}$, we obtain from (1) an initial-boundary value problem in the new variables. After some mathematical transformations, the function $u$ has the structure $$u(z,y,t) = S(z,y)\,\theta(t-z) + \tilde{u}(z,y,t). \tag{2}$$ Here $\tilde{u}(z,y,t)$ is a smooth function and $S(z,y)$, the moving wave height, solves a first-order transport problem derived from the transformed equation.
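
    The Airy-Green formula itself is a one-liner; a small sketch (function name mine) shows the quarter-power shoaling it encodes:

    ```python
    import numpy as np

    def airy_green_height(S0, H0, H):
        """Airy-Green shoaling: S(x) = S(0) * (H(0)/H(x))**(1/4).

        S0 is the wave height where the depth is H0; H is the depth
        (scalar or array) at the points of interest.
        """
        return S0 * (H0 / np.asarray(H)) ** 0.25

    # A 1 m wave generated over 4000 m depth reaches ~3.2 m at 40 m depth:
    print(airy_green_height(1.0, 4000.0, 40.0))
    ```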

  5. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.

  6. Comparing precorrected-FFT and fast multipole algorithms for solving three-dimensional potential integral equations

    SciTech Connect

    White, J.; Phillips, J.R.; Korsmeyer, T.

    1994-12-31

    Mixed first- and second-kind surface integral equations with 1/r and ∂/∂n(1/r) kernels are generated by a variety of three-dimensional engineering problems. For such problems, Nystroem-type algorithms cannot be used directly; instead, an expansion for the unknown, rather than for the entire integrand, can be assumed, and the product of the singular kernel and the unknown integrated analytically. Combining such an approach with a Galerkin or collocation scheme for computing the expansion coefficients is a general approach, but it generates dense matrix problems. Recently developed fast algorithms for solving these dense matrix problems have been based on multipole-accelerated iterative methods, in which the fast multipole algorithm is used to rapidly compute the matrix-vector products in a Krylov-subspace-based iterative method. Another approach to rapidly computing the dense matrix-vector products associated with discretized integral equations follows more along the lines of a multigrid algorithm, and involves projecting the surface unknowns onto a regular grid, computing on the grid, and finally interpolating the results from the regular grid back to the surfaces. Here, the authors describe a precorrected-FFT approach which can replace the fast multipole algorithm for accelerating the dense matrix-vector product associated with discretized potential integral equations. The precorrected-FFT method, described below, is an order n log(n) algorithm and is asymptotically slower than the order n fast multipole algorithm. However, initial experimental results indicate the method may have a significant constant-factor advantage for a variety of engineering problems.

  7. General Structure Design for Fast Image Processing Algorithms Based upon FPGA DSP Slice

    NASA Astrophysics Data System (ADS)

    Wasfy, Wael; Zheng, Hong

    Increasing the speed and accuracy of fast image processing algorithms that compute image intensity for low-level 3x3 kernels, which differ in kernel coefficients but share the same parallel calculation method, is the target of this paper. The FPGA is one of the fastest embedded systems that can be used to implement fast image processing algorithms. By using the DSP slice module inside the FPGA, we aim to exploit the DSP slice's advantages: speed, accuracy, a higher number of bits in calculations, and flexibility in the equations it can compute. Using a higher number of bits during algorithm calculations leads to higher accuracy compared with performing the same calculations at a lower bit width; at the same time, keeping FPGA resource usage as low as the algorithm's calculations allow is an important goal. The recommended design therefore uses as few DSP slices as possible while benefiting from their accuracy: 48-bit accuracy in addition and 18 x 18 bit accuracy in multiplication. To validate the design, the Gaussian filter and the Sobel-x edge detector image processing algorithms were implemented. We also compare against another design, described later in this paper, that uses at most 12-bit accuracy in addition and multiplication, to demonstrate the improvements in calculation accuracy and speed.

  8. Preliminary versions of the MATLAB tensor classes for fast algorithm prototyping.

    SciTech Connect

    Bader, Brett William; Kolda, Tamara Gibson

    2004-07-01

    We present the source code for three MATLAB classes for manipulating tensors in order to allow fast algorithm prototyping. A tensor is a multidimensional or N-way array. This is a supplementary report; details on using this code are provided separately in SAND-XXXX.

  9. A fast D.F.T. algorithm using complex integer transforms

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    Winograd (1976) has developed a new class of algorithms which depend heavily on the computation of a cyclic convolution for computing the conventional DFT (discrete Fourier transform); this new algorithm, for a few hundred transform points, requires substantially fewer multiplications than the conventional FFT algorithm. Reed and Truong have defined a special class of finite Fourier-like transforms over GF(q squared), where q = 2 to the p power minus 1 is a Mersenne prime for p = 2, 3, 5, 7, 13, 17, 19, 31, 61. In the present paper it is shown that Winograd's algorithm can be combined with the aforementioned Fourier-like transform to yield a new algorithm for computing the DFT. A fast method for accurately computing the DFT of a sequence of complex numbers of very long transform-lengths is thus obtained.

  10. A Fast Clustering Algorithm for Data with a Few Labeled Instances

    PubMed Central

    Yang, Jinfeng; Xiao, Yong; Wang, Jiabing; Ma, Qianli; Shen, Yanhua

    2015-01-01

    The diameter of a cluster is the maximum intracluster distance between pairs of instances within the same cluster, and the split of a cluster is the minimum distance between instances within the cluster and instances outside the cluster. Given a few labeled instances, this paper includes two aspects. First, we present a simple and fast clustering algorithm with the following property: if the ratio of the minimum split to the maximum diameter (RSD) of the optimal solution is greater than one, the algorithm returns optimal solutions for three clustering criteria. Second, we study the metric learning problem: learn a distance metric to make the RSD as large as possible. Compared with existing metric learning algorithms, one of our metric learning algorithms is computationally efficient: it is a linear programming model rather than a semidefinite programming model used by most of existing algorithms. We demonstrate empirically that the supervision and the learned metric can improve the clustering quality. PMID:25861252

  11. a Fast and Robust Algorithm for Road Edges Extraction from LIDAR Data

    NASA Astrophysics Data System (ADS)

    Qiu, Kaijin; Sun, Kai; Ding, Kou; Shu, Zhen

    2016-06-01

    Fast mapping of roads plays an important role in many geospatial applications, such as infrastructure planning, traffic monitoring, and driver assistance. Extracting various road edges quickly and robustly is a challenging task. In this paper, we present a fast and robust algorithm for automatic road edge extraction from terrestrial mobile LiDAR data. The algorithm is based on a key observation: most road edges exhibit a difference in elevation, and a road edge with a pavement lies at the boundary between two different planes. In our algorithm, we first extract a rough plane based on the RANSAC algorithm, and then multiple refined planes containing only the pavement are extracted from the rough plane. The road edges are extracted based on these refined planes. In practice, the rough and refined planes are often extracted poorly due to rough road surfaces and the varying density of the point cloud. To eliminate the influence of rough surfaces, we use a technique similar to differencing a DSM (digital surface model) and a DTM (digital terrain model), and we also propose a method that resamples the point cloud to a near-uniform density. Experiments show the validity of the proposed method on multiple datasets (e.g., urban road, highway, and some rural roads). We use the same parameters throughout the experiments, and our algorithm achieves real-time processing speeds.
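
    The first step, extracting a rough plane with RANSAC, might look like the following sketch (a generic RANSAC plane fit with my own parameter choices, not the authors' code):

    ```python
    import numpy as np

    def ransac_plane(points, n_iters=200, threshold=0.05, seed=None):
        """Fit a plane to a 3-D point cloud (shape (n, 3)) with RANSAC.

        Returns ((normal, d), inlier_mask) with the plane normal . p + d = 0.
        A rough ground plane found this way can then be refined on the
        pavement points only.
        """
        rng = np.random.default_rng(seed)
        best_mask, best_model = None, None
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(normal)
            if norm < 1e-12:                 # degenerate (collinear) sample
                continue
            normal /= norm
            d = -normal @ sample[0]
            mask = np.abs(points @ normal + d) < threshold
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best_model = mask, (normal, d)
        return best_model, best_mask
    ```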

  12. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks.

    PubMed

    Wang, Ting; Ren, Zhao; Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L; Sweet, Robert A; Wang, Jieru; Chen, Wei

    2016-02-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer's disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named "FastGGM". PMID:26872036

  13. Fast and precise algorithms for calculating offset correction in single photon counting ASICs built in deep sub-micron technologies

    NASA Astrophysics Data System (ADS)

    Maj, P.

    2014-07-01

    An important trend in the design of readout electronics working in the single photon counting mode for hybrid pixel detectors is to minimize the single pixel area without sacrificing its functionality. This is the reason why many digital and analog blocks are made with the smallest, or next to smallest, transistors possible. This causes a matching problem across the whole pixel matrix, which is accepted by designers and, of course, should be corrected with dedicated circuitry which, by the same rule of minimizing devices, itself suffers from mismatch. Therefore, the output of such a correction circuit, controlled by an ultra-small-area DAC, is not only a non-linear function but is also often non-monotonic. As long as it can be used for proper correction of the DC operating points inside each pixel, this is acceptable, but the time required for correction plays an important role in both chip verification and the design of large multi-chip systems. Therefore, we present two algorithms: a precise one and a fast one. The first algorithm is based on the noise-hit profiles obtained during so-called threshold scan procedures. The fast correction procedure is based on a scan of the trim DACs and takes less than a minute in SPC detector systems consisting of several thousand pixels.

  14. FctClus: A Fast Clustering Algorithm for Heterogeneous Information Networks.

    PubMed

    Yang, Jing; Chen, Limin; Zhang, Jianpei

    2015-01-01

    It is important to cluster heterogeneous information networks. A fast clustering algorithm based on an approximate commute time embedding for heterogeneous information networks with a star network schema is proposed in this paper by utilizing the sparsity of heterogeneous information networks. First, a heterogeneous information network is transformed into multiple compatible bipartite graphs from the compatible point of view. Second, the approximate commute time embedding of each bipartite graph is computed using random mapping and a linear time solver. All of the indicator subsets in each embedding simultaneously determine the target dataset. Finally, a general model is formulated by these indicator subsets, and a fast algorithm is derived by simultaneously clustering all of the indicator subsets using the sum of the weighted distances for all indicators for an identical target object. The proposed fast algorithm, FctClus, is shown to be efficient and generalizable and exhibits high clustering accuracy and fast computation speed based on a theoretic analysis and experimental verification. PMID:26090857

  15. Fast phase-added stereogram algorithm for generation of photorealistic 3D content.

    PubMed

    Kang, Hoonjong; Stoykova, Elena; Yoshikawa, Hiroshi

    2016-01-20

    A new phase-added stereogram algorithm for accelerated computation of holograms from a point cloud model is proposed. The algorithm relies on the hologram segmentation, sampling of directional information, and usage of the fast Fourier transform with a finer grid in the spatial frequency domain than is provided by the segment size. The algorithm gives improved quality of reconstruction due to new phase compensation introduced in the segment fringe patterns. The result is finer beam steering leading to high peak intensity and a large peak signal-to-noise ratio in reconstruction. The feasibility of the algorithm is checked by the generation of 3D contents for a color wavefront printer. PMID:26835945

  16. Evaluating the Influence of Quality Control Decisions and Software Algorithms on SNP Calling for the Affymetrix 6.0 SNP Array Platform

    PubMed Central

    de Andrade, Mariza; Atkinson, Elizabeth J.; Bamlet, William R.; Matsumoto, Martha E.; Maharjan, Sooraj; Slager, Susan L.; Vachon, Celine M.; Cunningham, Julie M.; Kardia, Sharon L.R.

    2011-01-01

    Objective Our goal was to evaluate the influence of quality control (QC) decisions using two genotype calling algorithms, CRLMM and Birdseed, designed for the Affymetrix SNP Array 6.0. Methods Various QC options were tried using the two algorithms and comparisons were made on subject and call rate and on association results using two data sets. Results For Birdseed, we recommend using the contrast QC instead of QC call rate for sample QC. For CRLMM, we recommend using the signal-to-noise rate ≥4 for sample QC and a posterior probability of 90% for genotype accuracy. For both algorithms, we recommend calling the genotype separately for each plate, and dropping SNPs with a lower call rate (<95%) before evaluating samples with lower call rates. To investigate whether the genotype calls from the two algorithms impacted the genome-wide association results, we performed association analysis using data from the GENOA cohort; we observed that the number of significant SNPs were similar using either CRLMM or Birdseed. Conclusions Using our suggested workflow both algorithms performed similarly; however, fewer samples were removed and CRLMM took half the time to run our 854 study samples (4.2 h) compared to Birdseed (8.4 h). PMID:21734406

  17. The Block V Receiver fast acquisition algorithm for the Galileo S-band mission

    NASA Technical Reports Server (NTRS)

    Aung, M.; Hurd, W. J.; Buu, C. M.; Berner, J. B.; Stephens, S. A.; Gevargiz, J. M.

    1994-01-01

    A fast acquisition algorithm for the Galileo suppressed carrier, subcarrier, and data symbol signals under low data rate, low signal-to-noise ratio (SNR) and high carrier phase-noise conditions has been developed. The algorithm employs a two-arm fast Fourier transform (FFT) method utilizing both the in-phase and quadrature-phase channels of the carrier. The use of both channels results in an improved SNR in the FFT acquisition, enabling the use of a shorter FFT period over which the carrier instability is expected to be less significant. The use of a two-arm FFT also enables subcarrier and symbol acquisition before carrier acquisition. With the subcarrier and symbol loops locked first, the carrier can be acquired from an even shorter FFT period. Two-arm tracking loops are employed to lock the subcarrier and symbol loops, with loop parameter modification to achieve the final (high) loop SNR in the shortest time possible. The fast acquisition algorithm is implemented in the Block V Receiver (BVR). This article describes the complete algorithm design, the extensive computer simulation work done for verification of the design and the analysis, implementation issues in the BVR, and the acquisition times of the algorithm. In the expected case of the Galileo spacecraft at Jupiter orbit insertion, PD/No equals 14.6 dB-Hz and R(sym) equals 16 symbols per second, and the predicted acquisition time of the algorithm (to attain a 0.2-dB degradation from each loop in the output symbol SNR) is 38 sec.

  18. A fast and high performance multiple data integration algorithm for identifying human disease genes

    PubMed Central

    2015-01-01

    Background Integrating multiple data sources is indispensable for improving disease gene identification, not only because disease genes associated with similar genetic diseases tend to lie close to each other in various biological networks, but also because gene-disease associations are complex. Although various algorithms have been proposed to identify disease genes, their prediction performance and computational time can still be improved. Results In this study, we propose a fast, high-performance multiple-data-integration algorithm for identifying human disease genes. A posterior probability of each candidate gene being associated with individual diseases is calculated using a Bayesian analysis method and a binary logistic regression model. Two prior probability estimation strategies and two feature vector construction methods are developed to test the performance of the proposed algorithm. Conclusions The proposed algorithm not only generates predictions with high AUC scores but also runs very fast. When only a single PPI network is employed, the AUC score is 0.769 using F2 as feature vectors, and the average running time for each leave-one-out experiment is only around 1.5 seconds. When three biological networks are integrated, the AUC score using F3 as feature vectors increases to 0.830, and the average running time for each leave-one-out experiment is only about 12.54 seconds. This is better than many existing algorithms. PMID:26399620

  19. Temporal Gillespie Algorithm: Fast Simulation of Contagion Processes on Time-Varying Networks

    PubMed Central

    Vestergaard, Christian L.; Génois, Mathieu

    2015-01-01

    Stochastic simulations are one of the cornerstones of the analysis of dynamical processes on complex networks, and are often the only accessible way to explore their behavior. The development of fast algorithms is paramount to allow large-scale simulations. The Gillespie algorithm can be used for fast simulation of stochastic processes, and variants of it have been applied to simulate dynamical processes on static networks. However, its adaptation to temporal networks remains non-trivial. We here present a temporal Gillespie algorithm that solves this problem. Our method is applicable to general Poisson (constant-rate) processes on temporal networks, stochastically exact, and up to multiple orders of magnitude faster than traditional simulation schemes based on rejection sampling. We also show how it can be extended to simulate non-Markovian processes. The algorithm is easily applicable in practice, and as an illustration we detail how to simulate both Poissonian and non-Markovian models of epidemic spreading. Namely, we provide pseudocode and its implementation in C++ for simulating the paradigmatic Susceptible-Infected-Susceptible and Susceptible-Infected-Recovered models and a Susceptible-Infected-Recovered model with non-constant recovery rates. For empirical networks, the temporal Gillespie algorithm is here typically from 10 to 100 times faster than rejection sampling. PMID:26517860
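
    The reference implementation is the authors' C++ code mentioned above; the following Python sketch (names and simplifications mine) shows the core idea for a constant-rate SIS process on a sequence of network snapshots, each active for a time dt:

    ```python
    import numpy as np

    def temporal_gillespie_sis(snapshots, dt, beta, mu, infected, seed=None):
        """Temporal Gillespie simulation of SIS on a list of edge-list snapshots.

        snapshots : list of lists of (i, j) edges, each active for a time dt
        beta, mu  : per-contact infection rate and per-node recovery rate
        infected  : initial iterable of infected node indices

        A unit-rate waiting time is drawn once and consumed as the integrated
        total rate accumulates across snapshots, instead of rejection sampling.
        """
        rng = np.random.default_rng(seed)
        infected = set(infected)
        tau = rng.exponential()              # waiting time on the unit-rate scale
        for edges in snapshots:
            remaining = dt                   # time left in this snapshot
            while True:
                contacts = [(i, j) for i, j in edges
                            if (i in infected) != (j in infected)]
                rate = beta * len(contacts) + mu * len(infected)
                if rate * remaining < tau:   # no event before the snapshot ends
                    tau -= rate * remaining
                    break
                remaining -= tau / rate      # advance to the event time
                if rng.random() < beta * len(contacts) / rate:
                    i, j = contacts[rng.integers(len(contacts))]
                    infected.add(i if j in infected else j)
                else:
                    infected.discard(
                        list(infected)[rng.integers(len(infected))])
                tau = rng.exponential()      # draw the next waiting time
        return infected
    ```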

  20. The fast simulated annealing algorithm applied to the search problem in LEED

    NASA Astrophysics Data System (ADS)

    Nascimento, V. B.; de Carvalho, V. E.; de Castilho, C. M. C.; Costa, B. V.; Soares, E. A.

    2001-07-01

    In this work we present new results obtained from the application of the fast simulated annealing (FSA) algorithm to the surface structure determination of the Ag(1 1 0) and CdTe(1 1 0) systems. The influence of a control parameter, the "initial temperature", on the FSA search process was investigated. A scaling behaviour, which measures the efficiency of a search method as a function of the number of parameters to be varied, was obtained for the FSA algorithm and indicates a favourable linear scaling (~N^1).

  1. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    SciTech Connect

    Chen, Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-09-15

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  2. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    PubMed Central

    Chen, Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-01-01

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK’s interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization. PMID:20964213

  3. A fast algorithm for exact sequence search in biological sequences using polyphase decomposition

    PubMed Central

    Srikantha, Abhilash; Bopardikar, Ajit S.; Kaipa, Kalyan Kumar; Venkataraman, Parthasarathy; Lee, Kyusang; Ahn, TaeJin; Narayanan, Rangavittal

    2010-01-01

    Motivation: Exact sequence search allows a user to search for a specific DNA subsequence in a larger DNA sequence or database. It serves as a vital block in many areas such as Pharmacogenetics, Phylogenetics and Personal Genomics. As sequencing of genomic data becomes increasingly affordable, the amount of sequence data that must be processed will also increase exponentially. In this context, fast sequence search algorithms will play an important role in exploiting the information contained in the newly sequenced data. Many existing algorithms do not scale up well for large sequences or databases because of their high-computational costs. This article describes an efficient algorithm for performing fast searches on large DNA sequences. It makes use of hash tables of Q-grams that are constructed after downsampling the database, to enable efficient search and memory use. Time complexity for pattern search is reduced using beam pruning techniques. Theoretical complexity calculations and performance figures are presented to indicate the potential of the proposed algorithm. Contact: s.abhilash@samsung.com; ajit.b@samsung.com PMID:20823301
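
    A minimal sketch of the underlying Q-gram idea (without the paper's downsampling or beam pruning; all names are mine) indexes every Q-gram of the database and verifies candidate hits:

    ```python
    from collections import defaultdict

    def build_qgram_index(text, q):
        """Hash table mapping every q-gram of `text` to its start positions."""
        index = defaultdict(list)
        for i in range(len(text) - q + 1):
            index[text[i:i + q]].append(i)
        return index

    def search(pattern, text, index, q):
        """Exact search: seed with the pattern's first q-gram, then verify."""
        return [i for i in index.get(pattern[:q], ())
                if text[i:i + len(pattern)] == pattern]

    genome = "ACGTACGTTAGC"
    idx = build_qgram_index(genome, 4)
    print(search("ACGTT", genome, idx, 4))   # [4]
    ```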

  4. A fast bilinear structure from motion algorithm using a video sequence and inertial sensors.

    PubMed

    Ramachandran, Mahesh; Veeraraghavan, Ashok; Chellappa, Rama

    2011-01-01

    In this paper, we study the benefits of the availability of a specific form of additional information, namely the vertical direction (gravity) and the height of the camera, both of which can be conveniently measured using inertial sensors, together with a monocular video sequence, for 3D urban modeling. We show that in the presence of this information, the SfM equations can be rewritten in a bilinear form. This allows us to derive a fast, robust, and scalable SfM algorithm for large-scale applications. The SfM algorithm developed in this paper is experimentally demonstrated to have favorable properties compared to the sparse bundle adjustment algorithm. We provide experimental evidence indicating that the proposed algorithm converges in many cases to solutions with lower error than state-of-the-art implementations of bundle adjustment. We also demonstrate that for large reconstruction problems, the proposed algorithm takes less time to reach its solution than bundle adjustment. We also present SfM results using our algorithm on the Google StreetView research data set. PMID:20733224

  5. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors

    PubMed Central

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-01-01

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of contour pixel, based on its local pattern. Then, it traces the next contour using the previous pixel’s type. Therefore, it can classify the type of contour pixels as a straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of the compressing data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms. PMID:27005632

  6. Fast Contour-Tracing Algorithm Based on a Pixel-Following Method for Image Sensors.

    PubMed

    Seo, Jonghoon; Chae, Seungho; Shim, Jinwook; Kim, Dongchul; Cheong, Cheolho; Han, Tack-Don

    2016-01-01

    Contour pixels distinguish objects from the background. Tracing and extracting contour pixels are widely used for smart/wearable image sensor devices, because these are simple and useful for detecting objects. In this paper, we present a novel contour-tracing algorithm for fast and accurate contour following. The proposed algorithm classifies the type of contour pixel, based on its local pattern. Then, it traces the next contour using the previous pixel's type. Therefore, it can classify the type of contour pixels as a straight line, inner corner, outer corner and inner-outer corner, and it can extract pixels of a specific contour type. Moreover, it can trace contour pixels rapidly because it can determine the local minimal path using the contour case. In addition, the proposed algorithm is capable of the compressing data of contour pixels using the representative points and inner-outer corner points, and it can accurately restore the contour image from the data. To compare the performance of the proposed algorithm to that of conventional techniques, we measure their processing time and accuracy. In the experimental results, the proposed algorithm shows better performance compared to the others. Furthermore, it can provide the compressed data of contour pixels and restore them accurately, including the inner-outer corner, which cannot be restored using conventional algorithms. PMID:27005632

  7. A fast, robust algorithm for power line interference cancellation in neural recording

    NASA Astrophysics Data System (ADS)

    Keshtkaran, Mohammad Reza; Yang, Zhi

    2014-04-01

    Objective. Power line interference may severely corrupt neural recordings at 50/60 Hz and harmonic frequencies. The interference is usually non-stationary and can vary in frequency, amplitude and phase. To retrieve the gamma-band oscillations at the contaminated frequencies, it is desired to remove the interference without compromising the actual neural signals at the interference frequency bands. In this paper, we present a robust and computationally efficient algorithm for removing power line interference from neural recordings. Approach. The algorithm includes four steps. First, an adaptive notch filter is used to estimate the fundamental frequency of the interference. Subsequently, based on the estimated frequency, harmonics are generated by using discrete-time oscillators, and then the amplitude and phase of each harmonic are estimated by using a modified recursive least squares algorithm. Finally, the estimated interference is subtracted from the recorded data. Main results. The algorithm does not require any reference signal, and can track the frequency, phase and amplitude of each harmonic. When benchmarked with other popular approaches, our algorithm performs better in terms of noise immunity, convergence speed and output signal-to-noise ratio (SNR). While minimally affecting the signal bands of interest, the algorithm consistently yields fast convergence (<100 ms) and substantial interference rejection (output SNR >30 dB) in different conditions of interference strengths (input SNR from -30 to 30 dB), power line frequencies (45-65 Hz) and phase and amplitude drifts. In addition, the algorithm features a straightforward parameter adjustment since the parameters are independent of the input SNR, input signal power and the sampling rate. A hardware prototype was fabricated in a 65 nm CMOS process and tested. Software implementation of the algorithm has been made available for open access at https://github.com/mrezak/removePLI. Significance. The proposed
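
    A much-simplified sketch of the final estimate-and-subtract step is shown below, assuming a known, constant fundamental frequency; the paper's actual algorithm instead tracks the frequency with an adaptive notch filter and the per-harmonic amplitude/phase with a modified recursive least squares scheme:

    ```python
    import numpy as np

    def remove_line_interference(x, fs, f0=50.0, n_harmonics=3):
        """Least-squares removal of sinusoidal interference at f0 and harmonics.

        x  : 1-D recording, fs : sampling rate in Hz. Fitting cosine/sine
        pairs per harmonic and subtracting them recovers the residual signal.
        """
        t = np.arange(len(x)) / fs
        cols = []
        for k in range(1, n_harmonics + 1):
            cols += [np.cos(2 * np.pi * k * f0 * t),
                     np.sin(2 * np.pi * k * f0 * t)]
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)  # amplitude/phase per harmonic
        return x - A @ coef                           # interference-free residual
    ```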

  8. Contour detection and completion for inpainting and segmentation based on topological gradient and fast marching algorithms.

    PubMed

    Auroux, Didier; Cohen, Laurent D; Masmoudi, Mohamed

    2011-01-01

    We combine in this paper the topological gradient, which is a powerful method for edge detection in image processing, and a variant of the minimal path method in order to find connected contours. The topological gradient provides a more global analysis of the image than the standard gradient and identifies the main edges of an image. Several image processing problems (e.g., inpainting and segmentation) require continuous contours. For this purpose, we consider the fast marching algorithm in order to find minimal paths in the topological gradient image. This coupled algorithm quickly provides accurate and connected contours. We present then two numerical applications, to image inpainting and segmentation, of this hybrid algorithm. PMID:22194734

  9. A Fast Overlapping Community Detection Algorithm with Self-Correcting Ability

    PubMed Central

    Lu, Nan

    2014-01-01

    To address the defects of existing modularity measures, this paper defines a weighted modularity based on density and cohesion as a new evaluation measure. Since the proportion of overlapping nodes in a network is very low, the number of repeat visits to nodes can be reduced by marking vertices with overlapping attributes. In this paper, we propose three test conditions for overlapping nodes and present a fast overlapping community detection algorithm with self-correcting ability, which is decomposed into two processes. Under the control of the overlapping properties, the complexity of the algorithm tends to be approximately linear. We also give a new understanding of the membership vector. Moreover, we improve the bridgeness function, which evaluates the extent to which nodes overlap. Finally, we conduct experiments on three networks with well-known community structures, and the results verify the feasibility and effectiveness of our algorithm. PMID:24757434

  10. FastGGM: An Efficient Algorithm for the Inference of Gaussian Graphical Model in Biological Networks

    PubMed Central

    Ding, Ying; Fang, Zhou; Sun, Zhe; MacDonald, Matthew L.; Sweet, Robert A.; Wang, Jieru; Chen, Wei

    2016-01-01

    Biological networks provide additional information for the analysis of human diseases, beyond the traditional analysis that focuses on single variables. Gaussian graphical model (GGM), a probability model that characterizes the conditional dependence structure of a set of random variables by a graph, has wide applications in the analysis of biological networks, such as inferring interaction or comparing differential networks. However, existing approaches are either not statistically rigorous or are inefficient for high-dimensional data that include tens of thousands of variables for making inference. In this study, we propose an efficient algorithm to implement the estimation of GGM and obtain p-value and confidence interval for each edge in the graph, based on a recent proposal by Ren et al., 2015. Through simulation studies, we demonstrate that the algorithm is faster by several orders of magnitude than the current implemented algorithm for Ren et al. without losing any accuracy. Then, we apply our algorithm to two real data sets: transcriptomic data from a study of childhood asthma and proteomic data from a study of Alzheimer’s disease. We estimate the global gene or protein interaction networks for the disease and healthy samples. The resulting networks reveal interesting interactions and the differential networks between cases and controls show functional relevance to the diseases. In conclusion, we provide a computationally fast algorithm to implement a statistically sound procedure for constructing Gaussian graphical model and making inference with high-dimensional biological data. The algorithm has been implemented in an R package named “FastGGM”. PMID:26872036

  11. Fast randomized Hough transformation track initiation algorithm based on multi-scale clustering

    NASA Astrophysics Data System (ADS)

    Wan, Minjie; Gu, Guohua; Chen, Qian; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    A fast randomized Hough transformation track initiation algorithm based on multi-scale clustering is proposed to overcome problems of traditional infrared search and track (IRST) systems, whose two-dimensional, bearing-only track association algorithms can neither provide movement information on the initial target nor select the correlation threshold automatically. All targets are presumed to move with uniform rectilinear motion throughout this new algorithm. Concepts of spatial random sampling, a dynamic linking table in parameter space, and a convergent mapping from image space to parameter space are developed on the basis of the fast randomized Hough transformation. Because threshold-based peak detection suffers from peak-value clustering, accuracy can only be ensured when the parameter space has an obvious peak. A multi-scale idea is therefore added to the algorithm. First, a primary association selects several candidate tracks using a low threshold. Then, the candidate tracks are processed by multi-scale clustering, through which the number of tracks and their parameters are determined automatically by varying the scale parameter. The first three frames are processed by this algorithm to obtain the first three points of the track, and two slightly different gate radii are computed, whose mean is used as the global correlation threshold. Moreover, a new model for curvilinear equation correction is applied to the track initiation algorithm to solve the problem of shape distortion when a three-dimensional space curve is mapped to a two-dimensional bearing-only space. Using sideways flight, launch, and landing as examples for modeling and simulation, the application of the proposed approach demonstrates its effectiveness, accuracy, and adaptivity.

  12. A fast implementation of the incremental backprojection algorithms for parallel beam geometries

    SciTech Connect

    Chen, C.M.; Wang, C.Y.; Cho, Z.H.

    1996-12-01

    Filtered-backprojection algorithms are the most widely used approaches for the reconstruction of computed tomographic (CT) images, such as X-ray CT and positron emission tomographic (PET) images. The incremental backprojection algorithm is a fast backprojection approach based on restructuring the Shepp and Logan algorithm. By exploiting the interdependency (positions and values) of adjacent pixels, the incremental algorithm requires only O(N) and O(N^2) multiplications, in contrast to O(N^2) and O(N^3) multiplications for the Shepp and Logan algorithm, in two-dimensional (2-D) and three-dimensional (3-D) backprojections, respectively, for each view, where N is the size of the image in each dimension. In addition, it may reduce the number of additions for each pixel computation. The improvement achieved by the incremental algorithm in practice was not, however, as significant as expected. One of the main reasons is the inevitable visiting of pixels outside the beam in the searching flow scheme originally developed for the incremental algorithm. To optimize the implementation of the incremental algorithm, an efficient scheme, namely a coded searching flow scheme, is proposed in this paper to minimize the overhead caused by searching for all pixels in a beam. The key idea of this scheme is to encode the searching flow for all pixels inside each beam. While backprojecting, all pixels may be visited without any overhead by using the coded searching flow as a priori information. The proposed coded searching flow scheme has been implemented on Sun Sparc 10 and Sun Sparc 20 workstations. The implementation results show that the proposed scheme is 1.45-2.0 times faster than the original searching flow scheme for most cases tested.

  13. Statistical iterative reconstruction using fast optimization transfer algorithm with successively increasing factor in Digital Breast Tomosynthesis

    NASA Astrophysics Data System (ADS)

    Xu, Shiyu; Zhang, Zhenxi; Chen, Ying

    2014-03-01

    Statistical iterative reconstruction is particularly promising since it provides the flexibility of accurate physical noise modeling and geometric system description in transmission tomography systems. However, solving the objective function is computationally intensive compared to analytical reconstruction methods, due to the multiple iterations needed for convergence, with each iteration involving forward/back-projections using a complex geometric system model. Optimization transfer (OT) is a general algorithm converting a high-dimensional optimization into parallel 1-D updates. OT-based algorithms provide monotonic convergence and a parallel computing framework but a slower convergence rate, especially around the global optimum. Based on an indirect estimate of the spectrum of the OT convergence rate matrix, we propose a successively-increasing-factor-scaled optimization transfer algorithm that seeks an optimal step size for a faster rate. Compared to a representative OT-based method such as separable parabolic surrogates with pre-computed curvature (PC-SPS), our algorithm provides comparable image quality (IQ) with fewer iterations, while each iteration retains a similar computational cost to PC-SPS. An initial experiment with a simulated Digital Breast Tomosynthesis (DBT) system shows that the proposed algorithm saves 40% of the total computing time. In general, the successively-increasing-factor-scaled OT shows tremendous potential as an iterative method with parallel computation and monotonic, global convergence at a fast rate.

  14. Fast instantaneous center of rotation estimation algorithm for a skid-steered robot

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2015-05-01

    Skid-steered robots are widely used as mobile platforms for machine vision systems. However, it is hard to achieve stable motion of such robots along a desired trajectory due to unpredictable wheel slip. It is possible to compensate for the unpredictable wheel slip and stabilize the motion of the robot using visual odometry. This paper presents a fast optical-flow-based algorithm for estimating the instantaneous center of rotation and the angular and longitudinal speed of the robot. The proposed algorithm is based on the Horn-Schunck variational optical flow estimation method. The instantaneous center of rotation and the motion of the robot are estimated by back-projection of the optical flow field onto the ground surface. The developed algorithm was tested using a skid-steered mobile robot. The robot is based on a mobile platform that includes two pairs of differentially driven motors and a motor controller. A monocular visual odometry system consisting of a single-board computer and a low-cost webcam is mounted on the mobile platform. A state-space model of the robot was derived using standard black-box system identification. The input (commands) and the output (motion) were recorded using a dedicated external motion capture system. The obtained model was used to control the robot without visual odometry data. The paper concludes with an estimate of the algorithm's quality, comparing the trajectories estimated by the algorithm with data from the motion capture system.

  15. Peak detection in fiber Bragg grating using a fast phase correlation algorithm

    NASA Astrophysics Data System (ADS)

    Lamberti, A.; Vanlanduit, S.; De Pauw, B.; Berghmans, F.

    2014-05-01

    The fiber Bragg grating sensing principle is based on exact tracking of the peak wavelength location. Several peak detection techniques have already been proposed in the literature. Among these, conventional peak detection (CPD) methods, such as the maximum detection algorithm (MDA), do not achieve very high precision and accuracy, especially when the signal-to-noise ratio (SNR) and the wavelength resolution are poor. On the other hand, recently proposed algorithms, like the cross-correlation demodulation algorithm (CCA), are more precise and accurate but require higher computational effort. To overcome these limitations, we developed a novel fast phase correlation (FPC) algorithm which performs as well as the CCA while being considerably faster. This paper presents the FPC technique and analyzes its performance for different SNRs and wavelength resolutions. Using simulations and experiments, we compared the FPC with the MDA and CCA algorithms. The FPC detection capabilities were as precise and accurate as those of the CCA and considerably better than those of the CPD. The FPC computational time was up to 50 times lower than that of the CCA, making the FPC a valid candidate for future implementation in real-time systems.
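
    The phase-correlation core can be sketched in a few NumPy lines. This is an integer-sample illustration only; the paper's FPC adds the refinements that make the estimate precise at the sub-sample level:

    ```python
    import numpy as np

    def phase_correlation_shift(ref, spec):
        """Displacement of `spec` relative to `ref`, in whole samples.

        Normalizing the cross-power spectrum to unit magnitude keeps only the
        relative phase, so the inverse FFT peaks at the shift. Sub-sample
        interpolation around the peak is omitted in this sketch.
        """
        cross = np.fft.fft(spec) * np.conj(np.fft.fft(ref))
        cross /= np.abs(cross) + 1e-12       # keep phase only
        corr = np.fft.ifft(cross).real
        shift = int(np.argmax(corr))
        n = len(ref)
        return shift if shift <= n // 2 else shift - n

    # A Gaussian FBG peak shifted by 7 samples is recovered exactly:
    x = np.arange(256)
    ref = np.exp(-0.5 * ((x - 100.0) / 5.0) ** 2)
    print(phase_correlation_shift(ref, np.roll(ref, 7)))   # 7
    ```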

  16. Fast automated yeast cell counting algorithm using bright-field and fluorescence microscopic images

    PubMed Central

    2013-01-01

    Background: The faithful determination of the concentration and viability of yeast cells is important for biological research as well as industry. To this end, it is important to develop an automated cell counting algorithm that can provide fast as well as accurate and precise measurement of yeast cells. Results: With the proposed method, we measured the precision of yeast cell measurements using 0%, 25%, 50%, 75% and 100% viability samples. The viability measured with the proposed yeast cell counting algorithm is significantly correlated with the theoretical viability (R2 = 0.9991). Furthermore, we evaluated the performance of our algorithm on various computing platforms. The results showed that the proposed algorithm is feasible on low-end computing platforms without loss of performance. Conclusions: Our yeast cell counting algorithm can rapidly provide the total number and the viability of yeast cells with exceptional accuracy and precision. Therefore, we believe that our method can be beneficial for a wide variety of academic fields and industries, such as biotechnology, pharmaceutical and alcohol production. PMID:24215650

  17. Fast algorithm for scaling analysis with higher-order detrending moving average method

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Yutaka; Miki, Yuki; Shimatani, Satoshi; Kiyono, Ken

    2016-05-01

    Among scaling analysis methods based on the root-mean-square deviation from the estimated trend, centered detrending moving average (DMA) analysis with a simple moving average has been demonstrated to perform well when characterizing long-range correlation or fractal scaling behavior. Furthermore, higher-order DMA has also been proposed; it has better detrending capability, removing higher-order polynomial trends than the original DMA. However, a straightforward implementation of higher-order DMA requires a very high computational cost, which would prevent practical use of the method. To solve this issue, in this study we introduce a fast algorithm for higher-order DMA that consists of two techniques: (1) parallel translation of the moving-averaging windows by a fixed interval; (2) recurrence formulas for the calculation of the summations. Our algorithm significantly reduces the computational cost: Monte Carlo experiments show that its computational time is approximately proportional to the data length, whereas that of the conventional algorithm is proportional to the square of the data length. The efficiency of our algorithm is also shown by a systematic study of the performance of higher-order DMA, such as the range of detectable scaling exponents and the detrending capability for removing polynomial trends. In addition, through the analysis of heart-rate-variability time series, we discuss possible applications of higher-order DMA.
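
    The recurrence formulas in the paper are specific to higher-order DMA, but the underlying idea, replacing per-window summation with prefix sums so that each window costs O(1) instead of O(n), can be shown for the simple moving average used by centered DMA:

      import numpy as np

      def moving_average_fast(x, n):
          """Centered moving average in O(N) via prefix sums: each window
          total is a difference of two cumulative sums, so the cost per
          window no longer grows with the window size n (n odd)."""
          c = np.cumsum(np.concatenate([[0.0], x]))
          h = n // 2
          out = np.full(x.size, np.nan)       # edges left undefined
          out[h:x.size - h] = (c[n:] - c[:-n]) / n
          return out

      x = np.random.randn(10**6)
      ma = moving_average_fast(x, 101)        # runtime independent of window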

  18. A fast rank-reduction algorithm for three-dimensional seismic data interpolation

    NASA Astrophysics Data System (ADS)

    Jia, Yongna; Yu, Siwei; Liu, Lina; Ma, Jianwei

    2016-09-01

    Rank-reduction methods have been used successfully for seismic data interpolation and noise attenuation. However, highly intensive computation is required for the singular value decomposition (SVD) in most rank-reduction methods. In this paper, we propose a simple yet efficient interpolation algorithm, based on the Hankel matrix, for randomly missing traces. Following the multichannel singular spectrum analysis (MSSA) technique, we first transform the seismic data into a low-rank block Hankel matrix for each frequency slice. Then, a fast orthogonal rank-one matrix pursuit (OR1MP) algorithm is employed to minimize the low-rank constraint of the block Hankel matrix. In the new algorithm, only the left and right top singular vectors need to be computed, thereby avoiding the cost of a full SVD and improving the calculation efficiency significantly. Finally, we anti-average the rank-reduced block Hankel matrix and obtain the reconstructed data in the frequency domain. Numerical experiments on 3D seismic data show that the proposed interpolation algorithm provides much better performance than the traditional MSSA algorithm in computational speed, especially for large-scale data processing.
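
    As a small illustration of the first step, one frequency slice can be embedded into a Hankel matrix with standard tools (a 1-D sketch of the embedding only, not the full multichannel block construction or the OR1MP solver):

      import numpy as np
      from scipy.linalg import hankel

      def hankel_embed(freq_slice):
          """Embed one frequency slice (one complex value per trace) into
          a Hankel matrix whose rank is low for complete, noise-free data."""
          n = freq_slice.size
          k = n // 2 + 1
          return hankel(freq_slice[:k], freq_slice[k - 1:])

      d = np.random.randn(11) + 1j * np.random.randn(11)   # toy slice
      H = hankel_embed(d)   # 6 x 6; rank reduction would truncate its SVD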

  19. Fast dose algorithm for generation of dose coverage probability for robustness analysis of fractionated radiotherapy

    NASA Astrophysics Data System (ADS)

    Tilly, David; Ahnesjö, Anders

    2015-07-01

    A fast algorithm is constructed to facilitate dose calculation for a large number of randomly sampled treatment scenarios, each representing a possible realisation of a full treatment with geometric, fraction specific displacements for an arbitrary number of fractions. The algorithm is applied to construct a dose volume coverage probability map (DVCM) based on dose calculated for several hundred treatment scenarios to enable the probabilistic evaluation of a treatment plan. For each treatment scenario, the algorithm calculates the total dose by perturbing a pre-calculated dose, separately for the primary and scatter dose components, for the nominal conditions. The ratio of the scenario specific accumulated fluence, and the average fluence for an infinite number of fractions is used to perturb the pre-calculated dose. Irregularities in the accumulated fluence may cause numerical instabilities in the ratio, which is mitigated by regularisation through convolution with a dose pencil kernel. Compared to full dose calculations the algorithm demonstrates a speedup factor of ~1000. The comparisons to full calculations show a 99% gamma index (2%/2 mm) pass rate for a single highly modulated beam in a virtual water phantom subject to setup errors during five fractions. The gamma comparison shows a 100% pass rate in a moving tumour irradiated by a single beam in a lung-like virtual phantom. DVCM iso-probability lines computed with the fast algorithm, and with full dose calculation for each of the fractions, for a hypo-fractionated prostate case treated with rotational arc therapy treatment were almost indistinguishable.

  20. Fast dose algorithm for generation of dose coverage probability for robustness analysis of fractionated radiotherapy.

    PubMed

    Tilly, David; Ahnesjö, Anders

    2015-07-21

    A fast algorithm is constructed to facilitate dose calculation for a large number of randomly sampled treatment scenarios, each representing a possible realisation of a full treatment with geometric, fraction specific displacements for an arbitrary number of fractions. The algorithm is applied to construct a dose volume coverage probability map (DVCM) based on dose calculated for several hundred treatment scenarios to enable the probabilistic evaluation of a treatment plan. For each treatment scenario, the algorithm calculates the total dose by perturbing a pre-calculated dose, separately for the primary and scatter dose components, for the nominal conditions. The ratio of the scenario specific accumulated fluence, and the average fluence for an infinite number of fractions is used to perturb the pre-calculated dose. Irregularities in the accumulated fluence may cause numerical instabilities in the ratio, which is mitigated by regularisation through convolution with a dose pencil kernel. Compared to full dose calculations the algorithm demonstrates a speedup factor of ~1000. The comparisons to full calculations show a 99% gamma index (2%/2 mm) pass rate for a single highly modulated beam in a virtual water phantom subject to setup errors during five fractions. The gamma comparison shows a 100% pass rate in a moving tumour irradiated by a single beam in a lung-like virtual phantom. DVCM iso-probability lines computed with the fast algorithm, and with full dose calculation for each of the fractions, for a hypo-fractionated prostate case treated with rotational arc therapy treatment were almost indistinguishable. PMID:26118844
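
    A minimal sketch of the perturbation step described above, with hypothetical array names, a Gaussian filter standing in for the pencil-kernel regularisation, and the stronger smoothing of the scatter component an assumption of this sketch rather than a detail taken from the paper:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def scenario_dose(d_primary, d_scatter, f_scenario, f_expected, sigma=2.0):
          """Perturb pre-calculated primary and scatter dose components by
          the ratio of scenario-accumulated to expected fluence; the ratio
          is smoothed to tame the numerical instabilities noted above."""
          ratio = f_scenario / np.maximum(f_expected, 1e-9)
          r_p = gaussian_filter(ratio, sigma)         # pencil-kernel stand-in
          r_s = gaussian_filter(ratio, 3.0 * sigma)   # assumed: scatter varies
                                                      # more smoothly
          return d_primary * r_p + d_scatter * r_s

      d_p, d_s = np.random.rand(64, 64), 0.1 * np.random.rand(64, 64)
      f_scn, f_exp = np.random.rand(64, 64) + 0.5, np.ones((64, 64))
      d_total = scenario_dose(d_p, d_s, f_scn, f_exp)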

  1. A Fast and Precise Indoor Localization Algorithm Based on an Online Sequential Extreme Learning Machine

    PubMed Central

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-01

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demand for location-based services (LBS) in indoor environments. WiFi technology has been studied and explored for providing indoor positioning services for years, in view of the wide deployment and availability of existing WiFi infrastructure in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive manpower and time costs of the offline site survey, and inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address these problems. The fast learning speed of OS-ELM reduces the time and manpower costs of the offline site survey, while its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, were conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm provides higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427

  2. A hierarchical algorithm for fast Debye summation with applications to small angle scattering.

    PubMed

    Gumerov, Nail A; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani

    2012-09-30

    Debye summation, which involves the summation of sinc functions of the distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide-angle X-ray scattering (SAXS/WAXS), and small-angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small-angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations unless an error bound derived in this article is used. Our theoretical and computational results show orders-of-magnitude improvement in computational complexity over existing methods, while maintaining the prescribed accuracy. PMID:22707386
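
    For contrast with the linear-time algorithm, the direct quadratic-cost Debye sum is short enough to state in a few lines (a naive baseline, not the authors' method):

      import numpy as np

      def debye_profile(coords, f, q):
          """Direct Debye sum I(q) = sum_ij f_i f_j sin(q r_ij)/(q r_ij).
          Cost is O(N^2) per q value; this is the quadratic baseline that
          the hierarchical, FMM-like algorithm reduces to linear time."""
          diff = coords[:, None, :] - coords[None, :, :]
          r = np.linalg.norm(diff, axis=-1)
          # np.sinc(t) = sin(pi t)/(pi t), so sin(x)/x = np.sinc(x/pi)
          return np.array([f @ np.sinc(qk * r / np.pi) @ f for qk in q])

      coords = np.random.rand(500, 3) * 50.0   # toy coordinates (angstroms)
      f = np.ones(500)                         # unit form factors
      q = np.linspace(0.01, 0.5, 64)
      profile = debye_profile(coords, f, q)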

  3. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine.

    PubMed

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-01

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demand for location-based services (LBS) in indoor environments. WiFi technology has been studied and explored for providing indoor positioning services for years, in view of the wide deployment and availability of existing WiFi infrastructure in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive manpower and time costs of the offline site survey, and inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address these problems. The fast learning speed of OS-ELM reduces the time and manpower costs of the offline site survey, while its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, were conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm provides higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427

  4. Fast Parallel MR Image Reconstruction via B1-based, Adaptive Restart, Iterative Soft Thresholding Algorithms (BARISTA)

    PubMed Central

    Noll, Douglas C.; Fessler, Jeffrey A.

    2014-01-01

    Sparsity-promoting regularization is useful for combining compressed sensing assumptions with parallel MRI to reduce scan time while preserving image quality. Variable splitting algorithms are the current state of the art for SENSE-type MR image reconstruction with sparsity-promoting regularization. These methods are very general and have been observed to work with almost any regularizer; however, the tuning of the associated convergence parameters is a commonly cited hindrance to their adoption. Conversely, majorize-minimize algorithms based on a single Lipschitz constant have been observed to be slow in shift-variant applications such as SENSE-type MR image reconstruction, since the associated Lipschitz constants are loose bounds on the shift-variant behavior. This paper bridges the gap between the Lipschitz constant and the shift-variant aspects of SENSE-type MR imaging by introducing majorizing matrices in the range of the regularizer matrix. The proposed majorize-minimize methods (called BARISTA) converge faster than state-of-the-art variable splitting algorithms when combined with momentum acceleration and adaptive momentum restarting. Furthermore, the tuning parameters associated with the proposed methods are unitless convergence tolerances, which are easier to choose than the constraint penalty parameters required by variable splitting algorithms. PMID:25330484
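
    BARISTA's contribution is the shift-variant majorizing matrix, but the momentum-plus-adaptive-restart machinery it builds on can be sketched with the ordinary scalar-Lipschitz proximal gradient iteration and the gradient restart test of O'Donoghue and Candes (a generic sketch, not the paper's code):

      import numpy as np

      def fista_restart(grad, prox, x0, L, n_iter=200):
          """Proximal gradient iteration with momentum and gradient-based
          adaptive restart. BARISTA replaces the scalar Lipschitz constant
          L with a shift-variant majorizing matrix; the scalar version is
          kept here for brevity."""
          x, y, t = x0.copy(), x0.copy(), 1.0
          for _ in range(n_iter):
              g = grad(y)
              x_new = prox(y - g / L, 1.0 / L)
              if g @ (x_new - x) > 0:           # momentum opposes descent:
                  t, y = 1.0, x_new             # restart it
              else:
                  t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                  y = x_new + ((t - 1.0) / t_new) * (x_new - x)
                  t = t_new
              x = x_new
          return x

      # l1-regularized least squares as a toy problem
      soft = lambda z, s: np.sign(z) * np.maximum(np.abs(z) - s, 0.0)
      A, b = np.random.randn(100, 400), np.random.randn(100)
      L = np.linalg.norm(A, 2) ** 2
      x = fista_restart(lambda y: A.T @ (A @ y - b),
                        lambda z, s: soft(z, 0.1 * s),
                        np.zeros(400), L)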

  5. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-07-01

    In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described as groups of boundaries of contour segments, and their interior points are derived using the contour equation. The locations of the contour boundaries and the interior points on the contour surfaces are as accurate as the interpolation results obtained with hexahedral elements, so there are no discrepancies between the analysis and visualization results.

  6. A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations

    SciTech Connect

    Ouyang, G; Jandhyala, V; Champagne, N; Sharpe, R; Fasenfest, B J; Rockway, J D

    2004-12-14

    An Asymptotic Waveform Evaluation (AWE) technique is implemented in the EIGER computational electromagnetics code. The AWE fast frequency sweep is formed by separating the components of the integral equations by frequency dependence, then using this information to find a rational-function approximation of the results. The standard AWE method is generalized to work with several integral equations, including the EFIE for conductors and the PMCHWT formulation for dielectrics. The method is also extended to two types of coupled circuit-EM problems, as well as to lumped-load circuit elements. After a simple bisecting adaptive sweep algorithm is developed, dramatic speed improvements are seen for several example problems.

  7. Algorithms for Accurate and Fast Plotting of Contour Surfaces in 3D Using Hexahedral Elements

    NASA Astrophysics Data System (ADS)

    Singh, Chandan; Saini, Jaswinder Singh

    2016-05-01

    In the present study, fast and accurate algorithms for the generation of contour surfaces in 3D are described using hexahedral elements, which are popular in finite element analysis. The contour surfaces are described as groups of boundaries of contour segments, and their interior points are derived using the contour equation. The locations of the contour boundaries and the interior points on the contour surfaces are as accurate as the interpolation results obtained with hexahedral elements, so there are no discrepancies between the analysis and visualization results.

  8. Lazy skip-lists: An algorithm for fast hybridization-expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Sémon, P.; Yee, Chuck-Hou; Haule, Kristjan; Tremblay, A.-M. S.

    2014-08-01

    The solution of a generalized impurity model lies at the heart of electronic structure calculations with dynamical mean field theory. In the strongly correlated regime, the method of choice for solving the impurity model is the hybridization-expansion continuous-time quantum Monte Carlo (CT-HYB). Enhancements to the CT-HYB algorithm are critical for bringing new physical regimes within reach of current computational power. Taking advantage of the fact that the bottleneck in the algorithm is a product of hundreds of matrices, we present optimizations based on the introduction and combination of two concepts of more general applicability: (a) skip lists and (b) fast rejection of proposed configurations based on matrix bounds. Considering two very different test cases with d electrons, we find speedups of ˜25 up to ˜500 compared to the direct evaluation of the matrix product. Even larger speedups are likely with f electron systems and with clusters of correlated atoms.

  9. ADaM: augmenting existing approximate fast matching algorithms with efficient and exact range queries

    PubMed Central

    2014-01-01

    Background: Drug discovery, disease detection, and personalized medicine are fast-growing areas of genomic research. With the advancement of next-generation sequencing techniques, researchers can obtain an abundance of data for many different biological assays in a short period of time. When this data is error-free, the result is a high-quality base-pair-resolution picture of the genome. However, when the data is lossy, the heuristic algorithms currently used to align next-generation sequences cause the corresponding accuracy to drop. Results: This paper describes a program, ADaM (APF DNA Mapper), which significantly increases final alignment accuracy. ADaM works by first using an existing program to align "easy" sequences, and then using an algorithm with accuracy guarantees (the APF) to align the remaining sequences. The final result is a technique that increases the mapping accuracy from only 60% to over 90% for harder-to-align sequences. PMID:25079667

  10. A fast high-order finite difference algorithm for pricing American options

    NASA Astrophysics Data System (ADS)

    Tangman, D. Y.; Gopaul, A.; Bhuruth, M.

    2008-12-01

    We describe an improvement of Han and Wu's algorithm [H. Han, X. Wu, A fast numerical method for the Black-Scholes equation of American options, SIAM J. Numer. Anal. 41 (6) (2003) 2081-2095] for American options. A high-order optimal compact scheme is used to discretize the transformed Black-Scholes PDE under a singularity-separating framework. A more accurate free-boundary location based on the smooth pasting condition, together with a non-uniform grid and a modified tridiagonal solver, leads to an efficient implementation of the free boundary value problem. Extensive numerical experiments show that the new finite difference algorithm converges rapidly and yields numerical solutions with good accuracy. Comparisons with some recently proposed methods for the American options problem demonstrate the advantage of our numerical method.

  11. Hessian Schatten-norm regularization for CBCT image reconstruction using fast iterative shrinkage-thresholding algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xinxin; Wang, Jiang; Tan, Shan

    2015-03-01

    Statistical iterative reconstruction in cone-beam computed tomography (CBCT) uses prior knowledge to form various regularization terms. Total variation (TV) regularization has shown state-of-the-art performance in suppressing noise and preserving edges; however, it produces the well-known staircase effect. In this paper, a method involving second-order differential operators is employed to avoid the staircase effect: higher-order derivatives avoid over-sharpening regions of smooth intensity transition. A fast iterative shrinkage-thresholding algorithm is used for the corresponding optimization problem. The proposed Hessian Schatten-norm regularization keeps many of the favorable properties of TV, such as translation and scale invariance, while getting rid of the staircase effect that appears in TV-based reconstructions. The experiments demonstrate the outstanding ability of the proposed algorithm over the TV method, especially in suppressing the staircase effect.

  12. Fast algorithm of 3D median filter for medical image despeckling

    NASA Astrophysics Data System (ADS)

    Xiong, Chengyi; Hou, Jianhua; Gao, Zhirong; He, Xiang; Chen, Shaoping

    2007-12-01

    Three-dimensional (3-D) median filtering is very useful for eliminating speckle noise from medical imaging sources such as functional magnetic resonance imaging (fMRI) and ultrasonic imaging. 3-D median filtering is characterized by high computational complexity: N³(N³−1)/2 comparison operations are required for 3-D median filtering with an N×N×N window if the conventional bubble-sort algorithm is adopted. In this paper, an efficient fast algorithm for 3-D median filtering is presented, which considerably reduces the computational complexity of extracting the median of a 3-D data array. Compared to the state of the art, the proposed method reduces the computational complexity of 3-D median filtering by 33%. This efficiently reduces the system delay of the 3-D median filter in software implementations, and the system cost and power consumption in hardware implementations.
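
    The 33% reduction comes from a bespoke comparison network; for reference, the same despeckling operation is available off the shelf, which is handy for prototyping even though it does not implement the paper's algorithm:

      import numpy as np
      from scipy.ndimage import median_filter

      vol = np.random.rand(64, 64, 64).astype(np.float32)  # toy 3-D volume
      den = median_filter(vol, size=3)   # 3x3x3 median window over the volume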

  13. A fast two-step algorithm for invasion percolation with trapping

    NASA Astrophysics Data System (ADS)

    Masson, Yder

    2016-05-01

    I present a fast algorithm for modeling invasion percolation (IP) with trapping (TIP). IP is a numerical algorithm that models quasi-static (i.e. slow) fluid invasion in porous media. Trapping occurs when the invading fluid (which is injected) forms continuous surfaces surrounding patches of the displaced fluid (which is assumed incompressible and originally saturates the invaded medium). In TIP, the invading fluid is not allowed to enter the trapped patches. I demonstrate that TIP can be modeled in two steps: (1) run an IP simulation without trapping (NTIP); (2) identify the sites that invaded trapped regions and remove them from the chronological list of sites invaded in NTIP. Fast algorithms exist for solving NTIP, so the focus of this paper is an efficient solution for step (2). I show that it can be solved using a disjoint-set data structure and going backward in time, i.e. by un-invading all sites invaded in NTIP in reverse order. Time reversal of the invasion greatly reduces the computational complexity of identifying trapped sites, as one only needs to investigate sites neighboring the latest invaded/un-invaded site. This differs from traditional approaches where trapping is handled in real time, i.e. as the IP simulation is running, and where it is sometimes necessary to scan the whole lattice to identify newly trapped regions. With the proposed algorithm, the total computational time for the identification and removal of trapped sites grows as O(N), where N is the total number of sites in the lattice.
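
    Step (2) hinges on merging each un-invaded site with its already-processed neighbours and asking whether a patch still touches the outlet. A disjoint-set (union-find) structure answers such connectivity queries in near-constant amortized time; a minimal version follows (a sketch of the data structure only, not the full TIP bookkeeping):

      class DisjointSet:
          """Union-find with path halving, the structure used while
          un-invading sites in reverse chronological order."""
          def __init__(self, n):
              self.parent = list(range(n))
          def find(self, i):
              while self.parent[i] != i:
                  self.parent[i] = self.parent[self.parent[i]]
                  i = self.parent[i]
              return i
          def union(self, i, j):
              ri, rj = self.find(i), self.find(j)
              if ri != rj:
                  self.parent[ri] = rj

      ds = DisjointSet(9)                 # toy 3x3 lattice, sites 0..8
      ds.union(0, 1); ds.union(1, 2)      # merge a patch of sites
      print(ds.find(0) == ds.find(2))     # True: sites 0 and 2 are connected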

  14. Fast parallel algorithms and enumeration techniques for partial k-trees

    SciTech Connect

    Narayanan, C.

    1989-01-01

    Recent research by several authors has resulted in systematic ways of developing linear-time sequential algorithms for a host of problems on a fairly general class of graphs variously known as bounded decomposable graphs, graphs of bounded treewidth, partial k-trees, etc. Partial k-trees arise in a variety of real-life applications such as network reliability, VLSI design and database systems, and hence fast sequential algorithms on these graphs are desirable. The linear-time methodologies were independently developed by Bern, Lawler, and Wong (10), Arnborg and Proskurowski (6), Bodlaender (14), and Courcelle (25). Wimer (89) significantly extended the work of Bern, Lawler and Wong. All of these approaches share the common thread of using dynamic programming on a tree structure; in particular, the methodology of Wimer uses a parse tree as the data structure. These methodologies yield linear-time algorithms on partial k-trees, for fixed k, for a number of combinatorial optimization problems, given the tree structure as input. It is known that obtaining the tree structure is NP-hard. This dissertation investigates three important classes of problems: (1) developing parallel algorithms for constructing a k-tree embedding, finding a tree decomposition and, most notably, obtaining a parse tree for a partial k-tree; (2) developing parallel algorithms for parse-tree computations, testing isomorphism of k-trees, and finding a 2-tree embedding of a cactus; (3) obtaining techniques for counting vertex/edge subsets satisfying certain properties in some classes of partial k-trees. The parallel algorithms the author has developed are in class NC and are either new or improve upon the existing results of Bodlaender (13). The difference equations he has obtained for counting certain subgraphs were previously unknown in the literature.

  15. Adaptation of a Fast Optimal Interpolation Algorithm to the Mapping of Oceangraphic Data

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris; Fieguth, Paul; Wunsch, Carl; Willsky, Alan

    1997-01-01

    A fast, recently developed, multiscale optimal interpolation algorithm has been adapted to the mapping of hydrographic and other oceanographic data. This algorithm produces solution and error estimates which are consistent with those obtained from exact least squares methods, but at a small fraction of the computational cost. Problems whose solution would be completely impractical using exact least squares, that is, problems with tens or hundreds of thousands of measurements and estimation grid points, can easily be solved on a small workstation using the multiscale algorithm. In contrast to methods previously proposed for solving large least squares problems, our approach provides estimation error statistics while permitting long-range correlations, using all measurements, and permitting arbitrary measurement locations. The multiscale algorithm itself, published elsewhere, is not the focus of this paper; however, the algorithm requires statistical models having a very particular multiscale structure, and it is the development of a class of multiscale statistical models appropriate for oceanographic mapping problems with which we concern ourselves here. The approach is illustrated by mapping temperature in the northeastern Pacific. The number of hydrographic stations is kept deliberately small to show that multiscale and exact least squares results are comparable. A portion of the data were not used in the analysis; these data serve to test the multiscale estimates. A major advantage of the present approach is the ability to repeat the estimation procedure a large number of times for sensitivity studies, parameter estimation, and model testing. We have made available by anonymous FTP a set of MATLAB-callable routines which implement the multiscale algorithm and the statistical models developed in this paper.

  16. Parallelization of an Adaptive Multigrid Algorithm for Fast Solution of Finite Element Structural Problems

    SciTech Connect

    Crane, N K; Parsons, I D; Hjelmstad, K D

    2002-03-21

    Adaptive mesh refinement selectively subdivides the elements of a coarse user supplied mesh to produce a fine mesh with reduced discretization error. Effective use of adaptive mesh refinement coupled with an a posteriori error estimator can produce a mesh that solves a problem to a given discretization error using far fewer elements than uniform refinement. A geometric multigrid solver uses increasingly finer discretizations of the same geometry to produce a very fast and numerically scalable solution to a set of linear equations. Adaptive mesh refinement is a natural method for creating the different meshes required by the multigrid solver. This paper describes the implementation of a scalable adaptive multigrid method on a distributed memory parallel computer. Results are presented that demonstrate the parallel performance of the methodology by solving a linear elastic rocket fuel deformation problem on an SGI Origin 3000. Two challenges must be met when implementing adaptive multigrid algorithms on massively parallel computing platforms. First, although the fine mesh for which the solution is desired may be large and scaled to the number of processors, the multigrid algorithm must also operate on much smaller fixed-size data sets on the coarse levels. Second, the mesh must be repartitioned as it is adapted to maintain good load balancing. In an adaptive multigrid algorithm, separate mesh levels may require separate partitioning, further complicating the load balance problem. This paper shows that, when the proper optimizations are made, parallel adaptive multigrid algorithms perform well on machines with several hundreds of processors.

  17. A Fast Cluster Motif Finding Algorithm for ChIP-Seq Data Sets.

    PubMed

    Zhang, Yipu; Wang, Ping

    2015-01-01

    The new high-throughput technique ChIP-seq, coupling chromatin immunoprecipitation experiments with high-throughput sequencing technologies, has extended the identification of the binding locations of a transcription factor to genome-wide regions. However, most existing motif discovery algorithms are time-consuming and limited in identifying binding motifs in ChIP-seq data, which is typically large-scale. In order to improve efficiency, we propose a fast cluster motif finding algorithm, named FCmotif, to identify (l, d) motifs in large-scale ChIP-seq data sets. It is inspired by the emerging substring mining strategy: the enriched substrings are found first, and the neighborhood instances are then searched to construct PWMs and cluster motifs of different lengths. FCmotif does not follow the OOPS model constraint and can find long motifs. The effectiveness of the proposed algorithm has been demonstrated by experiments on ChIP-seq data sets from mouse ES cells; detection of the real binding motifs and processing of the full-size data of several megabytes finished in a few minutes. The experimental results show that FCmotif is advantageous for (l, d) motif finding in ChIP-seq data; it also demonstrates better performance than other widely used algorithms such as MEME, Weeder, ChIPMunk, and DREME. PMID:26236718
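
    The elementary test inside any (l, d) motif search is whether two l-mers differ in at most d positions; a direct implementation with early exit:

      def hamming_within(a, b, d):
          """True if the equal-length strings a and b differ in at most
          d positions, the basic (l, d) neighbourhood test."""
          mismatches = 0
          for x, y in zip(a, b):
              mismatches += x != y
              if mismatches > d:
                  return False
          return True

      print(hamming_within("ACGTACGT", "ACGAACGA", 2))   # True (2 mismatches)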

  18. Combined algorithmic and GPU acceleration for ultra-fast circular conebeam backprojection

    NASA Astrophysics Data System (ADS)

    Brokish, Jeffrey; Sack, Paul; Bresler, Yoram

    2010-04-01

    In this paper, we describe the first implementation and performance of a fast O(N³ log N) hierarchical backprojection algorithm for cone-beam CT with a circular trajectory, developed on a modern Graphics Processing Unit (GPU). The resulting tomographic backprojection system for 3D cone-beam geometry combines the speedup from the algorithmic improvements provided by the hierarchical backprojection algorithm with the speedup from a massively parallel hardware accelerator. For data parameters typical of diagnostic CT and using a mid-range GPU card, we report reconstruction speeds of up to 360 frames per second, a relative speedup of almost 6x compared to conventional backprojection on the same hardware. The significance of these results is twofold. First, they demonstrate that the reduction in operation counts demonstrated previously for the FHBP algorithm can be translated into a comparable run-time improvement in a massively parallel hardware implementation, while preserving stringent diagnostic image quality. Second, the dramatic speedup and throughput numbers achieved indicate the feasibility of systems based on this technology that achieve real-time 3D reconstruction for state-of-the-art diagnostic CT scanners with a small footprint, high reliability, and affordable cost.

  19. A Fast Hyperplane-Based Minimum-Volume Enclosing Simplex Algorithm for Blind Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Lin, Chia-Hsiang; Chi, Chong-Yung; Wang, Yu-Hsiang; Chan, Tsung-Han

    2016-04-01

    Hyperspectral unmixing (HU) is a crucial signal processing procedure for identifying the underlying materials (or endmembers) and their corresponding proportions (or abundances) in an observed hyperspectral scene. A well-known blind HU criterion, advocated by Craig in the early 1990s, considers the vertices of the minimum-volume enclosing simplex of the data cloud to be good endmember estimates, and it has been found, both empirically and theoretically, to be effective even in scenarios without pure pixels. However, such algorithms may suffer from heavy simplex volume computations in numerical optimization. In this work, without involving any simplex volume computations, and exploiting the convex-geometry fact that a simplest simplex of N vertices can be defined by N associated hyperplanes, we propose a fast blind HU algorithm in which each of the N hyperplanes associated with Craig's simplex of N vertices is constructed from N−1 affinely independent data pixels, together with an endmember identifiability analysis for performance support. Without resorting to numerical optimization, the devised algorithm searches for the N(N−1) active data pixels via simple linear-algebraic computations, which accounts for its computational efficiency. Monte Carlo simulations and real-data experiments demonstrate its superior efficacy over some benchmark Craig-criterion-based algorithms in both computational efficiency and estimation accuracy.

  20. The backpropagation algorithm in J, a fast prototyping tool for researching neural networks.

    PubMed

    Brouwer, R K

    1999-08-01

    This paper illustrates the use of a powerful language, called J, that is ideal for simulating neural networks. The use of J is demonstrated by its application to a gradient descent method for training a multilayer perceptron. It is also shown how the backpropagation algorithm can be easily generalized to multilayer networks without any increase in complexity, and that the algorithm can be completely expressed in an array notation which is directly executable through J. J is a general-purpose language, which gives its user a flexibility not available in neural network simulators or in software packages such as MATLAB. Yet, because of its numerous operators, J allows very succinct code, leading to a tremendous decrease in development time. PMID:10586987

  1. A fast rebinning algorithm for 3D positron emission tomography using John's equation

    NASA Astrophysics Data System (ADS)

    Defrise, Michel; Liu, Xuan

    1999-08-01

    Volume imaging in positron emission tomography (PET) requires the inversion of the three-dimensional (3D) x-ray transform. The usual solution to this problem is based on 3D filtered-backprojection (FBP), but is slow. Alternative methods have been proposed which factor the 3D data into independent 2D data sets corresponding to the 2D Radon transforms of a stack of parallel slices. Each slice is then reconstructed using 2D FBP. These so-called rebinning methods are numerically efficient but are approximate. In this paper a new exact rebinning method is derived by exploiting the fact that the 3D x-ray transform of a function is the solution to the second-order partial differential equation first studied by John. The method is proposed for two sampling schemes, one corresponding to a pair of infinite plane detectors and another one corresponding to a cylindrical multi-ring PET scanner. The new FORE-J algorithm has been implemented for this latter geometry and was compared with the approximate Fourier rebinning algorithm FORE and with another exact rebinning algorithm, FOREX. Results with simulated data demonstrate a significant improvement in accuracy compared to FORE, while the reconstruction time is doubled. Compared to FOREX, the FORE-J algorithm is slightly less accurate but more than three times faster.

  2. Applying the uniform resampling (URS) algorithm to a lissajous trajectory: fast image reconstruction with optimal gridding.

    PubMed

    Moriguchi, H; Wendt, M; Duerk, J L

    2000-11-01

    Various kinds of nonrectilinear Cartesian k-space trajectories have been studied, such as spiral, circular, and rosette trajectories. Although the nonrectilinear Cartesian sampling techniques generally have the advantage of fast data acquisition, the gridding process prior to 2D-FFT image reconstruction usually requires a number of additional calculations, thus necessitating an increase in the computation time. Further, the reconstructed image often exhibits artifacts resulting from both the k-space sampling pattern and the gridding procedure. To date, it has been demonstrated in only a few studies that the special geometric sampling patterns of certain specific trajectories facilitate fast image reconstruction. In other words, the inherent link among the trajectory, the sampling scheme, and the associated complexity of the regridding/reconstruction process has been investigated to only a limited extent. In this study, it is demonstrated that a Lissajous trajectory has the special geometric characteristics necessary for rapid reconstruction of nonrectilinear Cartesian k-space trajectories with constant sampling time intervals. Because of the applicability of a uniform resampling (URS) algorithm, a high-quality reconstructed image is obtained in a short reconstruction time when compared to other gridding algorithms. PMID:11064412

  3. The index-based subgraph matching algorithm (ISMA): fast subgraph enumeration in large networks using optimized search trees.

    PubMed

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730

  4. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    PubMed Central

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730
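
    For orientation, the exhaustive enumeration that ISMA accelerates can be expressed with off-the-shelf tools; ISMA's speedup comes from its index-driven node ordering and symmetry exploitation, which the generic matcher below does not attempt:

      import networkx as nx
      from networkx.algorithms import isomorphism

      G = nx.erdos_renyi_graph(100, 0.05, seed=1)   # host network
      H = nx.path_graph(3)                          # query: a 3-node path
      gm = isomorphism.GraphMatcher(G, H)
      matches = list(gm.subgraph_isomorphisms_iter())
      print(len(matches), "induced embeddings of the query found")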

  5. A Fast Density-Based Clustering Algorithm for Real-Time Internet of Things Stream

    PubMed Central

    Ying Wah, Teh

    2014-01-01

    Data streams are continuously generated over time by Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of data-stream clustering: they can detect arbitrarily shaped clusters, handle outliers, and do not need the number of clusters in advance. Density-based clustering algorithms are therefore a proper choice for clustering IoT streams, and several have recently been proposed; however, density-based clustering under time constraints is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets. PMID:25110753

  6. A fast density-based clustering algorithm for real-time Internet of Things stream.

    PubMed

    Amini, Amineh; Saboohi, Hadi; Wah, Teh Ying; Herawan, Tutut

    2014-01-01

    Data streams are continuously generated over time by Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of data-stream clustering: they can detect arbitrarily shaped clusters, handle outliers, and do not need the number of clusters in advance. Density-based clustering algorithms are therefore a proper choice for clustering IoT streams, and several have recently been proposed; however, density-based clustering under time constraints is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable in real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets. PMID:25110753
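
    The density notions involved (eps-neighbourhoods, core points, noise) are those of DBSCAN; a batch run on one buffered window of a stream illustrates them, keeping in mind that a true stream algorithm such as the one proposed maintains incremental micro-cluster summaries instead of re-clustering:

      import numpy as np
      from sklearn.cluster import DBSCAN

      window = np.random.rand(2000, 2)     # one buffered window of readings
      labels = DBSCAN(eps=0.03, min_samples=5).fit_predict(window)
      n_clusters = len(set(labels)) - (1 if -1 in labels else 0)   # -1 = noise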

  7. Fast parallel tracking algorithm for the muon detector of the CBM experiment at FAIR

    NASA Astrophysics Data System (ADS)

    Lebedev, A.; Höhne, C.; Kisel, I.; Ososkov, G.

    2010-07-01

    Particle trajectory recognition is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR accelerator at Darmstadt. The tracking algorithms have to process terabytes of input data produced in particle collisions. Therefore, the speed of the tracking software is extremely important for data analysis. In this contribution, a fast parallel track reconstruction algorithm which uses available features of modern processors is presented. These features comprise a SIMD instruction set (SSE) and multithreading. The first allows one to pack several data items into one register and to operate on all of them in parallel thus achieving more operations per cycle. The second feature enables the routines to exploit all available CPU cores and hardware threads. This parallel version of the tracking algorithm has been compared to the initial serial scalar version which uses a similar approach for tracking. A speed-up factor of 487 was achieved (from 730 to 1.5 ms/event) for a computer with 2 × Intel Core i7 processors at 2.66 GHz.

  8. A fast calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations

    NASA Astrophysics Data System (ADS)

    Fiorino, Steven T.; Elmore, Brannon; Schmidt, Jaclyn; Matchefts, Elizabeth; Burley, Jarred L.

    2016-05-01

    Properly accounting for multiple scattering effects can have important implications for remote sensing and possibly directed-energy applications; for example, increasing path radiance can affect signal noise. This study describes the implementation of a fast-calculating two-stream-like multiple scattering algorithm that captures azimuthal and elevation variations into the Laser Environmental Effects Definition and Reference (LEEDR) atmospheric characterization and radiative transfer code. The multiple scattering algorithm fully solves for molecular, aerosol, cloud, and precipitation single-scatter layer effects with a Mie algorithm at every calculation point/layer, rather than using an interpolated value from a pre-calculated look-up table. This top-down cumulative diffusivity method first considers the incident solar radiance contribution to a given layer, accounting for solid angle and elevation; it then measures the contribution of diffused energy from previous layers based on the transmission of the current level, producing a cumulative radiance that is reflected from a surface and measured at the observer's aperture. A unique set of asymmetry and backscattering phase-function parameter calculations then accounts for the radiance loss due to the molecular and aerosol constituent reflectivity within a level, allowing a more accurate characterization of the diffuse layers that contribute to multiply scattered radiances in inhomogeneous atmospheres. The code logic is valid for spectral bands between 200 nm and radio wavelengths, and the accuracy is demonstrated by comparing LEEDR results to observed sky radiance data.

  9. Improving the quantitative testing of fast aspherics surfaces with null screen using Dijkstra algorithm

    NASA Astrophysics Data System (ADS)

    Moreno Oliva, Víctor Iván; Castañeda Mendoza, Álvaro; Campos García, Manuel; Díaz Uribe, Rufino

    2011-09-01

    The null screen is a geometric method that allows the testing of fast aspherical surfaces; it measures the local slope at the surface, and the shape of the surface is recovered by numerical integration. The usual technique for the numerical evaluation of the surface is the trapezoidal rule, and it is a well-known fact that its truncation error increases with the second power of the spacing between spots along the integration path. These paths are constructed by following spots reflected on the surface, starting from an initially selected spot. To reduce the numerical errors, in this work we propose the use of the Dijkstra algorithm, which finds the shortest path from one spot (or vertex) to another in a weighted connected graph. Using a modification of the algorithm, it is possible to find the minimal path from one selected spot to all other ones. This automates and simplifies the integration process in tests with null screens. The efficiency of the proposal is shown by evaluating a surface previously measured with the traditional process.
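
    For reference, Dijkstra's algorithm on a weighted graph takes only a few lines with a binary heap; in this application the nodes would stand for reflected spots and the edge weights for the spacing-dependent integration error (an illustrative sketch, not the authors' implementation):

      import heapq

      def dijkstra(adj, source):
          """Shortest distances from `source`; adj[u] = [(v, w), ...]."""
          dist = {source: 0.0}
          heap = [(0.0, source)]
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist.get(u, float("inf")):
                  continue                      # stale heap entry
              for v, w in adj[u]:
                  nd = d + w
                  if nd < dist.get(v, float("inf")):
                      dist[v] = nd
                      heapq.heappush(heap, (nd, v))
          return dist

      adj = {0: [(1, 1.0), (2, 4.0)], 1: [(2, 1.5)], 2: []}
      print(dijkstra(adj, 0))   # {0: 0.0, 1: 1.0, 2: 2.5}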

  10. A New Fast Algorithm to Completely Account for Non-Lambertian Surface Reflection of The Earth

    NASA Technical Reports Server (NTRS)

    Qin, Wen-Han; Herman, Jay R.; Ahmad, Ziauddin; Einaudi, Franco (Technical Monitor)

    2000-01-01

    The surface bidirectional reflectance distribution function (BRDF) influences not only the radiance just above the surface, but also that emerging from the top of the atmosphere (TOA). In this study we propose a new, fast and accurate algorithm, CASBIR (correction for anisotropic surface bidirectional reflection), to account for such influences on radiance measured above the TOA. The new algorithm is based on a 4-stream theory that separates the radiation field into direct and diffuse components in both upwelling and downwelling directions. This is important because the direct component accounts for a substantial portion of incident radiation under a clear sky, and the BRDF effect is strongest in the reflection of the direct radiation reaching the surface. The model is validated by comparison with a full-scale vector radiative transfer model for the atmosphere-surface system. The results demonstrate that CASBIR performs very well (with an overall relative difference of less than one percent) for all solar and viewing zenith and azimuth angles considered, at wavelengths from the ultraviolet to the near-infrared, over three typical but very different surface types. Applications of this algorithm include both accounting for non-Lambertian surface scattering in the emergent radiation above the TOA and a potential approach for surface BRDF retrieval from satellite-measured radiance.

  11. Fast algorithm for minutiae matching based on multiple-ridge information

    NASA Astrophysics Data System (ADS)

    Wang, Guoyou; Hu, Jing

    2001-09-01

    Autonomous real-time fingerprint verification, that is, judging whether two fingerprints come from the same finger, is an important and difficult problem in AFIS (Automated Fingerprint Identification Systems). In addition to nonlinear deformation, two fingerprints from the same finger may also appear dissimilar due to translation or rotation; all of these factors increase the dissimilarity and lead to misjudgment, so the correct verification rate depends strongly on the degree of deformation. In this paper, we present a new fast and simple algorithm for fingerprint matching, derived from Chang et al.'s method, to solve the problem of optimal matching between two fingerprints under nonlinear deformation. The proposed algorithm uses not only the feature points of the fingerprints but also multiple-ridge information to reduce the computational complexity of fingerprint verification. Experiments with a number of fingerprint images have shown that this algorithm is more efficient than existing methods owing to its reduced searching operations.

  12. Automatic building detection based on Purposive FastICA (PFICA) algorithm using monocular high resolution Google Earth images

    NASA Astrophysics Data System (ADS)

    Ghaffarian, Saman; Ghaffarian, Salar

    2014-11-01

    This paper proposes an improved FastICA model, named Purposive FastICA (PFICA), initialized by a simple color-space transformation and a novel masking approach, to automatically detect buildings from high-resolution Google Earth imagery. ICA and FastICA are blind source separation (BSS) techniques for unmixing source signals using reference data sets. In order to overcome the limitations of the ICA and FastICA algorithms and make them purposeful, we developed a novel method involving three main steps: (1) improving the FastICA algorithm using the Moore-Penrose pseudo-inverse matrix model; (2) automated seeding of the PFICA algorithm based on the LUV color space and simple proposed rules to split the image into three regions: shadow + vegetation, bare soil + roads, and buildings; (3) masking out the final building detection results from the PFICA outputs using the K-means clustering algorithm with two clusters and simple morphological operations to remove noise. Evaluation of the results illustrates that buildings detected from dense and suburban districts with diverse characteristics and color combinations using our proposed method achieve 88.6% and 85.5% overall pixel-based and object-based precision, respectively.
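
    The BSS step underlying PFICA can be reproduced with an off-the-shelf FastICA on synthetic mixtures; PFICA's specific contributions, the LUV-based purposive seeding and masking, are not shown here:

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)
      S = rng.laplace(size=(3, 10000))    # independent non-Gaussian sources
      A = rng.normal(size=(3, 3))         # unknown mixing matrix
      X = (A @ S).T                       # observed mixed channels
      S_est = FastICA(n_components=3, random_state=0).fit_transform(X)
      # S_est recovers the sources up to permutation and scaling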

  13. From FastQ data to high confidence variant calls: the Genome Analysis Toolkit best practices pipeline

    PubMed Central

    Van der Auwera, Geraldine A.; Carneiro, Mauricio O.; Hartl, Chris; Poplin, Ryan; del Angel, Guillermo; Levy-Moonshine, Ami; Jordan, Tadeusz; Shakir, Khalid; Roazen, David; Thibault, Joel; Banks, Eric; Garimella, Kiran V.; Altshuler, David; Gabriel, Stacey; DePristo, Mark A.

    2013-01-01

    This unit describes how to use BWA and the Genome Analysis Toolkit (GATK) to map genome sequencing data to a reference and produce high-quality variant calls that can be used in downstream analyses. The complete workflow includes the core NGS data processing steps that are necessary to make the raw data suitable for analysis by the GATK, as well as the key methods involved in variant discovery using the GATK. PMID:25431634

  14. Pre-Hardware Optimization and Implementation Of Fast Optics Closed Control Loop Algorithms

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Lyon, Richard G.; Herman, Jay R.; Abuhassan, Nader

    2004-01-01

    One of the main heritage tools used in scientific and engineering data spectrum analysis is the Fourier integral transform and its high-performance digital equivalent, the fast Fourier transform (FFT). The FFT is particularly useful in two-dimensional (2-D) image processing (FFT2) within optical systems control. However, the timing constraints of a fast optics closed control loop would require a supercomputer to run the software implementation of the FFT2 and its inverse, as well as other representative image processing algorithms such as numerical image folding and fringe feature extraction. A laboratory supercomputer is not always available even for ground operations and is not feasible for a flight project. The computationally intensive algorithms therefore warrant alternative implementation using reconfigurable computing (RC) technologies such as digital signal processors (DSP) and field-programmable gate arrays (FPGA), which provide low-cost, compact supercomputing capabilities. We present a new RC hardware implementation and utilization architecture that significantly reduces the computational complexity of a few basic image-processing algorithms, such as FFT2, image folding and phase diversity, for the NASA Solar Viewing Interferometer Prototype (SVIP), using a cluster of DSPs and FPGAs. The DSP cluster utilization architecture also avoids a single point of failure while using commercially available hardware. This, combined with pre-hardware optimization of the control algorithms, for the first time allows construction of image-based 800 Hertz (Hz) optics closed control loops on board a spacecraft, based on the SVIP ground instrument. That spacecraft is the proposed Earth Atmosphere Solar Occultation Imager (EASI), intended to study the greenhouse gases CO2, C2H, H2O, O3, O2 and N2O from the Lagrange-2 point in space. This paper provides an advanced insight into a new type of science capability for future space exploration missions based on on-board image processing.
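
    The kernel at issue is the plain 2-D transform pair; the challenge described above is running it (together with the folding and phase-diversity steps) at an 800 Hz loop rate in DSP/FPGA pipelines rather than on a general-purpose CPU. In floating point the round trip is a one-liner:

      import numpy as np

      img = np.random.rand(256, 256)        # one sensor frame
      spec = np.fft.fft2(img)               # forward 2-D FFT
      back = np.fft.ifft2(spec).real        # inverse recovers the frame
      assert np.allclose(img, back)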

  15. Topology correction of segmented medical images using a fast marching algorithm.

    PubMed

    Bazin, Pierre-Louis; Pham, Dzung L

    2007-11-01

    We present here a new method for correcting the topology of objects segmented from medical images. Whereas previous techniques alter a surface obtained from a binary segmentation of the object, our technique can be applied directly to the image intensities of a probabilistic or fuzzy segmentation, thereby propagating the topology for all isosurfaces of the object. From an analysis of topological changes and critical points in implicit surfaces, we derive a topology propagation algorithm that enforces any desired topology using a fast marching technique. The method has been applied successfully to the correction of the cortical gray matter/white matter interface in segmented brain images and is publicly released as a software plug-in for the MIPAV package. PMID:17942182

  16. Program for the analysis of time series. [by means of fast Fourier transform algorithm]

    NASA Technical Reports Server (NTRS)

    Brown, T. J.; Brown, C. G.; Hardin, J. C.

    1974-01-01

    A digital computer program for the Fourier analysis of discrete time data is described. The program was designed to handle multiple channels of digitized data on general purpose computer systems. It is written, primarily, in a version of FORTRAN 2 currently in use on CDC 6000 series computers. Some small portions are written in CDC COMPASS, an assembler level code. However, functional descriptions of these portions are provided so that the program may be adapted for use on any facility possessing a FORTRAN compiler and random-access capability. Properly formatted digital data are windowed and analyzed by means of a fast Fourier transform algorithm to generate the following functions: (1) auto and/or cross power spectra, (2) autocorrelations and/or cross correlations, (3) Fourier coefficients, (4) coherence functions, (5) transfer functions, and (6) histograms.
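
    As a rough illustration of the spectral products listed above, the NumPy sketch below computes a Hann-windowed auto power spectrum and cross spectrum for two channels (normalization conventions vary; this is one common choice, not the program's FORTRAN implementation):

```python
import numpy as np

def spectra(x, y, dt):
    """Hann-windowed auto and cross spectra of two equally sampled channels."""
    n = len(x)
    w = np.hanning(n)
    X = np.fft.rfft((x - x.mean()) * w)
    Y = np.fft.rfft((y - y.mean()) * w)
    scale = dt / np.sum(w ** 2)          # periodogram normalization
    f = np.fft.rfftfreq(n, dt)
    return f, scale * np.abs(X) ** 2, scale * X * np.conj(Y)

# A 10 Hz sine sampled at 1 kHz produces a spectral peak near 10 Hz.
t = np.arange(0, 2, 0.001)
f, Pxx, Pxy = spectra(np.sin(2 * np.pi * 10 * t), np.cos(2 * np.pi * 10 * t), 0.001)
print(f[np.argmax(Pxx)])
```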

  17. On high-order denoising models and fast algorithms for vector-valued images.

    PubMed

    Brito-Loeza, Carlos; Chen, Ke

    2010-06-01

    Variational techniques for gray-scale image denoising have been deeply investigated for many years; however, little research has been done on the vector-valued denoising case, and the few existing works are all based on total-variation regularization. It is known that total-variation models for denoising gray-scale images suffer from the staircasing effect, and there is no reason to suggest this effect does not carry over to the vector-valued models. High-order models, by contrast, do not exhibit staircasing. In this paper, we introduce three high-order and curvature-based denoising models for vector-valued images. Their properties are analyzed and a fast multigrid algorithm for the numerical solution is provided. AMS subject classifications: 68U10, 65F10, 65K10. PMID:20172828

  18. Fast String Search on Multicore Processors: Mapping fundamental algorithms onto parallel hardware

    SciTech Connect

    Scarpazza, Daniele P.; Villa, Oreste; Petrini, Fabrizio

    2008-04-01

    String searching is one of the fundamental algorithms in computing. It has a host of applications, including search engines, network intrusion detection, virus scanners, spam filters, and DNA analysis, among others. The Cell processor, with its multiple cores, promises substantial speed-ups for string searching. In this article, we show how we mapped string searching efficiently onto the Cell. We present two implementations:
    • The fast implementation supports a small dictionary size (approximately 100 patterns) and provides a throughput of 40 Gbps, which is 100 times faster than reference implementations on x86 architectures.
    • The heavy-duty implementation is slower (3.3-4.3 Gbps), but supports dictionaries with tens of thousands of strings.
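
    The standard data structure for matching a whole dictionary of patterns in one pass over the text is the Aho-Corasick automaton. A compact Python sketch of that general technique follows (an illustration only; the article's Cell implementation is heavily specialized for its SIMD cores and local stores):

```python
from collections import deque

def build(patterns):
    """Aho-Corasick automaton: goto, fail, and output tables."""
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())          # depth-1 states fail to the root
    while q:
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] |= out[fail[s]]       # inherit matches from the fail state
    return goto, fail, out

def search(text, patterns):
    goto, fail, out = build(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits += [(i - len(p) + 1, p) for p in out[s]]
    return hits

print(search("he said hershey", ["he", "she", "hers"]))
```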

  19. Automatic brain tumor segmentation with a fast Mumford-Shah algorithm

    NASA Astrophysics Data System (ADS)

    Müller, Sabine; Weickert, Joachim; Graf, Norbert

    2016-03-01

    We propose a fully-automatic method for brain tumor segmentation that does not require any training phase. Our approach is based on a sequence of segmentations using the Mumford-Shah cartoon model with varying parameters. In order to come up with a very fast implementation, we extend the recent primal-dual algorithm of Strekalovskiy et al. (2014) from the 2D to the medically relevant 3D setting. Moreover, we suggest a new confidence refinement and show that it can increase the precision of our segmentations substantially. Our method is evaluated on 188 data sets with high-grade gliomas and 25 with low-grade gliomas from the BraTS14 database. Within a computation time of only three minutes, we achieve Dice scores that are comparable to state-of-the-art methods.

  20. A segmentation algorithm for automated tracking of fast swimming unlabelled cells in three dimensions.

    PubMed

    Pimentel, J A; Carneiro, J; Darszon, A; Corkidi, G

    2012-01-01

    Recent advances in microscopy and cytolabelling methods enable real-time imaging of cells as they move and interact in their real physiological environment. Scenarios in which multiple cells move autonomously in all directions are not uncommon in biology. A remarkable example is the swimming of marine spermatozoa in search of the conspecific oocyte. Imaging cells in these scenarios, particularly when they move fast and are poorly labelled or even unlabelled, requires very fast three-dimensional time-lapse (3D+t) imaging. This 3D+t imaging poses challenges not only to the acquisition systems but also to the image analysis algorithms. It is in this context that this work describes an original automated multiparticle segmentation method to analyse motile translucent cells in 3D microscopical volumes. The proposed segmentation technique takes advantage of the way the cell appearance changes with the distance to the focal plane position. The cells' translucent properties and their interaction with light produce a specific pattern: when the cell is within or close to the focal plane, its two-dimensional (2D) appearance matches a bright spot surrounded by a dark ring, whereas when it is farther from the focal plane the cell contrast is inverted, looking like a dark spot surrounded by a bright ring. The proposed method analyses the acquired video sequence frame-by-frame taking advantage of 2D image segmentation algorithms to identify and select candidate cellular sections. The crux of the method is in the sequential filtering of the candidate sections, first by template matching of the in-focus and out-of-focus templates and second by considering adjacent candidate sections in 3D. These sequential filters effectively narrow down the number of segmented candidate sections making the automatic tracking of cells in three dimensions a straightforward operation. PMID:21999166
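
    The core of the candidate filter is correlation against the two ring-shaped templates. Below is a small sketch of that idea in NumPy/SciPy (the template shapes and sizes are invented for illustration, not the authors' calibrated templates):

```python
import numpy as np
from scipy.signal import fftconvolve

def ring_template(radius=6.0, size=21, inverted=False):
    """Bright spot with dark ring (in focus); `inverted` flips the contrast."""
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)
    t = np.exp(-(r / radius) ** 2) - 0.6 * np.exp(-((r - 1.8 * radius) / radius) ** 2)
    t -= t.mean()                        # zero-mean template
    return -t if inverted else t

def match(frame, template):
    """Correlation map; peaks mark candidate cell sections."""
    return fftconvolve(frame - frame.mean(), template[::-1, ::-1], mode="same")

frame = np.random.default_rng(0).random((128, 128))
frame[40:61, 40:61] += ring_template()            # plant one synthetic cell
score_in = match(frame, ring_template())          # in-focus candidates
score_out = match(frame, ring_template(inverted=True))
print(np.unravel_index(score_in.argmax(), score_in.shape))  # near (50, 50)
```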

  1. Hybrid-dual-Fourier tomographic algorithm for fast three-dimensional optical image reconstruction in turbid media

    NASA Technical Reports Server (NTRS)

    Alfano, Robert R. (Inventor); Cai, Wei (Inventor)

    2007-01-01

    A reconstruction technique for reducing the computational burden of 3D image processing, wherein the reconstruction procedure comprises an inverse and a forward model. The inverse model uses a hybrid dual Fourier algorithm that combines a 2D Fourier inversion with a 1D matrix inversion to provide high-speed inverse computations. The inverse algorithm uses a hybrid transform to provide fast Fourier inversion for data from multiple sources and multiple detectors. The forward model is based on an analytical cumulant solution of a radiative transfer equation. The accurate analytical form of the solution to the radiative transfer equation provides an efficient formalism for fast computation of the forward model.

  2. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    PubMed

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight edge-connected subset containing all the vertices of a given undirected graph. It is a vitally important NP-complete problem in graph theory and applied mathematics, with numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for NP-hard problems with multi-lateral path solutions, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we reasonably design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain the solutions of the MST problem within the proper length range in O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulative experiments show that the proposed method updates some of the best known values with very short time and that the proposed method provides a better performance with solution accuracy over existing algorithms. PMID:23871964
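
    For comparison with the DNA-based approach, the conventional in-silico solution of the MST problem is a greedy algorithm such as Kruskal's, sketched below with a union-find structure:

```python
def kruskal(n, edges):
    """Minimum spanning tree of an undirected graph.
    edges: list of (weight, u, v) with vertices numbered 0..n-1."""
    parent = list(range(n))
    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding this edge creates no cycle
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]))
```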

  3. Interlocked optimization and fast gradient algorithm for a seismic inverse problem

    SciTech Connect

    Metivier, Ludovic

    2011-08-10

    Highlights: • A 2D extension of the 1D nonlinear inversion of well-seismic data is given. • Appropriate regularization yields a well-determined large scale inverse problem. • An interlocked optimization loop acts as an efficient preconditioner. • The adjoint state method is used to compute the misfit function gradient. • Domain decomposition method yields an efficient parallel implementation. - Abstract: We give a nonlinear inverse method for seismic data recorded in a well from sources at several offsets from the borehole in a 2D acoustic framework. Given the velocity field, approximate values of the impedance are recovered. This is a 2D extension of the 1D inversion of vertical seismic profiles. The inverse problem generates a large scale, undetermined, ill-conditioned problem. Appropriate regularization terms render the problem well-determined. An interlocked optimization algorithm yields an efficient preconditioning. A gradient algorithm based on the adjoint state method and domain decomposition gives a fast parallel numerical method. For a realistic test case, convergence is attained in an acceptable time with 128 processors.

  4. A fast thresholded Landweber algorithm for wavelet-regularized multidimensional deconvolution.

    PubMed

    Vonesch, C; Unser, M

    2008-04-01

    We present a fast variational deconvolution algorithm that minimizes a quadratic data term subject to a regularization on the ℓ1-norm of the wavelet coefficients of the solution. Previously available methods have essentially consisted in alternating between a Landweber iteration and a wavelet-domain soft-thresholding operation. While having the advantage of simplicity, they are known to converge slowly. By expressing the cost functional in a Shannon wavelet basis, we are able to decompose the problem into a series of subband-dependent minimizations. In particular, this allows for larger (subband-dependent) step sizes and threshold levels than the previous method. This improves the convergence properties of the algorithm significantly. We demonstrate a speed-up of one order of magnitude in practical situations. This makes wavelet-regularized deconvolution more widely accessible, even for applications with a strong limitation on computational complexity. We present promising results in 3-D deconvolution microscopy, where the size of typical data sets does not permit more than a few tens of iterations. PMID:18390362
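
    The overall iteration is easy to sketch. The toy version below substitutes an orthonormal DCT for the paper's Shannon wavelet basis and uses a single global threshold rather than the subband-dependent step sizes that give the paper its speed-up (a minimal sketch, assuming a symmetric Gaussian blur as the forward operator, so A equals its transpose):

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

blur = lambda u: gaussian_filter(u, sigma=2.0)   # A is symmetric, so A == A^T

def thresholded_landweber(y, lam=0.02, tau=1.0, iters=50):
    x = y.copy()
    for _ in range(iters):
        x = x + tau * blur(y - blur(x))          # Landweber gradient step
        c = dctn(x, norm="ortho")                # analysis in an orthonormal basis
        c = np.sign(c) * np.maximum(np.abs(c) - tau * lam, 0.0)  # soft-threshold
        x = idctn(c, norm="ortho")               # synthesis back to image space
    return x
```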

  5. A fast algorithm for voxel-based deterministic simulation of X-ray imaging

    NASA Astrophysics Data System (ADS)

    Li, Ning; Zhao, Hua-Xia; Cho, Sang-Hyun; Choi, Jung-Gil; Kim, Myoung-Hee

    2008-04-01

    The deterministic method based on the ray-tracing technique is known as a powerful alternative to the Monte Carlo approach for virtual X-ray imaging. The algorithm speed is a critical issue in the perspective of simulating hundreds of images, notably to simulate tomographic acquisition or, even more, to simulate X-ray radiographic video recordings. We present an algorithm for voxel-based deterministic simulation of X-ray imaging using voxel-driven forward and backward perspective projection operations and minimum bounding rectangles (MBRs). The algorithm is fast, easy to implement, and creates high-quality simulated radiographs. As a result, simulated radiographs can typically be obtained in a split second with a simple personal computer.
    Program summary:
    Program title: X-ray
    Catalogue identifier: AEAD_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAD_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 416 257
    No. of bytes in distributed program, including test data, etc.: 6 018 263
    Distribution format: tar.gz
    Programming language: C (Visual C++)
    Computer: Any PC. Tested on DELL Precision 380 based on a Pentium D 3.20 GHz processor with 3.50 GB of RAM
    Operating system: Windows XP
    Classification: 14, 21.1
    Nature of problem: Radiographic simulation of voxelized objects based on the ray-tracing technique.
    Solution method: The core of the simulation is a fast routine for the calculation of ray-box intersections and minimum bounding rectangles, together with voxel-driven forward and backward perspective projection operations.
    Restrictions: Memory constraints. There are three programs in all. A. Program for test 3.1(1): Object and detector have axis-aligned orientation; B. Program for test 3.1(2): Object in arbitrary orientation; C. Program for test 3.2: Simulation of X-ray video

  6. Fast multi-scale edge detection algorithm based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Zang, Jie; Song, Yanjun; Li, Shaojuan; Luo, Guoyun

    2011-11-01

    Traditional edge detection algorithms amplify noise to some extent, introducing large errors, so their edge detection ability is limited. When analyzing the low-frequency content of an image, wavelet analysis can reduce the time resolution; for the high-frequency content, it can focus on the transient characteristics of the signal at high time resolution while reducing the frequency resolution. Because the wavelet transform adapts to the signal, it can extract useful information from the edges of an image. The wavelet transform operates at various scales, and the transform at each scale provides certain edge information, hence the name multi-scale edge detection. In multi-scale edge detection, the original signal is first smoothed at different scales, and the abrupt changes of the original signal are then detected through the first or second derivative of the smoothed signal; these abrupt changes are the edges. The edge detection is equivalent to signal detection in different frequency bands after wavelet decomposition. This article uses this algorithm, which takes into account both the details and the profile of the image, to detect abrupt signal changes at different scales; it provides the edge information necessary for image analysis, target recognition and machine vision, and achieves good results.
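
    A rough sketch of the multi-scale idea follows, with Gaussian smoothing standing in for the wavelet "polishing" described above (the scales and the keep-fraction are arbitrary illustrative choices):

```python
import numpy as np
from scipy import ndimage

def multiscale_edges(img, sigmas=(1, 2, 4), keep=0.1):
    """Smooth at several scales, take the first-derivative magnitude,
    and keep pixels that are strong edges at every scale."""
    masks = []
    for s in sigmas:
        g = ndimage.gaussian_gradient_magnitude(np.asarray(img, float), sigma=s)
        masks.append(g >= np.quantile(g, 1 - keep))
    return np.logical_and.reduce(masks)
```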

  7. A fast algorithm for the recursive calculation of dominant singular subspaces

    NASA Astrophysics Data System (ADS)

    Mastronardi, N.; van Barel, M.; Vandebril, R.

    2008-09-01

    In many engineering applications it is required to compute the dominant subspace of a matrix A of dimension m×n, with m≫n. Often the matrix A is produced incrementally, so all the columns are not available simultaneously. This problem arises, e.g., in image processing, where each column of the matrix A represents an image of a given sequence leading to a singular value decomposition-based compression [S. Chandrasekaran, B.S. Manjunath, Y.F. Wang, J. Winkeler, H. Zhang, An eigenspace update algorithm for image analysis, Graphical Models and Image Process. 59 (5) (1997) 321-332]. Furthermore, the so-called proper orthogonal decomposition approximation uses the left dominant subspace of a matrix A where a column consists of a time instance of the solution of an evolution equation, e.g., the flow field from a fluid dynamics simulation. Since these flow fields tend to be very large, only a small number can be stored efficiently during the simulation, and therefore an incremental approach is useful [P. Van Dooren, Gramian based model reduction of large-scale dynamical systems, in: Numerical Analysis 1999, Chapman & Hall, CRC Press, London, Boca Raton, FL, 2000, pp. 231-247]. In this paper an algorithm for computing an approximation of the left dominant subspace of size k of A, with k≪m,n, is proposed requiring at each iteration O(mk+k²) floating point operations. Moreover, the proposed algorithm exhibits a lot of parallelism that can be exploited for a suitable implementation on a parallel computer.
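
    The flavor of such incremental updating can be shown with a Brand-style rank-one SVD update: fold one new column into the current rank-k factorization, then truncate. The naive sketch below costs more per step than the O(mk+k²) algorithm proposed in the paper; it illustrates the idea, not the paper's method:

```python
import numpy as np

def update_subspace(U, S, a, k):
    """Fold one new column `a` into the rank-k factorization (U, S).
    U: m x k with orthonormal columns; S: current singular values."""
    p = U.T @ a                          # component inside the current subspace
    r = a - U @ p                        # residual orthogonal to it
    rho = np.linalg.norm(r)
    q = r / rho if rho > 1e-12 else np.zeros_like(a)
    K = np.zeros((k + 1, k + 1))         # small core matrix of the update
    K[:k, :k] = np.diag(S)
    K[:k, -1] = p
    K[-1, -1] = rho
    Uk, Sk, _ = np.linalg.svd(K)
    U_new = np.hstack([U, q[:, None]]) @ Uk
    return U_new[:, :k], Sk[:k]          # truncate back to rank k
```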

  8. A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    NASA Technical Reports Server (NTRS)

    Hung, Stephen H. Y.

    1989-01-01

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system for the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be easily computed from range data are used to characterize the images of a viewer-centered model of an object. This algorithm will speed up the processing by eliminating the low level processing whenever possible. It may identify the object, reject a set of bad data in the early stage, or create a better environment for a more powerful algorithm to carry the work further.

  9. Fundamental analysis and algorithms for development of a mobile fast-scan lateral migration radiography system

    NASA Astrophysics Data System (ADS)

    Su, Zhong

    Lateral migration radiography (LMR) is a unique x-ray Compton backscatter imaging (CBI) technique to image surface and subsurface, or internal structure of an object. An x-ray pencil beam scans the interrogated area and the backscattered photons are registered by detectors which have varying degrees of collimation. In early LMR applications, either the LMR systems or the imaged objects are moved on a rectangular grid, and at each node, the systems register backscattered photon energy deposition as pixel intensity in acquired images. The mechanical movement of the system or objects from pixel to pixel causes prolonged image scan time with a high percentage of system dead time. To avoid this drawback, a particular x-ray beam formation technique is proposed and analyzed. A corresponding mobile, fast-scan LMR system is designed, fabricated and tested. The results show a two orders-of-magnitude reduction in image scan time compared with those of previous systems. The x-ray beam formation technique, based on a rotating collimator in the LMR system, implements surface line scan by sampling an x-ray fan beam. This rotating collimator yields unique imaging effects compared to those for an x-ray beam with fixed collimation and perpendicular incidence: (1) the speed of the x-ray beam spot on the scanned surface is not uniform; (2) constant movement of the x-ray beam spot changes the resolution in the image raster direction; (3) x-ray beam spot size changes with location on the scanned surface; (4) the object image shows a squeezed effect in the raster scan direction; (5) under a uniform background, the Compton scatter angular distribution causes the x-ray backscatter field to be stronger, when the x-ray beam has greater incidence angle; and (6) the x-ray illumination spot trace on the scanned surface is skewed. The physics generating these effects is analyzed with Monte Carlo computer simulations and/or measurements. Image acquisition and image processing algorithms are

  10. The 183-WSL fast rain rate retrieval algorithm: Part I: Retrieval design

    NASA Astrophysics Data System (ADS)

    Laviola, Sante; Levizzani, Vincenzo

    2011-03-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) fast retrieval method retrieves rain rates and classifies precipitation types for applications in nowcasting and weather monitoring. The retrieval scheme consists of two fast algorithms, over land and over ocean, that use the water vapour absorption lines at 183.31 GHz corresponding to the channels 3 (183.31 ± 1 GHz), 4 (183.31 ± 3 GHz) and 5 (183.31 ± 7 GHz) of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on NOAA-15-18 and Metop-A satellite series, respectively. The method retrieves rain rates by exploiting the extinction of radiation due to rain drops following four subsequent steps. After ingesting the satellite data stream, the window channels at 89 and 150 GHz are used to compute scattering-based thresholds and the 183-WSLW module for rainfall area discrimination and precipitation type classification as stratiform or convective on the basis of the thresholds calculated for land/mixed and sea surfaces. The thresholds are based on the brightness temperature difference Δwin = TB89 - TB150 and are different over land (L) and over sea (S): cloud droplets and water vapour (Δwin < 3 K L; Δwin < 0 K S), stratiform rain (3 K < Δwin < 10 K L; 0 K < Δwin < 10 K S), and convective rain (Δwin > 10 K L and S). The thresholds, initially empirically derived from observations, are corroborated by the simulations of the RTTOV radiative transfer model applied to 20000 ECMWF atmospheric profiles at midlatitudes and the use of data from the Nimrod radar network. A snow cover mask and a digital elevation model are used to eliminate false rain area attribution, especially over elevated terrain. A probability of detection logistic function is also applied in the transition region from no-rain to rain adjacent to the clouds to ensure continuity of the rainfall field. Finally, the last step is dedicated to the rain rate retrieval with the modules 183-WSLS (stratiform
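
    The window-channel classification step quoted above reduces to a pair of thresholds on Δwin = TB89 - TB150. A direct transcription into Python (the handling of the boundary at exactly 10 K is a guess; the abstract does not specify it):

```python
def classify_183wsl(tb89, tb150, over_land):
    """Precipitation class from the 89/150 GHz window channels,
    using the thresholds quoted above (brightness temperatures in kelvin)."""
    d_win = tb89 - tb150
    lo = 3.0 if over_land else 0.0       # land vs. sea no-rain cut-off
    if d_win < lo:
        return "cloud droplets / water vapour"
    if d_win <= 10.0:
        return "stratiform rain"
    return "convective rain"

print(classify_183wsl(265.0, 250.0, over_land=True))   # convective rain
```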

  11. A fast automatic recognition and location algorithm for fetal genital organs in ultrasound images

    PubMed Central

    Tang, Sheng; Chen, Si-ping

    2009-01-01

    Severe sex ratio imbalance at birth is now becoming an important issue in several Asian countries. Its leading immediate cause is prenatal sex-selective abortion following illegal sex identification by ultrasound scanning. In this paper, a fast automatic recognition and location algorithm for fetal genital organs is proposed as an effective method to help prevent ultrasound technicians from unethically and illegally identifying the sex of the fetus. This automatic recognition algorithm can be divided into two stages. In the ‘rough’ stage, a few pixels in the image, which are likely to represent the genital organs, are automatically chosen as points of interest (POIs) according to certain salient characteristics of fetal genital organs. In the ‘fine’ stage, a specifically supervised learning framework, which fuses an effective feature data preprocessing mechanism into the multiple classifier architecture, is applied to every POI. The basic classifiers in the framework are selected from three widely used classifiers: radial basis function network, backpropagation network, and support vector machine. The classification results of all the POIs are then synthesized to determine whether the fetal genital organ is present in the image, and to locate the genital organ within the positive image. Experiments were designed and carried out based on an image dataset comprising 658 positive images (images with fetal genital organs) and 500 negative images (images without fetal genital organs). The experimental results showed true positive (TP) and true negative (TN) results from 80.5% (265 from 329) and 83.0% (415 from 500) of samples, respectively. The average computation time was 453 ms per image. PMID:19735097

  12. An algorithm for computing the 2D structure of fast rotating stars

    NASA Astrophysics Data System (ADS)

    Rieutord, Michel; Espinosa Lara, Francisco; Putigny, Bertrand

    2016-08-01

    Stars may be understood as self-gravitating masses of a compressible fluid whose radiative cooling is compensated by nuclear reactions or gravitational contraction. The understanding of their time evolution requires the use of detailed models that account for a complex microphysics including that of opacities, equation of state and nuclear reactions. The present stellar models are essentially one-dimensional, namely spherically symmetric. However, the interpretation of recent data like the surface abundances of elements or the distribution of internal rotation have reached the limits of validity of one-dimensional models because of their very simplified representation of large-scale fluid flows. In this article, we describe the ESTER code, which is the first code able to compute in a consistent way a two-dimensional model of a fast rotating star including its large-scale flows. Compared to classical 1D stellar evolution codes, many numerical innovations have been introduced to deal with this complex problem. First, the spectral discretization based on spherical harmonics and Chebyshev polynomials is used to represent the 2D axisymmetric fields. A nonlinear mapping maps the spheroidal star and allows a smooth spectral representation of the fields. The properties of Picard and Newton iterations for solving the nonlinear partial differential equations of the problem are discussed. It turns out that the Picard scheme is efficient for computing simple polytropic stars, but the Newton algorithm is unsurpassed when stellar models include complex microphysics. Finally, we discuss the numerical efficiency of our solver of Newton iterations. This linear solver combines the iterative Conjugate Gradient Squared algorithm with an LU factorization serving as a preconditioner of the Jacobian matrix.
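
    The 1D building block of such spectral discretizations is the Chebyshev collocation derivative. A standard construction follows (it uses Trefethen's classic recipe, not the ESTER source):

```python
import numpy as np

def cheb(n):
    """Chebyshev collocation points and differentiation matrix."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))          # diagonal fixed by the row-sum trick
    return D, x

D, x = cheb(16)
print(np.max(np.abs(D @ np.sin(x) - np.cos(x))))   # ~1e-12: spectral accuracy
```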

  13. Validation of the Pinnacle³ photon convolution-superposition algorithm applied to fast neutron beams.

    PubMed

    Kalet, Alan M; Sandison, George A; Phillips, Mark H; Parvathaneni, Upendra

    2013-01-01

    We evaluate a photon convolution-superposition algorithm used to model a fast neutron therapy beam in a commercial treatment planning system (TPS). The neutron beam modeled was the Clinical Neutron Therapy System (CNTS) fast neutron beam produced by 50 MeV protons on a Be target at our facility, and we implemented the Pinnacle3 dose calculation model for computing neutron doses. Measured neutron data were acquired by an IC30 ion chamber flowing 5 cc/min of tissue equivalent gas. Output factors and profile scans for open and wedged fields were measured according to the Pinnacle physics reference guide recommendations for photon beams in a Wellhofer water tank scanning system. Following the construction of a neutron beam model, computed doses were then generated using 100 monitor units (MUs) beams incident on a water-equivalent phantom for open and wedged square fields, as well as multileaf collimator (MLC)-shaped irregular fields. We compared Pinnacle dose profiles, central axis doses, and off-axis doses (in irregular fields) with 1) doses computed using the Prism treatment planning system, and 2) doses measured in a water phantom and having matching geometry to the computation setup. We found that the Pinnacle photon model may be used to model most of the important dosimetric features of the CNTS fast neutron beam. Pinnacle-calculated dose points among open and wedged square fields exhibit dose differences within 3.9 cGy of both Prism and measured doses along the central axis, and within 5 cGy difference of measurement in the penumbra region. Pinnacle dose point calculations using irregular treatment type fields showed a dose difference up to 9 cGy from measured dose points, although most points of comparison were below 5 cGy. Comparisons of dose points that were chosen from cases planned in both Pinnacle and Prism show an average dose difference less than 0.6%, except in certain fields which incorporate both wedges and heavy blocking of the central axis. All

  14. A Fast, Locally Adaptive, Interactive Retrieval Algorithm for the Analysis of DIAL Measurements

    NASA Astrophysics Data System (ADS)

    Samarov, D. V.; Rogers, R.; Hair, J. W.; Douglass, K. O.; Plusquellic, D.

    2010-12-01

    Differential absorption light detection and ranging (DIAL) is a laser-based tool which is used for remote, range-resolved measurement of particular gases in the atmosphere, such as carbon dioxide and methane. In many instances it is of interest to study how these gases are distributed over a region such as a landfill, factory, or farm. While a single DIAL measurement only tells us about the distribution of a gas along a single path, a sequence of consecutive measurements provides us with information on how that gas is distributed over a region, making DIAL a natural choice for such studies. DIAL measurements present a number of interesting challenges; first, in order to convert the raw data to concentration it is necessary to estimate the derivative along the path of the measurement. Second, as the distribution of gases across a region can be highly heterogeneous it is important that the spatial nature of the measurements be taken into account. Finally, since it is common for the set of collected measurements to be quite large it is important for the method to be computationally efficient. Existing work based on Local Polynomial Regression (LPR) has been developed which addresses the first two issues, but the issue of computational speed remains an open problem. In addition to the latter, another desirable property is to allow user input into the algorithm. In this talk we present a novel method based on LPR which utilizes a variant of the RODEO algorithm to provide a fast, locally adaptive and interactive approach to the analysis of DIAL measurements. This methodology is motivated by and applied to several simulated examples and a study out of NASA Langley Research Center (LaRC) looking at the estimation of aerosol extinction in the atmosphere. A comparison study of our method against several other algorithms is also presented.
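
    The LPR building block for converting raw DIAL returns to concentration is a locally weighted polynomial fit whose linear coefficient estimates the derivative along the path. A minimal sketch (the kernel choice and bandwidth are illustrative; the talk's RODEO-based adaptivity is not reproduced):

```python
import numpy as np

def lpr_derivative(r, y, bandwidth):
    """Estimate dy/dr at each range gate from a local quadratic fit with
    Gaussian kernel weights; the linear coefficient is the slope at r0."""
    dydr = np.empty_like(y, dtype=float)
    for i, r0 in enumerate(r):
        w = np.exp(-0.5 * ((r - r0) / bandwidth) ** 2)
        # np.polyfit squares its weights internally, hence the sqrt
        coeffs = np.polyfit(r - r0, y, deg=2, w=np.sqrt(w))
        dydr[i] = coeffs[1]              # d/dr of the local fit, evaluated at r0
    return dydr
```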

  15. Fast numerical algorithms for fitting multiresolution hybrid shape models to brain MRI.

    PubMed

    Vemuri, B C; Guo, Y; Lai, S H; Leonard, C M

    1997-09-01

    In this paper, we present new and fast numerical algorithms for shape recovery from brain MRI using multiresolution hybrid shape models. In this modeling framework, shapes are represented by a core rigid shape characterized by a superquadric function and a superimposed displacement function which is characterized by a membrane spline discretized using the finite-element method. Fitting the model to brain MRI data is cast as an energy minimization problem which is solved numerically. We present three new computational methods for model fitting to data. These methods involve novel mathematical derivations that lead to efficient numerical solutions of the model fitting problem. The first method involves using the nonlinear conjugate gradient technique with a diagonal Hessian preconditioner. The second method involves the nonlinear conjugate gradient in the outer loop for solving global parameters of the model and a preconditioned conjugate gradient scheme for solving the local parameters of the model. The third method involves the nonlinear conjugate gradient in the outer loop for solving the global parameters and a combination of the Schur complement formula and the alternating direction-implicit method for solving the local parameters of the model. We demonstrate the efficiency of our model fitting methods via experiments on several MR brain scans. PMID:9873915
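
    The preconditioned conjugate gradient building block used for the local parameters is standard. Below is a minimal Jacobi-preconditioned CG for a symmetric positive-definite system (an illustration of the numerical kernel, not the paper's finite-element solver):

```python
import numpy as np

def pcg(A, b, x0=None, tol=1e-8, max_iter=200):
    """Conjugate gradient with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    M_inv = 1.0 / np.diag(A)            # diagonal preconditioner
    r = b - A @ x
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```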

  16. Inverse scattering solutions by a sinc basis, multiple source, moment method--Part III: Fast algorithms.

    PubMed

    Johnson, S A; Zhou, Y; Tracy, M K; Berggren, M J; Stenger, F

    1984-01-01

    olving the inverse scattering problem for the Helmholtz wave equation without employing the Born or Rytov approximations is a challenging problem, but some slow iterative methods have been proposed. One such method suggested by us is based on solving systems of nonlinear algebraic equations that are derived by applying the method of moments to a sinc basis function expansion of the fields and scattering potential. In the past, we have solved these equations for a 2-D object of n by n pixels in a time proportional to n5. In the present paper, we demonstrate a new method based on FFT convolution and the concept of backprojection which solves these equations in time proportional to n3 X log(n). Several numerical examples are given for images up to 7 by 7 pixels in size. Analogous algorithms to solve the Riccati wave equation in n3 X log(n) time are also suggested, but not verified. A method is suggested for interpolating measurements from one detector geometry to a new perturbed detector geometry whose measurement points fall on a FFT accessible, rectangular grid and thereby render many detector geometrics compatible for use by our fast methods. PMID:6540908

  17. Fast Estimation of Distribution Algorithm (EDA) via Constrained Multi-Parent Recombination

    NASA Astrophysics Data System (ADS)

    Chan, Zeke S. H.; Kasabov, N.

    This paper proposes a new evolutionary operator called Constrained Multi-parent Recombination (CMR) that performs Estimation of Distribution Algorithm (EDA) for continuous optimization problems without evaluating any explicit probabilistic model. The operator linearly combines subsets of the parent population with random coefficients that are subject to constraints to produce the offspring population, so that the offspring are distributed according to a Normal distribution with mean and variance equal to those of the parent population. Moreover, the population convergence rate can be controlled with a variance-scaling factor. CMR is a simple, yet robust and efficient operator. It eliminates the requirement for evaluating an explicit probabilistic model and thus the associated errors and computation. It implicitly models the full set of d(d-1)/2 interdependencies between components, yet its computational complexity is only O(d) per solution (d denotes the problem dimension). Theoretical proofs are provided to support its underlying principle. A preliminary experiment compares the performance of CMR, four other EDA approaches, and Evolutionary Strategies on three benchmark test functions. Results show that CMR performs more consistently than other approaches.
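
    One way to realize the two moment constraints, weights that sum to 1 (preserving the mean) with a controlled sum of squares (scaling the variance), is sketched below; the authors' exact construction of the constrained coefficients may differ:

```python
import numpy as np
rng = np.random.default_rng(0)

def cmr_offspring(parents, n_offspring, scale=1.0, k=8):
    """Each child is a random linear combination of k parents whose weights
    sum to 1 (mean preserved); the centred part of the weights has a fixed
    norm, so `scale` controls the spread of the offspring population."""
    n, d = parents.shape
    k = min(k, n)
    children = np.empty((n_offspring, d))
    for i in range(n_offspring):
        idx = rng.choice(n, size=k, replace=False)
        u = rng.normal(size=k)
        u -= u.mean()                                  # sum(u) = 0
        u *= scale * np.sqrt(1.0 - 1.0 / k) / np.linalg.norm(u)
        w = 1.0 / k + u                                # sum(w) = 1
        children[i] = w @ parents[idx]
    return children
```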

  18. A Fast EM Algorithm for Fitting Joint Models of a Binary Response and Multiple Longitudinal Covariates Subject to Detection Limits

    PubMed Central

    Bernhardt, Paul W.; Zhang, Daowen; Wang, Huixia Judy

    2014-01-01

    Joint modeling techniques have become a popular strategy for studying the association between a response and one or more longitudinal covariates. Motivated by the GenIMS study, where it is of interest to model the event of survival using censored longitudinal biomarkers, a joint model is proposed for describing the relationship between a binary outcome and multiple longitudinal covariates subject to detection limits. A fast, approximate EM algorithm is developed that reduces the dimension of integration in the E-step of the algorithm to one, regardless of the number of random effects in the joint model. Numerical studies demonstrate that the proposed approximate EM algorithm leads to satisfactory parameter and variance estimates in situations with and without censoring on the longitudinal covariates. The approximate EM algorithm is applied to analyze the GenIMS data set. PMID:25598564

  19. A fast color image enhancement algorithm based on Max Intensity Channel.

    PubMed

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-03-30

    In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is implemented assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit to the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better for images with high illumination variations than other methods. Further comparisons of images from National Aeronautics and Space Administration and a wearable camera eButton have shown a high performance of the new method with better color restoration and preservation of image details. PMID:25110395

  20. On the applicability of genetic algorithms to fast solar spectropolarimetric inversions for vector magnetography

    NASA Astrophysics Data System (ADS)

    Harker, Brian J.

    The measurement of vector magnetic fields on the sun is one of the most important diagnostic tools for characterizing solar activity. The ubiquitous solar wind is guided into interplanetary space by open magnetic field lines in the upper solar atmosphere. Highly-energetic solar flares and Coronal Mass Ejections (CMEs) are triggered in lower layers of the solar atmosphere by the driving forces at the visible "surface" of the sun, the photosphere. The driving forces there tangle and interweave the vector magnetic fields, ultimately leading to an unstable field topology with large excess magnetic energy, and this excess energy is suddenly and violently released by magnetic reconnection, emitting intense broadband radiation that spans the electromagnetic spectrum, accelerating billions of metric tons of plasma away from the sun, and finally relaxing the magnetic field to lower-energy states. These eruptive flaring events can have severe impacts on the near-Earth environment and the human technology that inhabits it. This dissertation presents a novel inversion method for inferring the properties of the vector magnetic field from telescopic measurements of the polarization states (Stokes vector) of the light received from the sun, in an effort to develop a method that is fast, accurate, and reliable. One of the long-term goals of this work is to develop such a method that is capable of rapidly-producing characterizations of the magnetic field from time-sequential data, such that near real-time projections of the complexity and flare-productivity of solar active regions can be made. This will be a boon to the field of solar flare forecasting, and should help mitigate the harmful effects of space weather on mankind's space-based endeavors. To this end, I have developed an inversion method based on genetic algorithms (GA) that have the potential for achieving such high-speed analysis.

  1. CATHEDRAL: A Fast and Effective Algorithm to Predict Folds and Domain Boundaries from Multidomain Protein Structures

    PubMed Central

    Dallman, Tim; Pearl, Frances M. G; Orengo, Christine A

    2007-01-01

    We present CATHEDRAL, an iterative protocol for determining the location of previously observed protein folds in novel multidomain protein structures. CATHEDRAL builds on the features of a fast secondary-structure–based method (using graph theory) to locate known folds within a multidomain context and a residue-based, double-dynamic programming algorithm, which is used to align members of the target fold groups against the query protein structure to identify the closest relative and assign domain boundaries. To increase the fidelity of the assignments, a support vector machine is used to provide an optimal scoring scheme. Once a domain is verified, it is excised, and the search protocol is repeated in an iterative fashion until all recognisable domains have been identified. We have performed an initial benchmark of CATHEDRAL against other publicly available structure comparison methods using a consensus dataset of domains derived from the CATH and SCOP domain classifications. CATHEDRAL shows superior performance in fold recognition and alignment accuracy when compared with many equivalent methods. If a novel multidomain structure contains a known fold, CATHEDRAL will locate it in 90% of cases, with <1% false positives. For nearly 80% of assigned domains in a manually validated test set, the boundaries were correctly delineated within a tolerance of ten residues. For the remaining cases, previously classified domains were very remotely related to the query chain so that embellishments to the core of the fold caused significant differences in domain sizes and manual refinement of the boundaries was necessary. To put this performance in context, a well-established sequence method based on hidden Markov models was only able to detect 65% of domains, with 33% of the subsequent boundaries assigned within ten residues. Since, on average, 50% of newly determined protein structures contain more than one domain unit, and typically 90% or more of these domains are already

  2. A fast color image enhancement algorithm based on Max Intensity Channel

    PubMed Central

    Sun, Wei; Han, Long; Guo, Baolong; Jia, Wenyan; Sun, Mingui

    2014-01-01

    In this paper, we extend image enhancement techniques based on the retinex theory imitating human visual perception of scenes containing high illumination variations. This extension achieves simultaneous dynamic range modification, color consistency, and lightness rendition without multi-scale Gaussian filtering which has a certain halo effect. The reflection component is analyzed based on the illumination and reflection imaging model. A new prior named Max Intensity Channel (MIC) is implemented assuming that the reflections of some points in the scene are very high in at least one color channel. Using this prior, the illumination of the scene is obtained directly by performing a gray-scale closing operation and a fast cross-bilateral filtering on the MIC of the input color image. Consequently, the reflection component of each RGB color channel can be determined from the illumination and reflection imaging model. The proposed algorithm estimates the illumination component which is relatively smooth and maintains the edge details in different regions. A satisfactory color rendition is achieved for a class of images that do not satisfy the gray-world assumption implicit to the theoretical foundation of the retinex. Experiments are carried out to compare the new method with several spatial and transform domain methods. Our results indicate that the new method is superior in enhancement applications, improves computation speed, and performs better for images with high illumination variations than other methods. Further comparisons of images from National Aeronautics and Space Administration and a wearable camera eButton have shown a high performance of the new method with better color restoration and preservation of image details. PMID:25110395

  3. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations

    SciTech Connect

    Nukala, Phani K. V. V.; Kent, P. R. C.

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N²) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N²) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN²) work and O(MN²) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.
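
    The traditional O(N²) Sherman-Morrison update that this work improves upon is compact enough to show directly: when one row of the Slater matrix changes (one particle moves), the determinant ratio is a single dot product and the inverse is patched with a rank-one correction. A NumPy sketch with a brute-force check:

```python
import numpy as np

def det_ratio_and_update(Ainv, k, r):
    """Replace row k of A by r; return det(A_new)/det(A_old) and A_new^{-1}.
    One O(N) dot product gives the ratio; the inverse update is O(N^2)."""
    R = r @ Ainv[:, k]              # determinant ratio
    vT = r @ Ainv                   # (new row)^T A^{-1}
    vT[k] -= 1.0                    # subtract (old row)^T A^{-1} = e_k^T
    return R, Ainv - np.outer(Ainv[:, k], vT) / R

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
r = rng.normal(size=5)
R, Ainv_new = det_ratio_and_update(np.linalg.inv(A), 2, r)
B = A.copy(); B[2] = r
print(np.isclose(R, np.linalg.det(B) / np.linalg.det(A)),
      np.allclose(Ainv_new, np.linalg.inv(B)))   # True True
```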

  4. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Nukala, Phani K. V. V.; Kent, P. R. C.

    2009-05-01

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N²) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N²) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN²) work and O(MN²) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions.

  5. A fast and efficient algorithm for Slater determinant updates in quantum Monte Carlo simulations.

    PubMed

    Nukala, Phani K V V; Kent, P R C

    2009-05-28

    We present an efficient low-rank updating algorithm for updating the trial wave functions used in quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared to traditional algorithms that require O(N²) computations, where N is the system size. For single determinant trial wave functions the new algorithm is faster than the traditional O(N²) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction-type trial wave functions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN²) work and O(MN²) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration-interaction-type wave functions. PMID:19485435

  6. A Fast and efficient Algorithm for Slater Determinant Updates in Quantum Monte Carlo Simulations

    SciTech Connect

    Nukala, Phani K; Kent, Paul R

    2009-01-01

    We present an efficient low-rank updating algorithm for updating the trial wavefunctions used in Quantum Monte Carlo (QMC) simulations. The algorithm is based on low-rank updating of the Slater determinants. In particular, the computational complexity of the algorithm is O(kN) during the kth step compared with traditional algorithms that require O(N²) computations, where N is the system size. For single determinant trial wavefunctions the new algorithm is faster than the traditional O(N²) Sherman-Morrison algorithm for up to O(N) updates. For multideterminant configuration-interaction type trial wavefunctions of M+1 determinants, the new algorithm is significantly more efficient, saving both O(MN²) work and O(MN²) storage. The algorithm enables more accurate and significantly more efficient QMC calculations using configuration interaction type wavefunctions.

  7. Generation of SNP datasets for orangutan population genomics using improved reduced-representation sequencing and direct comparisons of SNP calling algorithms

    PubMed Central

    2014-01-01

    Background: High-throughput sequencing has opened up exciting possibilities in population and conservation genetics by enabling the assessment of genetic variation at genome-wide scales. One approach to reduce genome complexity, i.e. investigating only parts of the genome, is reduced-representation library (RRL) sequencing. Like similar approaches, RRL sequencing reduces ascertainment bias due to simultaneous discovery and genotyping of single-nucleotide polymorphisms (SNPs) and does not require reference genomes. Yet, generating such datasets remains challenging due to laboratory and bioinformatical issues. In the laboratory, current protocols require improvements with regards to sequencing homologous fragments to reduce the number of missing genotypes. From the bioinformatical perspective, the reliance of most studies on a single SNP caller disregards the possibility that different algorithms may produce disparate SNP datasets.
    Results: We present an improved RRL (iRRL) protocol that maximizes the generation of homologous DNA sequences, thus achieving improved genotyping-by-sequencing efficiency. Our modifications facilitate generation of single-sample libraries, enabling individual genotype assignments instead of pooled-sample analysis. We sequenced ~1% of the orangutan genome with 41-fold median coverage in 31 wild-born individuals from two populations. SNPs and genotypes were called using three different algorithms. We obtained substantially different SNP datasets depending on the SNP caller. Genotype validations revealed that the Unified Genotyper of the Genome Analysis Toolkit and SAMtools performed significantly better than a caller from CLC Genomics Workbench (CLC). Of all conflicting genotype calls, CLC was only correct in 17% of the cases. Furthermore, conflicting genotypes between two algorithms showed a systematic bias in that one caller almost exclusively assigned heterozygotes, while the other one almost exclusively assigned homozygotes.
    Conclusions:

  8. A novel algorithm for calling mRNA m6A peaks by modeling biological variances in MeRIP-seq data

    PubMed Central

    Cui, Xiaodong; Meng, Jia; Zhang, Shaowu; Chen, Yidong; Huang, Yufei

    2016-01-01

    Motivation: N6-methyl-adenosine (m6A) is the most prevalent mRNA methylation, and precise prediction of its mRNA location is important for understanding its function. A recent sequencing technology, known as Methylated RNA Immunoprecipitation Sequencing technology (MeRIP-seq), has been developed for transcriptome-wide profiling of m6A. We previously developed a peak calling algorithm called exomePeak. However, exomePeak over-simplifies data characteristics and ignores the variance in reads among replicates and the dependency of reads across a site region. To further improve the performance, a new model is needed to address these important issues of MeRIP-seq data. Results: We propose a novel, graphical model-based peak calling method, MeTPeak, for transcriptome-wide detection of m6A sites from MeRIP-seq data. MeTPeak explicitly models read count of an m6A site and introduces a hierarchical layer of Beta variables to capture the variances and a Hidden Markov model to characterize the reads dependency across a site. In addition, we developed a constrained Newton’s method and designed a log-barrier function to compute analytically intractable, positively constrained Beta parameters. We applied our algorithm to simulated and real biological datasets and demonstrated significant improvement in detection performance and robustness over exomePeak. Prediction results on publicly available MeRIP-seq datasets are also validated and shown to be able to recapitulate the known patterns of m6A, further validating the improved performance of MeTPeak. Availability and implementation: The package ‘MeTPeak’ is implemented in R and C++, and additional details are available at https://github.com/compgenomics/MeTPeak Contact: yufei.huang@utsa.edu or xdchoi@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307641

  9. Fast Numerical Algorithms for 3-D Scattering from PEC and Dielectric Random Rough Surfaces in Microwave Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Lisha

    We present fast and robust numerical algorithms for 3-D scattering from perfectly electrical conducting (PEC) and dielectric random rough surfaces in microwave remote sensing. The Coifman wavelets or Coiflets are employed to implement Galerkin's procedure in the method of moments (MoM). Due to the high-precision one-point quadrature, the Coiflets yield fast evaluation of most off-diagonal entries, reducing the matrix fill effort from O(N²) to O(N). The orthogonality and Riesz basis of the Coiflets generate a well-conditioned impedance matrix, with rapid convergence of the conjugate gradient solver. The resulting impedance matrix is further sparsified by the matrix-formed standard fast wavelet transform (SFWT). By properly selecting multiresolution levels of the total transformation matrix, the solution precision can be enhanced while matrix sparsity and memory consumption have not been noticeably sacrificed. The unified fast scattering algorithm for dielectric random rough surfaces can asymptotically reduce to the PEC case when the loss tangent grows extremely large. Numerical results demonstrate that the reduced PEC model does not suffer from ill-posed problems. Compared with previous publications and laboratory measurements, good agreement is observed.

  10. Fast parallel algorithms for graph-theoretic problems: matching, coloring, and partitioning

    SciTech Connect

    Karloff, H.J.

    1985-01-01

    New parallel algorithms are presented to solve graph-theoretic problems of three kinds: matching, coloring, and partitioning. Throughout, superfast algorithms are sought: those running on a parallel random access machine in time polynomial in the log of the input size (polylog time) and using a polynomial number of processors. Problems solvable with such algorithms are said to be in NC. Those solvable by randomized algorithms obeying the same time and processor bounds are said to be in RNC or LVNC; those in RNC (or Monte Carlo RNC) are solvable by algorithms which, on instances of size n, return a correct answer with probability at least 1-2⁻ⁿ, and those in LVNC (or Las Vegas RNC), by algorithms that always return either a correct answer or failure, failure being returned at most half the time. Often the algorithms themselves will be said to be in NC, RNC, or LVNC.

  11. Automatic mapping of visual cortex receptive fields: a fast and precise algorithm.

    PubMed

    Fiorani, Mario; Azzi, João C B; Soares, Juliana G M; Gattass, Ricardo

    2014-01-15

    An important issue for neurophysiological studies of the visual system is the definition of the region of the visual field that can modify a neuron's activity (i.e., the neuron's receptive field - RF). Usually a trade-off exists between precision and the time required to map a RF. Manual methods (qualitative) are fast but impose a variable degree of imprecision, while quantitative methods are more precise but usually require more time. We describe a rapid quantitative method for mapping visual RFs that is derived from computerized tomography and named back-projection. This method finds the intersection of responsive regions of the visual field based on spike density functions that are generated over time in response to long bars moving in different directions. An algorithm corrects the response profiles for latencies and allows for the conversion of the time domain into a 2D-space domain. The final product is an RF map that shows the distribution of the neuronal activity in visual-spatial coordinates. In addition to mapping the RF, this method also provides functional properties, such as latency, orientation and direction preference indexes. This method exhibits the following beneficial properties: (a) speed; (b) ease of implementation; (c) precise RF localization; (d) sensitivity (this method can map RFs based on few responses); (e) reliability (this method provides consistent information about RF shapes and sizes, which will allow for comparative studies); (f) comprehensiveness (this method can scan for RFs over an extensive area of the visual field); (g) informativeness (it provides functional quantitative data about the RF); and (h) usefulness (this method can map RFs in regions without direct retinal inputs, such as the cortical representations of the optic disc and of retinal lesions, which should allow for studies of functional connectivity, reorganization and neural plasticity). Furthermore, our method allows for precise mapping of RFs in a 30° by 30

  12. A fast algorithm for attribute reduction based on Trie tree and rough set theory

    NASA Astrophysics Data System (ADS)

    Hu, Feng; Wang, Xiao-yan; Luo, Chuan-jiang

    2013-03-01

    Attribute reduction is an important issue in rough set theory. Many efficient algorithms have been proposed; however, few of them can process huge data sets quickly. In this paper, combining rough set theory with the Trie tree, algorithms for computing the positive region of a decision table are proposed. A new Trie-based algorithm for attribute reduction is then developed, which can quickly process the attribute reduction of large data sets. Experimental results show its high efficiency.
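
    The role of the Trie can be made concrete with a small sketch (my own minimal reading of the positive-region computation, not the paper's code): objects are inserted by their condition-attribute values, and a leaf whose objects all share one decision value contributes its objects to the positive region POS_C(D).

    ```python
    def positive_region(objects, cond_idx, dec_idx):
        """objects: list of tuples; cond_idx: condition columns; dec_idx: decision column."""
        trie = {}
        for i, obj in enumerate(objects):
            node = trie
            for a in cond_idx:
                node = node.setdefault(obj[a], {})   # descend, creating branches
            node.setdefault('_objs', []).append(i)
        pos = []

        def walk(node):
            if '_objs' in node:                       # leaf: one equivalence class
                ids = node['_objs']
                if len({objects[i][dec_idx] for i in ids}) == 1:   # consistent
                    pos.extend(ids)
            else:
                for child in node.values():
                    walk(child)

        walk(trie)
        return sorted(pos)

    table = [(1, 0, 'yes'), (1, 0, 'yes'), (1, 1, 'no'), (0, 1, 'yes'), (0, 1, 'no')]
    print(positive_region(table, cond_idx=[0, 1], dec_idx=2))   # -> [0, 1, 2]
    ```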

  13. Addendum to 'A new hybrid algorithm for computing a fast discrete Fourier transform'

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Benjauthrit, B.

    1981-01-01

    The reported investigation represents a continuation of a study conducted by Reed and Truong (1979), who proposed a hybrid algorithm for computing the discrete Fourier transform (DFT). The proposed technique employs a Winograd-type algorithm in conjunction with the Mersenne prime-number theoretic transform to perform a DFT. The implementation of the technique involves a considerable number of additions. The new investigation shows an approach which can reduce the number of additions significantly. It is proposed to use Winograd's algorithm for computing the Mersenne prime-number theoretic transform in the transform portion of the hybrid algorithm.

  14. Ultra-fast local-haplotype variant calling using paired-end DNA-sequencing data reveals somatic mosaicism in tumor and normal blood samples

    PubMed Central

    Sengupta, Subhajit; Gulukota, Kamalakar; Zhu, Yitan; Ober, Carole; Naughton, Katherine; Wentworth-Sheilds, William; Ji, Yuan

    2016-01-01

    Somatic mosaicism refers to the existence of somatic mutations in a fraction of somatic cells in a single biological sample. Its importance has mainly been discussed in theory, although experimental work has started to emerge linking somatic mosaicism to disease diagnosis. Through novel statistical modeling of paired-end DNA-sequencing data using blood-derived DNA from healthy donors as well as DNA from tumor samples, we present an ultra-fast computational pipeline, LocHap, which searches for multiple single nucleotide variants (SNVs) that are scaffolded by the same reads. We refer to scaffolded SNVs as local haplotypes (LH). When an LH exhibits more than two genotypes, we call it a local haplotype variant (LHV). The presence of LHVs is considered evidence of somatic mosaicism because a genetically homogeneous cell population will not harbor LHVs. Applying LocHap to whole-genome and whole-exome sequence data from DNA of normal blood and tumor samples, we find widespread LHVs across the genome. Importantly, we find more LHVs in tumor samples than in normal samples, and more in older adults than in younger ones. We confirm the existence of LHVs and somatic mosaicism by validation studies in normal blood samples. LocHap is publicly available at http://www.compgenome.org/lochap. PMID:26420835
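
    The core LHV test can be caricatured in a few lines (the notation and the minimum read support of 3 are my assumptions; the actual LocHap pipeline is a statistical model over base-call qualities):

    ```python
    from collections import Counter

    def is_lhv(reads, pos1, pos2, min_support=3):
        """reads: dicts mapping position -> base; count two-locus haplotypes."""
        haps = Counter()
        for r in reads:
            if pos1 in r and pos2 in r:               # read scaffolds both SNVs
                haps[(r[pos1], r[pos2])] += 1
        supported = [h for h, n in haps.items() if n >= min_support]
        return len(supported) > 2   # a diploid locus can carry at most two

    reads = ([{100: 'A', 150: 'C'}] * 5 + [{100: 'G', 150: 'T'}] * 5
             + [{100: 'A', 150: 'T'}] * 4)            # third haplotype -> mosaicism
    print(is_lhv(reads, 100, 150))                    # True
    ```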

  15. A Fast Exact k-Nearest Neighbors Algorithm for High Dimensional Search Using k-Means Clustering and Triangle Inequality

    PubMed Central

    Wang, Xueyi

    2011-01-01

    The k-nearest neighbors (k-NN) algorithm is a widely used machine learning method that finds nearest neighbors of a test object in a feature space. We present a new exact k-NN algorithm called kMkNN (k-Means for k-Nearest Neighbors) that uses k-means clustering and the triangle inequality to accelerate the search for nearest neighbors in a high dimensional space. The kMkNN algorithm has two stages. In the buildup stage, instead of using complex tree structures such as metric trees, kd-trees, or ball-trees, kMkNN uses a simple k-means clustering method to preprocess the training dataset. In the searching stage, given a query object, kMkNN finds nearest training objects starting from the nearest cluster to the query object and uses the triangle inequality to reduce the distance calculations. Experiments show that the performance of kMkNN is surprisingly good compared to the traditional k-NN algorithm and tree-based k-NN algorithms such as kd-trees and ball-trees. On a collection of 20 datasets with up to 10^6 records and 10^4 dimensions, kMkNN shows a 2- to 80-fold reduction of distance calculations and a 2- to 60-fold speedup over the traditional k-NN algorithm for 16 datasets. Furthermore, kMkNN performs significantly better than a kd-tree based k-NN algorithm for all datasets and performs better than a ball-tree based k-NN algorithm for most datasets. The results show that kMkNN is effective for searching nearest neighbors in high dimensional spaces. PMID:22247818
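
    A condensed sketch of the searching stage, under simplifications of my own (1-nearest neighbor only, plain k-means from SciPy): clusters are visited nearest-center first, and the bound d(q, x) >= |d(q, c) - d(c, x)| skips points without computing their distances.

    ```python
    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 32))
    centers, labels = kmeans2(X, 50, minit='++')
    # Buildup stage: store each point's distance to its cluster center.
    d_center = np.linalg.norm(X - centers[labels], axis=1)

    def nn_search(q):
        best_d, best_i, n_dist = np.inf, -1, 0
        for c in np.argsort(np.linalg.norm(centers - q, axis=1)):
            dqc = np.linalg.norm(centers[c] - q)
            for i in np.flatnonzero(labels == c):
                if abs(dqc - d_center[i]) >= best_d:
                    continue                  # pruned by the triangle inequality
                n_dist += 1
                d = np.linalg.norm(X[i] - q)
                if d < best_d:
                    best_d, best_i = d, i
        return best_i, n_dist

    i, n_dist = nn_search(rng.normal(size=32))
    print('nearest neighbor %d found with %d of %d distances' % (i, n_dist, len(X)))
    ```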

  16. LGH: A Fast and Accurate Algorithm for Single Individual Haplotyping Based on a Two-Locus Linkage Graph.

    PubMed

    Xie, Minzhu; Wang, Jianxin; Chen, Xin

    2015-01-01

    Phased haplotype information is crucial in our complete understanding of differences between individuals at the genetic level. Given a collection of DNA fragments sequenced from a homologous pair of chromosomes, the problem of single individual haplotyping (SIH) aims to reconstruct a pair of haplotypes using a computer algorithm. In this paper, we encode the information of aligned DNA fragments into a two-locus linkage graph and approach the SIH problem by vertex labeling of the graph. In order to find a vertex labeling with the minimum sum of weights of incompatible edges, we develop a fast and accurate heuristic algorithm. It starts with detecting error-tolerant components by an adapted breadth-first search. A proper labeling of vertices is then identified for each component, with which sequencing errors are further corrected and edge weights are adjusted accordingly. After contracting each error-tolerant component into a single vertex, the above procedure is iterated on the resulting condensed linkage graph until error-tolerant components are no longer detected. The algorithm finally outputs a haplotype pair based on the vertex labeling. Extensive experiments on simulated and real data show that our algorithm is more accurate and faster than five existing algorithms for single individual haplotyping. PMID:26671798

  17. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms.

    PubMed

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784

  18. A High-Speed Vision-Based Sensor for Dynamic Vibration Analysis Using Fast Motion Extraction Algorithms

    PubMed Central

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Changan

    2016-01-01

    The development of image sensor and optics enables the application of vision-based techniques to the non-contact dynamic vibration analysis of large-scale structures. As an emerging technology, a vision-based approach allows for remote measuring and does not bring any additional mass to the measuring object compared with traditional contact measurements. In this study, a high-speed vision-based sensor system is developed to extract structure vibration signals in real time. A fast motion extraction algorithm is required for this system because the maximum sampling frequency of the charge-coupled device (CCD) sensor can reach up to 1000 Hz. Two efficient subpixel level motion extraction algorithms, namely the modified Taylor approximation refinement algorithm and the localization refinement algorithm, are integrated into the proposed vision sensor. Quantitative analysis shows that both of the two modified algorithms are at least five times faster than conventional upsampled cross-correlation approaches and achieve satisfactory error performance. The practicability of the developed sensor is evaluated by an experiment in a laboratory environment and a field test. Experimental results indicate that the developed high-speed vision-based sensor system can extract accurate dynamic structure vibration signals by tracking either artificial targets or natural features. PMID:27110784

  19. Effective Analysis of NGS Metagenomic Data with Ultra-Fast Clustering Algorithms (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Li, Weizhong [San Diego Supercomputer Center

    2013-01-22

    San Diego Supercomputer Center's Weizhong Li on "Effective Analysis of NGS Metagenomic Data with Ultra-fast Clustering Algorithms" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  20. Effective Analysis of NGS Metagenomic Data with Ultra-Fast Clustering Algorithms (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    SciTech Connect

    Li, Weizhong

    2011-10-12

    San Diego Supercomputer Center's Weizhong Li on "Effective Analysis of NGS Metagenomic Data with Ultra-fast Clustering Algorithms" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  1. FPGA design and implementation of a fast pixel purity index algorithm for endmember extraction in hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Valencia, David; Plaza, Antonio; Vega-Rodríguez, Miguel A.; Pérez, Rosa M.

    2005-11-01

    Hyperspectral imagery is a class of image data which is used in many scientific areas, most notably medical imaging and remote sensing. It is characterized by a wealth of spatial and spectral information. Over the last years, many algorithms have been developed with the purpose of finding "spectral endmembers," which are assumed to be pure signatures in remotely sensed hyperspectral data sets. Such pure signatures can then be used to estimate the abundance or concentration of materials in mixed pixels, thus allowing sub-pixel analysis which is crucial in many remote sensing applications due to current sensor optics and configuration. One of the most popular endmember extraction algorithms has been the pixel purity index (PPI), available from Kodak's Research Systems ENVI software package. This algorithm is very time consuming, a fact that has generally prevented it from delivering valid response times in a wide range of applications, including environmental monitoring, military applications, and hazard and threat assessment/tracking (including wildland fire detection, oil spill mapping, and chemical and biological standoff detection). Field programmable gate arrays (FPGAs) are hardware components with millions of gates. Their reprogrammability and high computational power make them particularly attractive in remote sensing applications which require a response in near real-time. In this paper, we present an FPGA design for the implementation of the PPI algorithm which takes advantage of a recently developed fast PPI (FPPI) algorithm that relies on software-based optimization. The proposed FPGA design represents our first step toward the development of a new reconfigurable system for fast, onboard analysis of remotely sensed hyperspectral imagery.

  2. Application of two oriented partial differential equation filtering models on speckle fringes with poor quality and their numerically fast algorithms.

    PubMed

    Zhu, Xinjun; Chen, Zhanqing; Tang, Chen; Mi, Qinghua; Yan, Xiusheng

    2013-03-20

    In this paper, we are concerned with denoising experimentally obtained electronic speckle pattern interferometry (ESPI) speckle fringe patterns with poor quality. We extend the application of two existing oriented partial differential equation (PDE) filters, the second-order single oriented PDE filter and the double oriented PDE filter, to two experimentally obtained ESPI speckle fringe patterns of very poor quality, and compare them with other efficient filtering methods, including the adaptive weighted filter, the improved nonlinear complex diffusion PDE, and the windowed Fourier transform method. All five filters have been shown to be efficient denoising methods in previously published comparative analyses. The experimental results demonstrate that the two oriented PDE models are applicable to low-quality ESPI speckle fringe patterns. To overcome the main shortcoming of the two oriented PDE models, their slow convergence, we then develop numerically fast algorithms based on a Gauss-Seidel strategy. The proposed numerical algorithms accelerate convergence greatly and perform significantly better in terms of computational efficiency. These fast algorithms extend automatically to some other PDE filtering models. PMID:23518722
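
    The Gauss-Seidel idea the speedup rests on is generic and easy to show in isolation (this is a plain implicit diffusion step, not the paper's oriented-PDE filter): each pixel update immediately reuses the neighbor values already refreshed in the same sweep, unlike a Jacobi sweep.

    ```python
    import numpy as np

    def gauss_seidel_diffusion(img, lam=1.0, sweeps=20):
        """Solve (I + lam*L) u = img for one implicit diffusion step."""
        u = img.astype(float).copy()
        for _ in range(sweeps):
            for i in range(1, u.shape[0] - 1):
                for j in range(1, u.shape[1] - 1):
                    # 5-point stencil; u[i-1, j] and u[i, j-1] are fresh values.
                    u[i, j] = (img[i, j] + lam * (u[i - 1, j] + u[i + 1, j] +
                                                  u[i, j - 1] + u[i, j + 1])) / (1 + 4 * lam)
        return u

    noisy = np.random.default_rng(0).normal(size=(64, 64))
    print('residual variance after smoothing: %.3f' % gauss_seidel_diffusion(noisy).var())
    ```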

  3. A fast underwater optical image segmentation algorithm based on a histogram weighted fuzzy c-means improved by PSO

    NASA Astrophysics Data System (ADS)

    Wang, Shilong; Xu, Yuru; Pang, Yongjie

    2011-03-01

    Underwater images have a low signal-to-noise ratio and fuzzy edges, so processing them directly with traditional methods gives unsatisfactory results. Though the traditional fuzzy C-means (FCM) algorithm can sometimes divide an image into object and background, its time-consuming computation is often an obstacle. The mission of the vision system of an autonomous underwater vehicle (AUV) is to deal rapidly and exactly with information about objects in a complex environment, so that the AUV can use the result to execute its next task. Therefore, using the statistical characteristics of the gray-level histogram, a fast and effective fuzzy C-means underwater image segmentation algorithm is presented. By modifying the fuzzy membership with the weighted histogram, the algorithm not only cuts down a large amount of data processing and storage compared with the traditional algorithm, speeding up segmentation, but also improves the quality of underwater image segmentation. Finally, particle swarm optimization (PSO) described by the sine function is introduced into the algorithm, compensating for the FCM algorithm's inability to reach the global optimum: it accounts for global effects while achieving a locally optimal solution, and it further increases the computing speed. Experimental results indicate that the novel algorithm reaches better segmentation quality with reduced processing time per image, enhancing efficiency and satisfying the requirements of a highly effective, real-time AUV.
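
    The histogram acceleration itself is compact enough to sketch (parameter choices are mine, and the PSO stage of the paper is omitted): FCM iterates over the 256 gray levels weighted by their histogram counts, so each iteration costs O(256*c) instead of O(N*c) for N pixels.

    ```python
    import numpy as np

    def histogram_fcm(img, c=2, m=2.0, iters=50):
        hist = np.bincount(img.ravel(), minlength=256).astype(float)
        g = np.arange(256, dtype=float)
        v = np.linspace(g.min(), g.max(), c)                 # initial centers
        for _ in range(iters):
            d = np.abs(g[None, :] - v[:, None]) + 1e-9       # (c, 256) distances
            u = d ** (-2 / (m - 1))
            u /= u.sum(axis=0, keepdims=True)                # fuzzy memberships
            w = (u ** m) * hist[None, :]                     # histogram weighting
            v = (w * g[None, :]).sum(axis=1) / w.sum(axis=1) # update centers
        return v, u

    img = np.random.default_rng(0).integers(0, 256, size=(200, 200), dtype=np.uint8)
    centers, _ = histogram_fcm(img)
    print('gray-level cluster centers:', np.sort(centers))
    ```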

  4. Fast algorithm for the solution of large-scale non-negativity constrained least squares problems.

    SciTech Connect

    Van Benthem, Mark Hilary; Keenan, Michael Robert

    2004-06-01

    Algorithms for multivariate image analysis and other large-scale applications of multivariate curve resolution (MCR) typically employ constrained alternating least squares (ALS) procedures in their solution. The solution to a least squares problem under general linear equality and inequality constraints can be reduced to the solution of a non-negativity-constrained least squares (NNLS) problem. Thus the efficiency of the solution to any constrained least squares problem rests heavily on the underlying NNLS algorithm. We present a new NNLS solution algorithm that is appropriate to large-scale MCR and other ALS applications. Our new algorithm rearranges the calculations in the standard active-set NNLS method on the basis of combinatorial reasoning. This rearrangement substantially reduces the computational burden of NNLS problems having large numbers of observation vectors.
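
    For context, the setting the paper optimizes looks like this: a baseline sketch using SciPy's standard active-set NNLS, one call per observation vector. The paper's combinatorial algorithm reorganizes exactly this loop; the synthetic data and sizes below are mine.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    A = np.abs(rng.normal(size=(100, 5)))         # 5 pure-component spectra
    C_true = np.abs(rng.normal(size=(5, 1000)))   # concentrations for 1000 pixels
    B = A @ C_true + 0.01 * rng.normal(size=(100, 1000))

    # One independent NNLS problem per pixel, all sharing the same matrix A.
    C = np.column_stack([nnls(A, B[:, k])[0] for k in range(B.shape[1])])
    print('max abs error:', np.abs(C - C_true).max())
    ```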

  5. A fast algorithm for parallel computation of multibody dynamics on MIMD parallel architectures

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory; Bagherzadeh, Nader

    1993-01-01

    In this paper the implementation of a parallel O(log N) algorithm for computation of rigid multibody dynamics on a Hypercube MIMD parallel architecture is presented. To our knowledge, this is the first algorithm that achieves the time lower bound of O(log N) by using an optimal number of O(N) processors. However, in addition to its theoretical significance, the algorithm is also highly efficient for practical implementation on commercially available MIMD parallel architectures due to its highly coarse grain size and simple communication and synchronization requirements. We present a multilevel parallel computation strategy for implementation of the algorithm on a Hypercube. This strategy allows the exploitation of parallelism at several computational levels as well as maximum overlapping of computation and communication to increase the performance of parallel computation.

  6. Fast two-dimensional super-resolution image reconstruction algorithm for ultra-high emitter density.

    PubMed

    Huang, Jiaqing; Gumpper, Kristyn; Chi, Yuejie; Sun, Mingzhai; Ma, Jianjie

    2015-07-01

    Single-molecule localization microscopy achieves sub-diffraction-limit resolution by localizing a sparse subset of stochastically activated emitters in each frame. Its temporal resolution is limited by the maximal emitter density that can be handled by the image reconstruction algorithms. Multiple algorithms have been developed to accurately locate the emitters even when they have significant overlaps. Currently, the compressive-sensing-based algorithm CSSTORM achieves the highest emitter density. However, CSSTORM is extremely computationally expensive, which limits its practical application. Here, we develop a new algorithm (MempSTORM) based on two-dimensional spectrum analysis. With the same localization accuracy and recall rate, MempSTORM is 100 times faster than CSSTORM with ℓ1-homotopy. In addition, MempSTORM can be implemented on a GPU for parallelism, which can further increase its computational speed and make online super-resolution reconstruction of high-density emitters possible. PMID:26125349

  7. Fast computing global structural balance in signed networks based on memetic algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Yixiang; Du, Haifeng; Gong, Maoguo; Ma, Lijia; Wang, Shanfeng

    2014-12-01

    Structural balance is a large area of study in signed networks, and it is intrinsically a global property of the whole network. Computing global structural balance in signed networks, which has attracted some attention in recent years, means measuring how unbalanced a signed network is; it is an NP-hard problem. Many approaches have been developed to compute global balance, but the results obtained by them are partial and unsatisfactory. In this study, the computation of global structural balance is solved as an optimization problem by using a memetic algorithm. The optimization algorithm, named Meme-SB, is proposed to optimize an evaluation function, the energy function, which measures the distance to exact balance. Our proposed algorithm combines a genetic algorithm with a greedy strategy as the local search procedure. Experiments on social and biological networks show the excellent effectiveness and efficiency of the proposed method.
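
    The energy function being minimized can be stated in a few lines; this is the textbook frustration count for a node partition, which matches the abstract's description as I read it (edge weights taken as unit for simplicity).

    ```python
    def energy(edges, group):
        """edges: list of (u, v, sign); group: dict node -> group id.
        Counts frustrated edges: negative edges inside a group and
        positive edges between groups. Zero means exactly balanced."""
        e = 0
        for u, v, sign in edges:
            same = group[u] == group[v]
            if (sign < 0 and same) or (sign > 0 and not same):
                e += 1
        return e

    edges = [(0, 1, +1), (1, 2, +1), (0, 2, -1)]        # unbalanced triangle
    print(energy(edges, {0: 0, 1: 0, 2: 0}))             # 1
    print(energy(edges, {0: 0, 1: 0, 2: 1}))             # 1 (cannot reach 0)
    ```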

  8. A fast and robust bulk-loading algorithm for indexing very large digital elevation datasets: I. Algorithm

    NASA Astrophysics Data System (ADS)

    Rodríguez, Félix R.; Barrena, Manuel

    2011-07-01

    Digital elevation models (DEMs) constitute a valuable source of data for a number of geoscience-related applications. The Shuttle Radar Topography Mission (SRTM) collected, and made available to the public, the world's largest DEM to that date (composed of billions of points). The SRTM DEM is stored in the NASA repository as a well-organized collection of flat files. Retrieving the stored topographic information for a region of interest involves selecting a proper list of files, downloading them, filtering the data to the desired region, and processing them according to user needs. With the aim of providing easier and faster access to these data and improving their further analysis and processing, we have indexed the SRTM DEM by means of a spatial index based on the kd-tree data structure, called the Q-tree. This paper is the first in a two-part series that describes the method followed to build an index on such huge amounts of data while minimizing the number of insert operations. We demonstrate that our method can build a very efficient space-partitioning index with good performance in both point and range queries on the spatial data. To the best of our knowledge, this is the only successful spatial indexing proposal in the literature that deals with such a huge volume of data.

  9. Verification Studies for Multi-Fluid Plasma Algorithms with Applications to Fast MHD Physics

    NASA Astrophysics Data System (ADS)

    Becker, Joe; Hakim, Ammar; Loverich, John; Stoltz, Peter

    2011-10-01

    In this paper we present a series of verification studies for finite volume algorithms in Nautilus, a numerical solver for fluid plasmas. Results include a set of typical Euler, Maxwell, MHD, and two-fluid benchmarks. In addition, results and algorithms for a set of hyperbolic gauge-cleaning schemes that can be applied to the MHD and two-fluid systems using finite volume methods will be presented. Finally, we move on to applications in field-reversed configuration (FRC) plasmas.

  10. A fast random walk algorithm for computing the pulsed-gradient spin-echo signal in multiscale porous media.

    PubMed

    Grebenkov, Denis S

    2011-02-01

    A new method for computing the signal attenuation due to restricted diffusion in a linear magnetic field gradient is proposed. A fast random walk (FRW) algorithm for simulating random trajectories of diffusing spin-bearing particles is combined with gradient encoding. As random moves of a FRW are continuously adapted to local geometrical length scales, the method is efficient for simulating pulsed-gradient spin-echo experiments in hierarchical or multiscale porous media such as concrete, sandstones, sedimentary rocks and, potentially, brain or lungs. PMID:21159532

  11. Accurate estimate of the critical exponent nu for self-avoiding walks via a fast implementation of the pivot algorithm.

    PubMed

    Clisby, Nathan

    2010-02-01

    We introduce a fast implementation of the pivot algorithm for self-avoiding walks, which we use to obtain large samples of walks on the cubic lattice of up to 33×10^6 steps. Consequently the critical exponent ν for three-dimensional self-avoiding walks is determined to great accuracy; the final estimate is ν = 0.587597(7). The method can be adapted to other models of polymers with short-range interactions, on the lattice or in the continuum. PMID:20366773
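
    For orientation, a naive O(N)-per-move pivot step on the square lattice looks as follows; Clisby's contribution is precisely a data structure that avoids this O(N) cost per move, none of which is shown here.

    ```python
    import random

    ROT = {'+90': lambda x, y: (-y, x), '-90': lambda x, y: (y, -x),
           '180': lambda x, y: (-x, -y)}

    def pivot_step(walk):
        """One Metropolis step: rotate the tail about a random interior site,
        accept only if the result is still self-avoiding."""
        p = random.randrange(1, len(walk) - 1)
        px, py = walk[p]
        rot = random.choice(list(ROT.values()))
        tail = [tuple(map(sum, zip((px, py), rot(x - px, y - py))))
                for x, y in walk[p + 1:]]
        new = walk[:p + 1] + tail
        return new if len(set(new)) == len(new) else walk   # reject on collision

    walk = [(i, 0) for i in range(50)]          # start from a straight rod
    for _ in range(2000):
        walk = pivot_step(walk)
    end = walk[-1]
    print('squared end-to-end distance:', end[0] ** 2 + end[1] ** 2)
    ```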

  12. A fast U-D factorization-based learning algorithm with applications to nonlinear system modeling and identification.

    PubMed

    Zhang, Y; Li, X R

    1999-01-01

    A fast learning algorithm for training multilayer feedforward neural networks (FNNs) using a fading-memory extended Kalman filter (FMEKF) is presented first, along with a technique using a self-adjusting time-varying forgetting factor. Then a U-D factorization-based FMEKF is proposed to further improve the learning rate and accuracy of the FNN. In comparison with backpropagation (BP) and existing EKF-based learning algorithms, the proposed U-D factorization-based FMEKF algorithm provides much more accurate learning results using fewer hidden nodes. It has an improved convergence rate and numerical stability (robustness). In addition, it is less sensitive to start-up parameters (e.g., initial weights and covariance matrix) and to randomness in the observed data. It also has good generalization ability and needs less training time to achieve a specified learning accuracy. Simulation results in modeling and identification of nonlinear dynamic systems are given to show the effectiveness and efficiency of the proposed algorithm. PMID:18252590

  13. A modified VPPM algorithm of VLC systems suitable for fast dimming environment

    NASA Astrophysics Data System (ADS)

    Lee, Seungwoo; Ahn, Byung-Gu; Ju, MinChul; Park, Youngil

    2016-04-01

    As LED applications with fast dimming appear, the variable pulse position modulation (VPPM)-based visible light communication (VLC) scheme is required to work in this environment as well. With the previous VPPM scheme, however, transmission was possible only at fixed dimming levels, not during the transition period between them. In this work, we propose a novel VPPM scheme that operates even under rapid brightness fluctuations. For this purpose, we adopt a stepwise brightness change at the LED and moving-average correlation masks to cope with the changing brightness. The implemented VLC testbed demonstrates that the proposed scheme is appropriate for fast dimming environments.

  14. Optimization of ultra-fast interactions using laser pulse temporal shaping controlled by a deterministic algorithm

    NASA Astrophysics Data System (ADS)

    Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Ruiz de la Cruz, A.; Solis, J.

    2014-02-01

    Femtosecond laser pulse temporal shaping techniques have led to important advances in different research fields like photochemistry, laser physics, non-linear optics, biology, or materials processing. This success is partly related to the use of optimal control algorithms. Due to the high dimensionality of the solution and control spaces, evolutionary algorithms are extensively applied and, among them, genetic ones have reached the status of a standard adaptive strategy. Still, their use is normally accompanied by a reduction of the problem complexity by different modalities of parameterization of the spectral phase. Exploiting Rabitz and co-authors' ideas about the topology of quantum landscapes, in this work we analyze the optimization of two different problems under a deterministic approach, using a multiple one-dimensional search (MODS) algorithm. In the first case we explore the determination of the optimal phase mask required for generating arbitrary temporal pulse shapes and compare the performance of the MODS algorithm to the standard iterative Gerchberg-Saxton algorithm. Based on the good performance achieved, the same method has been applied for optimizing two-photon absorption starting from temporally broadened laser pulses, or from laser pulses temporally and spectrally distorted by non-linear absorption in air, obtaining similarly good results which confirm the validity of the deterministic search approach.
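
    The Gerchberg-Saxton baseline the authors compare against is the standard two-domain projection loop; a textbook version (not their code; the test signal is mine) is:

    ```python
    import numpy as np

    def gerchberg_saxton(amp_t, amp_f, iters=200):
        """Recover a phase consistent with given time- and frequency-domain amplitudes."""
        rng = np.random.default_rng(0)
        field = amp_t * np.exp(1j * 2 * np.pi * rng.random(amp_t.size))
        for _ in range(iters):
            F = np.fft.fft(field)
            F = amp_f * np.exp(1j * np.angle(F))          # impose spectral amplitude
            field = np.fft.ifft(F)
            field = amp_t * np.exp(1j * np.angle(field))  # impose temporal amplitude
        return np.angle(field)

    t = np.linspace(-5, 5, 256)
    amp_t = np.exp(-t ** 2)                     # target temporal envelope
    amp_f = np.abs(np.fft.fft(amp_t))           # consistent spectral amplitude
    phase = gerchberg_saxton(amp_t, amp_f)
    print('recovered phase span (rad): %.3f' % np.ptp(phase))
    ```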

  15. Optimization of ultra-fast interactions using laser pulse temporal shaping controlled by a deterministic algorithm

    NASA Astrophysics Data System (ADS)

    Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Ruiz de la Cruz, A.; Solis, J.

    2013-04-01

    Femtosecond laser pulse temporal shaping techniques have led to important advances in different research fields like photochemistry, laser physics, non-linear optics, biology, or materials processing. This success is partly related to the use of optimal control algorithms. Due to the high dimensionality of the solution and control spaces, evolutionary algorithms are extensively applied and, among them, genetic ones have reached the status of a standard adaptive strategy. Still, their use is normally accompanied by a reduction of the problem complexity by different modalities of parameterization of the spectral phase. Exploiting Rabitz and co-authors' ideas about the topology of quantum landscapes, in this work we analyze the optimization of two different problems under a deterministic approach, using a multiple one-dimensional search (MODS) algorithm. In the first case we explore the determination of the optimal phase mask required for generating arbitrary temporal pulse shapes and compare the performance of the MODS algorithm to the standard iterative Gerchberg-Saxton algorithm. Based on the good performance achieved, the same method has been applied for optimizing two-photon absorption starting from temporally broadened laser pulses, or from laser pulses temporally and spectrally distorted by non-linear absorption in air, obtaining similarly good results which confirm the validity of the deterministic search approach.

  16. A fast and Robust Algorithm for general inequality/equality constrained minimum time problems

    SciTech Connect

    Briessen, B.; Sadegh, N.

    1995-12-01

    This paper presents a new algorithm for solving general inequality/equality constrained minimum time problems. The algorithm's solution time is linear in the number of Runge-Kutta steps and the number of parameters used to discretize the control input history. The method is being applied to a three-link redundant robotic arm with torque bounds, joint angle bounds, and a specified tip path. It solves case after case within a graphical user interface in which the user chooses the initial joint angles and the tip path with a mouse. Solve times are from 30 to 120 seconds on a Hewlett Packard workstation. A zero torque history is always used in the initial guess, and the algorithm has never crashed, indicating its robustness. The algorithm solves for a feasible solution at a large trajectory execution time t_f, then reduces t_f by a small amount and re-solves. The fixed-time re-solve uses a new method of finding a near-minimum-2-norm solution to a set of linear equations and inequalities that achieves quadratic convergence to a feasible solution of the full nonlinear problem.

  17. Research on fast algorithm of small UAV navigation in non-linear matrix reductionism method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Fang, Jiancheng; Sheng, Wei; Cao, Juanjuan

    2008-10-01

    The low Reynolds numbers of small UAVs result in unfavorable aerodynamic conditions for controlled flight, and when operated near the ground, a small UAV is seriously affected by low-frequency interference caused by atmospheric disturbance. Therefore, the GNC system needs high-frequency attitude estimation and control to keep the UAV steady. As the dimensions of small UAVs shrink, their GNC systems increasingly adopt embedded design technology to achieve compactness, light weight, and low power consumption, which in turn limits the computational capability of the GNC system to a certain extent. A high-speed navigation algorithm therefore becomes an urgent requirement for the GNC system. To meet this requirement, a non-linear matrix reduction approach is adopted in this paper to create a new high-speed navigation algorithm that holds the radii of the meridian circle and the prime vertical circle constant and linearizes the position matrix calculation formulae of the navigation equations. Compared with the standard navigation algorithm, this high-speed algorithm reduces the computational load by 17.3%. Within the small UAV's mission radius (20 km), the position error is less than 0.13 m. The results of semi-physical experiments and small-UAV autopilot testing prove that this algorithm can realize high-frequency attitude estimation and control and properly reject the low-frequency interference caused by atmospheric disturbance.

  18. qPMS7: a fast algorithm for finding (ℓ, d)-motifs in DNA and protein sequences.

    PubMed

    Dinh, Hieu; Rajasekaran, Sanguthevar; Davila, Jaime

    2012-01-01

    Detection of rare events happening in a set of DNA/protein sequences could lead to new biological discoveries. One kind of such rare events is the presence of patterns called motifs in DNA/protein sequences. Finding motifs is a challenging problem since the general version of motif search has been proven to be intractable. Motif discovery is an important problem in biology; for example, it is useful in the detection of transcription factor binding sites and transcriptional regulatory elements that are crucial in understanding gene function, human disease, drug design, etc. Many versions of the motif search problem have been proposed in the literature. One such is the (ℓ, d)-motif search, or Planted Motif Search (PMS). A generalized version of the PMS problem, namely, Quorum Planted Motif Search (qPMS), is shown to accurately model motifs in real data. However, solving the qPMS problem is an extremely difficult task because a special case of it, the PMS problem, is already NP-hard, which means that any algorithm solving it can be expected to take exponential time in the worst-case scenario. In this paper, we propose a novel algorithm named qPMS7 that tackles the qPMS problem on real data as well as challenging instances. Experimental results show that our algorithm qPMS7 is on average 5 times faster than the state-of-the-art algorithm. The executable program of algorithm qPMS7 is freely available on the web at http://pms.engr.uconn.edu/downloads/qPMS7.zip. Our online motif discovery tools that use algorithm qPMS7 are freely available at http://pms.engr.uconn.edu or http://motifsearch.com. PMID:22848493
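
    The qPMS condition itself is easy to state in code. The brute-force checker below is only meant to make the problem concrete; qPMS7's contribution is avoiding this exhaustive scan.

    ```python
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def occurs(motif, seq, d):
        """True if seq contains an l-mer within Hamming distance d of motif."""
        l = len(motif)
        return any(hamming(motif, seq[i:i + l]) <= d for i in range(len(seq) - l + 1))

    def is_quorum_motif(motif, seqs, d, q):
        """(l, d)-motif with quorum q: at least q sequences contain a match."""
        return sum(occurs(motif, s, d) for s in seqs) >= q

    seqs = ['ACGTACGTGG', 'TTACGAACGT', 'GGGGGGGGGG', 'CCACGTTCCC']
    print(is_quorum_motif('ACGT', seqs, d=1, q=3))   # True: 3 of 4 sequences match
    ```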

  19. Fast Time and Space Parallel Algorithms for Solution of Parabolic Partial Differential Equations

    NASA Technical Reports Server (NTRS)

    Fijany, Amir

    1993-01-01

    In this paper, fast time- and space-parallel algorithms for the solution of linear parabolic PDEs are developed. It is shown that the seemingly strictly serial iterations of the time-stepping procedure for the solution of the problem can be completely decoupled.

  20. A fast map merging algorithm in the field of multirobot SLAM.

    PubMed

    Liu, Yanli; Fan, Xiaoping; Zhang, Heng

    2013-01-01

    In recent years, research on single-robot simultaneous localization and mapping (SLAM) has achieved great success. However, multirobot SLAM faces many challenging problems, including unknown robot poses, unshared maps, and unstable communication. In this paper, a map merging algorithm based on virtual robot motion is proposed for multirobot SLAM. A thinning algorithm is used to construct the skeleton of the grid map's empty area, and a mobile robot is simulated in one map. The simulated data are used as an information source in the other map to perform partial-map Monte Carlo localization; if localization succeeds, the relative pose hypotheses between the two maps can be computed easily. We verify these hypotheses using the rendezvous technique and use them as initial values to optimize the estimate with a heuristic random search algorithm. PMID:24302855

  1. Robust, fast, and effective two-dimensional automatic phase unwrapping algorithm based on image decomposition.

    PubMed

    Herráez, Miguel Arevallilo; Gdeisat, Munther A; Burton, David R; Lalor, Michael J

    2002-12-10

    We describe what is to our knowledge a novel approach to phase unwrapping. Using the principle of unwrapping following areas with similar phase values (homogeneous areas), the algorithm reacts satisfactorily to random noise and breaks in the wrap distributions. Execution times for a 512 x 512 pixel phase distribution are on the order of half a second on a desktop computer; the precise value depends upon the particular image under analysis. Two inherent parameters allow tuning of the algorithm to images of different quality and nature. PMID:12502302
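
    The elementary operation such algorithms apply along their pixel-ordering path is ordinary wrap removal, shown here in its 1-D form (essentially numpy.unwrap; the paper's 2-D area-following logic is not reproduced):

    ```python
    import numpy as np

    def unwrap_1d(phi):
        """Add the multiple of 2*pi that makes each successive sample continuous."""
        out = phi.astype(float).copy()
        for i in range(1, len(out)):
            out[i:] -= 2 * np.pi * np.round((out[i] - out[i - 1]) / (2 * np.pi))
        return out

    true_phase = np.linspace(0, 6 * np.pi, 100)
    wrapped = np.angle(np.exp(1j * true_phase))          # wrapped into (-pi, pi]
    print(np.allclose(unwrap_1d(wrapped), true_phase, atol=1e-6))   # True
    ```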

  2. A fast hidden line algorithm with contour option. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Thue, R. E.

    1984-01-01

    The JonesD algorithm was modified to allow the processing of N-sided elements and implemented in conjunction with a 3-D contour generation algorithm. The total hidden line and contour subsystem is implemented in the MOVIE.BYU Display package and is compared to the subsystems already existing in the MOVIE.BYU package. The comparison reveals that the modified JonesD hidden line and contour subsystem yields substantial processing-time savings when processing moderately sized models of 1000 elements or fewer. There are, however, some limitations to the modified JonesD subsystem.

  3. Optimal design of groundwater remediation system using a probabilistic multi-objective fast harmony search algorithm under uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Qiankun; Wu, Jianfeng; Yang, Yun; Qian, Jiazhong; Wu, Jichun

    2014-11-01

    This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity (K) of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely multi-objective fast harmony search algorithm (MOFHS) with a probabilistic sorting technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient K data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal design of groundwater remediation systems for a two-dimensional hypothetical test problem and a three-dimensional Indiana field application involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the mass remaining in the aquifer at the end of the operational period, whereby the pump-and-treat (PAT) technology is used to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is employed to evaluate the effectiveness of the proposed methodology. Comprehensive analysis indicates that the proposed PMOFHS can find Pareto-optimal solutions with low variability and high reliability and is a potentially effective tool for optimizing multi-objective groundwater remediation problems under uncertainty.

  4. Movie approximation technique for the implementation of fast bandwidth-smoothing algorithms

    NASA Astrophysics Data System (ADS)

    Feng, Wu-chi; Lam, Chi C.; Liu, Ming

    1997-12-01

    Bandwidth smoothing algorithms can effectively reduce the network resource requirements for the delivery of compressed video streams. For stored video, a large number of bandwidth smoothing algorithms have been introduced that are optimal under certain constraints but require access to all the frame-size data in order to achieve their optimal properties. This requirement, however, can be both resource and computationally expensive, especially for moderately priced set-top boxes. In this paper, we introduce a movie approximation technique for representing the frame sizes of a video, reducing the complexity of the bandwidth smoothing algorithms and the amount of frame data that must be transmitted prior to the start of playback. Our results show that the proposed technique can accurately approximate the frame data with a small number of piece-wise linear segments while changing the performance measures that the bandwidth smoothing algorithms attempt to optimize by no more than 1%. In addition, we show that implementations of this technique can speed up execution times by 100 to 400 times, reducing bandwidth plan calculation times to tens of milliseconds. An evaluation using a compressed full-length motion-JPEG video is provided.
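
    A greedy version of the approximation conveys the idea (the paper's exact fitting rule is not reproduced, and the tolerance is arbitrary): extend the current linear segment while every frame stays within tolerance of the line, and start a new segment at the first violation.

    ```python
    def piecewise_linear(sizes, tol):
        """Greedily cover the frame-size sequence with linear segments."""
        segments, start = [], 0
        for end in range(1, len(sizes)):
            slope = (sizes[end] - sizes[start]) / (end - start)
            ok = all(abs(sizes[start] + slope * (i - start) - sizes[i]) <= tol
                     for i in range(start, end + 1))
            if not ok:
                segments.append((start, end - 1))
                start = end - 1
        segments.append((start, len(sizes) - 1))
        return segments        # each segment replaces its frames by a line

    frames = [100, 110, 120, 130, 300, 310, 320, 330, 90, 95]
    print(piecewise_linear(frames, tol=15))
    ```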

  5. Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm

    PubMed Central

    Manzer, Samuel; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Martin

    2015-01-01

    Construction of the exact exchange matrix, K, is typically the rate-determining step in hybrid density functional theory, and therefore, new approaches with increased efficiency are highly desirable. We present a framework with potential for greatly improved efficiency by computing a compressed exchange matrix that yields the exact exchange energy, gradient, and direct inversion of the iterative subspace (DIIS) error vector. The compressed exchange matrix is constructed with one index in the compact molecular orbital basis and the other index in the full atomic orbital basis. To illustrate the advantages, we present a practical algorithm that uses this framework in conjunction with the resolution of the identity (RI) approximation. We demonstrate that convergence using this method, referred to hereafter as occupied orbital RI-K (occ-RI-K), in combination with the DIIS algorithm is well-behaved, that the accuracy of computed energetics is excellent (identical to conventional RI-K), and that significant speedups can be obtained over existing integral-direct and RI-K methods. For a 4400 basis function C68H22 hydrogen-terminated graphene fragment, our algorithm yields a 14 × speedup over the conventional algorithm and a speedup of 3.3 × over RI-K. PMID:26178096

  6. A fast multigrid algorithm for energy minimization under planar density constraints.

    SciTech Connect

    Ron, D.; Safro, I.; Brandt, A.; Mathematics and Computer Science; Weizmann Inst. of Science

    2010-09-07

    The two-dimensional layout optimization problem reinforced by the efficient space utilization demand has a wide spectrum of practical applications. Formulating the problem as a nonlinear minimization problem under planar equality and/or inequality density constraints, we present a linear time multigrid algorithm for solving a correction to this problem. The method is demonstrated in various graph drawing (visualization) instances.

  7. Fast, accurate evaluation of exact exchange: The occ-RI-K algorithm

    SciTech Connect

    Manzer, Samuel; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Martin

    2015-07-14

    Construction of the exact exchange matrix, K, is typically the rate-determining step in hybrid density functional theory, and therefore, new approaches with increased efficiency are highly desirable. We present a framework with potential for greatly improved efficiency by computing a compressed exchange matrix that yields the exact exchange energy, gradient, and direct inversion of the iterative subspace (DIIS) error vector. The compressed exchange matrix is constructed with one index in the compact molecular orbital basis and the other index in the full atomic orbital basis. To illustrate the advantages, we present a practical algorithm that uses this framework in conjunction with the resolution of the identity (RI) approximation. We demonstrate that convergence using this method, referred to hereafter as occupied orbital RI-K (occ-RI-K), in combination with the DIIS algorithm is well-behaved, that the accuracy of computed energetics is excellent (identical to conventional RI-K), and that significant speedups can be obtained over existing integral-direct and RI-K methods. For a 4400 basis function C68H22 hydrogen-terminated graphene fragment, our algorithm yields a 14 × speedup over the conventional algorithm and a speedup of 3.3 × over RI-K.

  8. General purpose algorithms for characterization of slow and fast phase nystagmus

    NASA Technical Reports Server (NTRS)

    Lessard, Charles S.

    1987-01-01

    With the overall aim of a better understanding of the vestibular and optokinetic systems and their roles in space motion sickness, the eye movement responses to various dynamic stimuli are measured. The vestibulo-ocular reflex (VOR) and the optokinetic response, as these eye movement responses are known, consist of slow-phase and fast-phase nystagmus. The specific objective is to develop the software programs necessary to characterize the vestibulo-ocular and optokinetic responses by distinguishing between the two phases of nystagmus. The overall program must handle large volumes of highly variable data with minimum operator interaction. The programs include digital filters, differentiation, identification of fast phases, and reconstruction of the slow phase with a least squares fit, such that sinusoidal or pseudorandom data may be processed with accurate results. The resultant waveform, the slow-phase eye movement velocity, serves as input data to the spectral analysis programs previously developed for NASA to analyze nystagmus responses to pseudorandom angular velocity inputs.
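
    A bare-bones version of the slow/fast separation described above, with an arbitrary velocity threshold of my own choosing: differentiate eye position, blank samples fast enough to be fast phases, and interpolate across the gaps to reconstruct the slow-phase velocity.

    ```python
    import numpy as np

    def slow_phase_velocity(position, fs, fast_thresh=80.0):
        """position in degrees, fs in Hz; returns velocity with fast phases removed."""
        vel = np.gradient(position) * fs                       # deg/s
        slow = np.where(np.abs(vel) < fast_thresh, vel, np.nan)
        idx = np.arange(len(slow))
        good = ~np.isnan(slow)
        # fill blanked fast-phase samples so spectra can be taken downstream
        return np.interp(idx, idx[good], slow[good])

    t = np.arange(0, 10, 0.01)
    sawtooth = 10 * (t % 0.5) - 2.5                            # slow drift + quick resets
    print('mean slow-phase velocity: %.1f deg/s'
          % slow_phase_velocity(sawtooth, 100).mean())
    ```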

  9. IQ-TREE: A Fast and Effective Stochastic Algorithm for Estimating Maximum-Likelihood Phylogenies

    PubMed Central

    Nguyen, Lam-Tung; Schmidt, Heiko A.; von Haeseler, Arndt; Minh, Bui Quang

    2015-01-01

    Large phylogenomics data sets require fast tree inference methods, especially for maximum-likelihood (ML) phylogenies. Fast programs exist, but due to inherent heuristics to find optimal trees, it is not clear whether the best tree is found. Thus, there is a need for additional approaches that employ different search strategies to find ML trees and that are at the same time as fast as currently available ML programs. We show that a combination of hill-climbing approaches and a stochastic perturbation method can be time-efficiently implemented. If we allow the same CPU time as RAxML and PhyML, then our software IQ-TREE found higher likelihoods between 62.2% and 87.1% of the studied alignments, thus efficiently exploring the tree-space. If we use the IQ-TREE stopping rule, RAxML and PhyML are faster in 75.7% and 47.1% of the DNA alignments and 42.2% and 100% of the protein alignments, respectively. However, the range of obtaining higher likelihoods with IQ-TREE improves to 73.3–97.1%. IQ-TREE is freely available at http://www.cibiv.at/software/iqtree. PMID:25371430

  10. Development of a radiation-hardened SRAM with EDAC algorithm for fast readout CMOS pixel sensors for charged particle tracking

    NASA Astrophysics Data System (ADS)

    Wei, X.; Li, B.; Chen, N.; Wang, J.; Zheng, R.; Gao, W.; Wei, T.; Gao, D.; Hu, Y.

    2014-08-01

    CMOS pixel sensors (CPS) are attractive for use in the innermost particle detectors for charged particle tracking due to their good trade-off between spatial resolution, material budget, radiation hardness, and readout speed. To meet the requirements of high readout speed and high radiation hardness to total ionizing dose (TID) for particle tracking, fast readout CPS are built by integrating a data compression block and two SRAM IP cores. However, the radiation hardness of the SRAM IP cores is not as high as that of the other parts of the CPS, which lowers the radiation hardness of the whole chip. In particular, when CPS are migrated to 0.18-μm processes, single event upset (SEU) effects must be considered in addition to TID and single event latchup (SEL) effects. This paper presents a radiation-hardened SRAM with enhanced radiation hardness to SEU. An error detection and correction (EDAC) algorithm and a bit-interleaving storage strategy are adopted in the design. The prototype design has been fabricated in a 0.18-μm process. The area of the new SRAM is 1.6 times that of a non-radiation-hardened SRAM due to the integration of the EDAC algorithm and the adoption of a radiation-hardened layout, and the access time is increased from 5 ns to 8 ns due to the EDAC logic. Test results indicate that the design satisfies the requirements of CPS for charged particle tracking.
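
    The flavor of EDAC involved can be illustrated with a textbook single-error-correcting Hamming(12,8) code for one byte; the paper's actual code-word layout, check-bit count, and interleaving factor are not given in the abstract, and full SECDED designs add one more overall parity bit.

    ```python
    def encode(byte):
        """Hamming(12,8): data bits at the non-power-of-two positions 1..12."""
        code = [0] * 13                                   # 1-indexed; slot 0 unused
        data_pos = [i for i in range(1, 13) if i & (i - 1)]
        for k, pos in enumerate(data_pos):
            code[pos] = (byte >> k) & 1
        for p in (1, 2, 4, 8):                            # check bit p covers i & p
            code[p] = sum(code[i] for i in range(1, 13) if i & p) % 2
        return code

    def correct(code):
        """Recompute parities; a nonzero syndrome is the flipped bit's position."""
        syndrome = sum(p for p in (1, 2, 4, 8)
                       if sum(code[i] for i in range(1, 13) if i & p) % 2)
        if syndrome:
            code[syndrome] ^= 1
        return code, syndrome

    word = encode(0b10110010)
    word[6] ^= 1                                          # inject a single bit flip
    fixed, pos = correct(word)
    print('flip at position', pos, '| corrected:', fixed == encode(0b10110010))
    ```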

  11. A graphical algorithm for fast computation of identity coefficients and generalized kinship coefficients

    PubMed Central

    Abney, Mark

    2009-01-01

    Summary: Computing the probability of identity-by-descent sharing among n genes given only the pedigree of those genes is a computationally challenging problem if n or the pedigree size is large. Here, I present a novel graphical algorithm for efficiently computing all generalized kinship coefficients for n genes. The graphical description transforms the problem from performing many recursions over the pedigree to performing a single traversal of a structure referred to as the kinship graph. Availability: The algorithm is implemented for n = 4 in the software package IdCoefs at http://home.uchicago.edu/abney/Software.html. Contact: abney@bsd.uchicago.edu. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19359355

  12. Fast parallel molecular algorithms for DNA-based computation: factoring integers.

    PubMed

    Chang, Weng-Long; Guo, Minyi; Ho, Michael Shan-Hui

    2005-06-01

    The RSA public-key cryptosystem is an algorithm that converts input data to an unrecognizable encryption and converts the unrecognizable data back into its original decryption form. The security of the RSA public-key cryptosystem is based on the difficulty of factoring the product of two large prime numbers. This paper demonstrates how to factor the product of two large prime numbers, a breakthrough in basic biological operations, using a molecular computer. To achieve this, we propose three DNA-based algorithms (a parallel subtractor, a parallel comparator, and parallel modular arithmetic) and formally verify our designed molecular solutions for factoring the product of two large prime numbers. Furthermore, this work indicates that cryptosystems using public keys may be insecure and presents clear evidence of the ability of molecular computing to perform complicated mathematical operations. PMID:16117023

  13. Fast algorithm for computing the Abel inversion integral in broadband reflectometry

    SciTech Connect

    Nunes, F.D.

    1995-10-01

    The application of the Hansen-Jablokow recursive technique is proposed for the numerical computation of the Abel inversion integral, which is used in (O-mode) frequency-modulated broadband reflectometry to evaluate plasma density profiles. Compared to the usual numerical methods, the recursive algorithm allows substantial time savings that can be important when processing massive amounts of data aiming to control the plasma in real time. © 1995 American Institute of Physics.

  14. A Fourier analysis for a fast simulation algorithm. [for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1988-01-01

    This paper presents a derivation of compact expressions for the Fourier series analysis of the steady-state solution of a typical switching converter. The modeling procedure for the simulation and the steady-state solution is described, and some desirable traits for its matrix exponential subroutine are discussed. The Fourier analysis algorithm was tested on a phase-controlled parallel-loaded resonant converter, providing an experimental confirmation.

  15. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration.

    PubMed

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. The key issue of selecting a policy for partitioning the vector data is carefully studied; the selected policy, which equalizes algorithm complexity across partitions, is shown to be the optimal policy for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points of Alaskan lakes; furthermore, we obtain excellent efficiency using MAS and the complexity-equalization partitioning policy. PMID:26656598
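
    A brute-force stand-in shows what the LIC center is (the paper's Voronoi/MAS method computes it far faster; the grid resolution and test polygon here are mine): among grid points inside the polygon, pick the one whose minimum distance to the boundary is largest.

    ```python
    import numpy as np
    from matplotlib.path import Path

    def seg_dist(p, a, b):
        """Distance from point p to segment ab."""
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def deepest_point(poly, n=60):
        path = Path(poly)
        xs = np.linspace(poly[:, 0].min(), poly[:, 0].max(), n)
        ys = np.linspace(poly[:, 1].min(), poly[:, 1].max(), n)
        gx, gy = np.meshgrid(xs, ys)
        pts = np.column_stack([gx.ravel(), gy.ravel()])
        segs = list(zip(poly, np.roll(poly, -1, axis=0)))   # edges with wraparound
        best, best_pt = -1.0, None
        for p in pts[path.contains_points(pts)]:
            d = min(seg_dist(p, a, b) for a, b in segs)
            if d > best:
                best, best_pt = d, p
        return best_pt, best

    poly = np.array([(0, 0), (4, 0), (4, 1), (1, 1), (1, 3), (0, 3)], float)
    pt, r = deepest_point(poly)
    print('deepest point ~', pt, '| LIC radius ~', round(r, 2))
    ```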

  16. Fast graphically inspired algorithm for assignment of molecular formulae in ultrahigh resolution mass spectrometry.

    PubMed

    Green, Nelson W; Perdue, E Michael

    2015-05-19

    This study focuses on the deterministic task of assigning molecular formulae to exact masses that are generated by ultrahigh resolution mass spectrometry. A new algorithm based on low-mass moieties (LMMs) such as CH4O(-1) and C4O(-3) completely replaces conventional computational loops that explore a user-defined range of C, H, and O when searching for molecular formulae that have a given exact mass. The LMM-based algorithm has been coupled with a combinatorial algorithm that uses nested loops for N, P, S, and (13)C to assign molecular formulae. The resulting program is more than 1700 times faster than its brute-force counterpart that uses nested loops for all elements, and both programs yield identical output files. The new LMM-based program is 1050 times faster than the open-source program HR2, 60 times faster than Molecular Formula Calculator, and 3.6 times faster than MassCalc/FormCalc. PMID:25857207
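
    For scale, the conventional search that the LMM approach replaces looks like this for a CHO-only space (the H loop is collapsed algebraically; the masses are standard monoisotopic values, while the tolerance and element ranges are illustrative and sized for small molecules):

    ```python
    MASS = {'C': 12.0, 'H': 1.00782503, 'O': 15.99491462}

    def assign_formulae(target, tol=0.0005, max_c=15, max_o=10):
        """Return all CxHyOz with |mass - target| <= tol (brute-force baseline)."""
        hits = []
        for c in range(1, max_c + 1):
            for o in range(0, max_o + 1):
                # best integer hydrogen count for this (C, O) combination
                h = round((target - c * MASS['C'] - o * MASS['O']) / MASS['H'])
                if h < 0:
                    continue
                m = c * MASS['C'] + h * MASS['H'] + o * MASS['O']
                if abs(m - target) <= tol:
                    hits.append('C%dH%dO%d' % (c, h, o))
        return hits

    print(assign_formulae(180.06339))   # -> ['C6H12O6'] (glucose)
    ```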

  17. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. To process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish a single displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structure. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrate the high accuracy and efficiency of the camera system in extracting vibration signals. PMID:26406525
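
    The coarse NCC step takes only a few lines with OpenCV, and a 1-D parabolic fit around the correlation peak gives a simple subpixel refinement; the paper's refinement algorithms are more elaborate than this sketch, and the synthetic target is mine.

    ```python
    import cv2
    import numpy as np

    def track(frame, template):
        ncc = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(ncc)          # integer-pixel peak

        def refine(v_m, v_0, v_p):                    # vertex of parabola through 3 samples
            denom = v_m - 2 * v_0 + v_p
            return 0.0 if denom == 0 else 0.5 * (v_m - v_p) / denom

        dx = refine(ncc[y, x - 1], ncc[y, x], ncc[y, x + 1]) if 0 < x < ncc.shape[1] - 1 else 0.0
        dy = refine(ncc[y - 1, x], ncc[y, x], ncc[y + 1, x]) if 0 < y < ncc.shape[0] - 1 else 0.0
        return x + dx, y + dy

    frame = np.zeros((120, 160), np.uint8)
    cv2.circle(frame, (80, 60), 10, 255, -1)          # synthetic target
    template = frame[45:75, 65:95].copy()
    print('match top-left (x, y) =', track(frame, template))   # ~ (65, 45)
    ```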

  18. Validation of Supervised Automated Algorithm for Fast Quantitative Evaluation of Organ Motion on Magnetic Resonance Imaging

    SciTech Connect

    Prakash, Varuna; Stainsby, Jeffrey A.; Satkunasingham, Janakan; Craig, Tim; Catton, Charles; Chan, Philip; Dawson, Laura; Hensel, Jennifer; Jaffray, David; Milosevic, Michael; Nichol, Alan; Sussman, Marshall S.; Lockwood, Gina; Menard, Cynthia

    2008-07-15

    Purpose: To validate a correlation coefficient template-matching algorithm applied to the supervised automated quantification of abdominal-pelvic organ motion captured on time-resolved magnetic resonance imaging. Methods and Materials: Magnetic resonance images of 21 patients across four anatomic sites were analyzed. Representative anatomic points of interest were chosen as surrogates for organ motion. The point of interest displacements across each image frame relative to baseline were quantified manually and with a template-matching software tool, termed 'Motiontrack.' Automated and manually acquired displacement measures, as well as the standard deviation of intrafraction motion, were compared for each image frame and for each patient. Results: Discrepancies between the automated and manual displacements of ≥2 mm were uncommon, ranging in frequency from 0% to 9.7% (liver and prostate, respectively). The standard deviations of intrafraction motion measured with each method correlated highly (r = 0.99). Considerable interpatient variability in organ motion was demonstrated by a wide range of standard deviations in the liver (1.4-7.5 mm), uterus (1.1-8.4 mm), and prostate gland (0.8-2.7 mm). The automated algorithm performed successfully in all patients but one and substantially improved efficiency compared with manual quantification techniques (5 min vs. 60-90 min). Conclusion: Supervised automated quantification of organ motion captured on magnetic resonance imaging using a correlation coefficient template-matching algorithm was efficient, accurate, and may play an important role in off-line adaptive approaches to intrafraction motion management.

  19. A Fast Algorithm to Estimate the Deepest Points of Lakes for Regional Lake Registration

    PubMed Central

    Shen, Zhanfeng; Yu, Xinju; Sheng, Yongwei; Li, Junli; Luo, Jiancheng

    2015-01-01

    When conducting image registration in the U.S. state of Alaska, it is very difficult to locate satisfactory ground control points (GCPs) because ice, snow, and lakes cover much of the ground. However, GCPs can be located by seeking stable points in the extracted lake data. This paper defines a process to estimate the deepest points of lakes as the most stable ground control points for registration. We estimate the deepest point of a lake by computing the center point of the largest inner circle (LIC) of the polygon representing the lake. An LIC-seeking method based on Voronoi diagrams is proposed, and an algorithm based on medial axis simplification (MAS) is introduced. The proposed design also incorporates parallel data computing. A key issue, the selection of a policy for partitioning the vector data, is studied carefully; the chosen policy, which equalizes algorithm complexity across partitions, is shown to be the best suited for parallel vector processing. Using several experimental applications, we conclude that the presented approach accurately estimates the deepest points in Alaskan lakes and that MAS together with the complexity-equalization partitioning policy yields excellent efficiency. PMID:26656598
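
    For readers who want to experiment with the LIC idea, the sketch below uses Shapely's polylabel (a quadtree-based pole-of-inaccessibility search, not the paper's Voronoi/MAS method) to approximate the LIC center of a toy lake polygon; the polygon coordinates are invented.

```python
from shapely.geometry import Polygon
from shapely.ops import polylabel  # requires Shapely >= 1.7

# The "pole of inaccessibility" is the interior point farthest from the
# boundary, i.e. the center of the largest inner circle (LIC); the lake
# outline below is invented.
lake = Polygon([(0, 0), (10, 0), (10, 6), (4, 6), (4, 3), (0, 3)])
center = polylabel(lake, tolerance=0.01)
radius = lake.exterior.distance(center)  # LIC radius = distance to boundary
print(center.wkt, radius)
```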

  20. Fast parallel algorithms that compute transitive closure of a fuzzy relation

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.

    1993-01-01

    The notion of a transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires O(n^4) computation time, where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n^2) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a not equal to b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in fewer than O(n^2) steps. In this sense, Dunn's algorithm is optimal. For small n, this is acceptable. However, for large n (e.g., for big databases), the computation time is still substantial, so it would be desirable to decrease it (this problem was formulated by J. Bezdek). Since this decrease cannot be achieved on a sequential computer, the only way to do it is to use a computer with several processors working in parallel. We show that on a parallel computer, the transitive closure can be computed in time O((log2 n)^2).
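
    The max-min transitive closure itself is easy to state in code. The following naive NumPy sketch computes it by repeated max-min composition; it is neither Dunn's optimal O(n^2) method nor the parallel O((log2 n)^2) scheme, just a reference definition.

```python
import numpy as np

def maxmin_closure(R):
    """Max-min transitive closure of a fuzzy relation (square array in [0,1]).

    Naive fixpoint iteration for reference only -- neither Dunn's optimal
    O(n^2) method nor the parallel O((log2 n)^2) scheme discussed above.
    """
    T = R.copy()
    while True:
        # max-min composition: (T o T)[i, j] = max_k min(T[i, k], T[k, j])
        comp = np.max(np.minimum(T[:, :, None], T[None, :, :]), axis=1)
        nxt = np.maximum(T, comp)
        if np.array_equal(nxt, T):
            return nxt
        T = nxt
```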

  1. [A fast algorithm to build a supertree with a set of gene trees].

    PubMed

    Gorbunov, K Iu; Liubetskiĭ, V A

    2012-01-01

    Important desired properties of an algorithm that constructs a supertree (species tree) by reconciling input trees are low complexity and applicability to large biological data. In its common statement the problem is proved to be NP-hard, i.e., to have exponential complexity in practice. We propose a reformulation of the supertree building problem that allows a computationally effective solution. We introduce a biologically natural requirement that the supertree be sought such that it does not contain clades incompatible with those existing in the input trees. The algorithm was tested on simulated and biological trees and was shown to possess almost quadratic complexity even if horizontal gene transfers (HGTs) are allowed. If HGTs are not assumed, the algorithm is mathematically correct and possesses a worst-case running time of n^3 × |V0|^3, where n is the number of input trees and |V0| is the total number of species. The authors are unaware of analogous solutions in the published literature. The corresponding inference program, its usage examples, and a manual are freely available at http://lab6.iitp.ru/en/super3gl. The available program does not implement HGTs. The generalized case is described in the publication "A tree nearest in average to a set of trees" (Information Transmission Problems, 2011). PMID:22642116

  2. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    PubMed Central

    Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B.; Jia, Xun

    2014-01-01

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, a 0.204 mm average difference and a 0.484 mm maximum difference are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1–1.5 min per phase
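
    The forward-backward splitting idea generalizes beyond this application. The sketch below applies generic FBS to a toy l1-regularized least-squares problem (a gradient step on the smooth term, then a proximal step on the nonsmooth term); the paper's two subproblems, reconstruction and deformable registration, are far more involved, so this is illustration only.

```python
import numpy as np

def fbs_lasso(A, b, lam, step, iters=300):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Forward step: gradient descent on the smooth least-squares term.
    Backward step: proximal operator of the l1 term (soft-thresholding).
    step should satisfy step < 1/||A^T A|| for convergence.
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))                       # forward
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0) # backward
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[:5] = 3.0
x_hat = fbs_lasso(A, A @ x_true, lam=1.0, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.round(x_hat[:8], 2))  # first five entries should be near 3
```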

  3. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    SciTech Connect

    Yan, Hao; Folkerts, Michael; Jiang, Steve B. E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun E-mail: steve.jiang@UTSouthwestern.edu; Zhen, Xin; Li, Yongbao; Pan, Tinsu; Cervino, Laura

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen area. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is developed to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding the anatomical structure location accuracy, a 0.204 mm average difference and a 0.484 mm maximum difference are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for the image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by 12.74 and 5.12 times compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1–1.5 min per phase

  4. DeltAMT: A Statistical Algorithm for Fast Detection of Protein Modifications From LC-MS/MS Data

    PubMed Central

    Fu, Yan; Xiu, Li-Yun; Jia, Wei; Ye, Ding; Sun, Rui-Xiang; Qian, Xiao-Hong; He, Si-Min

    2011-01-01

    Identification of proteins and their modifications via liquid chromatography-tandem mass spectrometry is an important task for the field of proteomics. However, because of the complexity of tandem mass spectra, the majority of the spectra cannot be identified. The presence of unanticipated protein modifications is among the major reasons for the low spectral identification rate. The conventional database search approach to protein identification has inherent difficulties in comprehensive detection of protein modifications. In recent years, increasing efforts have been devoted to developing unrestrictive approaches to modification identification, but they often suffer from their lack of speed. This paper presents a statistical algorithm named DeltAMT (Delta Accurate Mass and Time) for fast detection of abundant protein modifications from tandem mass spectra with high-accuracy precursor masses. The algorithm is based on the fact that the modified and unmodified versions of a peptide are usually present simultaneously in a sample and their spectra are correlated with each other in precursor masses and retention times. By representing each pair of spectra as a delta mass and time vector, bivariate Gaussian mixture models are used to detect modification-related spectral pairs. Unlike previous approaches to unrestrictive modification identification that mainly rely upon the fragment information and the mass dimension in liquid chromatography-tandem mass spectrometry, the proposed algorithm makes the most of precursor information. Thus, it is highly efficient while being accurate and sensitive. On two published data sets, the algorithm effectively detected various modifications and other interesting events, yielding deep insights into the data. Based on these discoveries, the spectral identification rates were significantly increased and many modified peptides were identified. PMID:21321130
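
    A minimal sketch of the core statistical step, under invented data: form delta (mass, time) vectors over spectrum pairs and fit a two-component bivariate Gaussian mixture, one component for modification-related pairs and one for background. The window limits and the scikit-learn usage are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Invented precursor masses (Da) and retention times (min); real input would
# be pairs of spectra from one LC-MS/MS run.
rng = np.random.default_rng(0)
masses = rng.uniform(800, 3000, 400)
times = rng.uniform(10, 90, 400)

# Delta (mass, time) vectors over spectrum pairs within a plausible window.
pairs = []
for i in range(len(masses)):
    dm = masses - masses[i]
    dt = times - times[i]
    keep = (dm > 0.5) & (dm < 200) & (np.abs(dt) < 5)
    pairs.append(np.column_stack([dm[keep], dt[keep]]))
X = np.vstack(pairs)

# One mixture component models modification-related pairs (clustered around
# a fixed delta mass near zero delta time); the other models background.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.means_)
```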

  5. PSimScan: Algorithm and Utility for Fast Protein Similarity Search

    PubMed Central

    Kaznadzey, Anna; Alexandrova, Natalia; Novichkov, Vladimir; Kaznadzey, Denis

    2013-01-01

    In the era of metagenomics and diagnostic sequencing, the importance of protein comparison methods of boosted performance cannot be overstated. Here we present PSimScan (Protein Similarity Scanner), a flexible open source protein similarity search tool which provides a significant gain in speed compared to BLASTP at the price of a controlled sensitivity loss. The PSimScan algorithm introduces a number of novel performance optimization methods that can be further used by the community to improve the speed and lower hardware requirements of bioinformatics software. The optimization starts at the lookup table construction; the initial lookup table–based hits are then passed through a pipeline of filtering and aggregation routines of increasing computational complexity. The first step in this pipeline is a novel algorithm that builds and selects 'similarity zones' aggregated from neighboring matches on small arrays of adjacent diagonals. PSimScan performs 5 to 100 times faster than the standard NCBI BLASTP, depending on the chosen parameters, and runs on commodity hardware. Its sensitivity and selectivity at the slowest settings are comparable to NCBI BLASTP's and decrease as speed increases, yet stay at levels reasonable for many tasks. PSimScan is most advantageous when used on large collections of query sequences. Comparing the entire proteome of Streptococcus pneumoniae (2,042 proteins) to the NCBI's non-redundant protein database of 16,971,855 records takes 6.5 hours on a moderately powerful PC, while the same task with NCBI BLASTP takes over 66 hours. We describe the innovations in the PSimScan algorithm in considerable detail to encourage bioinformaticians to improve on the tool and to use the innovations in their own software development. PMID:23505522
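
    The lookup-table stage that the optimization pipeline starts from can be sketched in a few lines; this toy version indexes k-mers of a database sequence and reports raw seed hits, with the filtering and 'similarity zone' aggregation of PSimScan omitted.

```python
from collections import defaultdict

def build_lookup(seq, k=4):
    """Index every k-mer of a database sequence by its positions."""
    table = defaultdict(list)
    for i in range(len(seq) - k + 1):
        table[seq[i:i + k]].append(i)
    return table

def seed_hits(query, table, k=4):
    """Initial lookup-table hits (query_pos, db_pos), before any filtering."""
    return [(i, j) for i in range(len(query) - k + 1)
            for j in table.get(query[i:i + k], [])]

db = "MKVLAAGIVLKMKVLAAG"
print(seed_hits("KVLA", build_lookup(db)))  # hits at db positions 1 and 12
```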

  6. Fast and robust segmentation of solar EUV images: algorithm and results for solar cycle 23

    NASA Astrophysics Data System (ADS)

    Barra, V.; Delouille, V.; Kretzschmar, M.; Hochedez, J.-F.

    2009-10-01

    Context: The study of the variability of the solar corona and the monitoring of coronal holes, quiet sun and active regions are of great importance in astrophysics as well as for space weather and space climate applications. Aims: In a previous work, we presented the spatial possibilistic clustering algorithm (SPoCA). This is a multi-channel unsupervised spatially-constrained fuzzy clustering method that automatically segments solar extreme ultraviolet (EUV) images into regions of interest. The results we reported on SoHO-EIT images taken from February 1997 to May 2005 were consistent with previous knowledge in terms of both areas and intensity estimations. However, they presented some artifacts due to the method itself. Methods: Herein, we propose a new algorithm, based on SPoCA, that removes these artifacts. We focus on two points: the definition of an optimal clustering with respect to the regions of interest, and the accurate definition of the cluster edges. We moreover propose methodological extensions to this method, and we illustrate these extensions with the automatic tracking of active regions. Results: The much improved algorithm can decompose the whole set of EIT solar images over the 23rd solar cycle into regions that can clearly be identified as quiet sun, coronal hole and active region. The variations of the parameters resulting from the segmentation, i.e. the area, mean intensity, and relative contribution to the solar irradiance, are consistent with previous results and thus validate the decomposition. Furthermore, we find indications for a small variation of the mean intensity of each region in correlation with the solar cycle. Conclusions: The method is generic enough to allow the introduction of other channels or data. New applications are now expected, e.g. related to SDO-AIA data.

  7. Murasaki: A Fast, Parallelizable Algorithm to Find Anchors from Multiple Genomes

    PubMed Central

    Popendorf, Kris; Tsuyoshi, Hachiya; Osana, Yasunori; Sakakibara, Yasubumi

    2010-01-01

    Background With the number of available genome sequences increasing rapidly, the magnitude of sequence data required for multiple-genome analyses is a challenging problem. When large-scale rearrangements break the collinearity of gene orders among genomes, genome comparison algorithms must first identify sets of short well-conserved sequences present in each genome, termed anchors. Previously, anchor identification among multiple genomes has been achieved using pairwise alignment tools like BLASTZ through progressive alignment tools like TBA, but the computational requirements for sequence comparisons of multiple genomes quickly become a limiting factor as the number and scale of genomes grows. Methodology/Principal Findings Our algorithm, named Murasaki, makes it possible to identify anchors within multiple large sequences on the scale of several hundred megabases in a few minutes using a single CPU. Two advanced features of Murasaki are (1) adaptive hash function generation, which enables efficient use of arbitrary mismatch patterns (spaced seeds) and therefore the comparison of multiple mammalian genomes in a practical amount of computation time, and (2) parallelizable execution that decreases the required wall-clock and CPU times. Murasaki can perform a sensitive anchoring of eight mammalian genomes (human, chimp, rhesus, orangutan, mouse, rat, dog, and cow) in 21 hours CPU time (42 minutes wall time). This is the first single-pass in-core anchoring of multiple mammalian genomes. We evaluated Murasaki by comparing it with the genome alignment programs BLASTZ and TBA. We show that Murasaki can anchor multiple genomes in near linear time, compared to the quadratic time requirements of BLASTZ and TBA, while improving overall accuracy. Conclusions/Significance Murasaki provides an open source platform to take advantage of long patterns, cluster computing, and novel hash algorithms to produce accurate anchors across multiple genomes with computational efficiency.
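
    The spaced-seed indexing that underlies Murasaki's anchoring can be illustrated as follows; the pattern, toy sequence, and plain string hashing are assumptions for illustration, whereas Murasaki generates its hash functions adaptively and runs in parallel.

```python
from collections import defaultdict

def spaced_seed_index(seq, pattern="110101"):
    """Hash positions of a sequence by a spaced seed (1 = match position).

    Minimal sketch of the seed-indexing idea; Murasaki's adaptive hash
    generation and parallel machinery are far beyond this illustration.
    """
    care = [i for i, bit in enumerate(pattern) if bit == "1"]
    span = len(pattern)
    index = defaultdict(list)
    for i in range(len(seq) - span + 1):
        key = "".join(seq[i + j] for j in care)
        index[key].append(i)
    return index

idx = spaced_seed_index("ACGTACGTAC")
print(dict(idx))  # positions sharing a key are candidate anchor matches
```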

  8. KD-tree based clustering algorithm for fast face recognition on large-scale data

    NASA Astrophysics Data System (ADS)

    Wang, Yuanyuan; Lin, Yaping; Yang, Junfeng

    2015-07-01

    This paper proposes an acceleration method for large-scale face recognition systems. When dealing with a large-scale database, face recognition is time-consuming. In order to tackle this problem, we employ the k-means clustering algorithm to partition the face data. Specifically, the data in each cluster are stored in the form of a kd-tree, and face feature matching is conducted with a kd-tree based nearest neighbor search. Experiments on the CAS-PEAL database and a self-collected database show the effectiveness of our proposed method.
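
    A minimal sketch of the two-stage search, using scikit-learn k-means and SciPy's cKDTree on synthetic features (all names and sizes are invented); note that kd-trees lose efficiency in very high dimensions, so this is purely structural illustration.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

# Synthetic stand-ins for face feature vectors (names and sizes are invented).
rng = np.random.default_rng(1)
gallery = rng.normal(size=(10000, 16))
query = rng.normal(size=16)

# Offline: cluster the gallery with k-means and build one kd-tree per cluster.
km = KMeans(n_clusters=32, n_init=10, random_state=1).fit(gallery)
trees = {c: (cKDTree(gallery[km.labels_ == c]),
             np.flatnonzero(km.labels_ == c)) for c in range(32)}

# Online: route the query to its nearest centroid, search only that kd-tree.
c = int(np.argmin(np.linalg.norm(km.cluster_centers_ - query, axis=1)))
tree, ids = trees[c]
dist, local = tree.query(query)
print("cluster", c, "match", ids[local], "distance", float(dist))
```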

  9. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    SciTech Connect

    Huang, Peng; Andersson, Sean B.

    2014-06-15

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides.

  10. Note: Fast imaging of DNA in atomic force microscopy enabled by a local raster scan algorithm

    PubMed Central

    Huang, Peng; Andersson, Sean B.

    2014-01-01

    Approaches to high-speed atomic force microscopy typically involve some combination of novel mechanical design to increase the physical bandwidth and advanced controllers to take maximum advantage of the physical capabilities. For certain classes of samples, however, imaging time can be reduced on standard instruments by reducing the amount of measurement that is performed to image the sample. One such technique is the local raster scan algorithm, developed for imaging of string-like samples. Here we provide experimental results on the use of this technique to image DNA samples, demonstrating the efficacy of the scheme and illustrating the order-of-magnitude improvement in imaging time that it provides. PMID:24985865

  11. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data

    SciTech Connect

    Chartrand, Rick

    2009-01-01

    Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization reduces still further the number of samples required for reconstruction, while still being tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.

  12. Parallel Implementation of Fast Randomized Algorithms for Low Rank Matrix Decomposition

    SciTech Connect

    Lucas, Andrew J.; Stalizer, Mark; Feo, John T.

    2014-03-01

    We analyze the parallel performance of randomized interpolative decomposition by decomposing low rank complex-valued Gaussian random matrices larger than 100 GB. We chose a Cray XMT supercomputer as it provides an almost ideal PRAM model, permitting quick investigation of parallel algorithms without obfuscation from hardware idiosyncrasies. We find that, for non-square matrices, performance scales almost linearly, with runtime about 100 times faster on 128 processors. We also verify that numerically discovered error bounds still hold on matrices two orders of magnitude larger than those previously tested.

  13. A very simple and fast way to access and validate algorithms in reproducible research.

    PubMed

    Stegmayer, Georgina; Pividori, Milton; Milone, Diego H

    2016-01-01

    The reproducibility of research in bioinformatics refers to the notion that new methodologies/algorithms and scientific claims have to be published together with their data and source code, in a way that other researchers may verify the findings to further build more knowledge upon them. The replication and corroboration of research results are key to the scientific process, and many journals are discussing the matter nowadays, taking concrete steps in this direction. In this journal itself, a recent opinion note has appeared highlighting the increasing importance of this topic in bioinformatics and computational biology, inviting the community to further discuss the matter. In agreement with that article, we would like to propose here another step in that direction with a tool that allows the automatic generation of a web interface, named web-demo, directly from source code in a simple and straightforward way. We believe this contribution can help make research not only reproducible but also more easily accessible. A web-demo associated with a published paper can accelerate the validation of an algorithm with real data, spreading its use with just a few clicks. PMID:26223526

  14. Lamb waves based fast subwavelength imaging using a DORT-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    He, Jiaze; Yuan, Fuh-Gwo

    2016-02-01

    A Lamb wave-based, subwavelength imaging algorithm is developed for damage imaging in large-scale, plate-like structures based on a decomposition of the time-reversal operator (DORT) method combined with the multiple signal classification (MUSIC) algorithm in the space-frequency domain. In this study, a rapid, hybrid non-contact scanning system is proposed to image an aluminum plate using a piezoelectric linear array for actuation and a laser Doppler vibrometer (LDV) line-scan for sensing. The physics of wave propagation, reflection, and scattering that underlies the response matrix in the DORT method is mathematically formulated in the context of guided waves. The singular value decomposition (SVD) and the MUSIC-based imaging condition enable quantifying the damage severity by a 'reflectivity' parameter and super-resolution imaging. With the flexibility of this scanning system, a considerably large area can be imaged using lower-frequency Lamb waves with limited line-scans. The experimental results showed that the hardware system combined with a signal processing tool such as the DORT-MUSIC (TR-MUSIC) imaging technique can provide rapid, highly accurate imaging results as well as damage quantification with unknown material properties.
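
    As a point of reference, the classical narrowband MUSIC pseudospectrum for a uniform linear array can be sketched as below; this is generic array processing with assumed geometry, not the paper's guided-wave DORT formulation.

```python
import numpy as np

def music_spectrum(R, n_sources, angles, d=0.5):
    """Classical MUSIC pseudospectrum for an n-element uniform linear array.

    R: covariance (or time-reversal operator) matrix; d: element spacing in
    wavelengths; angles in radians. Generic array processing, not the
    guided-wave DORT formulation of the paper.
    """
    n = R.shape[0]
    _, _, vh = np.linalg.svd(R)
    noise = vh[n_sources:].conj().T            # noise-subspace basis vectors
    p = []
    for theta in angles:
        a = np.exp(2j * np.pi * d * np.arange(n) * np.sin(theta))
        proj = noise.conj().T @ a              # projection onto noise subspace
        p.append(1.0 / np.real(proj.conj() @ proj))
    return np.array(p)

# Toy check: one scatterer at 20 degrees on an 8-element half-wavelength array.
n, theta0 = 8, np.deg2rad(20.0)
a0 = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(theta0))
R = np.outer(a0, a0.conj()) + 0.01 * np.eye(n)
grid = np.deg2rad(np.linspace(-90, 90, 361))
print(np.rad2deg(grid[np.argmax(music_spectrum(R, 1, grid))]))  # ~20.0
```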

  15. A Fast Inspection of Tool Electrode and Drilling Depth in EDM Drilling by Detection Line Algorithm

    PubMed Central

    Huang, Kuo-Yi

    2008-01-01

    The purpose of this study was to develop a novel measurement method using a machine vision system. Besides using image processing techniques, the proposed system employs a detection line algorithm that detects the tool electrode length and drilling depth of a workpiece accurately and effectively. Different boundaries of areas on the tool electrode are defined: a baseline between the base and normal areas, an ND-line between the normal and drilling areas (carbon-accumulating area), and a DD-line between the drilling area and the dielectric fluid droplet on the electrode tip. Accordingly, image processing techniques are employed to extract a tool electrode image, and the centroid, eigenvector, and principal axis of the tool electrode are determined. The developed detection line algorithm (DLA) is then used to detect the baseline, ND-line, and DD-line along the direction of the principal axis. Finally, the tool electrode length and drilling depth of the workpiece are estimated via the detected baseline, ND-line, and DD-line. Experimental results show good accuracy and efficiency in the estimation of the tool electrode length and drilling depth under different conditions. Hence, this research may provide a reference for industrial applications of EDM drilling measurement.

  16. A fast adaptive convex hull algorithm on two-dimensional processor arrays with a reconfigurable bus system

    NASA Technical Reports Server (NTRS)

    Olariu, S.; Schwing, J.; Zhang, J.

    1991-01-01

    A bus system that can change dynamically to suit computational needs is referred to as reconfigurable. We present a fast adaptive convex hull algorithm on a two-dimensional processor array with a reconfigurable bus system (2-D PARBS, for short). Specifically, we show that computing the convex hull of a planar set of n points takes O(log n/log m) time on a 2-D PARBS of size mn × n with 3 ≤ m ≤ n. Our result implies that the convex hull of n points in the plane can be computed in O(1) time on a 2-D PARBS of size n^1.5 × n.

  17. Fast Estimation of Defect Profiles from the Magnetic Flux Leakage Signal Based on a Multi-Power Affine Projection Algorithm

    PubMed Central

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-01-01

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimation method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related not only to the MFL signals before it but also to the ones after it, and all of the sampling points related to one point appear in series or multi-power form. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection. PMID:25192314
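
    For orientation, the update of the standard affine projection algorithm (APA), which the multi-power variant extends, looks as follows; the step size and regularization values are illustrative assumptions.

```python
import numpy as np

def apa_step(w, X, d, mu=0.5, delta=1e-6):
    """One update of the standard affine projection algorithm (APA).

    w: weight vector; X: columns are the P most recent input vectors;
    d: the P corresponding desired outputs. The paper's multi-power
    variant (MAPA) extends this basic scheme; step size mu and the
    regularization delta here are illustrative.
    """
    e = d - X.T @ w                      # a-priori errors over the window
    P = X.shape[1]
    w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)
    return w
```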

  18. Fast estimation of defect profiles from the magnetic flux leakage signal based on a multi-power affine projection algorithm.

    PubMed

    Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang

    2014-01-01

    Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimation method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related not only to the MFL signals before it but also to the ones after it, and all of the sampling points related to one point appear in series or multi-power form. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while keeping the estimated profiles close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection. PMID:25192314

  19. A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations

    SciTech Connect

    Rockway, J D; Champagne, N J; Sharpe, R M; Fasenfest, B

    2004-01-14

    Frequency domain techniques are popular for analyzing electromagnetics (EM) and coupled circuit-EM problems. These techniques, such as the method of moments (MoM) and the finite element method (FEM), are used to determine the response of the EM portion of the problem at a single frequency. Since only one frequency is solved at a time, it may take a long time to calculate the parameters for wideband devices. In this paper, a fast frequency sweep based on the asymptotic waveform evaluation (AWE) method is developed and applied to generalized mixed circuit-EM problems. The AWE method, which was originally developed for lumped-load circuit simulations, has recently been shown to be effective for quasi-static and low-frequency full-wave simulations. Here it is applied to a full-wave MoM solver capable of handling metals, dielectrics, and coupled circuit-EM problems.

  20. Fast planar-oriented ripple search algorithm for hyperspace VQ codebook.

    PubMed

    Chang, Chin-Chen; Wu, Wen-Chuan

    2007-06-01

    This paper presents a fast codebook search method for reducing the quantization complexity of full-search vector quantization (VQ). The proposed method is built on the planar Voronoi diagram to label a ripple search domain. The appropriate codeword can then easily be found by searching only the local region instead of exploring globally. To go a step further and obtain results close to those of full-search VQ, we equip the proposed method with a duplication mechanism that helps bring the quantization distortion down to its lowest level. According to the experimental results, the proposed method is indeed capable of providing better output at a faster quantization speed than existing partial-search methods. Moreover, the proposed method requires only a little extra storage for duplication. PMID:17547132

  1. Efficient fast heuristic algorithms for minimum error correction haplotyping from SNP fragments.

    PubMed

    Anaraki, Maryam Pourkamali; Sadeghi, Mehdi

    2014-01-01

    Availability of the complete human genome is a crucial factor for genetic studies exploring possible associations between the genome and complex diseases. A haplotype, as a set of single nucleotide polymorphisms (SNPs) on a single chromosome, is believed to contain promising data for disease association studies, detecting natural positive selection, and locating recombination hotspots. Various computational methods for haplotype reconstruction from aligned fragments of SNPs have already been proposed. This study presents a novel approach to obtaining paternal and maternal haplotypes from the SNP fragments under the minimum error correction (MEC) model. Reconstructing haplotypes under the MEC model is an NP-hard problem. Therefore, our proposed methods employ two fast and accurate clustering techniques as the core of their procedure to efficiently solve this ill-defined problem. The assessment of our approaches against conventional methods on two real benchmark datasets, ACE and DALY, demonstrates their efficiency and accuracy. PMID:25539847

  2. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements

    NASA Technical Reports Server (NTRS)

    Laviola, Sante; Levizzani, Vincenzo

    2014-01-01

    The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15 to NOAA-18 and NOAA-19/Metop-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) dichotomous statistics are used to evaluate the capability of the method to identify rain and no-rain clouds; 2) accuracy statistics are applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR, and HK indices varying in the ranges 0.80-0.82, 0.33-0.36, and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer under winter conditions, especially when the soil is partially frozen and the surface emissivity changes drastically. This fact is verified by observing the discrepancy distribution diagrams where the 183-WSL

  3. Sequential quadratic programming-based fast path planning algorithm subject to no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang

    2016-08-01

    Path planning plays an important role in aircraft guidance systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternating line segments and circular arcs, in order to reformulate the problem as a static optimization over the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects them, and these can be readily programmed. The original problem is then transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
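
    A toy version of this formulation can be run with SciPy's SLSQP standing in for SNOPT: minimize path length over free waypoints subject to a keep-out constraint on one circular no-fly zone. The geometry is invented, and only the waypoints (not the connecting segments) are constrained, a simplification relative to the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Invented scenario: fly from start to goal around one circular no-fly zone.
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
center, radius = np.array([5.0, 0.0]), 2.0
n_wp = 6  # number of free intermediate waypoints

def path(x):
    return np.vstack([start, x.reshape(n_wp, 2), goal])

def length(x):
    p = path(x)
    return np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))

# Keep each waypoint outside the circle (segments are left unconstrained).
cons = [{"type": "ineq",
         "fun": lambda x, i=i: np.linalg.norm(path(x)[i] - center) - radius}
        for i in range(1, n_wp + 1)]

x0 = np.linspace(start, goal, n_wp + 2)[1:-1].ravel() + 0.1
res = minimize(length, x0, method="SLSQP", constraints=cons)
print(res.fun, res.success)  # detour length slightly above the straight 10.0
```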

  4. A Fast and Scalable Kymograph Alignment Algorithm for Nanochannel-Based Optical DNA Mappings

    PubMed Central

    Noble, Charleston; Nilsson, Adam N.; Freitag, Camilla; Beech, Jason P.; Tegenfeldt, Jonas O.; Ambjörnsson, Tobias

    2015-01-01

    Optical mapping by direct visualization of individual DNA molecules, stretched in nanochannels with sequence-specific fluorescent labeling, represents a promising tool for disease diagnostics and genomics. An important challenge for this technique is thermal motion of the DNA as it undergoes imaging; this blurs fluorescent patterns along the DNA and results in information loss. Correcting for this effect (a process referred to as kymograph alignment) is a common preprocessing step in nanochannel-based optical mapping workflows, and we present here a highly efficient algorithm to accomplish this via pattern recognition. We compare our method with the only previous approach and find that our method is orders of magnitude faster while producing data of similar quality. We demonstrate proof of principle of our approach on experimental data consisting of melt-mapped bacteriophage DNA. PMID:25875920

  5. Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays.

    PubMed

    Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin

    2016-01-01

    In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301

  6. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
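
    As a baseline for comparison (the paper's heap-transform construction differs, and its published codes are in MATLAB), here is a NumPy sketch of classical QR-decomposition by Givens rotations for real matrices.

```python
import numpy as np

def givens_qr(A):
    """QR decomposition by classical Givens rotations (real matrices).

    Baseline only; the heap-transform method of the paper generates the
    rotation parameters differently.
    """
    m, n = A.shape
    Q, R = np.eye(m), A.astype(float).copy()
    for j in range(n):
        for i in range(m - 1, j, -1):        # zero R[i, j] from the bottom up
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])  # 2x2 rotation acting on two rows
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
    return Q, R

A = np.random.default_rng(2).normal(size=(4, 3))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(4)))  # True True
```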

  7. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    SciTech Connect

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector while avoiding the sequential back-substitution step required by QR decomposition approaches.

  8. Fast 2D DOA Estimation Algorithm by an Array Manifold Matching Method with Parallel Linear Arrays

    PubMed Central

    Yang, Lisheng; Liu, Sheng; Li, Dong; Jiang, Qingping; Cao, Hailin

    2016-01-01

    In this paper, the problem of two-dimensional (2D) direction-of-arrival (DOA) estimation with parallel linear arrays is addressed. Two array manifold matching (AMM) approaches, in this work, are developed for the incoherent and coherent signals, respectively. The proposed AMM methods estimate the azimuth angle only with the assumption that the elevation angles are known or estimated. The proposed methods are time efficient since they do not require eigenvalue decomposition (EVD) or peak searching. In addition, the complexity analysis shows the proposed AMM approaches have lower computational complexity than many current state-of-the-art algorithms. The estimated azimuth angles produced by the AMM approaches are automatically paired with the elevation angles. More importantly, for estimating the azimuth angles of coherent signals, the aperture loss issue is avoided since a decorrelation procedure is not required for the proposed AMM method. Numerical studies demonstrate the effectiveness of the proposed approaches. PMID:26907301

  9. Fast Nonnegative Matrix Factorization Algorithms Using Projected Gradient Approaches for Large-Scale Problems

    PubMed Central

    Zdunek, Rafal; Cichocki, Andrzej

    2008-01-01

    Recently, a considerable growth of interest in projected gradient (PG) methods has been observed due to their high efficiency in solving large-scale convex minimization problems subject to linear constraints. Since the minimization problems underlying nonnegative matrix factorization (NMF) of large matrices match this class of minimization problems well, we investigate and test some recent PG methods in the context of their applicability to NMF. In particular, the paper focuses on the following modified methods: projected Landweber, Barzilai-Borwein gradient projection, projected sequential subspace optimization (PSESOP), interior-point Newton (IPN), and sequential coordinate-wise optimization. The proposed and implemented NMF PG algorithms are compared with respect to their performance in terms of signal-to-interference ratio (SIR) and elapsed time, using a simple benchmark of mixed, partially dependent nonnegative signals. PMID:18628948
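
    The simplest member of this family, plain projected gradient NMF with a fixed step, can be sketched as follows; the surveyed methods differ mainly in how they choose the step and search direction, and all values here are illustrative.

```python
import numpy as np

def pg_nmf(V, r, iters=500, step=1e-3):
    """Plain projected-gradient NMF: min ||V - WH||_F^2 s.t. W, H >= 0.

    Deliberately simple fixed-step sketch; the surveyed methods
    (Barzilai-Borwein, PSESOP, IPN, ...) choose steps far more cleverly.
    """
    rng = np.random.default_rng(0)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(iters):
        gW = (W @ H - V) @ H.T
        W = np.maximum(W - step * gW, 0.0)   # gradient step + projection
        gH = W.T @ (W @ H - V)
        H = np.maximum(H - step * gH, 0.0)
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(20, 30)))
W, H = pg_nmf(V, 5)
print(np.linalg.norm(V - W @ H))  # residual decreases with more iterations
```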

  10. A Fast Semiautomatic Algorithm for Centerline-Based Vocal Tract Segmentation.

    PubMed

    Poznyakovskiy, Anton A; Mainka, Alexander; Platzek, Ivan; Mürbe, Dirk

    2015-01-01

    Vocal tract morphology is an important factor in voice production. Its analysis has potential implications for educational matters as well as medical issues like voice therapy. The knowledge of the complex adjustments in the spatial geometry of the vocal tract during phonation is still limited. For a major part, this is due to difficulties in acquiring geometry data of the vocal tract in the process of voice production. In this study, a centerline-based segmentation method using active contours was introduced to extract the geometry data of the vocal tract obtained with MRI during sustained vowel phonation. The applied semiautomatic algorithm was found to be time- and interaction-efficient and allowed performing various three-dimensional measurements on the resulting model. The method is suitable for an improved detailed analysis of the vocal tract morphology during speech or singing which might give some insights into the underlying mechanical processes. PMID:26557710

  11. A Fast Semiautomatic Algorithm for Centerline-Based Vocal Tract Segmentation

    PubMed Central

    Poznyakovskiy, Anton A.; Mainka, Alexander; Platzek, Ivan; Mürbe, Dirk

    2015-01-01

    Vocal tract morphology is an important factor in voice production. Its analysis has potential implications for educational matters as well as medical issues like voice therapy. The knowledge of the complex adjustments in the spatial geometry of the vocal tract during phonation is still limited. For a major part, this is due to difficulties in acquiring geometry data of the vocal tract in the process of voice production. In this study, a centerline-based segmentation method using active contours was introduced to extract the geometry data of the vocal tract obtained with MRI during sustained vowel phonation. The applied semiautomatic algorithm was found to be time- and interaction-efficient and allowed performing various three-dimensional measurements on the resulting model. The method is suitable for an improved detailed analysis of the vocal tract morphology during speech or singing which might give some insights into the underlying mechanical processes. PMID:26557710

  12. Fast algorithms for visualizing fluid motion in steady flow on unstructured grids

    NASA Technical Reports Server (NTRS)

    Ueng, S. K.; Sikorski, K.; Ma, Kwan-Liu

    1995-01-01

    The plotting of streamlines is an effective way of visualizing fluid motion in steady flows. Additional information about the flow field, such as local rotation and expansion, can be shown by drawing the streamline in the form of a ribbon or tube. In this paper, we present efficient algorithms for the construction of streamlines, streamribbons, and streamtubes on unstructured grids. A specialized version of the Runge-Kutta method has been developed to speed up the integration of particle paths. We have also derived closed-form solutions for calculating the angular rotation rate and radius used to construct streamribbons and streamtubes, respectively. According to our analysis and test results, these formulations perform two to four times better than previous numerical methods. As a large number of traces are calculated, the improved performance could be significant.
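
    The classical fourth-order Runge-Kutta step used for particle-path integration can be sketched as follows; an analytic vortex field stands in for velocity interpolation on an unstructured grid, which is the expensive part the paper's specialized scheme accelerates.

```python
import numpy as np

def velocity(p):
    """Analytic 2D test field (a simple vortex); a stand-in for velocity
    interpolation on an unstructured grid."""
    x, y = p
    return np.array([-y, x])

def rk4_streamline(p0, h=0.05, steps=200):
    """Trace a streamline with the classical fourth-order Runge-Kutta step."""
    pts = [np.asarray(p0, float)]
    for _ in range(steps):
        p = pts[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        pts.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

print(rk4_streamline([1.0, 0.0])[-1])  # stays near the unit circle
```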

  13. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network

    PubMed Central

    Le, Trong-Ngoc; Bao, Pham The; Huynh, Hieu Trung

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the “ground truth.” Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively. PMID:27597960

  14. Liver Tumor Segmentation from MR Images Using 3D Fast Marching Algorithm and Single Hidden Layer Feedforward Neural Network.

    PubMed

    Le, Trong-Ngoc; Bao, Pham The; Huynh, Hieu Trung

    2016-01-01

    Objective. Our objective is to develop a computerized scheme for liver tumor segmentation in MR images. Materials and Methods. Our proposed scheme consists of four main stages. Firstly, the region of interest (ROI) image which contains the liver tumor region in the T1-weighted MR image series was extracted by using seed points. The noise in this ROI image was reduced and the boundaries were enhanced. A 3D fast marching algorithm was applied to generate the initial labeled regions which are considered as teacher regions. A single hidden layer feedforward neural network (SLFN), which was trained by a noniterative algorithm, was employed to classify the unlabeled voxels. Finally, the postprocessing stage was applied to extract and refine the liver tumor boundaries. The liver tumors determined by our scheme were compared with those manually traced by a radiologist, used as the "ground truth." Results. The study was evaluated on two datasets of 25 tumors from 16 patients. The proposed scheme obtained the mean volumetric overlap error of 27.43% and the mean percentage volume error of 15.73%. The mean of the average surface distance, the root mean square surface distance, and the maximal surface distance were 0.58 mm, 1.20 mm, and 6.29 mm, respectively. PMID:27597960

  15. A novel multi-aperture based sun sensor based on a fast multi-point MEANSHIFT (FMMS) algorithm.

    PubMed

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and applications of micro/nanosatellites, a small, high-accuracy satellite attitude determination system is needed, because the star trackers widely used on large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor plus magnetometer is proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, highly reliable, high-accuracy sun sensor is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770

  16. A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm

    PubMed Central

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and applications of micro/nanosatellites, a small, high-accuracy satellite attitude determination system is needed, because the star trackers widely used on large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor plus magnetometer is proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of the attitude determination systems of micro/nanosatellites, so the development of a small, highly reliable, high-accuracy sun sensor is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770

  17. Optimal design of groundwater remediation systems using a probabilistic multi-objective fast harmony search algorithm under uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Q.; Wu, J.; Qian, J.

    2013-12-01

    This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for the optimal design of groundwater remediation systems under the uncertainty associated with the hydraulic conductivity of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely the multi-objective fast harmony search algorithm (MOFHS), with a probabilistic Pareto domination ranking and a probabilistic niche technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient hydraulic conductivity data. The PMOFHS is then coupled with the commonly used flow and transport codes MODFLOW and MT3DMS to identify the optimal groundwater remediation system for a two-dimensional hypothetical test problem involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the percentage of mass remaining in the aquifer at the end of the operational period, using Pump-and-Treat (PAT) technology to clean up contaminated groundwater. Monte Carlo (MC) analysis is also used to demonstrate the effectiveness of the proposed methodology. The MC analysis is applied to each Pareto-optimal solution for every hydraulic conductivity (K) realization. Then the statistical means and the upper and lower bounds of the 95% confidence intervals are calculated. The MC analysis results show that all of the Pareto-optimal solutions are located between the upper and lower bounds of the MC analysis. Moreover, the root mean square errors (RMSEs) between the Pareto-optimal solutions of the PMOFHS and the average values of the optimal solutions from the MC analysis are 0.0204 for the first objective and 0.0318 for the second objective, much smaller than the RMSEs between the results of the existing probabilistic multi-objective genetic algorithm (PMOGA) and the MC analysis, 0.0384 and 0.0397, respectively. In

  18. Fast algorithm for nonlinear acoustics and high-intensity focused ultrasound modeling

    NASA Astrophysics Data System (ADS)

    Curra, Francesco P.; Kargl, Steven G.; Crum, Lawrence A.

    2001-05-01

    The inhomogeneous characteristics of biological media and the nonlinear nature of sound propagation in high-intensity focused ultrasound (HIFU) regimes make accurate modeling of real HIFU applications a challenging task in terms of computational time and resources. A fast, dynamically adaptive time-domain method that drastically reduces these demands is presented for the solution of multidimensional HIFU problems in complex geometries. The model, based on lifted interpolating second-generation wavelets in a collocation approach, consists of the coupled solution of the full-wave nonlinear equation of sound with the bioheat equation for temperature computation. It accounts for nonlinear acoustic propagation, an arbitrary frequency power law for attenuation, multiple reflections, and backscattered fields. The characteristic localization of wavelets in both the space and wave number domains allows for accurate simulations of strong material inhomogeneities and steep nonlinear processes at a reduced number of collocation points, while the natural multiresolution analysis of the wavelet decomposition introduces automatic grid refinement in regions where localized structures are present. Compared to standard finite-difference or spectral schemes on uniform fine grids, this method shows significant savings in computational time and memory requirements, proportional to the dimensionality of the problem. [Work supported by U.S. Army Medical Research Acquisition Activity through the University.

  19. A fast algorithm for control and estimation using a polynomial state-space structure

    NASA Technical Reports Server (NTRS)

    Shults, James R.; Brubaker, Thomas; Lee, Gordon K. F.

    1991-01-01

    One of the major problems associated with the control of flexible structures is the estimation of system states. Since the parameters of the structures are not constant under varying loads and conditions, conventional fixed-parameter state estimators cannot be used to effectively estimate the states of the system. One alternative is to use a state estimator that adapts to the condition of the system. One such estimator is the Kalman filter. This filter is a time-varying recursive digital filter based upon a model of the system being measured, and it adapts the model according to the output of the system. Previously, the Kalman filter had only been used in an off-line capacity due to the computational time required for implementation. With recent advances in computer technology, it is becoming a viable tool for use in the on-line environment. A distributed Kalman filter implementation is described for fast estimation of the state of a flexible arm. A key issue is the sensor structure, and initial work on a distributed sensor that could be used with the Kalman filter is presented.
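
    The recursive predict/update cycle of the discrete Kalman filter referred to above can be written compactly as follows; the model matrices F, H and noise covariances Q, R are placeholders that would come from the flexible-arm dynamics and sensors, not from the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of the discrete Kalman filter (textbook form)."""
    # Predict with the system model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy 1D constant-velocity track: state = [position, velocity].
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.5]])
x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.1, 2.9, 4.2]:
    x, P = kalman_step(x, P, np.array([z]), F, Q, H, R)
print(np.round(x, 2))  # estimated position and velocity
```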

  20. Fast algorithms for crack simulation and identification in eddy current testing

    NASA Astrophysics Data System (ADS)

    Albanese, R.; Rubinacci, G.; Tamburrino, A.; Villone, F.

    2000-05-01

    Integral formulations are well suited for electromagnetic analysis of NDT problems. We use a method in which the unknowns are a two-component vector potential T defined in the conducting region Vc (where the current density J is given by its curl). The current density vector potential is expanded in terms of edge-element basis functions Tk, and the gauge is imposed by means of a tree-cotree decomposition of the finite element mesh. The electric constitutive equation is imposed using a Galerkin approach: ∫_Vc ∇×Tk · (ηJ + ∂A/∂t) dV = 0, for all Tk; where A is the magnetic vector potential (obtained from J via the Biot-Savart law), η is the resistivity and t is the time. Using superposition, the forward problem is reformulated as the determination of the modified eddy current pattern δJ = J - Jo (Jo is the unperturbed current density, whereas δJ = Σ_{k=1,n} δIk Jk is the perturbation due to the crack). In the crack region, identified by a number of elements or element facets, we impose δJ = -Jo, so that the total current vanishes there. For the inverse problem, on the basis of a priori information, we first select a subdomain including a number of "candidate" elements or facets. We select a tentative subset and perform the direct analysis. The inverse problem can then be reformulated as finding which elements or facets of the tentative set actually belong to the crack. By pre-computing all the matrices related to the crack-free zone of the conductor, each computation for a given tentative crack pattern becomes very quick (Woodbury's algorithm). This approach is well suited for zero-order minimization procedures (e.g., genetic algorithms). The problem can also be reformulated as finding the crack depth as a function of the scanning plane co-ordinates. In this case, quantization (limitation to a set of few possible depth values) and truncation (obtained by neglecting the long distance interactions) allow us to limit the search space and apply techniques initially developed for digital communication over noisy channels [3]. The
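
    The speed of the per-crack evaluations mentioned above rests on a standard low-rank update identity. As a sketch of the idea (generic notation, not the paper's): once the crack-free system matrix A has been factorized, a tentative crack that modifies only a few rows and columns can be handled with the Woodbury identity

        (A + UCV)^{-1} = A^{-1} - A^{-1} U \left( C^{-1} + V A^{-1} U \right)^{-1} V A^{-1}

    so each candidate crack pattern costs only the inversion of a small matrix whose size equals the number of modified elements or facets, rather than a full refactorization.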

  1. Towards fast and accurate algorithms for processing fuzzy data: interval computations revisited

    NASA Astrophysics Data System (ADS)

    Xiang, Gang; Kreinovich, Vladik

    2013-02-01

    In many practical applications, we need to process data, e.g. to predict the future values of different quantities based on their current values. Often, the only information that we have about the current values comes from experts, and is described in informal ('fuzzy') terms like 'small'. To process such data, it is natural to use fuzzy techniques, techniques specifically designed by Lotfi Zadeh to handle such informal information. In this survey, we start by revisiting the motivation behind Zadeh's formulae for processing fuzzy data, and explain how the algorithmic problem of processing fuzzy data can be described in terms of interval computations (α-cuts). Many fuzzy practitioners claim 'I tried interval computations, they did not work' - meaning that they got estimates which are much wider than the desired α-cuts. We show that such statements are usually based on a widespread misunderstanding - that interval computations simply mean replacing each arithmetic operation with the corresponding operation with intervals. We show that while such straightforward interval techniques indeed often lead to over-wide estimates, the current advanced interval computations techniques result in estimates which are much more accurate. We overview such advanced interval computations techniques, and show that by using them, we can efficiently and accurately process fuzzy data. We wrote this survey with three audiences in mind. First, we want fuzzy researchers and practitioners to understand the current advanced interval computations techniques and to use them to come up with faster and more accurate algorithms for processing fuzzy data. For this 'fuzzy' audience, we explain these current techniques in detail. Second, we also want interval researchers to better understand this important application area for their techniques. For this 'interval' audience, we want to explain where fuzzy techniques come from, what are possible variants of these techniques, and what are the
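
    The overestimation discussed above is easy to reproduce. The following minimal example (ours, not from the survey) applies straightforward interval arithmetic to f(x) = x - x·x on x in [0, 1] and shows the dependency problem:

        # Naive interval arithmetic overestimates because it treats the
        # two occurrences of x as independent.
        class Interval:
            def __init__(self, lo, hi):
                self.lo, self.hi = lo, hi
            def __sub__(self, other):
                return Interval(self.lo - other.hi, self.hi - other.lo)
            def __mul__(self, other):
                ps = [self.lo * other.lo, self.lo * other.hi,
                      self.hi * other.lo, self.hi * other.hi]
                return Interval(min(ps), max(ps))
            def __repr__(self):
                return f"[{self.lo}, {self.hi}]"

        x = Interval(0.0, 1.0)
        print(x - x * x)   # prints [-1.0, 1.0]

    The true range of x - x^2 on [0, 1] is [0, 0.25]; the advanced techniques the survey describes (e.g. centered forms and subdivision) recover much tighter enclosures.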

  2. Development of fast line scanning imaging algorithm for diseased chicken detection

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Chieh; Chao, Kuanglin; Chen, Yud-Ren; Kim, Moon S.

    2005-11-01

    A hyperspectral line-scan imaging system for automated inspection of wholesome and diseased chickens was developed and demonstrated. The hyperspectral imaging system consisted of an electron-multiplying charge-coupled-device (EMCCD) camera and an imaging spectrograph. The system used a spectrograph to collect spectral measurements across a pixel-wide vertical linear field of view through which moving chicken carcasses passed. After a series of image calibration procedures, the hyperspectral line-scan images were collected for chickens on a laboratory simulated processing line. From spectral analysis, four key wavebands for differentiating between wholesome and systemically diseased chickens were selected: 413 nm, 472 nm, 515 nm, and 546 nm, and a reference waveband, 622 nm. The ratio of relative reflectance between each key wavelength and the reference wavelength was calculated as an image feature. A fuzzy logic-based algorithm utilizing the key wavebands was developed to identify individual pixels on the chicken surface exhibiting symptoms of systemic disease. Two differentiation methods were built to successfully differentiate 72 systemically diseased chickens from 65 wholesome chickens.
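
    A minimal sketch of the band-ratio feature described above, using the wavebands reported in the abstract; the final threshold rule is a placeholder, not the paper's fuzzy-logic membership functions:

        import numpy as np

        KEY_NM = [413, 472, 515, 546]   # key wavebands from the abstract
        REF_NM = 622                    # reference waveband

        def band_ratios(cube, wavelengths):
            # cube: (rows, cols, bands) relative-reflectance line-scan image
            wl = list(wavelengths)
            ref = cube[:, :, wl.index(REF_NM)]
            return np.stack([cube[:, :, wl.index(k)] / ref for k in KEY_NM],
                            axis=-1)

        # Hypothetical usage on random data standing in for one line scan:
        wavelengths = KEY_NM + [REF_NM]
        cube = np.random.rand(64, 1, len(wavelengths)) + 0.1
        ratios = band_ratios(cube, wavelengths)
        diseased = np.all(ratios > 0.9, axis=-1)   # placeholder rule
        print(int(diseased.sum()), "pixels flagged")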

  3. A Fast Algorithm for Computing Binomial Coefficients Modulo Powers of Two

    PubMed Central

    2013-01-01

    I present a new algorithm for computing binomial coefficients modulo 2^N. The proposed method has an O(N^3 · Multiplication(N) + N^4) preprocessing time, after which a binomial coefficient C(P, Q) with 0 ≤ Q ≤ P ≤ 2^N − 1 can be computed modulo 2^N in O(N^2 · log(N) · Multiplication(N)) time. Multiplication(N) denotes the time complexity of multiplying two N-bit numbers, which can range from O(N^2) to O(N · log(N) · log(log(N))) or better. Thus, the overall time complexity for evaluating M binomial coefficients C(P, Q) modulo 2^N with 0 ≤ Q ≤ P ≤ 2^N − 1 is O((N^3 + M · N^2 · log(N)) · Multiplication(N) + N^4). After preprocessing, we can actually compute binomial coefficients modulo any 2^R with R ≤ N. For larger values of P and Q, variations of Lucas' theorem must be used first in order to reduce the computation to the evaluation of multiple (O(log(P))) binomial coefficients C(P′, Q′) (or restricted types of factorials P′!) modulo 2^N with 0 ≤ Q′ ≤ P′ ≤ 2^N − 1. PMID:24348186
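
    For orientation, a baseline computation (not the paper's preprocessed algorithm) splits C(P, Q) into a power of two, counted via Kummer's theorem, and an odd part that is invertible modulo 2^N. The sketch below runs in O(P) time and only illustrates the arithmetic that the fast method accelerates; it needs Python 3.8+ for the three-argument pow with a negative exponent.

        def binom_mod_2N(P, Q, N):
            mod = 1 << N
            # Kummer's theorem: the exponent of 2 in C(P, Q) equals the
            # number of carries when adding Q and P - Q in base 2.
            e, a, b, carry = 0, Q, P - Q, 0
            while a or b or carry:
                carry = ((a & 1) + (b & 1) + carry) >> 1
                e += carry
                a >>= 1
                b >>= 1
            if e >= N:
                return 0
            # Odd part of C(P, Q) modulo 2^N from the odd parts of the
            # factors of prod_{i=1..Q} (P - Q + i) / i.
            num = den = 1
            for i in range(1, Q + 1):
                x = P - Q + i
                while x % 2 == 0:
                    x //= 2
                num = num * x % mod
                y = i
                while y % 2 == 0:
                    y //= 2
                den = den * y % mod
            return (num * pow(den, -1, mod) << e) % mod

        print(binom_mod_2N(4, 2, 3))   # C(4, 2) = 6, and 6 mod 8 = 6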

  4. A fast reconstruction algorithm for bioluminescence tomography based on smoothed l0 norm regularization

    NASA Astrophysics Data System (ADS)

    He, Xiaowei; Yu, Jingjing; Geng, Guohua; Guo, Hongbo

    2013-10-01

    As an important optical molecular imaging technique, bioluminescence tomography (BLT) offers an inexpensive and sensitive means for non-invasively imaging a variety of physiological and pathological activities at cellular and molecular levels in living small animals. The key problem of BLT is to recover the distribution of the internal bioluminescence sources from limited measurements on the surface. Considering the sparsity of the light source distribution, we directly formulate the inverse problem of BLT as an l0-norm minimization model and present a smoothed l0-norm (SL0) based reconstruction algorithm. By approximating the discontinuous l0 norm with a suitable continuous function, the SL0 method avoids both the intractable computational load of the exact l0 search and the high sensitivity of the l0 norm to noise. Numerical experiments on a mouse atlas demonstrate that the proposed SL0-norm-based reconstruction method can obtain whole-domain reconstruction without any a priori knowledge of the source permissible region, yielding almost the same reconstruction results as l1-norm methods.

  5. A fast SCOP fold classification system using content-based E-Predict algorithm

    PubMed Central

    Chi, Pin-Hao; Shyu, Chi-Ren; Xu, Dong

    2006-01-01

    Background: Domain experts manually construct the Structural Classification of Protein (SCOP) database to categorize and compare protein structures. Even though using the SCOP database is believed to be more reliable than classification results from other methods, it is labor intensive. To mimic human classification processes, we develop an automatic SCOP fold classification system to assign possible known SCOP folds and recognize novel folds for newly-discovered proteins. Results: With a sufficient amount of ground truth data, our system is able to assign the known folds for newly-discovered proteins in the latest SCOP v1.69 release with 92.17% accuracy. Our system also recognizes the novel folds with 89.27% accuracy using 10-fold cross validation. The average response time for proteins with 500 and 1409 amino acids to complete the classification process is 4.1 and 17.4 seconds, respectively. By comparison with several structural alignment algorithms, our approach outperforms previous methods on both classification accuracy and efficiency. Conclusion: In this paper, we build an advanced, non-parametric classifier to accelerate the manual classification processes of SCOP. With satisfactory ground truth data from the SCOP database, our approach identifies relevant domain knowledge and yields reasonably accurate classifications. Our system is publicly accessible at . PMID:16872501

  6. Adaptive GDDA-BLAST: Fast and Efficient Algorithm for Protein Sequence Embedding

    PubMed Central

    Hong, Yoojin; Kang, Jaewoo; Lee, Dongwon; van Rossum, Damian B.

    2010-01-01

    A major computational challenge in the genomic era is annotating structure/function to the vast quantities of sequence information that is now available. This problem is illustrated by the fact that most proteins lack comprehensive annotations, even when experimental evidence exists. We previously theorized that embedded-alignment profiles (simply “alignment profiles” hereafter) provide a quantitative method that is capable of relating the structural and functional properties of proteins, as well as their evolutionary relationships. A key feature of alignment profiles lies in the interoperability of data format (e.g., alignment information, physico-chemical information, genomic information, etc.). Indeed, we have demonstrated that the Position Specific Scoring Matrices (PSSMs) are an informative M-dimension that is scored by quantitatively measuring the embedded or unmodified sequence alignments. Moreover, the information obtained from these alignments is informative, and remains so even in the “twilight zone” of sequence similarity (<25% identity) [1]–[5]. Although our previous embedding strategy was powerful, it suffered from contaminating alignments (embedded AND unmodified) and high computational costs. Herein, we describe the logic and algorithmic process for a heuristic embedding strategy named “Adaptive GDDA-BLAST.” Adaptive GDDA-BLAST is, on average, up to 19 times faster than, but has similar sensitivity to, our previous method. Further, data are provided to demonstrate the benefits of embedded-alignment measurements in terms of detecting structural homology in highly divergent protein sequences and isolating secondary structural elements of transmembrane and ankyrin-repeat domains. Together, these advances allow further exploration of the embedded alignment data space within sufficiently large data sets to eventually induce relevant statistical inferences. We show that sequence embedding could serve as one of the vehicles for measurement of

  7. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimation of potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and the computational complexity usually increases linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative based sensitivities, as is done in various other domains such as meteorology or aerodynamics, with no significant increase in the computational complexity required for the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD derived sensitivities to analytically derived sensitivities for a basic case of PSHA using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground motion attenuation relation based on a stochastic method to simulate strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
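
    To illustrate how AD obtains exact derivatives without symbolic manipulation, here is a minimal forward-mode sketch using dual numbers; the toy ground-motion function and its coefficients are invented for the example and are not an actual ground-motion prediction equation:

        import math

        class Dual:
            # Carries a value and its derivative ("dot") simultaneously.
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.dot + o.dot)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val,
                            self.val * o.dot + self.dot * o.val)
            __rmul__ = __mul__
            def log(self):
                return Dual(math.log(self.val), self.dot / self.val)

        def toy_gmpe(m, r):
            # ln(PGA) = c0 + c1*m + c2*ln(r), invented coefficients
            return 0.5 + 1.2 * m + (-1.4) * r.log()

        # Sensitivity with respect to magnitude: seed m with dot = 1.
        m, r = Dual(6.5, 1.0), Dual(20.0, 0.0)
        print(toy_gmpe(m, r).dot)   # exactly 1.2, at machine precision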

  8. A Fast Algorithm for Automatic Detection of Ionospheric Disturbances Using GPS Slant Total Electron Content Data

    NASA Astrophysics Data System (ADS)

    Efendi, Emre; Arikan, Feza; Yarici, Aysenur

    2016-07-01

    Solar, geomagnetic, gravitational and seismic activities cause disturbances in the ionospheric region of the upper atmosphere that affect space based communication, navigation and positioning systems. These disturbances can be categorized with respect to their amplitude, duration and frequency. Typically in the literature, ionospheric disturbances are investigated with gradient based methods on Total Electron Content (TEC) data estimated from ground based dual frequency Global Positioning System (GPS) receivers. In this study, a detection algorithm is developed to determine the variability in Slant TEC (STEC) data. The developed method, namely Differential Rate of TEC (DRoT), is based on the Rate of TEC (RoT) method that is widely used in the literature. RoT is usually applied to Vertical TEC (VTEC) and can be defined as the normalized derivative of VTEC. Unfortunately, the data obtained from the application of RoT to VTEC suffer from inaccuracies due to the mapping function, and the resultant values are very noisy, which makes it difficult to automatically detect disturbances due to variability in the ionosphere. The developed DRoT method can be defined as the normalized metric norm (L2) between RoT and its baseband trend structure. In this study, the error performance of DRoT is determined using synthetic data with variable bounds on the parameter set of amplitude, frequency and period of disturbance. It is observed that the DRoT method can detect disturbances in three categories. For DRoT values less than 50%, there is no significant disturbance in the STEC data. For DRoT values between 50% and 70%, a medium scale disturbance can be observed. For DRoT values over 70%, severe disturbances such as Large Scale Travelling Ionospheric Disturbances (TIDs) or plasma bubbles can be observed. When DRoT is applied to the GPS-STEC data for stations in high latitude, equatorial and mid-latitude regions, it is observed that disturbances with amplitudes larger than 10% of the difference between
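
    A compact sketch of the DRoT construction described above; the sampling interval, the moving-average trend extraction, and the synthetic disturbance are our own assumptions for illustration:

        import numpy as np

        def drot(stec, dt=30.0, win=41):
            # RoT: normalized rate of change of STEC (assumes a
            # non-constant series); trend: moving average as a stand-in
            # for the baseband trend; DRoT: normalized L2 distance.
            rot = np.diff(stec) / (dt * np.max(np.abs(np.diff(stec))))
            trend = np.convolve(rot, np.ones(win) / win, mode="same")
            return 100.0 * np.linalg.norm(rot - trend) / np.linalg.norm(rot)

        # Hypothetical usage: quiet trend plus a wave-like disturbance
        # switched on halfway through a two-hour record.
        t = np.arange(0.0, 7200.0, 30.0)
        stec = 30 + 5e-4 * t + 2.0 * np.sin(2 * np.pi * t / 900) * (t > 3600)
        print(f"DRoT = {drot(stec):.1f}%")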

  9. Permanent prostate implant using high activity seeds and inverse planning with fast simulated annealing algorithm: A 12-year Canadian experience

    SciTech Connect

    Martin, Andre-Guy; Roy, Jean; Beaulieu, Luc; Pouliot, Jean; Harel, Francois; Vigneault, Eric . E-mail: Eric.Vigneault@chuq.qc.ca

    2007-02-01

    Purpose: To report outcomes and toxicity of the first Canadian permanent prostate implant program. Methods and Materials: 396 consecutive patients (Gleason ≤6, initial prostate specific antigen (PSA) ≤10 and stage T1-T2a disease) were implanted between June 1994 and December 2001. The median follow-up is 60 months (maximum, 136 months). All patients were planned with the fast simulated annealing inverse planning algorithm with high activity seeds (>0.76 U). Acute and late toxicity is reported for the first 213 patients using a modified RTOG toxicity scale. The Kaplan-Meier biochemical failure-free survival (bFFS) is reported according to the ASTRO and Houston definitions. Results: The bFFS at 60 months was 88.5% (90.5%) according to the ASTRO (Houston) definition and 91.4% (94.6%) in the low risk group (initial PSA ≤10, Gleason ≤6 and stage ≤T2a). Risk factors statistically associated with bFFS were: initial PSA >10, a Gleason score of 7-8, and stage T2b-T3. The mean D90 was 151 ± 36.1 Gy. The mean V100 was 85.4 ± 8.5% with a mean V150 of 60.1 ± 12.3%. Overall, the implants were well tolerated. In the first 6 months, 31.5% of the patients were free of genitourinary symptoms (GUs), 12.7% had Grade 3 GUs; 91.6% were free of gastrointestinal symptoms (GIs). After 6 months, 54.0% were GUs free, 1.4% had Grade 3 GUs; 95.8% were GIs free. Conclusion: The inverse planning with fast simulated annealing and high activity seeds gives a 5-year bFFS which is comparable with the best published series, with a low toxicity profile.

  10. The differential algebra based multiple level fast multipole algorithm for 3D space charge field calculation and photoemission simulation

    SciTech Connect

    None, None

    2015-09-28

    Coulomb interaction between charged particles inside a bunch is one of the most importance collective effects in beam dynamics, becoming even more significant as the energy of the particle beam is lowered to accommodate analytical and low-Z material imaging purposes such as in the time resolved Ultrafast Electron Microscope (UEM) development currently underway at Michigan State University. In addition, space charge effects are the key limiting factor in the development of ultrafast atomic resolution electron imaging and diffraction technologies and are also correlated with an irreversible growth in rms beam emittance due to fluctuating components of the nonlinear electron dynamics. In the short pulse regime used in the UEM, space charge effects also lead to virtual cathode formation in which the negative charge of the electrons emitted at earlier times, combined with the attractive surface field, hinders further emission of particles and causes a degradation of the pulse properties. Space charge and virtual cathode effects and their remediation are core issues for the development of the next generation of high-brightness UEMs. Since the analytical models are only applicable for special cases, numerical simulations, in addition to experiments, are usually necessary to accurately understand the space charge effect. In this paper we will introduce a grid-free differential algebra based multiple level fast multipole algorithm, which calculates the 3D space charge field for n charged particles in arbitrary distribution with an efficiency of O(n), and the implementation of the algorithm to a simulation code for space charge dominated photoemission processes.

  12. Computation of scattering matrix elements of large and complex shaped absorbing particles with multilevel fast multipole algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Yueqian; Yang, Minglin; Sheng, Xinqing; Ren, Kuan Fang

    2015-05-01

    Light scattering properties of absorbing particles, such as mineral dusts, attract wide attention due to their importance in geophysical and environmental research. Owing to the absorbing effect, the light scattering properties of particles with absorption differ from those of particles without absorption. Simple shaped absorbing particles such as spheres and spheroids have been well studied with different methods, but little work on large complex shaped particles has been reported. In this paper, the Surface Integral Equation (SIE) method with the Multilevel Fast Multipole Algorithm (MLFMA) is applied to study the scattering properties of large non-spherical absorbing particles. The SIEs are carefully discretized with piecewise linear basis functions on triangle patches to model the whole surface of the particle, hence the computational resource needs increase much more slowly with the particle size parameter than in volume-discretized methods. To further improve its capability, the MLFMA is parallelized with the Message Passing Interface (MPI) on a distributed memory computer platform. Without loss of generality, we choose the computation of scattering matrix elements of absorbing dust particles as an example. The comparison of the scattering matrix elements computed by our method and by the discrete dipole approximation (DDA) method for an ellipsoidal dust particle shows that the precision of our method is very good. The scattering matrix elements of large ellipsoidal dusts with different aspect ratios and size parameters are computed. To show the capability of the presented algorithm for complex shaped particles, scattering by an asymmetric Chebyshev particle with size parameter larger than 600, complex refractive index m = 1.555 + 0.004i, and different orientations is studied.

  13. A fast mode decision algorithm for multiview auto-stereoscopic 3D video coding based on mode and disparity statistic analysis

    NASA Astrophysics Data System (ADS)

    Ding, Cong; Sang, Xinzhu; Zhao, Tianqi; Yan, Binbin; Leng, Junmin; Yuan, Jinhui; Zhang, Ying

    2012-11-01

    Multiview video coding (MVC) is essential for applications of auto-stereoscopic three-dimensional displays. However, the computational complexity of MVC encoders is tremendously high. Fast algorithms are very desirable for the practical applications of MVC. Based on joint early termination, the selection of inter-view prediction, and the optimization of the Inter8×8 mode decision process by comparison, a fast macroblock (MB) mode selection algorithm is presented. Compared with the full mode decision in MVC, the experimental results show that the proposed algorithm reduces encoding time by 78.13% on average and by up to 90.21%, with only a slight increase in bit rate and a small loss in PSNR.

  14. EASY-GOING deconvolution: Combining accurate simulation and evolutionary algorithms for fast deconvolution of solid-state quadrupolar NMR spectra

    NASA Astrophysics Data System (ADS)

    Grimminck, Dennis L. A. G.; Polman, Ben J. W.; Kentgens, Arno P. M.; Leo Meerts, W.

    2011-08-01

    A fast and accurate fit program is presented for deconvolution of one-dimensional solid-state quadrupolar NMR spectra of powdered materials. Computational costs of the synthesis of theoretical spectra are reduced by the use of libraries containing simulated time/frequency domain data. These libraries are calculated once, with the use of second-party simulation software readily available in the NMR community, to ensure maximum flexibility and accuracy with respect to experimental conditions. EASY-GOING deconvolution (EGdeconv) is equipped with evolutionary algorithms that provide robust many-parameter fitting and offers efficient parallelised computing. The program supports quantification of relative chemical site abundances and (dis)order in the solid state by incorporation of (extended) Czjzek and order parameter models. To illustrate EGdeconv's current capabilities, we provide three case studies. Given the program's simple concept, it allows straightforward extension to include other NMR interactions. The program is available as is for 64-bit Linux operating systems.

  15. Open-loop control of SCExAO's MEMS deformable mirror using the Fast Iterative Algorithm: speckle control performances

    NASA Astrophysics Data System (ADS)

    Blain, Célia; Guyon, Olivier; Martinache, Frantz; Bradley, Colin; Clergeon, Christophe

    2012-07-01

    Micro-Electro-Mechanical Systems (MEMS) deformable mirrors (DMs) are widely utilized in astronomical Adaptive Optics (AO) instrumentation. High precision open-loop control of MEMS DMs has been achieved by developing a high accuracy DM model, the Fast Iterative Algorithm (FIA), a physics-based model allowing precise control of the DM shape. Accurate open-loop control is particularly critical for the wavefront control of High-Contrast Imaging (HCI) instruments to create a dark hole area free of most slow and quasi-static speckles which remain the limiting factor for direct detection and imaging of exoplanets. The Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) system is one of these high contrast imaging instruments and uses a 1024-actuator MEMS deformable mirror (DM) both in closed-loop and open-loop. The DM is used to modulate speckles in order to distinguish (i) speckles due to static and slow-varying residual aberrations from (ii) speckles due to genuine structures, such as exoplanets. The FIA has been fully integrated into the SCExAO wavefront control software and we report the FIA’s performance for the control of speckles in the focal plane.

  16. Fast mode decision algorithm in MPEG-2 to H.264/AVC transcoding including group of picture structure conversion

    NASA Astrophysics Data System (ADS)

    Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang

    2009-05-01

    The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications, such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.

  17. High-performance combinatorial algorithms

    SciTech Connect

    Pinar, Ali

    2003-10-31

    Combinatorial algorithms have long played an important role in many applications of scientific computing such as sparse matrix computations and parallel computing. The growing importance of combinatorial algorithms in emerging applications like computational biology and scientific data mining calls for development of a high performance library for combinatorial algorithms. Building such a library requires a new structure for combinatorial algorithms research that enables fast implementation of new algorithms. We propose a structure for combinatorial algorithms research that mimics the research structure of numerical algorithms. Numerical algorithms research is nicely complemented with high performance libraries, and this can be attributed to the fact that there are only a small number of fundamental problems that underlie numerical solvers. Furthermore there are only a handful of kernels that enable implementation of algorithms for these fundamental problems. Building a similar structure for combinatorial algorithms will enable efficient implementations for existing algorithms and fast implementation of new algorithms. Our results will promote utilization of combinatorial techniques and will impact research in many scientific computing applications, some of which are listed.

  18. Algorithms for fast axisymmetric drop shape analysis measurements by a charge coupled device video camera and simulation procedure for test and evaluation

    NASA Astrophysics Data System (ADS)

    Busoni, Lorenzo; Carlà, Marcello; Lanzi, Leonardo

    2001-06-01

    A set of fast algorithms for axisymmetric drop shape analysis measurements is described. Speed has been improved by more than 1 order of magnitude over previously available procedures. Frame analysis is performed and drop characteristics and interfacial tension γ are computed in less than 40 ms on a Pentium III 450 MHz PC, while preserving an overall accuracy in Δγ/γ close to 1×10^-4. A new procedure is described to evaluate both the algorithms' performance and the contribution of each source of experimental error to the overall measurement accuracy.

  19. fast-matmul

    SciTech Connect

    Grey Ballard, Austin Benson

    2014-11-26

    This software provides implementations of fast matrix multiplication algorithms. These algorithms perform fewer floating point operations than the classical cubic algorithm. The software uses code generation to automatically implement the fast algorithms based on high-level descriptions. The code serves two general purposes. The first is to demonstrate that these fast algorithms can out-perform vendor matrix multiplication algorithms for modest problem sizes on a single machine. The second is to rapidly prototype many variations of fast matrix multiplication algorithms to encourage future research in this area. The implementations target sequential and shared memory parallel execution.
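
    For context, the canonical member of this family of algorithms is Strassen's method, which replaces 8 block multiplications with 7. A one-level sketch (our illustration, not the library's generated code):

        import numpy as np

        def strassen_once(A, B):
            # One level of Strassen recursion; assumes even, square inputs.
            n = A.shape[0] // 2
            A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
            B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
            M1 = (A11 + A22) @ (B11 + B22)
            M2 = (A21 + A22) @ B11
            M3 = A11 @ (B12 - B22)
            M4 = A22 @ (B21 - B11)
            M5 = (A11 + A12) @ B22
            M6 = (A21 - A11) @ (B11 + B12)
            M7 = (A12 - A22) @ (B21 + B22)
            return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                             [M2 + M4, M1 - M2 + M3 + M6]])

        A, B = np.random.rand(256, 256), np.random.rand(256, 256)
        assert np.allclose(strassen_once(A, B), A @ B)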

  20. A Fast and Portable Reimplementation of Piskunov and Valenti's Optimal-Extraction Algorithm with Improved Cosmic-Ray Removal and Optimal Sky Subtraction

    NASA Astrophysics Data System (ADS)

    Ritter, A.; Hyde, E. A.; Parker, Q. A.

    2014-02-01

    We present a fast and portable reimplementation of Piskunov and Valenti's optimal-extraction algorithm (Piskunov & Valenti 2002) in C/C++ together with full uncertainty propagation, improved cosmic-ray removal, and an optimal background-subtraction algorithm. This reimplementation can be used with IRAF and most existing data-reduction packages and leads to signal-to-noise ratios close to the Poisson limit. The algorithm is very stable, operates on spectra from a wide range of instruments (slit spectra and fibre feeds), and has been extensively tested for VLT/UVES, ESO/CES, ESO/FEROS, NTT/EMMI, NOT/ALFOSC, STELLA/SES, SSO/WiFeS, and finally, P60/SEDM-IFU data.

  2. Photometric selection of quasars in large astronomical data sets with a fast and accurate machine learning algorithm

    NASA Astrophysics Data System (ADS)

    Gupta, Pramod; Connolly, Andrew J.; Gardner, Jeffrey P.

    2014-03-01

    Future astronomical surveys will produce data on ~10^8 objects per night. In order to characterize and classify these sources, we will require algorithms that scale linearly with the size of the data, that can be easily parallelized and where the speedup of the parallel algorithm will be linear in the number of processing cores. In this paper, we present such an algorithm and apply it to the question of colour selection of quasars. We use non-parametric Bayesian classification and a binning algorithm implemented with hash tables (BASH tables). We show that this algorithm's run time scales linearly with the number of test set objects and is independent of the number of training set objects. We also show that it has the same classification accuracy as other algorithms. For current data set sizes, it is up to three orders of magnitude faster than commonly used naive kernel-density-estimation techniques and it is estimated to be about eight times faster than the current fastest algorithm using dual kd-trees for kernel density estimation. The BASH table algorithm scales linearly with the size of the test set data only, and so for future larger data sets, it will be even faster compared to other algorithms which all depend on the size of the test set and the size of the training set. Since it uses linear data structures, it is easier to parallelize compared to tree-based algorithms and its speedup is linear in the number of cores unlike tree-based algorithms whose speedup plateaus after a certain number of cores. Moreover, due to the use of hash tables to implement the binning, the memory usage is very small. While our analysis is for the specific problem of selection of quasars, the ideas are general and the BASH table algorithm can be applied to any density-estimation problem involving sparse high-dimensional data sets. Since sparse high-dimensional data sets are a common type of scientific data set, this method has the potential to be useful in a broad range of
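
    A minimal sketch of density-based classification by binning with hash tables, in the spirit of the approach described above; the bin width, the dictionary layout, and the decision rule are our own simplifications, not the paper's implementation:

        from collections import defaultdict
        import numpy as np

        def bin_key(x, width=0.1):
            # Quantize a colour vector to an integer tuple (the hash key).
            return tuple(np.floor(np.asarray(x) / width).astype(int))

        def train(X, y):
            # Count training objects per class in each bin; lookup cost
            # is independent of the training set size.
            counts = defaultdict(lambda: defaultdict(int))
            for x, label in zip(X, y):
                counts[bin_key(x)][label] += 1
            return counts

        def classify(counts, x):
            c = counts.get(bin_key(x))
            return max(c, key=c.get) if c else None   # None: empty bin

        # Hypothetical usage with two colour dimensions:
        X = np.vstack([np.random.normal(0, 0.3, (500, 2)),
                       np.random.normal(1, 0.3, (500, 2))])
        y = ["star"] * 500 + ["quasar"] * 500
        print(classify(train(X, y), [0.9, 1.1]))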

  3. Fast adaptive OFDM-PON over single fiber loopback transmission using dynamic rate adaptation-based algorithm for channel performance improvement

    NASA Astrophysics Data System (ADS)

    Kartiwa, Iwa; Jung, Sang-Min; Hong, Moon-Ki; Han, Sang-Kook

    2014-03-01

    In this paper, we propose a novel fast adaptive approach applied to an OFDM-PON 20-km single fiber loopback transmission system to improve channel performance, in terms of a stabilized BER below 2 × 10^-3 and throughput beyond 10 Gb/s. The upstream transmission is performed through light source-seeded modulation using a 1-GHz RSOA at the ONU. Experimental results indicated that the dynamic rate adaptation algorithm based on greedy Levin-Campello could be an effective solution to mitigate the channel instability and data rate degradation caused by the Rayleigh backscattering effect and inefficient subcarrier resource allocation.
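
    For illustration, a greedy Levin-Campello-style bit-loading sketch: each iteration grants one more bit to the subcarrier with the smallest incremental power cost until the power budget is exhausted. The gap-approximation cost model and all parameter values are our assumptions, not the system's actual loader:

        import numpy as np

        def levin_campello(gain, power_budget, gap=1.0, max_bits=10):
            # gain: per-subcarrier gain-to-noise ratios.
            bits = np.zeros(len(gain), dtype=int)
            power = np.zeros(len(gain))
            cost = lambda b, g: gap * (2.0 ** b - 1.0) / g
            used = 0.0
            while True:
                inc = np.array([cost(bits[i] + 1, gain[i]) - power[i]
                                if bits[i] < max_bits else np.inf
                                for i in range(len(gain))])
                i = int(np.argmin(inc))
                if not np.isfinite(inc[i]) or used + inc[i] > power_budget:
                    break
                used += inc[i]
                power[i] += inc[i]
                bits[i] += 1
            return bits

        # Hypothetical usage: 16 subcarriers with random channel gains.
        print(levin_campello(np.random.uniform(0.2, 2.0, 16), 50.0))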

  4. Fast and optimal multiframe blind deconvolution algorithm for high-resolution ground-based imaging of space objects.

    PubMed

    Matson, Charles L; Borelli, Kathy; Jefferies, Stuart; Beckner, Charles C; Hege, E Keith; Lloyd-Hart, Michael

    2009-01-01

    We report a multiframe blind deconvolution algorithm that we have developed for imaging through the atmosphere. The algorithm has been parallelized to a significant degree for execution on high-performance computers, with an emphasis on distributed-memory systems so that it can be hosted on commodity clusters. As a result, image restorations can be obtained in seconds to minutes. We have compared and quantified the quality of its image restorations relative to the associated Cramér-Rao lower bounds (when they can be calculated). We describe the algorithm and its parallelization in detail, demonstrate the scalability of its parallelization across distributed-memory computer nodes, discuss the results of comparing sample variances of its output to the associated Cramér-Rao lower bounds, and present image restorations obtained by using data collected with ground-based telescopes. PMID:19107159

  5. Proof of uniform sampling of binary matrices with fixed row sums and column sums for the fast Curveball algorithm

    NASA Astrophysics Data System (ADS)

    Carstens, C. J.

    2015-04-01

    Randomization of binary matrices has become one of the most important quantitative tools in modern computational biology. The equivalent problem of generating random directed networks with fixed degree sequences has also attracted a lot of attention. However, it is very challenging to generate truly unbiased random matrices with fixed row and column sums. Strona et al. [Nat. Commun. 5, 4114 (2014), 10.1038/ncomms5114] introduce the innovative Curveball algorithm and give numerical support for the proposition that it generates truly random matrices. In this paper, we present a rigorous proof of convergence to the uniform distribution. Furthermore, we show that the Curveball algorithm must include certain failed trades to ensure uniform sampling.
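
    A minimal rendering of one Curveball trade on the row-set representation of a binary matrix (our own sketch of the published description); trades that happen to exchange nothing still count as steps of the chain, which is the role of the failed trades mentioned above:

        import random

        def curveball_step(rows):
            # rows: list of sets of column indices holding a 1.
            i, j = random.sample(range(len(rows)), 2)
            a, b = rows[i], rows[j]
            a_only, b_only = list(a - b), list(b - a)
            pool = a_only + b_only
            random.shuffle(pool)
            # Re-deal the non-shared columns; both row sums are kept, and
            # every pooled column lands in exactly one of the two rows,
            # so column sums are preserved as well.
            rows[i] = (a & b) | set(pool[:len(a_only)])
            rows[j] = (a & b) | set(pool[len(a_only):])

        # Hypothetical usage: randomize a small 0/1 matrix.
        rows = [{0, 1}, {1, 2}, {0, 3}]
        for _ in range(1000):
            curveball_step(rows)
        print(rows)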

  6. A fast Monte Carlo EM algorithm for estimation in latent class model analysis with an application to assess diagnostic accuracy for cervical neoplasia in women with AGC

    PubMed Central

    Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan

    2013-01-01

    In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate the parameters of interest, namely the sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493

  7. Application of a fast and efficient algorithm to assess landslide-prone areas in sensitive clays in Sweden

    NASA Astrophysics Data System (ADS)

    Melchiorre, C.; Tryggvason, A.

    2015-12-01

    We refine and test an algorithm for landslide susceptibility assessment in areas with sensitive clays. The algorithm uses soil data and digital elevation models to identify areas which may be prone to landslides and has been applied in Sweden for several years. The algorithm is very computationally efficient and includes an intelligent filtering procedure for identifying and removing small-scale artifacts in the hazard maps produced. Where information on bedrock depth is available, this can be included in the analysis, as can information on several soil-type-based cross-sectional angle thresholds for slip. We evaluate how processing choices, such as the filtering parameters, local cross-sectional angle thresholds, and the inclusion of bedrock depth information, affect model performance. The specific cross-sectional angle thresholds used were derived by analyzing the relationship between landslide scarps and the quick-clay susceptibility index (QCSI). We tested the algorithm in the Göta River valley. Several different verification measures were used to compare results with observed landslides and thereby identify the optimal algorithm parameters. Our results show that even though a relationship between the cross-sectional angle threshold and the QCSI could be established, no significant improvement of the overall modeling performance could be achieved by using these geographically specific, soil-based thresholds. Our results indicate that lowering the cross-sectional angle threshold from 1:10 (the general value used in Sweden) to 1:13 improves results slightly. We also show that applying the automatic filtering procedure to remove areas initially classified as prone to landslides not only removes artifacts and makes the maps visually more appealing, but also improves model performance.

  8. Extended Vofire algorithm for fast transient fluid-structure dynamics with liquid-gas flows and interfaces

    NASA Astrophysics Data System (ADS)

    Faucher, Vincent; Kokh, Samuel

    2013-05-01

    The present paper is dedicated to the simulation of liquid-gas flows with interfaces in the framework of fast transient fluid-structure dynamics. The two-fluid interface is modelled as a discontinuity surface in the fluid properties. We use an anti-dissipative Finite-Volume discretization strategy for unstructured meshes in order to capture the position of the interface within a thin diffused volume. This allows control of the numerical diffusion of the artificial mixing between components and provides accurate capture of complex interface motions. The scheme is an extension of the Vofire numerical solver. We propose specific developments in order to handle flows that involve high density ratios between liquid and gas. The resulting scheme's capabilities are validated on basic examples and also tested against a large-scale fluid-structure test case derived from the MARA 10 experiment. All simulations are performed using the EUROPLEXUS fast transient dynamics software.

  9. BRIDES: A New Fast Algorithm and Software for Characterizing Evolving Similarity Networks Using Breakthroughs, Roadblocks, Impasses, Detours, Equals and Shortcuts.

    PubMed

    Lord, Etienne; Le Cam, Margaux; Bapteste, Éric; Méheust, Raphaël; Makarenkov, Vladimir; Lapointe, François-Joseph

    2016-01-01

    Various types of genome and gene similarity networks, along with their characteristics, have been increasingly used for retracing different kinds of evolutionary and ecological relationships. Here, we present a new polynomial time algorithm and the corresponding software (BRIDES) to characterize the different types of paths existing in evolving (or augmented) similarity networks, under the constraint that such paths contain at least one node that was not present in the original network. These different paths are denoted Breakthroughs, Roadblocks, Impasses, Detours, Equal paths, and Shortcuts. The analysis of their distribution can allow one to discriminate among different evolutionary hypotheses concerning the genomes or genes at hand. Our approach is based on an original application of the popular Dijkstra's and Yen's shortest path algorithms. The C++ and R versions of the BRIDES program are freely available at: https://github.com/etiennelord/BRIDES. PMID:27580188

  10. Genetic algorithm based fast alignment method for strap-down inertial navigation system with large azimuth misalignment.

    PubMed

    He, Hongyang; Xu, Jiangning; Qin, Fangjun; Li, Feng

    2015-11-01

    In order to shorten the alignment time and eliminate the small initial misalignment limit for compass alignment of strap-down inertial navigation system (SINS), which is sometimes not easy to satisfy when the ship is moored or anchored, an optimal model based time-varying parameter compass alignment algorithm is proposed in this paper. The contributions of the work presented here are twofold. First, the optimization of compass alignment parameters, which involves a lot of trial-and-error traditionally, is achieved based on genetic algorithm. On this basis, second, the optimal parameter varying model is established by least-square polynomial fitting. Experiments are performed with a navigational grade fiber optical gyroscope SINS, which validate the efficiency of the proposed method. PMID:26628165

  12. Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path.

    PubMed

    Herráez, Miguel Arevallilo; Burton, David R; Lalor, Michael J; Gdeisat, Munther A

    2002-12-10

    We describe what is to our knowledge a novel technique for phase unwrapping. Several algorithms based on unwrapping the most-reliable pixels first have been proposed. These were restricted to continuous paths and were subject to difficulties in defining a starting pixel. The technique described here uses a different type of reliability function and does not follow a continuous path to perform the unwrapping operation. The technique is explained in detail and illustrated with a number of examples. PMID:12502301

  13. Diabetic foot-related problems: improving outcomes in the dialysis population using a foot assessment screening tri-algorithm (FAST).

    PubMed

    Crozier, Louise

    2014-01-01

    A comprehensive literature review was conducted to determine the effect of diabetic foot checks on patient awareness, satisfaction, and outcomes. An algorithm was developed based on evidence-based practice, best practice guidelines, and current literature that can be used by nurses and medical staff in the management of foot-related problems in patients with diabetes on dialysis. An educational resource guide was also developed for use when education is required for foot-related problems. PMID:25244893

  14. Simple, Fast and Accurate Implementation of the Diffusion Approximation Algorithm for Stochastic Ion Channels with Multiple States

    PubMed Central

    Orio, Patricio; Soudry, Daniel

    2012-01-01

    Background: The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable and trustworthy, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods did not lead to the same results as MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that the difference arose because MCs were modeled with coupled gating particles, while the DA was modeled using uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded similar results to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions: We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable, allowing an easy, transparent and efficient DA implementation, avoiding unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling. Also, the simulation efficiency of this DA method demonstrated considerable superiority over MC methods, except when
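
    To make the construction concrete, here is an Euler-Maruyama sketch of the diffusion approximation for the simplest kinetic scheme, a two-state (closed/open) channel; the rate constants and channel count are invented, and realistic schemes are voltage dependent with more states:

        import numpy as np

        def simulate(a=2.0, b=1.0, N=1000, dt=1e-3, steps=5000, seed=1):
            # dx = (a(1-x) - b*x) dt + sqrt((a(1-x) + b*x)/N) dW,
            # with x the open fraction and N the number of channels.
            rng = np.random.default_rng(seed)
            x = np.empty(steps)
            x[0] = a / (a + b)                # start at the steady state
            for k in range(steps - 1):
                drift = a * (1 - x[k]) - b * x[k]
                diff = np.sqrt(max(a * (1 - x[k]) + b * x[k], 0.0) / N)
                x[k + 1] = x[k] + drift * dt \
                           + diff * np.sqrt(dt) * rng.standard_normal()
                x[k + 1] = min(max(x[k + 1], 0.0), 1.0)  # clamp to [0, 1]
            return x

        x = simulate()
        print(x.mean(), x.std())   # hovers near a/(a+b), noise ~ 1/sqrt(N)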

  15. Fast parallel molecular algorithms for DNA-based computation: solving the elliptic curve discrete logarithm problem over GF(2^n).

    PubMed

    Li, Kenli; Zou, Shuting; Xv, Jin

    2008-01-01

    Elliptic curve cryptographic algorithms convert input data to unrecognizable encryption and the unrecognizable data back again into its original decrypted form. The security of this form of encryption hinges on the enormous difficulty that is required to solve the elliptic curve discrete logarithm problem (ECDLP), especially over GF(2^n), n ∈ Z+. This paper describes an effective method to find solutions to the ECDLP by means of a molecular computer. We propose that this research accomplishment would represent a breakthrough for applied biological computation and this paper demonstrates that in principle this is possible. Three DNA-based algorithms: a parallel adder, a parallel multiplier, and a parallel inverse over GF(2^n) are described. The biological operation time of all of these algorithms is polynomial with respect to n. Considering this analysis, cryptography using a public key might be less secure. In this respect, a principal contribution of this paper is to provide enhanced evidence of the potential of molecular computing to tackle such ambitious computations. PMID:18431451

  16. A fast and explicit algorithm for simulating the dynamics of small dust grains with smoothed particle hydrodynamics

    NASA Astrophysics Data System (ADS)

    Price, Daniel J.; Laibe, Guillaume

    2015-07-01

    We describe a simple method for simulating the dynamics of small grains in a dusty gas, relevant to micron-sized grains in the interstellar medium and grains of centimetre size and smaller in protoplanetary discs. The method involves solving one extra diffusion equation for the dust fraction in addition to the usual equations of hydrodynamics. This `diffusion approximation for dust' is valid when the dust stopping time is smaller than the computational timestep. We present a numerical implementation using smoothed particle hydrodynamics that is conservative, accurate and fast. It does not require any implicit timestepping and can be straightforwardly ported into existing 3D codes.
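
    For reference, the single extra equation has the character of a diffusion equation for the dust fraction ε = ρ_d/ρ. In the terminal velocity approximation it can be written (our paraphrase, with t_s the stopping time and P the gas pressure; the paper's exact form and notation may differ):

        \frac{\mathrm{d}\varepsilon}{\mathrm{d}t} = -\frac{1}{\rho}\,\nabla \cdot \left( \varepsilon\, t_{\mathrm{s}}\, \nabla P \right)

    which is explicit and involves no matrix solve, consistent with the method's speed and the validity condition that t_s be smaller than the timestep.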

  17. Fast imputation using medium- or low-coverage sequence data

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Direct imputation from raw sequence reads can be more accurate than calling genotypes first and then imputing, especially if read depth is low or error rates high, but different imputation strategies are required than those used for data from genotyping chips. A fast algorithm to impute from lower t...

  18. A fast vectorized multispin coding algorithm for 3D Monte Carlo simulations using Kawasaki spin-exchange dynamics

    NASA Astrophysics Data System (ADS)

    Zhang, M. Q.

    1989-09-01

    A new Monte Carlo algorithm for 3D Kawasaki spin-exchange simulations and its implementation on a CDC CYBER 205 is presented. This approach is applicable to lattices with sizes between 4×4×4 and 256×L2×L3 (with (L2+2)(L3+4)/4 ≤ 65535) and periodic boundary conditions. It is adjustable to various kinetic models in which the total magnetization is conserved. A maximum speed of 10 million steps per second can be reached for the 3D Ising model with the Metropolis rate.

  19. A Fast Parallel Algorithm for Selected Inversion of Structured Sparse Matrices with Application to 2D Electronic Structure Calculations

    SciTech Connect

    Lin, Lin; Yang, Chao; Lu, Jiangfeng; Ying, Lexing; E, Weinan

    2009-09-25

    We present an efficient parallel algorithm and its implementation for computing the diagonal of $H^{-1}$, where $H$ is a 2D Kohn-Sham Hamiltonian discretized on a rectangular domain using a standard second order finite difference scheme. This type of calculation can be used to obtain an accurate approximation to the diagonal of a Fermi-Dirac function of $H$ through a recently developed pole-expansion technique [Lin, Lu, Ying and E, 2009]. The diagonal elements are needed in electronic structure calculations for quantum mechanical systems [Hohenberg and Kohn, 1964; Kohn and Sham, 1965; Dreizler and Gross, 1990]. We show how an elimination tree is used to organize the parallel computation and how synchronization overhead is reduced by passing data level by level along this tree using the technique of local buffers and relative indices. We analyze the performance of our implementation by examining its load balance and communication overhead. We show that our implementation exhibits excellent weak scaling on a large-scale high performance distributed parallel machine. When compared with the standard approach for evaluating the diagonal of a Fermi-Dirac function of a Kohn-Sham Hamiltonian associated with a 2D electron quantum dot, the new pole-expansion technique, which uses our algorithm to compute the diagonal of $(H - z_i I)^{-1}$ for a small number of poles $z_i$, is much faster, especially when the quantum dot contains many electrons.
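
    Schematically, the pole-expansion technique writes the Fermi-Dirac function of $H$ as a short sum of resolvents, so the selected-inversion algorithm applied at each pole supplies the required diagonal (a generic statement of the idea, with weights $\omega_i$ and poles $z_i$ supplied by the expansion, not the paper's exact formula):

        \mathrm{diag}\, f_{\mathrm{FD}}(H) \;\approx\; \mathrm{Im} \sum_{i=1}^{P} \omega_i \, \mathrm{diag}\!\left[ (H - z_i I)^{-1} \right]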

  20. Fast characterization of moment magnitude and focal mechanism in the context of tsunami warning in the NEAM region : W-phase and PDFM2 algorithms.

    NASA Astrophysics Data System (ADS)

    Schindelé, François; Roch, Julien; Duperray, Pierre; Reymond, Dominique

    2016-04-01

    Over past centuries, several large earthquakes (Mw ≥ 7.5) have been reported in the North East Atlantic and Mediterranean Sea (NEAM) region. Most of the potential tsunamigenic seismic sources in the NEAM region, however, are in a magnitude range of 6.5 ≤ Mw ≤ 7.5 (e.g. the tsunami triggered by the earthquake of Boumerdes in 2003, of Mw = 6.9). The CENALT (CENtre d'ALerte aux Tsunamis), in operation since 2012 as the French National Tsunami Warning Centre (NTWC) and Candidate Tsunami Service Provider (CTSP), has to issue warning messages within 15 minutes of the earthquake origin time. The warning level is currently based on a decision matrix depending on the magnitude and the location of the hypocenter. Two seismic source inversion methods are implemented at CENALT: the W-phase algorithm, based on the so-called W phase, and the PDFM2 algorithm, based on surface waves and first P-wave motions. The resulting Mw magnitude, focal depth and type of fault (reverse, normal, strike-slip) are the most relevant parameters used to issue tsunami warnings. In this context, we assess the W-phase and PDFM2 methods with 29 events of magnitude Mw ≥ 5.8 for the period 2010-2015 in the NEAM region. Results within 10 and 20 min for the W-phase algorithm and within 20 and 30 min for the PDFM2 algorithm are compared to the Global Centroid Moment Tensor catalog; the W-phase and PDFM2 methods give accurate moment magnitude and focal mechanism estimates in 10 min and 20 min, respectively. This work is funded by project ASTARTE -- Assessment, Strategy And Risk Reduction for Tsunamis in Europe - FP7-ENV2013 6.4-3, Grant 603839

  1. Fast chromatographic method for the determination of dyes in beverages by using high performance liquid chromatography--diode array detection data and second order algorithms.

    PubMed

    Culzoni, María J; Schenone, Agustina V; Llamas, Natalia E; Garrido, Mariano; Di Nezio, Maria S; Band, Beatriz S Fernández; Goicoechea, Héctor C

    2009-10-16

    A fast chromatographic methodology is presented for the analysis of three synthetic dyes in non-alcoholic beverages: amaranth (E123), sunset yellow FCF (E110) and tartrazine (E102). Seven soft drinks (purchased from a local supermarket) were homogenized, filtered and injected into the chromatographic system. Second-order data were obtained by a rapid LC separation and DAD detection. A comparative study of the performance of two second-order algorithms (MCR-ALS and U-PLS/RBL) applied to model the data is presented. Notably, the data present a time shift between different chromatograms that cannot be conveniently corrected. This shift breaks the trilinearity of the data, which cannot be conveniently restored by pre-processing and can hardly be modelled by the U-PLS/RBL algorithm. In contrast, MCR-ALS has been shown to be an excellent tool for modelling this kind of data, reaching acceptable figures of merit. Recovery values, which ranged between 97% and 105% when analyzing artificial and real samples, were indicative of the good performance of the method. Whereas the complete separation consumes 10 mL of methanol and 3 mL of 0.08 mol L(-1) ammonium acetate, the proposed fast chromatographic method requires only 0.46 mL of methanol and 1.54 mL of 0.08 mol L(-1) ammonium acetate. Consequently, analysis time could be reduced to 14.2% of the time needed to perform the complete separation, saving both solvents and time and thereby reducing both the cost per analysis and the environmental impact. PMID:19748097
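
    For readers unfamiliar with MCR-ALS, the sketch below shows its bare alternating-least-squares core on a synthetic data matrix. Variable names and sizes are illustrative; real analyses enforce non-negativity with NNLS and add further constraints (unimodality, closure) beyond the simple clipping used here.

```python
import numpy as np

def mcr_als(D, n_components, n_iter=100, seed=0):
    """Bare-bones MCR-ALS: factor D (times x wavelengths) into C @ S.T,
    alternating least-squares updates with crude non-negativity clipping."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_components))        # spectral profiles (w x k)
    for _ in range(n_iter):
        C, *_ = np.linalg.lstsq(S, D.T, rcond=None)   # solve S @ C.T ≈ D.T
        C = np.clip(C.T, 0, None)                     # concentration profiles (t x k)
        S, *_ = np.linalg.lstsq(C, D, rcond=None)     # solve C @ S.T ≈ D
        S = np.clip(S.T, 0, None)
    return C, S

# D would be a matrix of DAD spectra recorded along the elution-time axis
rng = np.random.default_rng(1)
D = np.abs(rng.random((60, 4)) @ rng.random((4, 120)))
C, S = mcr_als(D, n_components=4)
```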

  2. KID - an algorithm for fast and efficient text mining used to automatically generate a database containing kinetic information of enzymes

    PubMed Central

    2010-01-01

    Background The amount of available biological information is rapidly increasing and the focus of biological research has moved from single components to networks and even larger projects aiming at the analysis, modelling and simulation of biological networks as well as large-scale comparison of cellular properties. It is therefore essential that biological knowledge is easily accessible. However, most information is contained in the written literature in an unstructured way, so that methods for the systematic extraction of knowledge directly from the primary literature have to be deployed. Description Here we present a text mining algorithm for the extraction of kinetic information such as KM, Ki, kcat etc. as well as associated information such as enzyme names, EC numbers, ligands, organisms, localisations, pH and temperatures. Using this rule- and dictionary-based approach, it was possible to extract 514,394 kinetic parameters of 13 categories (KM, Ki, kcat, kcat/KM, Vmax, IC50, S0.5, Kd, Ka, t1/2, pI, nH, specific activity, Vmax/KM) from about 17 million PubMed abstracts and combine them with other data in the abstract. A manual verification of approx. 1,000 randomly chosen results yielded a recall between 51% and 84% and a precision ranging from 55% to 96%, depending on the category searched. The results were stored in a database and are available as "KID the KInetic Database" via the internet. Conclusions The presented algorithm delivers a considerable amount of information and may therefore help accelerate the research and automated analysis required for today's systems biology approaches. The database obtained by analysing PubMed abstracts may be a valuable help in the field of chemical and biological kinetics. It is completely based upon text mining and therefore complements manually curated databases. The database is available at http://kid.tu-bs.de. The source code of the algorithm is provided under the GNU General Public Licence and available on
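
    To give a flavor of the rule- and dictionary-based approach, a toy extractor for a single category (KM) might look like the following. The pattern and the tiny unit dictionary are illustrative, not KID's actual rules.

```python
import re

# Toy rule-based extractor in the spirit of KID: find K_M values with units
# in free text. The real system uses large dictionaries and many more rules.
KM_PATTERN = re.compile(
    r"\bK[mM]\b[^0-9]{0,20}?"            # the parameter name, e.g. "Km" or "KM"
    r"(\d+(?:\.\d+)?)\s*"                # the numeric value
    r"(mM|µM|uM|nM)"                     # a small unit dictionary
)

text = ("The enzyme showed a Km of 0.35 mM for glucose "
        "and a Km value of 12 µM for ATP.")
print(KM_PATTERN.findall(text))          # [('0.35', 'mM'), ('12', 'µM')]
```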

  3. In vitro estimation of fast and slow wave parameters of thin trabecular bone using space-alternating generalized expectation-maximization algorithm.

    PubMed

    Grimes, Morad; Bouhadjera, Abdelmalek; Haddad, Sofiane; Benkedidah, Toufik

    2012-07-01

    In testing cancellous bone using ultrasound, two types of longitudinal Biot's waves are observed in the received signal. These are known as fast and slow waves, and their appearance depends on the alignment of bone trabeculae in the propagation path and the thickness of the specimen under test (SUT). They can be used as an effective tool for the diagnosis of osteoporosis, because wave propagation behavior depends on the bone structure. However, the identification of these waves in the received signal can be difficult to achieve. In this study, ultrasonic wave propagation in a 4 mm thick bovine cancellous bone in the direction parallel to the trabecular alignment is considered. The observed Biot's fast and slow longitudinal waves are superimposed, which makes it difficult to extract any information from the received signal. These two waves can be separated using the space-alternating generalized expectation-maximization (SAGE) algorithm, which has mainly been used in speech processing. In this new approach, parameters such as arrival time, center frequency, bandwidth, amplitude, phase and velocity of each wave are estimated. The B-scan images and their associated A-scans obtained through simulations using Biot's finite-difference time-domain (FDTD) method are validated experimentally using a thin bone sample obtained from the femoral head of a 30-month-old bovine. PMID:22284937

  4. Solving the chemical master equation by a fast adaptive finite state projection based on the stochastic simulation algorithm.

    PubMed

    Sidje, R B; Vo, H D

    2015-11-01

    The mathematical framework of the chemical master equation (CME) uses a Markov chain to model the biochemical reactions that are taking place within a biological cell. Computing the transient probability distribution of this Markov chain allows us to track the composition of molecules inside the cell over time, with important practical applications in a number of areas such as molecular biology or medicine. However, the CME is typically difficult to solve, since the state space involved can be very large or even countably infinite. We present a novel way of using the stochastic simulation algorithm (SSA) to reduce the size of the finite state projection (FSP) method. Numerical experiments that demonstrate the effectiveness of the reduction are included. PMID:26319118
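
    The idea of using SSA runs to pick the projection states can be sketched on a toy birth-death process. The rates, the state-discovery loop, and the projection below are illustrative, not the authors' exact reduction.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

# Toy birth-death process (one species): birth rate b, degradation rate g*x
b, g, x0, T = 5.0, 0.5, 0, 2.0

# Step 1: run a few SSA (Gillespie) trajectories to discover likely states
visited = {x0}
for _ in range(50):
    t, x = 0.0, x0
    while t < T:
        rates = np.array([b, g * x])
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < rates[0] / total else -1
        visited.add(x)

# Step 2: build the CME generator restricted to the SSA-visited states
# (the finite state projection) and solve p(T) = expm(A T) p(0)
states = sorted(visited)
idx = {s: i for i, s in enumerate(states)}
A = np.zeros((len(states), len(states)))
for s in states:
    for rate, s2 in ((b, s + 1), (g * s, s - 1)):
        A[idx[s], idx[s]] -= rate            # outflow (may leak out of projection)
        if s2 in idx:
            A[idx[s2], idx[s]] += rate       # inflow into a retained state
p0 = np.zeros(len(states)); p0[idx[x0]] = 1.0
pT = expm(A * T) @ p0                        # transient distribution at time T
```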

  5. Learning maximum entropy models from finite-size data sets: A fast data-driven algorithm allows sampling from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    2016-08-01

    Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
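
    The steepest-descent baseline that the paper improves on can be sketched for a small pairwise Ising model as follows. Model size, learning rate, and the Gibbs-sampling schedule are illustrative, and the paper's actual contribution (the rectification of parameter space) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                     # number of "neurons"

# Synthetic data moments (in practice: means/correlations of recorded spikes)
data = rng.integers(0, 2, size=(2000, n)) * 2 - 1
m_data = data.mean(0)
C_data = (data.T @ data) / len(data)

def gibbs_sample(h, J, n_samples=2000, burn=200):
    """Gibbs sampler for the pairwise Ising model P(s) ∝ exp(h·s + s·J·s/2)."""
    s = rng.choice([-1, 1], size=n)
    out = []
    for t in range(n_samples + burn):
        for i in range(n):
            field = h[i] + J[i] @ s - J[i, i] * s[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[i] = 1 if rng.random() < p_up else -1
        if t >= burn:
            out.append(s.copy())
    return np.array(out)

# Plain (unrectified) gradient ascent on the log-likelihood
h, J, lr = np.zeros(n), np.zeros((n, n)), 0.1
for epoch in range(50):
    samp = gibbs_sample(h, J)
    h += lr * (m_data - samp.mean(0))                 # match means
    dJ = lr * (C_data - (samp.T @ samp) / len(samp))  # match correlations
    np.fill_diagonal(dJ, 0.0)
    J += dJ
```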

  6. ICA analysis of fMRI with real-time constraints: an evaluation of fast detection performance as function of algorithms, parameters and a priori conditions

    PubMed Central

    Soldati, Nicola; Calhoun, Vince D.; Bruzzone, Lorenzo; Jovicich, Jorge

    2013-01-01

    Independent component analysis (ICA) techniques offer a data-driven possibility to analyze brain functional MRI data in real-time. Typical ICA methods used in functional magnetic resonance imaging (fMRI), however, have until now mostly been developed and optimized for the off-line case in which all data are available. Real-time experiments are ill-posed for ICA in that several constraints are added: limited data, limited analysis time and dynamic changes in the data and computational speed. Previous studies have shown that particular choices of ICA parameters can be used to monitor real-time fMRI (rt-fMRI) brain activation, but it is unknown how other choices would perform. In this rt-fMRI simulation study we investigate and compare the performance of 14 different publicly available ICA algorithms, systematically sampling different growing window lengths (WLs), model orders (MOs) and a priori conditions (none, spatial or temporal). Performance is evaluated by computing the spatial and temporal correlation to a target component, as well as computation time. Four algorithms are identified as best performing (constrained ICA, fastICA, amuse, and evd), with their corresponding parameter choices. Both spatial and temporal priors are found to provide equal or improved similarity to the target compared with their off-line counterparts, at greatly reduced computational cost. This study suggests parameter choices that can be further investigated in a sliding-window approach for a rt-fMRI experiment. PMID:23378835
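
    A growing-window loop of this kind might look like the sketch below, using scikit-learn's FastICA on synthetic data, with a spatial prior reduced to simple correlation matching against a target map. All sizes and names are illustrative, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_voxels, n_volumes = 500, 120
X = rng.standard_normal((n_volumes, n_voxels))       # stand-in fMRI time series
target_map = rng.standard_normal(n_voxels)           # a priori spatial map

window_start, model_order = 20, 10
for t in range(window_start, n_volumes + 1, 10):     # growing window
    ica = FastICA(n_components=model_order, random_state=0, max_iter=500)
    sources = ica.fit_transform(X[:t])               # temporal courses (t x MO)
    maps = ica.components_                           # spatial maps (MO x voxels)
    corr = [abs(np.corrcoef(m, target_map)[0, 1]) for m in maps]
    best = int(np.argmax(corr))
    print(f"window={t:3d}  best component={best}  |r|={corr[best]:.2f}")
```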

  7. Fast Simulation of 3-D Surface Flanging and Prediction of the Flanging Lines Based On One-Step Inverse Forming Algorithm

    SciTech Connect

    Bao Yidong; Hu Sibo; Lang Zhikui; Hu Ping

    2005-08-05

    A fast simulation scheme for 3D curved binder flanging and blank shape prediction of sheet metal based on the one-step inverse finite element method is proposed, in which total plasticity theory and the proportional loading assumption are used. The scheme can be used in practice to simulate 3D flanging with a complex curved binder shape, and it is suitable for simulating any type of flanging model by numerically determining the flanging height and flanging lines. Compared with other methods such as analytic algorithms and the blank sheet-cut return method, the prominent advantage of the present scheme is that it can directly predict the location of the 3D flanging lines while simulating the flanging process, so the time needed to predict the flanging lines is markedly decreased. Two typical 3D curved binder flanging cases, exhibiting both stretch and shrink characteristics, are simulated with the present scheme and with an incremental FE non-inverse algorithm based on incremental plasticity theory, which demonstrates the validity and high efficiency of the present scheme.

  8. Learning Maximal Entropy Models from finite size datasets: a fast Data-Driven algorithm allows to sample from the posterior distribution

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse

    A maximal entropy model provides the least constrained probability distribution that reproduces experimental averages of an observables set. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a ``rectified'' Data-Driven algorithm that is fast and by sampling from the parameters posterior avoids both under- and over-fitting along all the directions of the parameters space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a Grant from the Human Brain Project (HBP CLAP).

  9. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    NASA Technical Reports Server (NTRS)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing long-term, consistent SO2 records for air quality and climate research.
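
    The one-step estimation can be illustrated with synthetic spectra: fit the measured radiance to the background principal components plus the SO2 Jacobian, and read the column off the Jacobian coefficient. Everything below (spectra, Jacobian shape, number of PCs) is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wavelengths, n_background = 330, 2000

background = rng.standard_normal((n_background, n_wavelengths))  # SO2-free radiances
jacobian = np.sin(np.linspace(0, 20, n_wavelengths))             # stand-in dI/dSO2

# Leading principal components of the SO2-free radiances
mean = background.mean(0)
_, _, Vt = np.linalg.svd(background - mean, full_matrices=False)
pcs = Vt[:8]                                                     # first 8 PCs

true_column = 3.0
measured = mean + 0.5 * pcs[0] - 0.2 * pcs[3] + true_column * jacobian

# One-step least-squares fit: measured - mean ≈ [PCs; jacobian]^T @ coeffs
A = np.vstack([pcs, jacobian]).T
coeffs, *_ = np.linalg.lstsq(A, measured - mean, rcond=None)
print("retrieved SO2 column:", coeffs[-1])                       # ≈ 3.0
```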

  11. Research on a kind of high precision and fast signal processing algorithm for FM/CW laser radar

    NASA Astrophysics Data System (ADS)

    Xu, Xinke; Liu, Guodong; Chen, Fengdong; Liu, Bingguo; Zhuang, Zhitao; Lu, Cheng; Gan, Yu

    2014-12-01

    Range accuracy and efficiency are two important indicators for frequency-modulated continuous-wave (FM/CW) laser radar, and improving the accuracy and efficiency of extracting the beat frequency is key to both. Multiple-modulation Zoom Spectrum Analysis (ZFFT) and the Chirp-Z Transform (CZT) are two widely used methods for improving frequency estimation. By analyzing the advantages and disadvantages of these methods, this paper proposes a high-accuracy and fast signal processing method, ZFFT-CZT, which combines the ability of ZFFT to reduce the data size with the ability of CZT to zoom into any frequency band of interest. ZFFT-CZT proceeds as follows: first, ZFFT is performed by applying a Fourier transform to a short-time signal to calculate the amount of frequency shift and transforming the high-frequency signal into a low-frequency signal of long sampling time; then, CZT is applied to a chosen band of interest to further subdivide the spectral peaks, which reduces the picket-fence effect. In a simulation experiment based on the ZFFT-CZT method, two close targets at distances of 50 m and 50.001 m are measured, and the measurement errors are 40 μm and 34 μm respectively. This demonstrates that ZFFT-CZT requires only a small amount of calculation while meeting the requirements of high-precision frequency extraction.
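
    The coarse-search-then-zoom structure can be sketched as follows, assuming a recent SciPy that provides scipy.signal.czt. For brevity the sketch uses a plain coarse FFT in place of the paper's ZFFT stage before zooming with the CZT; all signal parameters are made up.

```python
import numpy as np
from scipy.signal import czt

fs, n = 1.0e6, 4096                       # sample rate and record length (made up)
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 123_456.7 * t)     # beat signal with an off-bin frequency

# Step 1: coarse FFT locates the peak to within one bin (fs/n ≈ 244 Hz here)
spec = np.abs(np.fft.rfft(x))
f_coarse = np.argmax(spec) * fs / n

# Step 2: CZT zooms into a narrow band around the coarse peak, subdividing
# it with m points to reduce the picket-fence effect
f1, f2, m = f_coarse - fs / n, f_coarse + fs / n, 1024
w = np.exp(-2j * np.pi * (f2 - f1) / ((m - 1) * fs))  # ratio between output points
a = np.exp(2j * np.pi * f1 / fs)                      # starting point on unit circle
zoom = np.abs(czt(x, m=m, w=w, a=a))
f_fine = f1 + np.argmax(zoom) * (f2 - f1) / (m - 1)
print(f"coarse: {f_coarse:.1f} Hz, refined: {f_fine:.2f} Hz")
```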

  12. Comment on "Replica-exchange-with-tunneling for fast exploration of protein landscapes" [J. Chem. Phys. 143, 224102 (2015)]

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun

    2016-08-01

    In "Replica-exchange-with-tunneling for fast exploration of protein landscapes" [F. Yaşar et al., J. Chem. Phys. 143, 224102 (2015)], a novel sampling algorithm called "Replica Exchange with Tunneling" was proposed. However, due to its violation of the detailed balance, the algorithm fails to sample from the correct canonical ensemble.

  13. A fast algorithm for Direct Numerical Simulation of natural convection flows in arbitrarily-shaped periodic domains

    NASA Astrophysics Data System (ADS)

    Angeli, D.; Stalio, E.; Corticelli, M. A.; Barozzi, G. S.

    2015-11-01

    A parallel algorithm is presented for the Direct Numerical Simulation of buoyancy-induced flows in open or partially confined periodic domains containing immersed cylindrical bodies of arbitrary cross-section. The governing equations are discretized by means of the Finite Volume method on Cartesian grids. A semi-implicit scheme is employed for the diffusive terms, which are treated implicitly on the periodic plane and explicitly along the homogeneous direction, while all convective terms are explicit, via the second-order Adams-Bashforth scheme. The simultaneous solution of the velocity and pressure fields is achieved by means of a projection method. The numerical solution of the set of linear equations resulting from discretization is carried out by means of efficient and highly parallel direct solvers. Verification and validation of the numerical procedure are reported in the paper for the case of flow around an array of heated cylindrical rods arranged in a square lattice. Grid independence is assessed in laminar flow conditions, and DNS results in turbulent conditions are presented for two different grids and compared to available literature data, thus confirming the favorable qualities of the method.

  14. A fast algorithm for parabolic PDE-based inverse problems based on Laplace transforms and flexible Krylov solvers

    SciTech Connect

    Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.

    2015-10-15

    We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equation is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.

  15. A fast method for video deblurring based on a combination of gradient methods and denoising algorithms in Matlab and C environments

    NASA Astrophysics Data System (ADS)

    Mirzadeh, Zeynab; Mehri, Razieh; Rabbani, Hossein

    2010-01-01

    In this paper, video degraded by blur and noise is enhanced using an algorithm based on an iterative procedure. In this algorithm, we first estimate the clean data and blur function using the Newton optimization method, and then the estimates are improved using appropriate denoising methods. These noise reduction techniques are based on local statistics of the clean data and blur function. For the estimated blur function we use the LPA-ICI (local polynomial approximation - intersection of confidence intervals) method, which uses an anisotropic window around each point and obtains the enhanced data by employing a Wiener filter in this local window. Similarly, to improve the quality of the estimated clean video, we first transform the data to the wavelet domain and then improve our estimate using a maximum a posteriori (MAP) estimator with a local Laplace prior. This procedure (initial estimation and improvement of the estimate by denoising) is iterated, and finally the clean video is obtained. The implementation of this algorithm is slow in the MATLAB environment, and so it is not suitable for online applications. However, MATLAB has the capability of running functions written in C. The files which hold the source for these functions are called MEX-files. The MEX functions allow system-specific APIs to be called to extend MATLAB's abilities. So, in this paper, to speed up our algorithm, the MATLAB code is sectioned, the elapsed time for each section is measured, and the slow sections (which use 60% of the total running time) are selected. These slow sections are then translated to C++ and linked to MATLAB. In fact, the high volume of information in images and processed data in the "for" loops of the relevant code makes MATLAB an unsuitable candidate for writing such programs. The code for our video deblurring algorithm in MATLAB contains eight "for" loops. These eight "for" loops use 60% of the total execution time of the entire program and so the runtime should be

  16. Fast separable nonlocal means

    NASA Astrophysics Data System (ADS)

    Ghosh, Sanjay; Chaudhury, Kunal N.

    2016-03-01

    We propose a simple and fast algorithm called PatchLift for computing distances between patches (contiguous blocks of samples) extracted from a given one-dimensional signal. PatchLift is based on the observation that the patch distances can be efficiently computed from a matrix that is derived from the one-dimensional signal using lifting; importantly, the number of operations required to compute the patch distances using this approach does not scale with the patch length. We next demonstrate how PatchLift can be used for patch-based denoising of images corrupted with Gaussian noise. In particular, we propose a separable formulation of the classical nonlocal means (NLM) algorithm that can be implemented using PatchLift. We demonstrate that the PatchLift-based implementation of separable NLM is a few orders of magnitude faster than standard NLM and is competitive with existing fast implementations of NLM. Moreover, its denoising performance is shown to be consistently superior to that of NLM and some of its variants, both in terms of peak signal-to-noise ratio/structural similarity index and visual quality.
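
    Our reading of the lifting idea can be sketched in a few lines: form the matrix of squared sample differences, then accumulate it along diagonals with cumulative sums, so each patch distance costs O(1) regardless of patch length (border patches are simply clipped here). This is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def patch_distances(f, K):
    """All pairwise distances between length-(2K+1) patches of a 1-D signal.

    The lifted matrix G holds squared sample differences; summing G along each
    diagonal with a cumulative sum gives every patch distance in O(1) per pair,
    so the total cost does not grow with the patch half-width K."""
    n = len(f)
    G = (f[:, None] - f[None, :]) ** 2
    P = np.zeros((n, n))
    for d in range(-(n - 1), n):
        diag = np.diagonal(G, offset=d)
        c = np.concatenate(([0.0], np.cumsum(diag)))
        for t in range(len(diag)):
            lo, hi = max(0, t - K), min(len(diag) - 1, t + K)
            i, j = (t, t + d) if d >= 0 else (t - d, t)
            P[i, j] = c[hi + 1] - c[lo]      # window sum along the diagonal
    return P

f = np.random.default_rng(0).standard_normal(64)
P = patch_distances(f, K=5)                  # 64 x 64 patch-distance matrix
```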

  17. Fast voxel-based 2D/3D registration algorithm using a volume rendering method based on the shear-warp factorization

    NASA Astrophysics Data System (ADS)

    Weese, Juergen; Goecke, Roland; Penney, Graeme P.; Desmedt, Paul; Buzug, Thorsten M.; Schumann, Heidrun

    1999-05-01

    2D/3D registration makes it possible to use pre-operative CT scans for navigation purposes during X-ray fluoroscopy guided interventions. We present a fast voxel-based method for this registration task, which uses a recently introduced similarity measure (pattern intensity). This measure is especially suitable for 2D/3D registration, because it is robust with respect to structures such as a stent visible in the X-ray fluoroscopy image but not in the CT scan. The method uses only a part of the CT scan for the generation of digitally reconstructed radiographs (DRRs) to accelerate their computation. Nevertheless, computation time is crucial for intra-operative application and a further speed-up is required, because numerous DRRs must be computed. For that reason, the suitability of different volume rendering methods for 2D/3D registration has been investigated. A method based on the shear-warp factorization of the viewing transformation turned out to be especially suitable and builds the basis of the registration algorithm. The algorithm has been applied to images of a spine phantom and to clinical images. For comparison, registration results have been calculated using ray-casting. The shear-warp factorization based rendering method accelerates registration by a factor of up to seven compared to ray-casting without degrading registration accuracy. Using a vertebra as feature for registration, computation time is in the range of 3-4s (Sun UltraSparc, 300 MHz) which is acceptable for intra-operative application.

  18. Faster Algorithms on Branch and Clique Decompositions

    NASA Astrophysics Data System (ADS)

    Bodlaender, Hans L.; van Leeuwen, Erik Jan; van Rooij, Johan M. M.; Vatshelle, Martin

    We combine two techniques recently introduced to obtain faster dynamic programming algorithms for optimization problems on graph decompositions. The unification of generalized fast subset convolution and fast matrix multiplication yields significant improvements to the running time of previous algorithms for several optimization problems. As an example, we give an O*(3^{(ω/2)k}) time algorithm for Minimum Dominating Set on graphs of branchwidth k, improving on the previous O*(4^k) algorithm. Here ω is the exponent in the running time of the best matrix multiplication algorithm (currently ω < 2.376). For graphs of cliquewidth k, we improve from O*(8^k) to O*(4^k). We also obtain an algorithm for counting the number of perfect matchings of a graph, given a branch decomposition of width k, that runs in time O*(2^{(ω/2)k}). Generalizing these approaches, we obtain faster algorithms for all so-called [ρ,σ]-domination problems on branch decompositions if ρ and σ are finite or cofinite. The algorithms presented in this paper either attain or are very close to natural lower bounds for these problems.
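
    The generalized fast subset convolution the authors build on can be sketched in its basic integer form: ranked zeta transforms, pointwise products by rank, and a Möbius inversion, computing all 2^n values in O(2^n n^2) operations instead of the naive O(3^n).

```python
def subset_convolution(f, g, n):
    """(f * g)(S) = sum over T ⊆ S of f(T) * g(S \ T), for all S, via the
    ranked zeta/Möbius transform (Björklund et al. style); f, g are lists
    of length 2**n indexed by bitmask."""
    N = 1 << n
    fhat = [[0] * N for _ in range(n + 1)]
    ghat = [[0] * N for _ in range(n + 1)]
    for S in range(N):                       # split by popcount ("rank")
        k = bin(S).count("1")
        fhat[k][S], ghat[k][S] = f[S], g[S]
    for k in range(n + 1):                   # zeta transform per rank
        for i in range(n):
            for S in range(N):
                if S & (1 << i):
                    fhat[k][S] += fhat[k][S ^ (1 << i)]
                    ghat[k][S] += ghat[k][S ^ (1 << i)]
    hhat = [[0] * N for _ in range(n + 1)]   # pointwise convolution in rank
    for k in range(n + 1):
        for j in range(k + 1):
            for S in range(N):
                hhat[k][S] += fhat[j][S] * ghat[k - j][S]
    for k in range(n + 1):                   # Möbius (inverse zeta) transform
        for i in range(n):
            for S in range(N):
                if S & (1 << i):
                    hhat[k][S] -= hhat[k][S ^ (1 << i)]
    return [hhat[bin(S).count("1")][S] for S in range(N)]

# sanity check against the naive definition
import random
n = 4; N = 1 << n
f = [random.randint(0, 5) for _ in range(N)]
g = [random.randint(0, 5) for _ in range(N)]
h = subset_convolution(f, g, n)
S = 0b1011
assert h[S] == sum(f[T] * g[S ^ T] for T in range(N) if T & S == T)
```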

  19. Fast Decision Algorithms in Low-Power Embedded Processors for Quality-of-Service Based Connectivity of Mobile Sensors in Heterogeneous Wireless Sensor Networks

    PubMed Central

    Jaraíz-Simón, María D.; Gómez-Pulido, Juan A.; Vega-Rodríguez, Miguel A.; Sánchez-Pérez, Juan M.

    2012-01-01

    When a mobile wireless sensor moves through heterogeneous wireless sensor networks, it can often be under the coverage of more than one network. In these situations, a Vertical Handoff process can occur, in which the mobile sensor decides to change its connection from one network to the best of the available networks according to their quality-of-service characteristics. A fitness function is used for the handoff decision, and it is desirable to minimize it. This is an optimization problem which consists of the adjustment of a set of weights for the quality of service. Solving this problem efficiently is relevant to heterogeneous wireless sensor networks in many advanced applications. Numerous works in the literature deal with the vertical handoff decision, although they all suffer from the same shortfall: a non-comparable efficiency. Therefore, the aim of this work is twofold: first, to develop a fast decision algorithm that explores the entire space of possible weight combinations, searching for the one that minimizes the fitness function; and second, to design and implement a system-on-chip architecture based on reconfigurable hardware and embedded processors to achieve several goals necessary for competitive mobile terminals: good performance, low power consumption, low economic cost, and small area integration. PMID:22438728
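
    An exhaustive exploration of the weight space can be sketched as a grid search. The QoS attribute values, their normalization, and the fitness form below are assumptions for illustration only, not the paper's exact formulation (which targets embedded hardware).

```python
import itertools
import numpy as np

qos = np.array([                      # rows: candidate networks
    [0.2, 0.9, 0.3],                  # cols: normalized QoS attributes
    [0.5, 0.6, 0.1],                  # (e.g. cost, delay, jitter -- illustrative)
    [0.8, 0.4, 0.7],
])

step = 0.05
grid = np.arange(0.0, 1.0 + 1e-9, step)
best_f, best_w = np.inf, None
for w in itertools.product(grid, repeat=qos.shape[1]):
    if abs(sum(w) - 1.0) > 1e-9:      # weights must sum to one
        continue
    f = (qos @ np.array(w)).min()     # fitness: score of the network chosen
    if f < best_f:
        best_f, best_w = f, w
print("minimum fitness:", best_f, "weights:", best_w)
```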

  20. Surface morphology characterization of pentacene thin film and its substrate with under-layers by power spectral density using fast Fourier transform algorithms

    NASA Astrophysics Data System (ADS)

    Itoh, Taketsugu; Yamauchi, Noriyoshi

    2007-05-01

    The surface morphology of pentacene thin films and their substrates with under-layers is characterized using atomic force microscopy (AFM). The power values of the power spectral density (PSD) of the AFM digital data were determined by fast Fourier transform (FFT) algorithms instead of the root-mean-square (rms) and peak-to-valley values. The PSD plots of pentacene films on a glass substrate are successfully approximated by the k-correlation model. The pentacene film growth is interpreted, via parameter C of the k-correlation model, as intermediate between bulk and surface diffusion. The PSD plots of pentacene film on an Au under-layer are approximated using the linear continuum model (LCM) instead of the combination of the k-correlation model and a Gaussian function. The PSD plots of a SiO2 layer on an Au under-layer, serving as the gate insulator on a gate electrode of organic thin film transistors (OTFTs), have three PSD power values. These three specific PSD power values are interpreted as being caused by the planarization of the smooth SiO2 layer over the rough Au under-layer.

  1. Quantum algorithms

    NASA Astrophysics Data System (ADS)

    Abrams, Daniel S.

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  2. Fast Edge-Searching and Related Problems

    NASA Astrophysics Data System (ADS)

    Yang, Boting

    Given a graph G = (V,E) in which a fugitive hides on vertices or along edges, graph searching problems usually ask for the minimum number of searchers required to capture the fugitive. In this paper, we consider the problem of finding the minimum number of steps to capture the fugitive. We introduce the fast edge-searching problem in the edge search model, which is the problem of finding the minimum number of steps (called the fast edge-search time) to capture the fugitive. We establish relations between the fast edge-search time and the fast search number. While the family of graphs whose fast search number is at most k is not minor-closed for any positive integer k ≥ 2, we show that the family of graphs whose fast edge-search time is at most k is minor-closed. We establish relations between the fast (edge-)searching and the node searching. These relations allow us to transform the problem of computing node search numbers to the problem of computing fast edge-search times or fast search numbers. Using these relations, we prove that the problem of deciding, given a graph G and an integer k, whether the fast (edge-)search number of G is less than or equal to k is NP-complete; and it remains NP-complete for Eulerian graphs. We also prove that the problem of determining whether the fast (edge-)search number of G is half the number of odd vertices in G is NP-complete; and it remains NP-complete for planar graphs with maximum degree 4. We present a linear time approximation algorithm for the fast edge-search time that always delivers solutions of at most (1 + (|V|-1)/(|E|+1)) times the optimal value. This algorithm also gives a tight upper bound on the fast search number of the graph. We also show a lower bound on the fast search number using the minimum degree and the number of odd vertices.

  3. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method called kriging have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and is suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in missing data at any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest-neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
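
    As a reminder of the linear system at kriging's core, the sketch below performs ordinary kriging at a single target point with a Gaussian covariance model. The covariance parameters and data are illustrative, and none of the accelerations mentioned above (SYMMLQ, tapering, FMM) are shown.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(50, 2))            # scattered sample locations
vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1])      # observed values
target = np.array([5.0, 5.0])

def cov(d, sill=1.0, corr_len=3.0):
    """Gaussian covariance model (parameters are illustrative)."""
    return sill * np.exp(-(d / corr_len) ** 2)

D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
n = len(pts)
# Augmented ordinary-kriging system: the Lagrange row enforces that the
# weights sum to one (unbiasedness)
A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(D); A[n, n] = 0.0
b = np.ones(n + 1); b[:n] = cov(np.linalg.norm(pts - target, axis=1))
w = np.linalg.solve(A, b)[:n]
print("kriged estimate:", w @ vals)
```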

  4. Calling All Trainers.

    ERIC Educational Resources Information Center

    Carolan, Mary D.; Doyle, John C.

    1998-01-01

    Describes how to establish and operate a call center that handles customer service, telemarketing, collections, and other customer-focused areas. Discusses the advantages of a call center, the new opportunities that will arise as a result of emerging technologies, and the challenges of recruiting, training, and retaining personnel. (JOW)

  5. Fast iterative reconstructions for animal CT

    NASA Astrophysics Data System (ADS)

    Huang, H.-M.; Hsiao, I.-T.; Jan, M.-L.

    2009-06-01

    For iterative x-ray computed tomography (CT) reconstruction, the convex algorithm combined with ordered subsets (OSC) [1] is a relatively fast algorithm and has shown its potential for low-dose situations, but it needs one forward projection and two backprojections per iteration. Unlike the convex algorithm, the gradient algorithm requires only one forward projection and one backprojection per iteration. Here, we applied ordered subsets of projection data to a modified gradient algorithm. To further reduce computation time, the new algorithm, the ordered subset gradient (OSG) algorithm, can be adjusted with a step size. We also implemented another OS-type algorithm called OSTR. The OSG algorithm is compared with the OSC and OSTR algorithms using three-dimensional simulated helical cone-beam CT data. The performance is evaluated in terms of log-likelihood, contrast recovery, and bias-variance studies. Results show that images from OSG have visual quality comparable to those from OSC and OSTR, but in the resolution and bias-variance studies OSG seems to reach stable values faster. In particular, OSTR has better recovery in smoother regions, while both OSG and OSC have better recovery in high-frequency regions. Moreover, in terms of log-likelihood versus computation time, OSG converges faster than OSC and similarly to OSTR. We conclude that OSG can provide comparable image quality while being more computationally efficient, and thus could be suitable for low-dose, helical cone-beam CT image reconstruction.

  6. Spectrographic phase-retrieval algorithm for femtosecond and attosecond pulses with frequency gaps

    NASA Astrophysics Data System (ADS)

    Seifert, B.; Wallentowitz, S.; Volkmann, U.; Hause, A.; Sperlich, K.; Stolz, H.

    2014-10-01

    We present a phase-reconstruction algorithm for a self-referenced spectrographic pulse characterization technique called “very advanced method for phase and intensity retrieval of e-fields” (VAMPIRE). This technique permits spectral phase reconstruction of pulses with separated frequency components. The algorithm uses the particular characteristics of VAMPIRE spectrograms. It is a locally structured algorithm which is fast and robust, and which allows stagnation problems to be overcome. The algorithm is tested using both simulated and measured data.

  7. A scalable and practical one-pass clustering algorithm for recommender system

    NASA Astrophysics Data System (ADS)

    Khalid, Asra; Ghazanfar, Mustansar Ali; Azam, Awais; Alahmari, Saad Ali

    2015-12-01

    KMeans clustering-based recommendation algorithms have been proposed with the claim of increasing the scalability of recommender systems. One potential drawback of these algorithms is that they perform training offline and hence cannot accommodate incremental updates with the arrival of new data, making them unsuitable for dynamic environments. Following this line of research, a new clustering algorithm called One-Pass is proposed, which is simple, fast, and accurate. We show empirically that the proposed algorithm outperforms K-Means in terms of recommendation and training time while maintaining a good level of accuracy.
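
    A leader-style single-pass scheme conveys the flavor of such an algorithm: each arriving point either joins the nearest existing cluster or founds a new one, with centroids updated incrementally. The threshold rule and update below are a generic sketch, not necessarily the paper's exact One-Pass rules.

```python
import numpy as np

def one_pass_cluster(stream, threshold):
    """Single-pass (leader) clustering: assign each arriving point to the
    nearest centroid if within `threshold`, otherwise start a new cluster;
    centroids are maintained as running means."""
    centroids, counts, labels = [], [], []
    for x in stream:
        if centroids:
            d = [np.linalg.norm(x - c) for c in centroids]
            k = int(np.argmin(d))
        if not centroids or d[k] > threshold:
            centroids.append(np.array(x, dtype=float)); counts.append(1)
            labels.append(len(centroids) - 1)
        else:
            counts[k] += 1
            centroids[k] += (x - centroids[k]) / counts[k]   # incremental mean
            labels.append(k)
    return centroids, labels

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
cents, labs = one_pass_cluster(data, threshold=1.5)          # finds ~2 clusters
```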

  8. Weighted MinMax Algorithm for Color Image Quantization

    NASA Technical Reports Server (NTRS)

    Reitan, Paula J.

    1999-01-01

    The maximum intercluster distance and the maximum quantization error that are minimized by the MinMax algorithm are shown to be inappropriate error measures for color image quantization. A fast and effective (image-quality-improving) method for generalizing activity weighting to any histogram-based color quantization algorithm is presented. A new non-hierarchical color quantization technique called weighted MinMax, a hybrid between the MinMax and Linde-Buzo-Gray (LBG) algorithms, is also described. The weighted MinMax algorithm incorporates activity weighting and seeks to minimize the weighted RMSE (WRMSE), thereby obtaining high-quality quantized images with significantly less visual distortion than the MinMax algorithm.
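
    A weighted farthest-point (maximin) selection conveys the MinMax flavor with activity weighting bolted on. The sketch below operates on raw pixels for simplicity, whereas the actual algorithm is histogram-based and minimizes WRMSE; the weight source is illustrative.

```python
import numpy as np

def minmax_palette(colors, weights, k):
    """Greedy weighted maximin palette selection: repeatedly pick the color
    whose (weight x distance-to-palette) is largest."""
    palette = [colors[int(np.argmax(weights))]]     # start from heaviest color
    d = np.full(len(colors), np.inf)
    for _ in range(k - 1):
        d = np.minimum(d, np.linalg.norm(colors - palette[-1], axis=1))
        palette.append(colors[int(np.argmax(weights * d))])
    return np.array(palette)

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, (10_000, 3)).astype(float)
activity = rng.random(len(pixels))    # e.g. local-variance-based activity weights
pal = minmax_palette(pixels, activity, k=16)        # a 16-color palette
```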

  9. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  10. Artificial Intelligence and CALL.

    ERIC Educational Resources Information Center

    Underwood, John H.

    The potential application of artificial intelligence (AI) to computer-assisted language learning (CALL) is explored. Two areas of AI that hold particular interest to those who deal with language meaning--knowledge representation and expert systems, and natural-language processing--are described and examples of each are presented. AI contribution…

  11. Wake-Up Call.

    ERIC Educational Resources Information Center

    Sartorius, Tara Cady

    2002-01-01

    Focuses on the artist, Laquita Thomson, whose inspiration are the stars and space. Discusses her series called, "Celestial Happenings: Stars Fell on Alabama." Describes one event that inspired an art work when a meteor crashed into an Alabama home. Includes lessons for various subject areas. (CMK)

  12. When Crises Call

    ERIC Educational Resources Information Center

    Kisch, Marian

    2012-01-01

    Natural disasters, as well as crises of the man-made variety, call on leaders of school districts to manage scenarios impossible to predict and for which no amount of training can adequately prepare. One thing all major crises hold in common is their far-reaching effects, which can run the gamut from personal safety and mental well-being to the…

  13. Fast and accurate metrology of multi-layered ceramic materials by an automated boundary detection algorithm developed for optical coherence tomography data

    PubMed Central

    Ekberg, Peter; Su, Rong; Chang, Ernest W.; Yun, Seok Hyun; Mattsson, Lars

    2014-01-01

    Optical coherence tomography (OCT) is useful for materials defect analysis and inspection, with the additional possibility of quantitative dimensional metrology. Here, we present an automated image-processing algorithm for OCT analysis of roll-to-roll multilayers in 3D manufacturing of advanced ceramics. It has the advantage of avoiding filtering and preset modeling, and thus introduces a simplification. The algorithm is validated for its capability of measuring the thickness of ceramic layers, extracting the boundaries of embedded features with irregular shapes, and detecting geometric deformations. The accuracy of the algorithm is very high, and the reliability is better than 1 µm when evaluated on OCT images of the same gauge-block step-height reference. The method may be suitable for industrial application to the rapid inspection of manufactured samples with high accuracy and robustness. PMID:24562018

  14. A fast semi-implicit algorithm for problems of mixed type. [initial-boundary value problems modeled by partial differential equations

    NASA Technical Reports Server (NTRS)

    Frederickson, P. O.; Wessel, W. R.

    1979-01-01

    Certain physical processes are modeled by partial differential equations which are parabolic over part of the domain and elliptic over the remainder. A family of semi-implicit algorithms which are well suited to initial-boundary value problems of this mixed type is discussed. One important feature of these algorithms is the use of an approximate inverse for the solution of the implicit linear system. A strong error analysis results in an estimate of the total error as a function of approximate inverse error e and time step h.

  15. Call it Worm Sleep.

    PubMed

    Trojanowski, Nicholas F; Raizen, David M

    2016-02-01

    The nematode Caenorhabditis elegans stops feeding and moving during a larval transition stage called lethargus and following exposure to cellular stressors. These behaviors have been termed 'sleep-like states'. We argue that these behaviors should instead be called sleep. Sleep during lethargus is similar to sleep regulated by circadian timers in insects and mammals, and sleep in response to cellular stress is similar to sleep induced by sickness in other animals. Sleep in mammals and Drosophila shows molecular and functional conservation with C. elegans sleep. The simple neuroanatomy and powerful genetic tools of C. elegans have yielded insights into sleep regulation and hold great promise for future research into sleep regulation and function. PMID:26747654

  16. Just call it "treatment".

    PubMed

    Friedmann, Peter D; Schwartz, Robert P

    2012-01-01

    Although many in the addiction treatment field use the term "medication-assisted treatment" to describe a combination of pharmacotherapy and counseling to address substance dependence, research has demonstrated that opioid agonist treatment alone is effective in patients with opioid dependence, regardless of whether they receive counseling. The time has come to call pharmacotherapy for such patients just "treatment", explicitly acknowledging that medication is an essential first-line component in the successful management of opioid dependence. PMID:23186149

  17. Scaling of echolocation call parameters in bats.

    PubMed

    Jones, G

    1999-12-01

    I investigated the scaling of echolocation call parameters (frequency, duration and repetition rate) in bats in a functional context. Low-duty-cycle bats operate with search phase cycles of usually less than 20 %. They process echoes in the time domain and are therefore intolerant of pulse-echo overlap. High-duty-cycle (>30 %) species use Doppler shift compensation, and they separate pulse and echo in the frequency domain. Call frequency scales negatively with body mass in at least five bat families. Pulse duration scales positively with mass in low-duty-cycle quasi-constant-frequency (QCF) species because the large aerial-hawking species that emit these signals fly fast in open habitats. They therefore detect distant targets and experience pulse-echo overlap later than do smaller bats. Pulse duration also scales positively with mass in the Hipposideridae, which show at least partial Doppler shift compensation. Pulse repetition rate corresponds closely with wingbeat frequency in QCF bat species that fly relatively slowly. Larger, fast-flying species often skip pulses when detecting distant targets. There is probably a trade-off between call intensity and repetition rate because 'whispering' bats (and hipposiderids) produce several calls per predicted wingbeat and because batches of calls are emitted per wingbeat during terminal buzzes. Severe atmospheric attenuation at high frequencies limits the range of high-frequency calls. Low-duty-cycle bats that call at high frequencies must therefore use short pulses to avoid pulse-echo overlap. Rhinolophids escape this constraint by Doppler shift compensation and, importantly, can exploit advantages associated with the emission of both high-frequency and long-duration calls. Low frequencies are unsuited for the detection of small prey, and low repetition rates may limit prey detection rates. Echolocation parameters may therefore constrain maximum body size in aerial-hawking bats. PMID:10562518

  18. Automated call tracking systems

    SciTech Connect

    Hardesty, C.

    1993-03-01

    User Services groups are on the front line with user support. We are the first to hear about problems. The speed, accuracy, and intelligence with which we respond determines the user's perception of our effectiveness and our commitment to quality and service. To keep pace with the complex changes at our sites, we must have tools to help build a knowledge base of solutions, a history base of our users, and a record of every problem encountered. Recently, I completed a survey of twenty sites similar to the National Energy Research Supercomputer Center (NERSC). This informal survey reveals that 27% of the sites use a paper system to log calls, 60% employ homegrown automated call tracking systems, and 13% use a vendor-supplied system. Fifty-four percent of those using homegrown systems are exploring the merits of switching to a vendor-supplied system. The purpose of this paper is to provide guidelines for evaluating a call tracking system. In addition, insights are provided to assist User Services groups in selecting a system that fits their needs.

  19. POVME: An Algorithm for Measuring Binding-Pocket Volumes

    PubMed Central

    Durrant, Jacob D.; de Oliveira, César Augusto F.; McCammon, J. Andrew

    2011-01-01

    Researchers engaged in computer-aided drug design often wish to measure the volume of a ligand-binding pocket in order to predict pharmacology. We have recently developed a simple algorithm, called POVME (POcket Volume MEasurer), for this purpose. POVME is implemented in Python, fast, and freely available. To demonstrate its utility, we use the new algorithm to study three members of the matrix-metalloproteinase family of proteins. Despite the structural similarity of these proteins, differences in binding-pocket dynamics are easily identified. PMID:21147010
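
    The grid-counting idea behind pocket-volume measurement can be sketched as follows: lay a grid over a user-defined inclusion region and count grid points not buried inside any protein atom. The inclusion sphere, uniform atom radius, and spacing are illustrative choices, not POVME's actual interface.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
atoms = rng.uniform(-10, 10, size=(300, 3))       # stand-in atom coordinates (Å)
atom_radius = 1.7                                 # crude uniform vdW radius (Å)

center, radius, spacing = np.zeros(3), 6.0, 0.5   # inclusion sphere and grid
axis = np.arange(-radius, radius + spacing, spacing)
gx, gy, gz = np.meshgrid(axis, axis, axis, indexing="ij")
grid = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()]) + center
grid = grid[np.linalg.norm(grid - center, axis=1) <= radius]

# keep grid points farther than the vdW radius from every atom
dist, _ = cKDTree(atoms).query(grid)
pocket = grid[dist > atom_radius]
print("pocket volume ≈", len(pocket) * spacing**3, "Å^3")
```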

  20. Parallel scheduling algorithms

    SciTech Connect

    Dekel, E.; Sahni, S.

    1983-01-01

    Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.

  1. Phasor algorithms of the SIM fringe estimation

    NASA Technical Reports Server (NTRS)

    Pan, Xiaopei

    2003-01-01

    The Space Interferometry Mission (SIM) will provide unprecedented micro-arcsecond (µas) precision to search for extra-solar planets and possible life in the universe. SIM will also revolutionize our understanding of the dynamics and evolution of the local universe through hundred-fold improvements in inertial astrometry measurements. SIM has two so-called guide interferometers to provide stable inertial orientation knowledge of the baseline, and a science interferometer to measure target fringes. The guide and science measurements are based on fringe phase measurements using a CCD detector. One of the key issues for SIM is to develop a new algorithm for the calculation of fringe parameters. Not only do the astrometric results need this new algorithm; real-time fringe tracking also requires a new method to calculate phase and visibility quickly and accurately. The formulas for the phasor algorithms for fringe estimation are presented. The signal-to-noise performance of the fringe quadratures is demonstrated. The advantages of phasor algorithms for fast fringe tracking and on-board data compression are discussed.

  2. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm is computationally efficient, evaluating the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
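
    The small-distance Fourier-domain evaluation mentioned above is the classical angular spectrum method, sketched below. The paper's actual contribution (the complex-Gaussian kernel approximation applied with the unequally spaced FFT for intermediate distances) is not reproduced here; aperture and grid parameters are illustrative.

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate field u0 between parallel planes via the Fourier domain;
    evanescent components (negative argument under the root) are cut off."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)           # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# e.g. a plane wave through a circular aperture, propagated 5 mm
n, dx, lam = 512, 2e-6, 633e-9
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x, indexing="ij")
u0 = (X**2 + Y**2 < (200e-6) ** 2).astype(complex)
u1 = angular_spectrum(u0, lam, dx, 5e-3)
```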

  3. Cascade Error Projection: A New Learning Algorithm

    NASA Technical Reports Server (NTRS)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware-implementable learning algorithm are proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  4. QuateXelero: An Accelerated Exact Network Motif Detection Algorithm

    PubMed Central

    Khakabimamaghani, Sahand; Sharafuddin, Iman; Dichter, Norbert; Koch, Ina; Masoudi-Nejad, Ali

    2013-01-01

    Finding motifs in biological, social, technological, and other types of networks has become a widespread method to gain more knowledge about these networks’ structure and function. However, this task is very computationally demanding, because it is closely tied to graph isomorphism, an NP problem (not yet known to belong to either P or the NP-complete subset). Accordingly, this research endeavors to decrease the need to call the NAUTY isomorphism detection method, which is the most time-consuming step in many existing algorithms. The work provides an extremely fast motif detection algorithm called QuateXelero, which has a quaternary tree data structure at its heart. The proposed algorithm is based on the well-known ESU (FANMOD) motif detection algorithm. The results of experiments on some standard model networks confirm the overall superiority of the proposed algorithm, QuateXelero, compared with two of the fastest existing algorithms, G-Tries and Kavosh. QuateXelero is especially fast in constructing the central data structure of the algorithm from scratch based on the input network. PMID:23874498

  6. Scalable Nearest Neighbor Algorithms for High Dimensional Data.

    PubMed

    Muja, Marius; Lowe, David G

    2014-11-01

    For many computer vision and machine learning problems, large training sets are key for good performance. However, the most computationally expensive part of many computer vision and machine learning algorithms consists of finding nearest neighbor matches to high dimensional vectors that represent the training data. We propose new algorithms for approximate nearest neighbor matching and evaluate and compare them with previous algorithms. For matching high dimensional features, we find two algorithms to be the most efficient: the randomized k-d forest and a new algorithm proposed in this paper, the priority search k-means tree. We also propose a new algorithm for matching binary features by searching multiple hierarchical clustering trees and show it outperforms methods typically used in the literature. We show that the optimal nearest neighbor algorithm and its parameters depend on the data set characteristics and describe an automated configuration procedure for finding the best algorithm to search a particular data set. In order to scale to very large data sets that would otherwise not fit in the memory of a single machine, we propose a distributed nearest neighbor matching framework that can be used with any of the algorithms described in the paper. All this research has been released as an open source library called fast library for approximate nearest neighbors (FLANN), which has been incorporated into OpenCV and is now one of the most popular libraries for nearest neighbor matching. PMID:26353063

  7. Fast-earth: A global image caching architecture for fast access to remote-sensing data

    NASA Astrophysics Data System (ADS)

    Talbot, B. G.; Talbot, L. M.

    We introduce Fast-Earth, a novel server architecture that enables rapid access to remote sensing data. Fast-Earth subdivides a WGS-84 model of the earth into small 400 × 400 meter regions with fixed locations, called plats. The resulting 3,187,932,913 indexed plats are accessed with a rapid look-up algorithm. Whereas many traditional databases store large original images as a series by collection time, requiring long searches and slow access times for user queries, the Fast-Earth architecture enables rapid access. We have prototyped a system in conjunction with a Fast-Responder mobile app to demonstrate and evaluate the concepts. We found that new data could be indexed rapidly in about 10 minutes/terabyte, high-resolution images could be chipped in less than a second, and 250 kB image chips could be delivered over a 3G network in about 3 seconds. The prototype server implemented on a very small computer could handle 100 users, but the concept is scalable. Fast-Earth enables dramatic advances in rapid dissemination of remote sensing data for mobile platforms as well as desktop enterprises.
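
    The heart of such a scheme is an O(1) mapping from a WGS-84 coordinate to a plat index. The sketch below illustrates the idea with an assumed uniform angular grid sized to roughly 400 m at the equator; the real Fast-Earth tiling (and its 3,187,932,913-plat count) is not reproduced here.

        # Illustrative plat indexing: a uniform angular grid, roughly 400 m
        # per cell at the equator (Earth's circumference is about 40,075 km).
        PLAT_DEG = 360.0 * 0.4 / 40075.0        # degrees spanned by one plat
        COLS = int(round(360.0 / PLAT_DEG))     # plats per ring of longitude

        def plat_index(lat, lon):
            """Map a WGS-84 (lat, lon) in degrees to a fixed plat index."""
            row = int((lat + 90.0) / PLAT_DEG)
            col = int((lon + 180.0) / PLAT_DEG) % COLS
            return row * COLS + col

        # Points a few meters apart land in the same plat; distant ones do not.
        print(plat_index(38.8977, -77.0365))
        print(plat_index(38.8978, -77.0366))    # same plat as above
        print(plat_index(48.8584, 2.2945))      # a different plat entirely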

  8. News Conference: ASE '09 invigorates participants 34th Stirling Physics Meeting: IOP in Scotland meets to debate curriculum and celebrate success From the News to the Classroom: A positive outlook for science as Obama takes up US presidency Workshop: Nanoschool educates Finnish teachers CERN: Act fast: High School Teacher Programme calls for applicants London Physics Teachers' Network: Teachers' Network Day has an international flavour CERN: LHC timetabled to restart in the summer

    NASA Astrophysics Data System (ADS)

    2009-03-01

    Conference: ASE '09 invigorates participants 34th Stirling Physics Meeting: IOP in Scotland meets to debate curriculum and celebrate success From the News to the Classroom: A positive outlook for science as Obama takes up US presidency Workshop: Nanoschool educates Finnish teachers CERN: Act fast: High School Teacher Programme calls for applicants London Physics Teachers' Network: Teachers' Network Day has an international flavour CERN: LHC timetabled to restart in the summer

  9. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration, either as a standalone method for simpler problems or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
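
    The heart of ELM training is a single least-squares solve: input weights are random and fixed, and only the output weights are fit. A minimal numpy sketch of that step, on a generic dense dataset rather than the paper's receptive-field-sparsified MNIST setup:

        import numpy as np

        def elm_train(X, Y, n_hidden=1000, seed=0):
            """ELM training: random fixed input weights, hidden activations,
            then output weights by least squares."""
            rng = np.random.default_rng(seed)
            W = rng.standard_normal((X.shape[1], n_hidden))  # never trained
            b = rng.standard_normal(n_hidden)
            H = np.tanh(X @ W + b)                           # hidden layer
            beta, *_ = np.linalg.lstsq(H, Y, rcond=None)     # output weights
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        # Smoke test: learn XOR with one-hot targets.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
        Y = np.eye(2)[[0, 1, 1, 0]]
        W, b, beta = elm_train(X, Y, n_hidden=20)
        print(elm_predict(X, W, b, beta).argmax(axis=1))     # [0 1 1 0]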

  10. Is alarm calling risky? Marmots avoid calling from risky places

    PubMed Central

    Collier, Travis C.; Blumstein, Daniel T.; Girod, Lewis; Taylor, Charles E.

    2010-01-01

    Alarm calling is common in many species. A prevalent assumption is that calling puts the vocalizing individual at increased risk of predation. If calling is indeed costly, we need special explanations for its evolution and maintenance. In some, but not all species, callers vocalize away from safety and thus may be exposed to an increased risk of predation. However, for species that emit bouts with one or a few calls, it is often difficult to identify the caller and find the precise location where a call was produced. We analyzed the spatial dynamics of yellow-bellied marmot (Marmota flaviventris) alarm calling using an acoustic localization system to determine the location from which calls were emitted. Marmots almost always called from positions close to the safety of their burrows, and, if they produced more than one alarm call, tended to end their calling bouts closer to safety than they started them. These results suggest that for this species, potential increased predation risk from alarm calling is greatly mitigated and indeed calling may have limited predation costs. PMID:21116460

  11. Fast and practical parallel polynomial interpolation

    SciTech Connect

    Egecioglu, O.; Gallopoulos, E.; Koc, C.K.

    1987-01-01

    We present fast and practical parallel algorithms for the computation and evaluation of interpolating polynomials. The algorithms make use of fast parallel prefix techniques for the calculation of divided differences in the Newton representation of the interpolating polynomial. For n + 1 given input pairs, the proposed interpolation algorithm requires 2 log(n + 1) + 2 parallel arithmetic steps and circuit size O(n^2). The algorithms are numerically stable, and their floating-point implementation results in error accumulation similar to that of the widely used serial algorithms. This is in contrast to other fast serial and parallel interpolation algorithms, which are subject to much larger roundoff. We demonstrate that in a distributed memory environment, a cube connected system is very suitable for the algorithms' implementation, exhibiting very small communication cost. As further advantages we note that our techniques do not require equidistant points, preconditioning, or use of the Fast Fourier Transform. 21 refs., 4 figs.
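
    For reference, the serial building blocks being parallelized here, the divided-difference table and Horner-style evaluation of the Newton form, fit in a few lines of numpy; the paper's contribution, evaluating these recurrences with parallel prefix, is not shown.

        import numpy as np

        def divided_differences(x, y):
            """Newton coefficients c such that
            p(t) = c[0] + c[1](t-x0) + c[2](t-x0)(t-x1) + ..."""
            c = np.array(y, dtype=float)
            for j in range(1, len(x)):
                c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
            return c

        def newton_eval(c, x, t):
            """Horner-style evaluation of the Newton form at t."""
            p = c[-1]
            for ck, xk in zip(c[-2::-1], x[-2::-1]):
                p = p * (t - xk) + ck
            return p

        x = np.array([0.0, 1.0, 2.0, 4.0])      # note: not equidistant
        y = x**3 - 2.0 * x                      # samples of a cubic
        c = divided_differences(x, y)
        print(newton_eval(c, x, 3.0))           # 21.0, exactly reconstructed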

  12. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    PubMed Central

    Wang, Qing; Chen, Hua; Zhao, Guohuang; Chen, Bin; Wang, Pichao

    2013-01-01

    In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth-order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed, combining a fast MUSIC-like algorithm termed the modified fourth-order cumulants MUSIC (MFOC-MUSIC) algorithm with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of the virtual array elements; that is, the effective array aperture of a physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix, and thus the capacity of DOA estimation degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, making it resemble the ideal matrix whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and the simulation results are presented. The simulations show that, in comparison with the MFOC-MUSIC algorithm, the TFOC-MUSIC algorithm yields excellent performance in both spatially-white and spatially-colored noise environments. PMID:23296331

  13. New Effective Multithreaded Matching Algorithms

    SciTech Connect

    Manne, Fredrik; Halappanavar, Mahantesh

    2014-05-19

    Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
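
    For context, the classic sequential 1/2-approximation that such algorithms compete with is the greedy scan in decreasing weight order; this is the textbook baseline, not the paper's new algorithm.

        def greedy_matching(edges):
            """Classic 1/2-approximation for maximum weight matching: take
            edges in decreasing weight whenever both endpoints are free."""
            matched, matching, total = set(), [], 0.0
            for w, u, v in sorted(edges, reverse=True):
                if u not in matched and v not in matched:
                    matching.append((u, v))
                    matched.update((u, v))
                    total += w
            return matching, total

        # Path a-b-c-d with weights 2, 3, 2: greedy keeps (b, c) for weight 3,
        # while the optimum {(a, b), (c, d)} has weight 4; 3 >= 4/2 as promised.
        edges = [(2.0, "a", "b"), (3.0, "b", "c"), (2.0, "c", "d")]
        print(greedy_matching(edges))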

  14. [The design of handheld fast ECG detector].

    PubMed

    Shi, Bo; Zhang, Genxuan; Tsau, Young

    2013-03-01

    A new handheld fast ECG detector was designed, based on a low-gain amplifier, a high-resolution analog-to-digital converter, a real-time digital filter, and a fast P-QRS-T wave detection and abstraction algorithm. The results showed that the ECG detector can meet the requirements of fast detection of heart rate and ECG P-QRS-T waveforms. PMID:23777065

  15. NMDAR-Dependent Control of Call Duration in Xenopus laevis

    PubMed Central

    Katzen, Abraham W.; Rhodes, Heather J.; Yamaguchi, Ayako

    2010-01-01

    Many rhythmic behaviors, such as locomotion and vocalization, involve temporally dynamic patterns. How does the brain generate temporal complexity? Here, we use the vocal central pattern generator (CPG) of Xenopus laevis to address this question. Isolated brains can elicit fictive vocalizations, allowing us to study the CPG in vitro. The X. laevis advertisement call is temporally modulated; calls consist of rhythmic click trills that alternate between fast (∼60 Hz) and slow (∼30 Hz) rates. We investigated the role of two CPG nuclei—the laryngeal motor nucleus (n.IX–X) and the dorsal tegmental area of the medulla (DTAM)—in setting rhythm frequency and call durations. We discovered a local field potential wave in DTAM that coincides with fictive fast trills and phasic activity that coincides with fictive clicks. After disrupting n.IX–X connections, the wave persists, whereas phasic activity disappears. Wave duration was temperature dependent and correlated with fictive fast trills. This correlation persisted when wave duration was modified by temperature manipulations. Selectively cooling DTAM, but not n.IX–X, lengthened fictive call and fast trill durations, whereas cooling either nucleus decelerated the fictive click rate. The N-methyl-d-aspartate receptor (NMDAR) antagonist dAPV blocked waves and fictive fast trills, suggesting that the wave controls fast trill activation and, consequently, call duration. We conclude that two functionally distinct CPG circuits exist: 1) a pattern generator in DTAM that determines call duration and 2) a rhythm generator (spanning DTAM and n.IX–X) that determines click rates. The newly identified DTAM pattern generator provides an excellent model for understanding NMDAR-dependent rhythmic circuits. PMID:20393064

  16. Smarter clustering methods for SNP genotype calling

    PubMed Central

    Lin, Yan; Tseng, George C.; Cheong, Soo Yeon; Bean, Lora J. H.; Sherman, Stephanie L.; Feingold, Eleanor

    2008-01-01

    Motivation: Most genotyping technologies for single nucleotide polymorphism (SNP) markers use standard clustering methods to ‘call’ the SNP genotypes. These methods are not always optimal in distinguishing the genotype clusters of a SNP because they do not take advantage of specific features of the genotype calling problem. In particular, when family data are available, pedigree information is ignored. Furthermore, prior information about the distribution of the measurements for each cluster can be used to choose an appropriate model-based clustering method and can significantly improve the genotype calls. One special genotyping problem that has never been discussed in the literature is that of genotyping of trisomic individuals, such as individuals with Down syndrome. Calling trisomic genotypes is a more complicated problem, and the addition of external information becomes very important. Results: In this article, we discuss the impact of incorporating external information into clustering algorithms to call the genotypes for both disomic and trisomic data. We also propose two new methods to call genotypes using family data. One is a modification of the K-means method and uses the pedigree information by updating all members of a family together. The other is a likelihood-based method that combines the Gaussian or beta-mixture model with pedigree information. We compare the performance of these two methods and some other existing methods using simulation studies. We also compare the performance of these methods on a real dataset generated by the Illumina platform (www.illumina.com). Availability: The R code for the family-based genotype calling methods (SNPCaller) is available to be downloaded from the following website: http://watson.hgen.pitt.edu/register. Contact: liny@upmc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:18826959

  17. FAST Conformational Searches by Balancing Exploration/Exploitation Trade-Offs.

    PubMed

    Zimmerman, Maxwell I; Bowman, Gregory R

    2015-12-01

    Molecular dynamics simulations are a powerful means of understanding conformational changes. However, it is still difficult to simulate biologically relevant time scales without the use of specialized supercomputers. Here, we introduce a goal-oriented sampling method, called fluctuation amplification of specific traits (FAST), for extending the capabilities of commodity hardware. This algorithm rapidly searches conformational space for structures with desired properties by balancing trade-offs between focused searches around promising solutions (exploitation) and trying novel solutions (exploration). FAST was inspired by the hypothesis that many physical properties have an overall gradient in conformational space, akin to the energetic gradients that are known to guide proteins to their folded states. For example, we expect that transitioning from a conformation with a small solvent-accessible surface area to one with a large surface area will require passing through a series of conformations with steadily increasing surface areas. We demonstrate that such gradients are common through retrospective analysis of existing Markov state models (MSMs). Then we design the FAST algorithm to exploit these gradients to find structures with desired properties by (1) recognizing and amplifying structural fluctuations along gradients that optimize a selected physical property whenever possible, (2) overcoming barriers that interrupt these overall gradients, and (3) rerouting to discover alternative paths when faced with insurmountable barriers. To test FAST, we compare its performance to other methods for three common types of problems: (1) identifying unexpected binding pockets, (2) discovering the preferred paths between specific structures, and (3) folding proteins. Our conservative estimate is that FAST outperforms conventional simulations and an adaptive sampling algorithm by at least an order of magnitude. Furthermore, FAST yields both the proper thermodynamics and kinetics.

  18. I. Thermal evolution of Ganymede and implications for surface features. II. Magnetohydrodynamic constraints on deep zonal flow in the giant planets. III. A fast finite-element algorithm for two-dimensional photoclinometry

    SciTech Connect

    Kirk, R.L.

    1987-01-01

    Thermal evolution of Ganymede from a hot start is modeled. On cooling ice I forms above the liquid H2O and dense ices at higher entropy below it. A novel diapiric instability is proposed to occur if the ocean thins enough, mixing these layers and perhaps leading to resurfacing and groove formation. Rising warm-ice diapirs may cause a dramatic heat pulse and fracturing at the surface, and provide material for surface flows. Timing of the pulse depends on ice rheology but could agree with crater-density dates for resurfacing. Origins of the Ganymede-Callisto dichotomy in light of the model are discussed. Based on estimates of the conductivity of H2 (Jupiter, Saturn) and H2O (Uranus, Neptune), the zonal winds of the giant planets will, if they penetrate below the visible atmosphere, interact with the magnetic field well outside the metallic core. The scaling argument is supported by a model with zonal velocity constant on concentric cylinders, the Lorentz torque on each balanced by viscous stresses. The problem of two-dimensional photoclinometry, i.e. reconstruction of a surface from its image, is formulated in terms of finite elements and a fast algorithm using Newton-SOR iteration accelerated by multigridding is presented.

  19. Fast and Adaptive Sparse Precision Matrix Estimation in High Dimensions

    PubMed Central

    Liu, Weidong; Luo, Xi

    2014-01-01

    This paper proposes a new method for estimating sparse precision matrices in the high dimensional setting. It has been popular to study fast computation and adaptive procedures for this problem. We propose a novel approach, called Sparse Column-wise Inverse Operator, to address these two issues. We analyze an adaptive procedure based on cross validation, and establish its convergence rate under the Frobenius norm. The convergence rates under other matrix norms are also established. This method also enjoys the advantage of fast computation for large-scale problems, via a coordinate descent algorithm. Numerical merits are illustrated using both simulated and real datasets. In particular, it performs favorably on an HIV brain tissue dataset and an ADHD resting-state fMRI dataset. PMID:25750463

  20. Fast Point-Feature Label Placement for Dynamic Visualizations

    SciTech Connect

    Mote, Kevin D.

    2008-01-21

    This paper presents a new approach for automated feature-point label de-confliction. It outlines a method for labeling the point-features on dynamic maps in real time without a pre-processing stage. The algorithm described provides an efficient, scalable, and exceptionally fast method of labeling interactive charts and diagrams, offering interaction speeds of multiple frames per second on maps with tens of thousands of nodes. To accomplish this, the algorithm employs an efficient approach, called the “trellis strategy,” along with a unique label-candidate cost analysis, to determine the “least expensive” label configuration. The speed and scalability of this approach make it suitable for the complex and ever-accelerating demands of interactive visual analytic applications.

  1. A New Fast Algorithm of Stereo Vision

    NASA Astrophysics Data System (ADS)

    Shen, Jun; Castan, Serge

    1986-06-01

    In this paper, the DRF (Difference of Recursive Filters) method is proposed for stereo vision. One obtains the BLIs (Binary Laplacian Images) of the stereo-pair images by the DRF method, and the disparities are found by correlation between the BLIs. Some experimental results are also presented.

  2. Improved multiprocessor garbage collection algorithms

    SciTech Connect

    Newman, I.A.; Stallard, R.P.; Woodward, M.C.

    1983-01-01

    Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.

  3. Distributed Function Mining for Gene Expression Programming Based on Fast Reduction.

    PubMed

    Deng, Song; Yue, Dong; Yang, Le-chan; Fu, Xiong; Feng, Ya-zhou

    2016-01-01

    For high-dimensional and massive data sets, traditional centralized gene expression programming (GEP) or improved algorithms lead to increased run-time and decreased prediction accuracy. To solve this problem, this paper proposes a new improved algorithm called distributed function mining for gene expression programming based on fast reduction (DFMGEP-FR). In DFMGEP-FR, fast attribution reduction in binary search algorithms (FAR-BSA) is proposed to quickly find the optimal attribution set, and the function consistency replacement algorithm is given to solve the integration of the local function model. Thorough comparative experiments for DFMGEP-FR, centralized GEP and the parallel gene expression programming algorithm based on simulated annealing (parallel GEPSA) are included in this paper. For the waveform, mushroom, connect-4 and musk datasets, the comparative results show that the average time consumption of DFMGEP-FR drops by 89.09%, 88.85%, 85.79% and 93.06%, respectively, in contrast to centralized GEP, and by 12.5%, 8.42%, 9.62% and 13.75%, respectively, compared with parallel GEPSA. Six well-studied UCI test data sets demonstrate the efficiency and capability of our proposed DFMGEP-FR algorithm for distributed function mining. PMID:26751200

  4. Fast unmixing of multispectral optoacoustic data with vertex component analysis

    NASA Astrophysics Data System (ADS)

    Luís Deán-Ben, X.; Deliolanis, Nikolaos C.; Ntziachristos, Vasilis; Razansky, Daniel

    2014-07-01

    Multispectral optoacoustic tomography enhances the performance of single-wavelength imaging in terms of sensitivity and selectivity in the measurement of the biodistribution of specific chromophores, thus enabling functional and molecular imaging applications. Spectral unmixing algorithms are used to decompose multi-spectral optoacoustic data into a set of images representing the distribution of each individual chromophoric component, while the particular algorithm employed determines the sensitivity and speed of data visualization. Here we suggest using vertex component analysis (VCA), a method with demonstrated good performance in hyperspectral imaging, as a fast blind unmixing algorithm for multispectral optoacoustic tomography. The performance of the method is subsequently compared with a previously reported blind unmixing procedure in optoacoustic tomography based on a combination of principal component analysis (PCA) and independent component analysis (ICA). As in most practical cases the absorption spectra of the imaged chromophores and contrast agents are known or can be determined using e.g. a spectrophotometer, we further investigate the so-called semi-blind approach, in which the a priori known spectral profiles are included in a modified version of the algorithm termed constrained VCA. The performance of this approach is also analysed in numerical simulations and experimental measurements. It has been determined that, while the standard version of the VCA algorithm can attain sensitivity similar to that of the PCA-ICA approach with more robust and faster performance, using the a priori measured spectral information within the constrained VCA does not generally render improvements in detection sensitivity in experimental optoacoustic measurements.
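
    Once the absorption spectra are known, the per-pixel semi-blind unmixing step reduces to a small constrained least-squares problem. A sketch using non-negative least squares, with synthetic spectra and data standing in for real measurements (the constrained-VCA endmember extraction itself is not reproduced):

        import numpy as np
        from scipy.optimize import nnls

        # Assumed known absorption spectra: 4 wavelengths x 2 chromophores.
        S = np.array([[0.9, 0.1],
                      [0.6, 0.4],
                      [0.3, 0.7],
                      [0.1, 0.9]])

        # Synthetic pixels: known mixtures of the two spectra plus noise.
        truth = np.array([[1.0, 0.0], [0.5, 0.5], [0.2, 0.8]])
        rng = np.random.default_rng(0)
        pixels = truth @ S.T + 0.01 * rng.standard_normal((3, 4))

        # Per-pixel unmixing with non-negativity on the abundances.
        abundances = np.array([nnls(S, p)[0] for p in pixels])
        print(np.round(abundances, 2))          # close to the rows of `truth`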

  5. HDF5-FastQuery: An API for Simplifying Access to Data Storage,Retrieval, Indexing and Querying

    SciTech Connect

    Bethel, E. Wes; Gosink, Luke; Shalf, John; Stockinger, Kurt; Wu,Kesheng

    2006-06-15

    This work focuses on research and development activities that bridge a gap between fundamental data management technology index, query, storage and retrieval and use of such technology in computational and computer science algorithms and applications. The work has resulted in a streamlined applications programming interface (API) that simplifies data storage and retrieval using the HDF5 data I/O library, and eases use of the FastBit compressed bitmap indexing software for data indexing/querying. The API, which we call HDF5-FastQuery, will have broad applications in domain sciences as well as associated data analysis and visualization applications.

  6. Fast Steerable Principal Component Analysis

    PubMed Central

    Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit

    2016-01-01

    Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL^3 + L^4), while existing algorithms take O(nL^4). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801

  7. Fast Parallel Computation Of Multibody Dynamics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Kwan, Gregory L.; Bagherzadeh, Nader

    1996-01-01

    Constraint-force algorithm fast, efficient, parallel-computation algorithm for solving forward dynamics problem of multibody system like robot arm or vehicle. Solves problem in minimum time proportional to log(N) by use of optimal number of processors proportional to N, where N is number of dynamical degrees of freedom: in this sense, constraint-force algorithm both time-optimal and processor-optimal parallel-processing algorithm.

  8. A Fast Edge Preserving Bayesian Reconstruction Method for Parallel Imaging Applications in Cardiac MRI

    PubMed Central

    Singh, Gurmeet; Raj, Ashish; Kressler, Bryan; Nguyen, Thanh D.; Spincemaille, Pascal; Zabih, Ramin; Wang, Yi

    2010-01-01

    Among recent parallel MR imaging reconstruction advances, a Bayesian method called Edge-preserving Parallel Imaging with GRAph cut Minimization (EPIGRAM) has been demonstrated to significantly improve signal to noise ratio (SNR) compared to conventional regularized sensitivity encoding (SENSE) method. However, EPIGRAM requires a large number of iterations in proportion to the number of intensity labels in the image, making it computationally expensive for high dynamic range images. The objective of this study is to develop a Fast EPIGRAM reconstruction based on the efficient binary jump move algorithm that provides a logarithmic reduction in reconstruction time while maintaining image quality. Preliminary in vivo validation of the proposed algorithm is presented for 2D cardiac cine MR imaging and 3D coronary MR angiography at acceleration factors of 2-4. Fast EPIGRAM was found to provide similar image quality to EPIGRAM and maintain the previously reported SNR improvement over regularized SENSE, while reducing EPIGRAM reconstruction time by 25-50 times. PMID:20939095

  9. A Fast-Time Simulation Environment for Airborne Merging and Spacing Research

    NASA Technical Reports Server (NTRS)

    Bussink, Frank J. L.; Doble, Nathan A.; Barmore, Bryan E.; Singer, Sharon

    2005-01-01

    As part of NASA's Distributed Air/Ground Traffic Management (DAG-TM) effort, NASA Langley Research Center is developing concepts and algorithms for merging multiple aircraft arrival streams and precisely spacing aircraft over the runway threshold. An airborne tool has been created for this purpose, called Airborne Merging and Spacing for Terminal Arrivals (AMSTAR). To evaluate the performance of AMSTAR and complement human-in-the-loop experiments, a simulation environment has been developed that enables fast-time studies of AMSTAR operations. The environment is based on TMX, a multiple aircraft desktop simulation program created by the Netherlands National Aerospace Laboratory (NLR). This paper reviews the AMSTAR concept, discusses the integration of the AMSTAR algorithm into TMX and the enhancements added to TMX to support fast-time AMSTAR studies, and presents initial simulation results.

  10. Fast polyhedral cell sorting for interactive rendering of unstructured grids

    SciTech Connect

    Combra, J; Klosowski, J T; Max, N; Silva, C T; Williams, P L

    1998-10-30

    Direct volume rendering based on projective methods works by projecting, in visibility order, the polyhedral cells of a mesh onto the image plane, and incrementally compositing each cell's color and opacity into the final image. Crucial to this method is the computation of a visibility ordering of the cells. If the mesh is "well-behaved" (acyclic and convex), then the MPVO method of Williams provides a very fast sorting algorithm; however, this method only computes an approximate ordering in general datasets, resulting in visual artifacts when rendered. A recent method of Silva et al. removed the assumption that the mesh is convex, by means of a sweep algorithm used in conjunction with the MPVO method; their algorithm is substantially faster than previous exact methods for general meshes. In this paper we propose a new technique, which we call BSP-XMPVO, which is based on a fast and simple way of using binary space partitions on the boundary elements of the mesh to augment the ordering produced by MPVO. Our results are shown to be orders of magnitude better than previous exact methods of sorting cells.

  11. The Xmath Integration Algorithm

    ERIC Educational Resources Information Center

    Bringslid, Odd

    2009-01-01

    The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…

  12. Uni10: an open-source library for tensor network algorithms

    NASA Astrophysics Data System (ADS)

    Kao, Ying-Jer; Hsieh, Yun-Da; Chen, Pochung

    2015-09-01

    We present an object-oriented open-source library for developing tensor network algorithms written in C++ called Uni10. With Uni10, users can build a symmetric tensor from a collection of bonds, while the bonds are constructed from a list of quantum numbers associated with different quantum states. It is easy to label and permute the indices of the tensors and access a block associated with a particular quantum number. Furthermore a network class is used to describe arbitrary tensor network structures and to perform network contractions efficiently. We give an overview of the basic structure of the library and the hierarchy of the classes. We present examples of the construction of a spin-1 Heisenberg Hamiltonian and the implementation of the tensor renormalization group algorithm to illustrate the basic usage of the library. The library described here is particularly well suited to exploring and rapidly prototyping novel tensor network algorithms and to implementing highly efficient codes for existing algorithms.

  13. A new algorithm for attitude-independent magnetometer calibration

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Shuster, Malcolm D.

    1994-01-01

    A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.
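
    The attitude-independent trick is that the magnitude of the measurement, after bias removal, must match the reference field magnitude: |m_k - b|^2 = |r_k|^2. Expanding gives 2 m_k . b - |b|^2 = |m_k|^2 - |r_k|^2, which is linear in b and |b|^2 and can be solved by least squares. A minimal batch sketch of that linearization (not the paper's statistically refined estimator):

        import numpy as np

        def magnetometer_bias(m, r_norm):
            """Estimate bias b from measurements m (N x 3) and reference
            field magnitudes r_norm (N,), via the linear system
            [2 m_k, -1] @ (b, |b|^2) = |m_k|^2 - r_k^2."""
            A = np.hstack([2.0 * m, -np.ones((len(m), 1))])
            y = (m**2).sum(axis=1) - r_norm**2
            z, *_ = np.linalg.lstsq(A, y, rcond=None)
            return z[:3]                         # z[3] estimates |b|^2

        # Synthetic check: varying field directions, fixed 0.5 magnitude.
        rng = np.random.default_rng(1)
        true_b = np.array([0.3, -0.2, 0.1])
        B = rng.standard_normal((200, 3))
        B *= 0.5 / np.linalg.norm(B, axis=1, keepdims=True)
        m = B + true_b + 0.001 * rng.standard_normal(B.shape)
        print(magnetometer_bias(m, np.full(200, 0.5)))   # close to true_b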

  14. CALL Essentials: Principles and Practice in CALL Classrooms

    ERIC Educational Resources Information Center

    Egbert, Joy

    2005-01-01

    Computers and the Internet offer innovative teachers exciting ways to enhance their pedagogy and capture their students' attention. These technologies have created a growing field of inquiry, computer-assisted language learning (CALL). As new technologies have emerged, teaching professionals have adapted them to support teachers and learners in…

  15. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
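
    As a concrete illustration of the basic concepts, a complete genetic algorithm fits in a page: tournament selection, one-point crossover, and bit-flip mutation, here maximizing the number of ones in a bit string (the classic OneMax toy problem; all parameters are arbitrary).

        import random

        GENES, POP, GENERATIONS = 32, 50, 60
        fitness = lambda ind: sum(ind)              # OneMax: count the 1 bits

        def tournament(pop):
            """Pick the fittest of three random individuals."""
            return max(random.sample(pop, 3), key=fitness)

        random.seed(0)
        pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
        for _ in range(GENERATIONS):
            nxt = []
            for _ in range(POP):
                a, b = tournament(pop), tournament(pop)
                cut = random.randrange(1, GENES)    # one-point crossover
                child = a[:cut] + b[cut:]
                for i in range(GENES):              # bit-flip mutation
                    if random.random() < 1.0 / GENES:
                        child[i] ^= 1
                nxt.append(child)
            pop = nxt
        print(max(map(fitness, pop)), "of", GENES)  # typically reaches 32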

  16. The fast Hartley transform

    NASA Astrophysics Data System (ADS)

    Mar, Mark H.

    1990-11-01

    The purpose of this paper is to report the results of testing the fast Hartley transform (FHT) and comparing it with the fast Fourier transform (FFT). All definitions and equations in this paper are quoted from the cited references. The author developed a FORTRAN program that computes the Hartley transform, tested it with a generalized electromagnetic pulse waveform, and verified the results against known values. Fourier analysis is an essential tool for obtaining frequency domain information from transient time domain signals. The FFT is a popular tool for processing many of today's audio and electromagnetic signals. System frequency response, digital filtering of signals, and signal power spectra are the most practical applications of the FFT. However, the Fourier integral transform of the FFT requires computer resources appropriate for complex arithmetic operations. On the other hand, the FHT can accomplish the same results faster and requires fewer computer resources. The FHT is twice as fast as the FFT, uses only half the computer resources, and so could be more useful than the FFT in typical applications such as spectral analysis, signal processing, and convolution. This paper presents a FORTRAN computer program for the FHT algorithm along with a brief description and compares the results and performance of the FHT and the FFT algorithms.
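
    The Hartley transform replaces the complex Fourier kernel with the real cas kernel, cas(t) = cos(t) + sin(t), and its relationship to the Fourier transform is easy to verify numerically: H = Re(F) - Im(F). A small numpy check of that identity and of the transform's self-inverse property (this computes the DHT via an FFT purely for verification; a true FHT butterfly avoids complex arithmetic altogether, which is where its speed advantage comes from):

        import numpy as np

        def dht(x):
            """Discrete Hartley transform via FFT: H_k = Re(F_k) - Im(F_k)."""
            F = np.fft.fft(x)
            return F.real - F.imag

        rng = np.random.default_rng(0)
        N = 1024
        x = rng.standard_normal(N)               # e.g. a sampled transient

        # Check against the cas-kernel definition, cas(t) = cos(t) + sin(t).
        kn = 2.0 * np.pi * np.outer(np.arange(N), np.arange(N)) / N
        cas = np.cos(kn) + np.sin(kn)
        assert np.allclose(dht(x), cas @ x)

        # The DHT is its own inverse up to a factor of N.
        assert np.allclose(dht(dht(x)) / N, x)
        print("DHT identities verified")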

  17. Acoustic signal detection of manatee calls

    NASA Astrophysics Data System (ADS)

    Niezrecki, Christopher; Phillips, Richard; Meyer, Michael; Beusse, Diedrich O.

    2003-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of a growing number of collisions with boats. A system that warns boaters when manatees are present in the immediate vicinity could potentially reduce these boat collisions. In order to identify the presence of manatees, acoustic methods are employed. Within this paper, three different detection algorithms are used to detect the calls of the West Indian manatee. The detection systems are tested in the laboratory using simulated manatee vocalizations from an audio compact disc. The detection method that provides the best overall performance is able to correctly identify approximately 96% of the manatee vocalizations. However, the system also results in a false positive rate of approximately 16%. The results of this work may ultimately lead to the development of a manatee warning system that can warn boaters of the presence of manatees.

  18. Close Call: Breaking the Rules.

    ERIC Educational Resources Information Center

    Journal of Adventure Education and Outdoor Leadership, 1993

    1993-01-01

    Contrary to a rule to never teach students to lead climb, an instructor taught several youth to lead climb at a parent's request. These students planned to pursue rock climbing on their own after they left school, and preparing them was deemed a safety precaution. Analysis of this "close call" offers guidelines for introducing students to lead…

  19. Formative Considerations Using Integrative CALL.

    ERIC Educational Resources Information Center

    Callahan, Philip; Shaver, Peter

    2001-01-01

    Addresses technical and learning issues relating to a formative implementation of a computer assisted language learning (CALL) browser-based intermediate Russian program. Instruction took place through a distance education implementation and in a grouped classroom using a local-area network. Learners indicated the software was clear, motivating,…

  20. Learning as Calling and Responding

    ERIC Educational Resources Information Center

    Jons, Lotta

    2014-01-01

    According to Martin Buber's philosophy of dialogue, our being-in-the-world is to be conceived of as an existential dialogue. Elsewhere, I have conceptualized the teacher-student-relation accordingly (see Jons 2008), as a matter of calling and responding. The conceptualization rests on a secularised notion of vocation, paving way for…

  1. Nursing care as a calling.

    PubMed

    Raatikainen, R

    1997-06-01

    A calling is a deep desire to devote oneself to serving people according to the high values of the task or profession. The aim of this study is to clarify the relationship between a calling experience and professional knowledge, nursing action and motivation. The data were collected from all the registered nurses (n = 179) at five hospitals. The response rate was 70%. The nurses who were committed to their profession and experienced their job as a calling had a good knowledge of the ill feelings and maladjustment of their patients and were also good sources of support for their patients. They understood the importance of family ties and offered support to their patients' families. They were aware of the needs of dying patients and their concern with spiritual questions, and satisfied these needs well. It was characteristic for them to collaborate closely within a team, to experience the content of their work as enriching and to possess proficient professional abilities. They were therefore excellent in supporting both the individual patient and his or her family. They had a deep understanding of the whole process of patient care. According to these results, the calling experience is not in conflict with professional principles. PMID:9181405

  2. Fast Tensor Image Morphing for Elastic Registration

    PubMed Central

    Yap, Pew-Thian; Wu, Guorong; Zhu, Hongtu; Lin, Weili; Shen, Dinggang

    2009-01-01

    We propose a novel algorithm, called Fast Tensor Image Morphing for Elastic Registration, or F-TIMER. F-TIMER leverages multiscale tensor regional distributions and local boundaries for hierarchically driving deformable matching of tensor image volumes. Registration is achieved by aligning a set of automatically determined structural landmarks, via solving a soft correspondence problem. Based on the estimated correspondences, thin-plate splines are employed to generate a smooth, topology preserving, and dense transformation, and to avoid arbitrary mapping of non-landmark voxels. To mitigate the problem of local minima, which is common in the estimation of high dimensional transformations, we employ a hierarchical strategy where a small subset of voxels with more distinctive attribute vectors are first deployed as landmarks to estimate a relatively robust low-degrees-of-freedom transformation. As the registration progresses, an increasing number of voxels are permitted to participate in refining the correspondence matching. Such a scheme allows less conservative progression of the correspondence matching towards the optimal solution, and hence results in a faster matching speed. Results indicate that better accuracy can be achieved by F-TIMER than by other deformable registration algorithms [1, 2], with computation time reduced by a factor of 4–14. PMID:20426052

  3. Old And New Algorithms For Toeplitz Systems

    NASA Astrophysics Data System (ADS)

    Brent, Richard P.

    1988-02-01

    Toeplitz linear systems and Toeplitz least squares problems commonly arise in digital signal processing. In this paper we survey some old, "well known" algorithms and some recent algorithms for solving these problems. We concentrate our attention on algorithms which can be implemented efficiently on a variety of parallel machines (including pipelined vector processors and systolic arrays). We distinguish between algorithms which require inner products, and algorithms which avoid inner products, and thus are better suited to parallel implementation on some parallel architectures. Finally, we mention some "asymptotically fast" O(n(log n)^2) algorithms and compare them with O(n^2) algorithms.
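
    Levinson-type O(n^2) solvers of the kind surveyed here are available off the shelf; SciPy exposes one, which makes a quick sanity check against a dense O(n^3) solve easy (the test system below is an arbitrary diagonally dominant example):

        import numpy as np
        from scipy.linalg import toeplitz, solve_toeplitz

        rng = np.random.default_rng(0)
        n = 400
        c = np.r_[4.0, 0.1 * rng.standard_normal(n - 1)]   # first column
        r = np.r_[4.0, 0.1 * rng.standard_normal(n - 1)]   # first row
        b = rng.standard_normal(n)

        x_fast = solve_toeplitz((c, r), b)                 # O(n^2) solve
        x_dense = np.linalg.solve(toeplitz(c, r), b)       # O(n^3) reference
        print(np.allclose(x_fast, x_dense))                # True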

  4. Efficient algorithms for linear dynamic inverse problems with known motion

    NASA Astrophysics Data System (ADS)

    Hahn, B. N.

    2014-03-01

    An inverse problem is called dynamic if the object changes during the data acquisition process. This occurs e.g. in medical applications when fast moving organs like the lungs or the heart are imaged. Most regularization methods are based on the assumption that the object is static during the measuring procedure. Hence, their application in the dynamic case often leads to serious motion artefacts in the reconstruction. Therefore, an algorithm has to take into account the temporal changes of the investigated object. In this paper, a reconstruction method that compensates for the motion of the object is derived for dynamic linear inverse problems. The algorithm is validated on numerical examples from computerized tomography.

  5. Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal Motion Planning in Many Dimensions*

    PubMed Central

    Janson, Lucas; Schmerling, Edward; Clark, Ashley; Pavone, Marco

    2015-01-01

    In this paper we present a novel probabilistic sampling-based motion planning algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is specifically aimed at solving complex motion planning problems in high-dimensional configuration spaces. This algorithm is proven to be asymptotically optimal and is shown to converge to an optimal solution faster than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT* algorithm performs a “lazy” dynamic programming recursion on a predetermined number of probabilistically-drawn samples to grow a tree of paths, which moves steadily outward in cost-to-arrive space. As such, this algorithm combines features of both single-query algorithms (chiefly RRT) and multiple-query algorithms (chiefly PRM), and is reminiscent of the Fast Marching Method for the solution of Eikonal equations. As a departure from previous analysis approaches that are based on the notion of almost sure convergence, the FMT* algorithm is analyzed under the notion of convergence in probability: the extra mathematical flexibility of this approach allows for convergence rate bounds, the first in the field of optimal sampling-based motion planning. Specifically, for a certain selection of tuning parameters and configuration spaces, we obtain a convergence rate bound of order O(n^(-1/d+ρ)), where n is the number of sampled points, d is the dimension of the configuration space, and ρ is an arbitrarily small constant. We go on to demonstrate asymptotic optimality for a number of variations on FMT*, namely when the configuration space is sampled non-uniformly, when the cost is not arc length, and when connections are made based on the number of nearest neighbors instead of a fixed connection radius. Numerical experiments over a range of dimensions and obstacle configurations confirm our theoretical and heuristic arguments by showing that FMT*, for a given execution time, returns substantially better solutions than either PRM* or RRT*.

  6. Leveraging Call Center Logs for Customer Behavior Prediction

    NASA Astrophysics Data System (ADS)

    Parvathy, Anju G.; Vasudevan, Bintu G.; Kumar, Abhishek; Balakrishnan, Rajesh

    Most major businesses use business process outsourcing for performing a process or a part of a process, including financial services like mortgage processing, loan origination, finance and accounting, and transaction processing. Call centers are used for the purpose of receiving and transmitting a large volume of requests through outbound and inbound calls to customers on behalf of a business. In this paper we deal specifically with call center notes from banks. Banks, as financial institutions, provide loans to non-financial businesses and individuals. Their call centers act as the nuclei of their client service operations and log the transactions between the customer and the bank. This crucial conversational information can be exploited to predict a customer's behavior, which will in turn help these businesses decide on the next action to be taken. Thus the banks save considerable time and effort in tracking delinquent customers to ensure a minimum of subsequent defaulters. The majority of the time, the call center notes are very concise and brief, and often the notes are misspelled and use many domain-specific acronyms. In this paper we introduce a novel domain-specific spelling correction algorithm which corrects the misspelled words in the call center logs to meaningful ones. We also discuss a procedure that builds behavioral history sequences for the customers by categorizing the logs into one of the predefined behavioral states. We then describe a pattern-based predictive algorithm that uses temporal behavioral patterns mined from these sequences to predict the customer's next behavioral state.
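
    A domain-specific corrector of the kind described can be sketched as edit-distance-1 candidate generation restricted to a lexicon of domain terms; the sketch below is a Norvig-style baseline standing in for the paper's algorithm, and the lexicon is an invented example, not data from the paper.

        import string

        # Invented stand-in for a lexicon mined from call center logs.
        LEXICON = {"customer", "mortgage", "payment", "delinquent",
                   "escrow", "refinance"}

        def edits1(word):
            """All strings one edit away: deletes, swaps, replaces, inserts."""
            splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
            deletes = {a + b[1:] for a, b in splits if b}
            swaps = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
            replaces = {a + ch + b[1:] for a, b in splits if b
                        for ch in string.ascii_lowercase}
            inserts = {a + ch + b for a, b in splits
                       for ch in string.ascii_lowercase}
            return deletes | swaps | replaces | inserts

        def correct(word):
            if word in LEXICON:
                return word
            candidates = edits1(word) & LEXICON
            return min(candidates) if candidates else word  # deterministic tie-break

        print(correct("morgage"), correct("delinquant"))    # mortgage delinquent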

  7. Statistical behaviour of adaptive multilevel splitting algorithms in simple models

    SciTech Connect

    Rolland, Joran Simonnet, Eric

    2015-02-15

    Adaptive multilevel splitting algorithms have been introduced rather recently for estimating tail distributions in a fast and efficient way. In particular, they can be used for computing the so-called reactive trajectories corresponding to direct transitions from one metastable state to another. The algorithm is based on successive selection–mutation steps performed on the system in a controlled way. It has two intrinsic parameters, the number of particles/trajectories and the reaction coordinate used for discriminating good or bad trajectories. We first investigate the convergence in law of the algorithm as a function of the timestep for several simple stochastic models. Second, we consider the average duration of reactive trajectories, for which no theoretical predictions exist. The most important aspect of this work concerns some systems with two degrees of freedom. They are studied in detail as a function of the reaction coordinate in the asymptotic regime where the number of trajectories goes to infinity. We show that during phase transitions, the statistics of the algorithm deviate significantly from known theoretical results when using non-optimal reaction coordinates. In this case, the variance of the algorithm peaks at the transition and the convergence of the algorithm can be much slower than the usual expected central limit behaviour. The duration of trajectories is affected as well. Moreover, reactive trajectories do not correspond to the most probable ones. Such behaviour disappears when using the optimal reaction coordinate, called the committor, as predicted by the theory. We finally investigate a three-state Markov chain which reproduces this phenomenon and show logarithmic convergence of the trajectory durations.

  8. A Memetic Algorithm for Global Optimization of Multimodal Nonseparable Problems.

    PubMed

    Zhang, Geng; Li, Yangmin

    2016-06-01

    Avoiding entrapment in local optima is a challenging issue, especially when facing high-dimensional nonseparable problems where the interdependencies among vector elements are unknown. In order to improve the performance of optimization algorithms, a novel memetic algorithm (MA) called cooperative particle swarm optimizer-modified harmony search (CPSO-MHS) is proposed in this paper, where the CPSO is used for local search and the MHS for global search. The CPSO, as a local search method, uses a 1-D swarm to search each dimension separately and thus converges fast. Besides, it can obtain global optimum elements according to our experimental results and analyses. MHS implements the global search by recombining different vector elements and extracting global optimum elements. The interaction between local search and global search creates a set of local search zones, where global optimum elements reside within the search space. The CPSO-MHS algorithm is tested and compared with seven other optimization algorithms on a set of 28 standard benchmarks. Meanwhile, some MAs are also compared according to the results derived directly from their corresponding references. The experimental results demonstrate the good performance of the proposed CPSO-MHS algorithm in solving multimodal nonseparable problems. PMID:26292352

  9. Fast Sampling-Based Whole-Genome Haplotype Block Recognition.

    PubMed

    Taliun, Daniel; Gamper, Johann; Leser, Ulf; Pattaro, Cristian

    2016-01-01

    Scaling linkage disequilibrium (LD) based haplotype block recognition to the entire human genome has always been a challenge. The best-known algorithm has quadratic runtime complexity and, even when sophisticated search space pruning is applied, still requires several days of computations. Here, we propose a novel sampling-based algorithm, called S-MIG++, where the main idea is to estimate the area that most likely contains all haplotype blocks by sampling a very small number of SNP pairs. A subsequent refinement step computes the exact blocks by considering only the SNP pairs within the estimated area. This approach significantly reduces the number of computed LD statistics, making the recognition of haplotype blocks very fast. We theoretically and empirically prove that the area containing all haplotype blocks can be estimated with a very high degree of certainty. Through experiments on the 243,080 SNPs on chromosome 20 from the 1,000 Genomes Project, we compared our previous algorithm MIG++ with the new S-MIG++ and observed a runtime reduction from 2.8 weeks to 34.8 hours. In a parallelized version of the S-MIG++ algorithm using 32 parallel processes, the runtime was further reduced to 5.1 hours. PMID:27045830

  10. Fast valve

    DOEpatents

    Van Dyke, William J.

    1992-01-01

    A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring with the moving parts of the valve designed to be of very light weight and the valve gate being of wedge shaped with O-ring sealed faces to provide sealing contact without metal to metal contact. The combination of the O-ring seal and an air cushion create a soft final movement of the valve closure to prevent the fast air acting valve from having a harsh closing.

  11. Fast valve

    DOEpatents

    Van Dyke, W.J.

    1992-04-07

    A fast valve is disclosed that can close on the order of 7 milliseconds. It is closed by the force of a compressed air spring with the moving parts of the valve designed to be of very light weight and the valve gate being of wedge shaped with O-ring sealed faces to provide sealing contact without metal to metal contact. The combination of the O-ring seal and an air cushion create a soft final movement of the valve closure to prevent the fast air acting valve from having a harsh closing. 4 figs.

  12. Join-Graph Propagation Algorithms

    PubMed Central

    Mateescu, Robert; Kask, Kalev; Gogate, Vibhav; Dechter, Rina

    2010-01-01

    The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes. PMID:20740057

  13. Project FAST.

    ERIC Educational Resources Information Center

    Essexville-Hampton Public Schools, MI.

    Described are components of Project FAST (Functional Analysis Systems Training) a nationally validated project to provide more effective educational and support services to learning disordered children and their regular elementary classroom teachers. The program is seen to be based on a series of modules of delivery systems ranging from mainstream…

  14. ALGORITHM FOR THE EVALUATION OF REDUCED WIGNER MATRICES

    SciTech Connect

    Prezeau, G.; Reinecke, M.

    2010-10-15

    Algorithms for the fast and exact computation of Wigner matrices are described and their application to a fast and massively parallel 4π convolution code between a beam and a sky is also presented.

  15. Call to Restore Mesopotamian Marshlands

    NASA Astrophysics Data System (ADS)

    Showstack, Randy

    When the current military conflict in Iraq has concluded, a rehabilitation of that country should include a full assessment and action plan for restoring the marshlands of Mesopotamia, the United Nations Environment Programme said on 22 March. The marshlands, also known as the Fertile Crescent, could disappear within three to five years, according to UNEP. UNEP Executive Director Klaus Toepfer said the loss of the marshlands "is an environmental catastrophe for this region and underscores the huge pressures facing wetlands and freshwater ecosystems across the world."

  16. Fast and flexible interpolation via PUM with applications in population dynamics

    NASA Astrophysics Data System (ADS)

    Cavoretto, Roberto; De Rossi, Alessandra; Perracchione, Emma

    2016-06-01

    In this paper, a new fast and flexible interpolation tool is presented. The Partition of Unity Method (PUM) is performed using Radial Basis Functions (RBFs) as local approximants. In particular, we present a new space-partitioning data structure that is extremely useful in applications because of its independence from the problem geometry. An application of this algorithm, in the context of wild herbivores in forests, shows that the ecosystem of the considered natural park is in a very delicate situation, in which the animal population could become extinct. The determination of the so-called sensitivity surfaces, obtained with the new versatile partitioning structure, indicates some possible preventive measures to the park administrators.
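
    At the core of each PUM patch sits a small RBF linear system. A minimal sketch of that local step with a Gaussian kernel, done globally here for simplicity; the PUM's contribution is to partition the nodes into patches with the space-partitioning structure and blend the local fits with weight functions, which is omitted.

        import numpy as np

        def rbf_fit(centers, values, eps=5.0):
            """Solve A w = f with Gaussian kernel A_ij = exp(-(eps*r_ij)^2)."""
            d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
            return np.linalg.solve(np.exp(-(eps * d) ** 2), values)

        def rbf_eval(x, centers, w, eps=5.0):
            d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
            return np.exp(-(eps * d) ** 2) @ w

        rng = np.random.default_rng(0)
        pts = rng.random((200, 2))                   # scattered 2-D nodes
        f = np.sin(4 * pts[:, 0]) * np.cos(4 * pts[:, 1])
        w = rbf_fit(pts, f)

        test = rng.random((5, 2))
        exact = np.sin(4 * test[:, 0]) * np.cos(4 * test[:, 1])
        err = np.abs(rbf_eval(test, pts, w) - exact)
        print(err.max())                             # small on this smooth test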

  17. Orbital Advection by Interpolation: A Fast and Accurate Numerical Scheme for Super-Fast MHD Flows

    SciTech Connect

    Johnson, B M; Guan, X; Gammie, F

    2008-04-11

    In numerical models of thin astrophysical disks that use an Eulerian scheme, gas orbits supersonically through a fixed grid. As a result the timestep is sharply limited by the Courant condition. Also, because the mean flow speed with respect to the grid varies with position, the truncation error varies systematically with position. For hydrodynamic (unmagnetized) disks an algorithm called FARGO has been developed that advects the gas along its mean orbit using a separate interpolation substep. This relaxes the constraint imposed by the Courant condition, which now depends only on the peculiar velocity of the gas, and results in a truncation error that is more nearly independent of position. This paper describes a FARGO-like algorithm suitable for evolving magnetized disks. Our method is second order accurate on a smooth flow and preserves ∇·B = 0 to machine precision. The main restriction is that B must be discretized on a staggered mesh. We give a detailed description of an implementation of the code and demonstrate that it produces the expected results on linear and nonlinear problems. We also point out how the scheme might be generalized to make the integration of other supersonic/super-fast flows more efficient. Although our scheme reduces the variation of truncation error with position, it does not eliminate it. We show that the residual position dependence leads to characteristic radial variations in the density over long integrations.
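
    The essence of the interpolation substep is that each annulus is shifted by its mean orbital displacement: an integer number of cells by a cheap circular roll, plus the fractional remainder by interpolation. A linear-interpolation sketch (production schemes use higher-order interpolation and, for MHD, the staggered-mesh machinery described in the paper):

        import numpy as np

        def orbital_advect(q, shift_cells):
            """Shift each radial row of q (n_r x n_phi, periodic in phi) by a
            per-row cell count: integer part via np.roll, fractional part via
            linear interpolation between the two neighboring cells."""
            out = np.empty_like(q)
            for i, s in enumerate(shift_cells):
                n = int(np.floor(s))
                f = s - n                      # fractional remainder in [0, 1)
                row = np.roll(q[i], n)
                out[i] = (1.0 - f) * row + f * np.roll(row, 1)
            return out

        # Smoke test: advect a smooth azimuthal profile by 2.25 cells.
        n_phi = 64
        phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
        q = np.sin(phi)[None, :]
        moved = orbital_advect(q, [2.25])
        exact = np.sin(phi - 2.25 * (2.0 * np.pi / n_phi))
        print(np.max(np.abs(moved[0] - exact)))    # small interpolation error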

  18. Investigation of factors affecting RNA-seq gene expression calls

    PubMed Central

    Harati, Sahar; Phan, John H.; Wang, May D.

    2016-01-01

    RNA-seq enables quantification of the human transcriptome. Estimation of gene expression is a fundamental issue in the analysis of RNA-seq data. However, there is an inherent ambiguity in distinguishing between genes with very low expression and experimental or transcriptional noise. We conducted an exploratory investigation of some factors that may affect gene expression calls. We observed that the distributions of reads that map to exonic, intronic, and intergenic regions are distinct. These distributions may provide useful insights into the behavior of gene expression noise. Moreover, we observed that these distributions are qualitatively similar between two sequence mapping algorithms. Finally, we examined the relationship between gene length and gene expression calls, and observed that they are correlated. This preliminary investigation is important for RNA-seq gene expression analysis because it may lead to more effective algorithms for distinguishing between true gene expression and experimental or transcriptional noise. PMID:25571173

  19. Probability tree algorithm for general diffusion processes

    NASA Astrophysics Data System (ADS)

    Ingber, Lester; Chen, Colleen; Mondescu, Radu Paul; Muzzall, David; Renedo, Marco

    2001-11-01

    Motivated by PATHINT, a path-integral numerical solver for diffusion processes, we present a tree algorithm, PATHTREE, which permits extremely fast, accurate computation of probability distributions for a large class of general nonlinear diffusion processes.
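
    PATHTREE itself is not reproduced in the abstract; the flavor of a probability tree for a diffusion can be conveyed with a standard recombining binomial tree for dx = mu*dt + sigma*dW (a minimal sketch of ours, not the published algorithm):

        import numpy as np

        def tree_density(mu, sigma, t_final, n_steps, x0=0.0):
            """Minimal recombining probability tree for dx = mu*dt + sigma*dW.

            Each step moves up or down by sigma*sqrt(dt); the up-probability is
            tilted by the drift (requires |mu|*sqrt(dt) < sigma).  Node
            probabilities approximate the terminal distribution."""
            dt = t_final / n_steps
            dx = sigma * np.sqrt(dt)
            p_up = 0.5 * (1.0 + mu * np.sqrt(dt) / sigma)
            prob = np.array([1.0])
            for _ in range(n_steps):
                nxt = np.zeros(len(prob) + 1)
                nxt[1:] += prob * p_up           # up moves
                nxt[:-1] += prob * (1.0 - p_up)  # down moves
                prob = nxt
            x = x0 + dx * (2.0 * np.arange(len(prob)) - n_steps)
            return x, prob

        x, p = tree_density(mu=0.1, sigma=0.4, t_final=1.0, n_steps=200)
        print(np.sum(x * p))   # ~ mu * t_final = 0.1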

  20. Fast Fuzzy Arithmetic Operations

    NASA Technical Reports Server (NTRS)

    Hampton, Michael; Kosheleva, Olga

    1997-01-01

    In engineering applications of fuzzy logic, the main goal is not to simulate the way the experts really think, but to come up with a good engineering solution that would (ideally) be better than the expert's control. In such applications, it makes perfect sense to restrict ourselves to simplified approximate expressions for membership functions. If we need to perform arithmetic operations with the resulting fuzzy numbers, then we can use simple and fast algorithms that are known for operations with simple membership functions. In other applications, especially the ones that are related to the humanities, simulating experts is one of the main goals. In such applications, we must use membership functions that capture every nuance of the expert's opinion; these functions are therefore complicated, and fuzzy arithmetic operations with the corresponding fuzzy numbers become a computational problem. In this paper, we design a new algorithm for performing such operations. This algorithm is applicable in the case when the negative logarithms -log(u(x)) of the membership functions u(x) are convex, and it reduces the computation time from O(n²) to O(n log n) (where n is the number of points x at which we know the membership functions u(x)).
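
    The O(n log n) algorithm is not given in the abstract; for orientation, here is the naive O(n²) operation it accelerates, addition of two fuzzy numbers by Zadeh's extension principle on matching uniform grids (illustrative code, ours):

        import numpy as np

        def fuzzy_add_naive(xs, u, ys, v):
            """Naive O(n^2) addition of fuzzy numbers on matching uniform grids.

            Zadeh's extension principle: w(z) = sup over x+y=z of min(u(x), v(y)).
            The paper's method reduces this to O(n log n) when -log(u) is convex."""
            zs = np.linspace(xs[0] + ys[0], xs[-1] + ys[-1], len(xs) + len(ys) - 1)
            w = np.zeros(len(zs))
            for i in range(len(xs)):
                for j in range(len(ys)):
                    k = i + j                   # equal spacing: x+y lands on the grid
                    w[k] = max(w[k], min(u[i], v[j]))
            return zs, w

        xs = np.linspace(-1, 1, 101)
        u = np.maximum(0.0, 1.0 - np.abs(xs))        # triangular number around 0
        ys = np.linspace(1, 3, 101)
        v = np.maximum(0.0, 1.0 - np.abs(ys - 2.0))  # triangular number around 2
        zs, w = fuzzy_add_naive(xs, u, ys, v)
        print(zs[np.argmax(w)])                      # peak near 0 + 2 = 2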

  1. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N²) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
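
    As a point of reference for what the fast algorithm approximates (the octree and plane-wave machinery of the paper is not shown), the direct Gauss transform is a few lines:

        import numpy as np

        def gauss_transform_direct(sources, targets, weights, h):
            """Brute-force Gauss transform: G(y_j) = sum_i w_i exp(-|y_j - x_i|^2 / h^2).

            This is the O(N*M) reference computation that fast algorithms
            approximate to a requested accuracy."""
            d2 = np.sum((targets[:, None, :] - sources[None, :, :]) ** 2, axis=-1)
            return np.exp(-d2 / h ** 2) @ weights

        rng = np.random.default_rng(0)
        x = rng.random((500, 3))
        w = rng.random(500)
        print(gauss_transform_direct(x, x, w, h=0.2)[:3])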

  2. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
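
    Activity selection, one of the paper's examples, illustrates the dominance view well. A standard greedy sketch (ours, not the paper's synthesized output), with the dominance argument stated in the comment:

        def select_activities(intervals):
            """Greedy activity selection: a maximum set of pairwise-disjoint intervals.

            Dominance view: among the remaining compatible activities, the one
            with the earliest finish time dominates the rest -- any solution
            using a later-finishing choice can be rewritten to use it without
            losing activities -- so the greedy choice is always safe."""
            chosen, last_end = [], float("-inf")
            for start, end in sorted(intervals, key=lambda it: it[1]):
                if start >= last_end:
                    chosen.append((start, end))
                    last_end = end
            return chosen

        acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
        print(select_activities(acts))   # [(1, 4), (5, 7), (8, 11)]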

  3. Faster algorithms for RNA-folding using the Four-Russians method

    PubMed Central

    2014-01-01

    Background The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n³) time using Nussinov’s dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n³/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. Theoretical results We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n²) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n²/log n). Practical results We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source-code for the
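
    For orientation, the plain O(n³) Nussinov recurrence that the Four-Russians variants accelerate can be written directly (a sketch of ours; the two-vector and GPU versions are not reproduced here):

        def nussinov(seq):
            """Plain O(n^3) Nussinov DP: maximum non-crossing complementary pairs.

            Recurrence: N[i][j] = max( N[i+1][j],      # base i unpaired
                                       max over i < k <= j with (seq[i], seq[k])
                                       complementary of N[i+1][k-1] + N[k+1][j] + 1 )"""
            pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                     ("G", "U"), ("U", "G")}
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for span in range(1, n):
                for i in range(n - span):
                    j = i + span
                    best = N[i + 1][j]
                    for k in range(i + 1, j + 1):
                        if (seq[i], seq[k]) in pairs:
                            left = N[i + 1][k - 1] if k > i + 1 else 0
                            right = N[k + 1][j] if k < j else 0
                            best = max(best, left + right + 1)
                    N[i][j] = best
            return N[0][n - 1]

        print(nussinov("GGGAAAUCC"))   # 3 pairs: the structure (((...)))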

  4. Fast Reconstruction of Compact Context-Specific Metabolic Network Models

    PubMed Central

    Sauter, Thomas

    2014-01-01

    Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present fastcore, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. fastcore takes as input a core set of reactions that are known to be active in the context of interest (e.g., cell or tissue), and it searches for a flux consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and fastcore iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, over a rival method. Given its simplicity and its excellent performance, fastcore can form the backbone of many future metabolic network reconstruction algorithms. PMID:24453953

  5. Fast reconstruction of compact context-specific metabolic network models.

    PubMed

    Vlassis, Nikos; Pacheco, Maria Pires; Sauter, Thomas

    2014-01-01

    Systemic approaches to the study of a biological cell or tissue rely increasingly on the use of context-specific metabolic network models. The reconstruction of such a model from high-throughput data can routinely involve large numbers of tests under different conditions and extensive parameter tuning, which calls for fast algorithms. We present fastcore, a generic algorithm for reconstructing context-specific metabolic network models from global genome-wide metabolic network models such as Recon X. fastcore takes as input a core set of reactions that are known to be active in the context of interest (e.g., cell or tissue), and it searches for a flux consistent subnetwork of the global network that contains all reactions from the core set and a minimal set of additional reactions. Our key observation is that a minimal consistent reconstruction can be defined via a set of sparse modes of the global network, and fastcore iteratively computes such a set via a series of linear programs. Experiments on liver data demonstrate speedups of several orders of magnitude, and significantly more compact reconstructions, over a rival method. Given its simplicity and its excellent performance, fastcore can form the backbone of many future metabolic network reconstruction algorithms. PMID:24453953
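
    The published LPs are not reproduced in the abstract, but the flavor of one flux-consistency test can be conveyed as a single linear program (a toy example of ours, assuming SciPy is available; fastcore iterates over a short sequence of related LPs to grow the subnetwork):

        import numpy as np
        from scipy.optimize import linprog

        def max_flux(S, idx, ub=100.0):
            """Toy flux-consistency test: maximize v[idx] subject to S @ v = 0.

            A reaction is flux-consistent if the optimum is nonzero (up to
            tolerance).  fastcore solves a sequence of related LPs to grow a
            minimal consistent subnetwork around the core set."""
            n = S.shape[1]
            c = np.zeros(n)
            c[idx] = -1.0                   # linprog minimizes, so negate
            res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                          bounds=[(0.0, ub)] * n, method="highs")
            return -res.fun

        # Tiny toy network: uptake -> A, A -> B, B -> export (all irreversible).
        S = np.array([[1.0, -1.0,  0.0],    # metabolite A
                      [0.0,  1.0, -1.0]])   # metabolite B
        print([max_flux(S, i) for i in range(3)])   # [100.0, 100.0, 100.0]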

  6. Function of loud calls in wild bonobos.

    PubMed

    White, Frances; Waller, Michel; Boose, Klaree; Merrill, Michelle; Wood, Kimberley

    2015-07-20

    Under the social origins hypothesis, human language is thought to have evolved within the framework of non-human primate social contexts and relationships. Our two closest relatives, chimpanzees and bonobos, however, have very different social relationships and this may be reflected in their use of loud calls. Much of loud calling in the male-bonded and aggressive chimpanzee functions for male alliance formation and intercommunity aggression. Bonobos, however, are female bonded and less aggressive and little is known on the use and function of their loud calls. Data on frequencies, context, and locations of vocalizations were collected for wild bonobos, Pan paniscus, at the Lomako Forest study site in the Democratic Republic of the Congo from 1983 to 2009. Both males and females participated in loud calls used for inter-party communication. Calling and response rates by both males and females were higher during party fusion than party fission and were common at evening nesting. The distribution of loud calls within the community range of loud calls was not random with males calling significantly more towards the periphery of the range and females calling significantly more in central areas. Calling and party fission were common at food patches. Responses were more frequent for female calls than for male calls. Calling, followed by fusion, was more frequent when a small party called from a large patch. We conclude that bonobo females and males loud calls can function in inter-party communication to call others to large food patches. Females call to attract potential allies and males call to attract potential mates. Our results support the social hypothesis of the origin of language because differences in the function and use of loud calls reflect the differing social systems of chimpanzees and bonobos. Bonobo loud calls are important for female communication and function in party coordination and, unlike chimpanzees, are less important in male cooperative aggression

  7. Systolic array architecture for convolutional decoding algorithms: Viterbi algorithm and stack algorithm

    SciTech Connect

    Chang, C.Y.

    1986-01-01

    New results on efficient forms of decoding convolutional codes based on the Viterbi and stack algorithms using a systolic array architecture are presented. Some theoretical aspects of systolic arrays are also investigated. First, a systolic array implementation of the Viterbi algorithm is considered, and various properties of convolutional codes are derived. A technique called strongly connected trellis decoding is introduced to increase the efficient utilization of all the systolic array processors. The issues dealing with composite branch metric generation, survivor updating, overall system architecture, throughput rate, and computational overhead ratio are also investigated. Second, the existing stack algorithm is modified and restated in a more concise version so that it can be efficiently implemented by a special type of systolic array called a systolic priority queue. Three general schemes of systolic priority queue based on random access memory, shift registers, and ripple registers are proposed. Finally, a systematic approach is presented to design systolic arrays for certain general classes of recursively formulated algorithms.
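
    The systolic-array mapping aside, the kernel such arrays parallelize is the per-state add-compare-select step of the Viterbi algorithm. A minimal hard-decision decoder for the standard rate-1/2, K=3 code (generators 7 and 5 octal; illustrative code of ours, not the paper's architecture):

        def conv_encode(bits):
            """Rate-1/2, K=3 convolutional encoder, generators 7 and 5 (octal)."""
            state, out = 0, []
            for b in bits:
                s1, s0 = (state >> 1) & 1, state & 1
                out += [b ^ s1 ^ s0, b ^ s0]
                state = (b << 1) | s1
            return out

        def viterbi_decode(rx):
            """Hard-decision Viterbi decoding by add-compare-select per state."""
            INF = float("inf")
            metric = [0.0, INF, INF, INF]          # start in the all-zero state
            paths = [[], [], [], []]
            for t in range(0, len(rx), 2):
                new_metric = [INF] * 4
                new_paths = [None] * 4
                for state in range(4):
                    if metric[state] == INF:
                        continue
                    for b in (0, 1):
                        s1, s0 = (state >> 1) & 1, state & 1
                        y0, y1 = b ^ s1 ^ s0, b ^ s0
                        nxt = (b << 1) | s1
                        m = metric[state] + (y0 != rx[t]) + (y1 != rx[t + 1])
                        if m < new_metric[nxt]:    # add-compare-select
                            new_metric[nxt] = m
                            new_paths[nxt] = paths[state] + [b]
                metric, paths = new_metric, new_paths
            return paths[metric.index(min(metric))]

        msg = [1, 0, 1, 1, 0, 0, 1]
        rx = conv_encode(msg)
        rx[3] ^= 1                                 # flip one channel bit
        print(viterbi_decode(rx) == msg)           # True: the error is corrected

    In a systolic implementation, the inner loop over states runs in parallel, one processor per trellis state, with survivor paths exchanged between neighbors.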

  8. Thermostat algorithm for generating target ensembles

    NASA Astrophysics Data System (ADS)

    Bravetti, A.; Tapias, D.

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator.

  9. Thermostat algorithm for generating target ensembles.

    PubMed

    Bravetti, A; Tapias, D

    2016-02-01

    We present a deterministic algorithm called contact density dynamics that generates any prescribed target distribution in the physical phase space. Akin to the famous model of Nosé and Hoover, our algorithm is based on a non-Hamiltonian system in an extended phase space. However, the equations of motion in our case follow from contact geometry and we show that in general they have a similar form to those of the so-called density dynamics algorithm. As a prototypical example, we apply our algorithm to produce a Gibbs canonical distribution for a one-dimensional harmonic oscillator. PMID:26986320
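
    The contact-geometry equations of motion are not quoted in the abstract; for comparison, here is the Nosé-Hoover oscillator the authors cite as the classic model, integrated naively (our sketch; note that plain Nosé-Hoover is famously not ergodic for a single harmonic oscillator, which is one motivation for alternative thermostats such as this one):

        import numpy as np

        def nose_hoover(q=1.0, p=0.0, zeta=0.0, T=1.0, Q=1.0,
                        dt=1e-3, n_steps=200000):
            """Euler integration of the Nose-Hoover oscillator:

                dq/dt = p,  dp/dt = -q - zeta*p,  dzeta/dt = (p*p - T)/Q

            The auxiliary variable zeta pumps energy in or out so that time
            averages target the canonical (Gibbs) distribution at temperature T."""
            momenta = np.empty(n_steps)
            for i in range(n_steps):
                q, p, zeta = (q + p * dt,
                              p + (-q - zeta * p) * dt,
                              zeta + (p * p - T) / Q * dt)
                momenta[i] = p
            return momenta

        p_samples = nose_hoover()
        print(np.var(p_samples))   # ~T in a canonical ensemble (see caveat above)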

  10. Call for Papers: Photonics in Switching

    NASA Astrophysics Data System (ADS)

    Wosinska, Lena; Glick, Madeleine

    2006-04-01

    Call for Papers: Photonics in Switching

    Guest Editors:

    Lena Wosinska, Royal Institute of Technology (KTH) / ICT, Sweden
    Madeleine Glick, Intel Research, Cambridge, UK

    Technologies based on DWDM systems allow data transmission with bit rates of Tbit/s on a single fiber. To handle this enormous transmission volume, high-capacity, high-speed network nodes become indispensable in the optical network. Wideband switching, WDM switching, optical burst switching (OBS), and optical packet switching (OPS) are promising technologies for harnessing the bandwidth of WDM optical fiber networks in a highly flexible and efficient manner. As a number of key optical component technologies approach maturity, photonics in switching is becoming an increasingly attractive and practical solution for the next generation of optical networks. The scope of this special issue is focused on the technology and architecture of optical switching nodes, including the architectural and algorithmic aspects of high-speed optical networks.

    Scope of Submission

    The scope of the papers includes, but is not limited to, the following topics:
    • WDM node architectures
    • Novel device technologies enabling photonics in switching, such as optical switch fabrics, optical memory, and wavelength conversion
    • Routing protocols
    • WDM switching and routing
    • Quality of service
    • Performance measurement and evaluation
    • Next-generation optical networks: architecture, signaling, and control
    • Traffic measurement and field trials
    • Optical burst and packet switching
    • OBS/OPS node architectures
    • Burst/Packet scheduling and routing algorithms
    • Contention resolution/avoidance strategies
    • Services and applications for OBS/OPS (e.g., grid networks, storage-area networks, etc.)
    • Burst assembly and ingress traffic shaping

    • Efficient demultiplexing algorithm for noncontiguous carriers

      NASA Technical Reports Server (NTRS)

      Thanawala, A. A.; Kwatra, S. C.; Jamali, M. M.; Budinger, J.

      1992-01-01

      A channel separation algorithm for the frequency division multiple access/time division multiplexing (FDMA/TDM) scheme is presented. It is shown that an implementation using this algorithm can be more effective than the fast Fourier transform (FFT) algorithm when only a small number of carriers need to be selected from many, as at satellite Earth terminals. The algorithm is based on polyphase filtering followed by application of a generalized Walsh-Hadamard transform (GWHT). A comparison of the transform technique used in this algorithm with the discrete Fourier transform (DFT) and FFT is given. Estimates of the computational rates and power requirements to implement this system are also given.
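
      The generalized Walsh-Hadamard transform and the polyphase filter bank are not specified in the abstract; the standard fast Walsh-Hadamard butterfly below (our sketch) shows the additions-only structure that makes such transforms cheap relative to an FFT:

          import numpy as np

          def fwht(a):
              """Fast Walsh-Hadamard transform, O(n log n), n a power of two.

              Each stage applies 2-point butterflies (x + y, x - y) -- additions
              only, which is why WHT-based demultiplexing can undercut an FFT
              when just a few carriers are needed."""
              a = np.asarray(a, dtype=float).copy()
              h = 1
              while h < len(a):
                  for i in range(0, len(a), 2 * h):
                      x = a[i:i + h].copy()
                      y = a[i + h:i + 2 * h].copy()
                      a[i:i + h] = x + y
                      a[i + h:i + 2 * h] = x - y
                  h *= 2
              return a

          print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))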

    • Fast Censored Linear Regression

      PubMed Central

      HUANG, YIJIAN

      2013-01-01

      The weighted log-rank estimating function has become a standard estimation method for the censored linear regression model, or accelerated failure time model. Although statistically well established, the estimator defined as a consistent root has rather poor computational properties, because the estimating function is neither continuous nor, in general, monotone. We propose a computationally efficient estimator through an asymptotics-guided Newton algorithm, in which censored quantile regression methods are tailored to yield an initial consistent estimate and a consistent derivative estimate of the limiting estimating function. We also develop fast interval estimation with a new proposal for sandwich variance estimation. The proposed estimator is asymptotically equivalent to the consistent root estimator and barely distinguishable in samples of practical size. However, computation time is typically reduced by two to three orders of magnitude for point estimation alone. Illustrations with clinical applications are provided. PMID:24347802

    • Haplotyping algorithms

      SciTech Connect

      Sobel, E.; Lange, K.; O'Connell, J.R.

      1996-12-31

      Haplotyping is the logical process of inferring gene flow in a pedigree based on phenotyping results at a small number of genetic loci. This paper formalizes the haplotyping problem and suggests four algorithms for haplotype reconstruction. These algorithms range from exhaustive enumeration of all haplotype vectors to combinatorial optimization by simulated annealing. Application of the algorithms to published genetic analyses shows that manual haplotyping is often erroneous. Haplotyping is employed in screening pedigrees for phenotyping errors and in positional cloning of disease genes from conserved haplotypes in population isolates. 26 refs., 6 figs., 3 tabs.

    • CAVITY CONTROL ALGORITHM

      SciTech Connect

      Tomasz Plawski, J. Hovater

      2010-09-01

      A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase, and digital SEL (Self-Exciting Loop), along with the example of the Jefferson Lab 12 GeV cavity field control system.
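
      None of the three algorithms is spelled out in the abstract; as a generic illustration of the I&Q approach only, here is a toy proportional-integral loop of ours acting on a first-order cavity model (not the Jefferson Lab implementation; all gains and constants are invented):

          import numpy as np

          def iq_control(setpoint=1.0 + 0.0j, g_p=20.0, g_i=2000.0,
                         tau=1e-3, dt=1e-5, n_steps=8000):
              """Toy I&Q field regulator on a first-order cavity model.

              The controller applies a proportional-plus-integral correction to
              the complex (I + jQ) error, regulating amplitude and phase at
              once; the cavity is a low-pass filter with time constant tau."""
              field = 0.0 + 0.0j
              integ = 0.0 + 0.0j
              for _ in range(n_steps):
                  err = setpoint - field
                  integ += err * dt
                  drive = g_p * err + g_i * integ      # PI law on I and Q jointly
                  field += dt / tau * (drive - field)  # first-order cavity response
              return abs(field), np.angle(field, deg=True)

          print(iq_control())   # amplitude ~1, phase ~0 deg after settling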

    • Fast probabilistic file fingerprinting for big data

      PubMed Central

      2013-01-01

      Background Biological data acquisition is raising new challenges, both in data analysis and handling. Not only is it proving hard to analyze the data at the rate it is generated today, but simply reading and transferring data files can be prohibitively slow due to their size. This primarily concerns logistics within and between data centers, but is also important for workstation users in the analysis phase. Common usage patterns, such as comparing and transferring files, are proving computationally expensive and are tying down shared resources. Results We present an efficient method for calculating file uniqueness for large scientific data files, that takes less computational effort than existing techniques. This method, called Probabilistic Fast File Fingerprinting (PFFF), exploits the variation present in biological data and computes file fingerprints by sampling randomly from the file instead of reading it in full. Consequently, it has a flat performance characteristic, correlated with data variation rather than file size. We demonstrate that probabilistic fingerprinting can be as reliable as existing hashing techniques, with provably negligible risk of collisions. We measure the performance of the algorithm on a number of data storage and access technologies, identifying its strengths as well as limitations. Conclusions Probabilistic fingerprinting may significantly reduce the use of computational resources when comparing very large files. Utilisation of probabilistic fingerprinting techniques can increase the speed of common file-related workflows, both in the data center and for workbench analysis. The implementation of the algorithm is available as an open-source tool named pfff, as a command-line tool as well as a C library. The tool can be downloaded from http://biit.cs.ut.ee/pfff. PMID:23445565
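
      The pfff tool's exact sampling scheme is not reproduced here; the core idea can be sketched in a few lines (our illustration: hash a handful of blocks at key-derived pseudo-random offsets, so the cost stays flat in file size):

          import hashlib
          import os
          import random

          def sampled_fingerprint(path, key=0, n_samples=64, block=64):
              """Probabilistic fingerprint: hash a few key-chosen blocks.

              Rather than reading the whole file, sample fixed-size blocks at
              pseudo-random offsets derived from `key`.  Both sides must agree
              on the key and the sampling parameters."""
              size = os.path.getsize(path)
              h = hashlib.sha256(str(size).encode())   # size alone separates many files
              rng = random.Random(key)
              with open(path, "rb") as f:
                  for _ in range(n_samples):
                      f.seek(rng.randrange(max(size - block, 1)))
                      h.update(f.read(block))
              return h.hexdigest()

          with open("demo.bin", "wb") as f:
              f.write(os.urandom(1 << 20))
          print(sampled_fingerprint("demo.bin"))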

    • The Chopthin Algorithm for Resampling

      NASA Astrophysics Data System (ADS)

      Gandy, Axel; Lau, F. Din-Houn

      2016-08-01

      Resampling is a standard step in particle filters and, more generally, in sequential Monte Carlo methods. We present an algorithm, called chopthin, for resampling weighted particles. In contrast to standard resampling methods the algorithm does not produce a set of equally weighted particles; instead it merely enforces an upper bound on the ratio between the weights. Simulation studies show that the chopthin algorithm consistently outperforms standard resampling methods. The algorithm chops up particles with large weights and thins out particles with low weights, hence its name. It implicitly guarantees a lower bound on the effective sample size. The algorithm can be implemented efficiently, making it practically useful. We show that the expected computational effort is linear in the number of particles. Implementations for C++, R (on CRAN), Python and Matlab are available.
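
      The exact chopthin procedure is in the paper; a simplified chop-and-thin sketch (ours, not the published algorithm) conveys the mechanism of bounding the weight ratio while keeping the particle count and total weight fixed:

          import random

          def chop_and_thin(particles, weights, max_ratio=4.0):
              """Simplified chop-and-thin resampling (not the published chopthin).

              While the weight ratio exceeds the bound: chop the heaviest
              particle into two half-weight copies, then thin the two lightest
              into one survivor chosen proportionally to weight, which keeps the
              weighted empirical measure unbiased in expectation."""
              ps, ws = list(particles), list(weights)   # weights must be > 0
              while max(ws) / min(ws) > max_ratio:
                  hi = ws.index(max(ws))                # chop the heaviest
                  ps.append(ps[hi])
                  ws.append(ws[hi] / 2.0)
                  ws[hi] /= 2.0
                  order = sorted(range(len(ws)), key=ws.__getitem__)
                  i, j = order[0], order[1]             # thin the two lightest
                  keep = i if random.random() < ws[i] / (ws[i] + ws[j]) else j
                  drop = j if keep == i else i
                  ws[keep] = ws[i] + ws[j]
                  ps.pop(drop)
                  ws.pop(drop)
              return ps, ws

          random.seed(1)
          ps, ws = chop_and_thin(range(6), [10.0, 1.0, 0.1, 5.0, 0.2, 8.0])
          print(ws, sum(ws))   # ratio now bounded; total weight 24.3 preserved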

    • Key Generation for Fast Inversion of the Paillier Encryption Function

      NASA Astrophysics Data System (ADS)

      Hirano, Takato; Tanaka, Keisuke

      We study fast inversion of the Paillier encryption function. Especially, we focus only on key generation, and do not modify the Paillier encryption function. We propose three key generation algorithms based on the speeding-up techniques for the RSA encryption function. By using our algorithms, the size of the private CRT exponent is half of that of Paillier-CRT. The first algorithm employs the extended Euclidean algorithm. The second algorithm employs factoring algorithms, and can construct the private CRT exponent with low Hamming weight. The third algorithm is a variant of the second one, and has some advantage such as compression of the private CRT exponent and no requirement for factoring algorithms. We also propose the settings of the parameters for these algorithms and analyze the security of the Paillier encryption function by these algorithms against known attacks. Finally, we give experimental results of our algorithms.
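
      The proposed key-generation algorithms are not reproduced in the abstract; for context, a textbook Paillier round trip with g = n + 1 (toy, insecure parameters; our sketch) shows the inversion that small private CRT exponents accelerate:

          import math

          def paillier_roundtrip(p=10007, q=10009, m=123456, r=4242):
              """Textbook Paillier with g = n + 1 (toy, insecure parameters).

              Encrypt:  c = (1 + m*n) * r^n mod n^2
              Decrypt:  m = L(c^lam mod n^2) * lam^{-1} mod n,  L(x) = (x-1)//n
              The paper's key-generation variants accelerate exactly this
              inversion by choosing keys with small private CRT exponents."""
              n, n2 = p * q, (p * q) ** 2
              lam = math.lcm(p - 1, q - 1)
              c = (1 + m * n) * pow(r, n, n2) % n2
              ell = (pow(c, lam, n2) - 1) // n         # equals m*lam mod n
              return ell * pow(lam, -1, n) % n

          print(paillier_roundtrip())   # 123456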

    • Implementation and parallelization of fast matrix multiplication for a fast Legendre transform

      SciTech Connect

      Chen, Wentao

      1993-09-01

      An algorithm was presented by Alpert and Rokhlin for the rapid evaluation of Legendre transforms. The fast algorithm can be expressed as a matrix-vector product followed by a fast cosine transform. Using the Chebyshev expansion to approximate the entries of the matrix and exchanging the order of summations reduces the time complexity of computation from O(n²) to O(n log n), where n is the size of the input vector. Our work has been focused on the implementation and the parallelization of the fast algorithm of matrix-vector product. Results have shown the expected performance of the algorithm. Precision problems which arise as n becomes large can be resolved by doubling the precision of the calculation.

    • 76 FR 17934 - Infrastructure Protection Data Call

      Federal Register 2010, 2011, 2012, 2013, 2014

      2011-03-31

      ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HOMELAND SECURITY Infrastructure Protection Data Call AGENCY: National Protection and Programs Directorate, DHS...: Infrastructure Protection Data Call. OMB Number: 1670-NEW. Frequency: On occasion. Affected Public:...

    • Potential Paradigms and Possible Problems for CALL.

      ERIC Educational Resources Information Center

      Phillips, Martin

      1987-01-01

      Describes three models of CALL (computer assisted language learning) activity--games, the expert system, and the prosthetic approaches. A case is made for CALL development within a more instrumental view of the role of computers. (Author/CB)