CCOMP: An efficient algorithm for complex roots computation of determinantal equations
NASA Astrophysics Data System (ADS)
Zouros, Grigorios P.
2018-01-01
In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP lie three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
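A minimal sketch of the two-stage idea described above, assuming an illustrative 2x2 system matrix (the matrix, grid resolution and tolerances are examples, not part of the published CCOMP code): a coarse scan flags grid points where the minimum-modulus eigenvalue is locally smallest, and each candidate is then refined by bound-constrained 2-D minimization with SciPy.

import numpy as np
from scipy.optimize import minimize

def system_matrix(z):
    # illustrative determinantal system with roots at z = 2 and z = +/-1j
    return np.array([[z - 2.0, 1e-3], [0.0, z**2 + 1.0]], dtype=complex)

def min_modulus_eig(x):
    z = complex(x[0], x[1])
    return np.abs(np.linalg.eigvals(system_matrix(z))).min()

# 1) coarse scan of the prescribed rectangle for candidate points
re_ax = np.linspace(-3, 3, 61)
im_ax = np.linspace(-3, 3, 61)
vals = np.array([[min_modulus_eig((r, i)) for r in re_ax] for i in im_ax])
cands = [(re_ax[j], im_ax[i]) for i in range(1, 60) for j in range(1, 60)
         if vals[i, j] == vals[i-1:i+2, j-1:j+2].min()]   # grid-local minima

# 2) refine each candidate by bound-constrained 2-D minimization
roots = []
for r, i in cands:
    res = minimize(min_modulus_eig, x0=[r, i], method="L-BFGS-B",
                   bounds=[(r - 0.2, r + 0.2), (i - 0.2, i + 0.2)])
    if res.fun < 1e-6:
        roots.append(complex(round(res.x[0], 6), round(res.x[1], 6)))
print(roots)   # the three roots of the example: -1j, 2 and +1j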
On the development of efficient algorithms for three dimensional fluid flow
NASA Technical Reports Server (NTRS)
Maccormack, R. W.
1988-01-01
The difficulties of constructing efficient algorithms for three-dimensional flow are discussed. Reasonable candidates are analyzed and tested, and most are found to have obvious shortcomings. Yet, there is promise that an efficient class of algorithms exists between the severely time-step-size-limited explicit or approximately factored algorithms and the computationally intensive direct inversion of large sparse matrices by Gaussian elimination.
An efficient and accurate 3D displacements tracking strategy for digital volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles
2014-07-01
Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms is first analyzed quantitatively according to the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.
Fast Steerable Principal Component Analysis
Zhao, Zhizhen; Shkolnisky, Yoel; Singer, Amit
2016-01-01
Cryo-electron microscopy nowadays often requires the analysis of hundreds of thousands of 2-D images as large as a few hundred pixels in each direction. Here, we introduce an algorithm that efficiently and accurately performs principal component analysis (PCA) for a large set of 2-D images, and, for each image, the set of its uniform rotations in the plane and their reflections. For a dataset consisting of n images of size L × L pixels, the computational complexity of our algorithm is O(nL³ + L⁴), while existing algorithms take O(nL⁴). The new algorithm computes the expansion coefficients of the images in a Fourier–Bessel basis efficiently using the nonuniform fast Fourier transform. We compare the accuracy and efficiency of the new algorithm with traditional PCA and existing algorithms for steerable PCA. PMID:27570801
Novel and efficient tag SNPs selection algorithms.
Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2014-01-01
SNPs are the most abundant form of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases it is at least ten times faster than the existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than the previously known methods. Tools and web services for haplotype block analysis, integrated with the Hadoop MapReduce framework, are also developed using the proposed algorithm as the computation kernel.
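As a hedged illustration of the tagging idea (a standard greedy set-cover formulation, not necessarily the algorithm of the paper): a SNP tags another when their squared correlation r² exceeds a threshold, and tags are picked greedily until every SNP is covered.

import numpy as np

def greedy_tag_snps(genotypes, r2_threshold=0.8):
    # genotypes: (n_individuals, n_snps) matrix of 0/1/2 allele counts (synthetic here)
    r2 = np.nan_to_num(np.corrcoef(genotypes, rowvar=False) ** 2)
    n = genotypes.shape[1]
    uncovered, tags = set(range(n)), []
    while uncovered:
        # pick the SNP that tags the largest number of still-uncovered SNPs
        best = max(range(n), key=lambda s: sum(r2[s, t] >= r2_threshold for t in uncovered))
        tags.append(best)
        uncovered -= {t for t in uncovered if r2[best, t] >= r2_threshold}
    return tags

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(100, 30))
print(greedy_tag_snps(genotypes))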
Computationally efficient multibody simulations
NASA Technical Reports Server (NTRS)
Ramakrishnan, Jayant; Kumar, Manoj
1994-01-01
Computationally efficient approaches to the solution of the dynamics of multibody systems are presented in this work. The computational efficiency is derived from both the algorithmic and implementational standpoint. Order(n) approaches provide a new formulation of the equations of motion eliminating the assembly and numerical inversion of a system mass matrix as required by conventional algorithms. Computational efficiency is also gained in the implementation phase by the symbolic processing and parallel implementation of these equations. Comparison of this algorithm with existing multibody simulation programs illustrates the increased computational efficiency.
An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System
NASA Astrophysics Data System (ADS)
Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed
PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency in intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the other non-preemptive scheduling algorithms implemented in this paper.
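A minimal sketch of the comparison described above, assuming all jobs arrive at time zero and using illustrative burst times (this is not the PicOS code): a non-preemptive randomized selection policy is scored against FCFS on Average Waiting Time (AWT) and Average Turn-around Time (ATT).

import random

def simulate(bursts, pick):
    ready, t, waits, turnarounds = list(range(len(bursts))), 0.0, [], []
    while ready:
        i = pick(ready)          # scheduling policy chooses the next process
        ready.remove(i)
        waits.append(t)          # all jobs assumed to arrive at t = 0
        t += bursts[i]
        turnarounds.append(t)
    n = len(bursts)
    return sum(waits) / n, sum(turnarounds) / n

bursts = [8, 3, 12, 1, 6, 4]
print("FCFS       AWT, ATT:", simulate(bursts, lambda ready: ready[0]))
print("Randomized AWT, ATT:", simulate(bursts, random.choice))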
A novel minimum cost maximum power algorithm for future smart home energy management.
Singaravelan, A; Kowsalya, M
2017-11-01
With the latest development of smart grid technology, the energy management system can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed for scheduling the electric home appliances efficiently with the aim of reducing cost and peak demand. For an efficient scheduling scheme, the appliances are classified into two types: uninterruptible and interruptible appliances. The problem formulation was constructed based on practical constraints that make the proposed algorithm cope with real-time situations. The formulated problem was identified as a Mixed Integer Linear Programming (MILP) problem, so it was solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with input data available in the existing method. For validating the proposed MCMP algorithm, results were compared with the existing method. The compared results prove that the proposed algorithm efficiently reduces consumer electricity consumption cost and peak demand to an optimum level, with 100% task completion and without sacrificing consumer comfort.
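A hedged sketch of a cost- and peak-aware schedule for uninterruptible appliances (the tariff, loads and peak cap are made up, and this greedy placement only illustrates the scheduling problem, not the MCMP algorithm itself): each appliance is placed in the contiguous window of cheapest total tariff whose added load keeps the aggregate demand under the cap.

prices = [0.08]*6 + [0.15]*4 + [0.12]*4 + [0.20]*6 + [0.10]*4   # 24 hourly tariffs
appliances = [("washer", 2.0, 2), ("dryer", 3.0, 1), ("dishwasher", 1.5, 2)]  # (name, kW, hours)
peak_cap = 4.0                                                   # kW
load = [0.0] * 24
schedule = {}

for name, power, hours in appliances:
    best_start, best_cost = None, float("inf")
    for start in range(24 - hours + 1):
        window = range(start, start + hours)
        if all(load[h] + power <= peak_cap for h in window):      # peak-demand constraint
            cost = sum(prices[h] * power for h in window)
            if cost < best_cost:
                best_start, best_cost = start, cost
    if best_start is None:
        raise RuntimeError(f"no feasible slot for {name}")
    for h in range(best_start, best_start + hours):
        load[h] += power
    schedule[name] = (best_start, round(best_cost, 3))

print(schedule, "peak load:", max(load))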
Signal Partitioning Algorithm for Highly Efficient Gaussian Mixture Modeling in Mass Spectrometry
Polanski, Andrzej; Marczyk, Michal; Pietrowska, Monika; Widlak, Piotr; Polanska, Joanna
2015-01-01
Mixture modeling of mass spectra is an approach with many potential applications including peak detection and quantification, smoothing, de-noising, feature extraction and spectral signal compression. However, existing algorithms do not allow for automated analyses of whole spectra. Therefore, despite the potential advantages of mixture modeling of mass spectra of peptide/protein mixtures and some preliminary results presented in several papers, the mixture modeling approach has so far not been developed to the stage enabling systematic comparisons with existing software packages for proteomic mass spectra analyses. In this paper we present an efficient algorithm for Gaussian mixture modeling of proteomic mass spectra of different types (e.g., MALDI-ToF profiling, MALDI-IMS). The main idea is automated partitioning of the protein mass spectral signal into fragments. The obtained fragments are separately decomposed into Gaussian mixture models. The parameters of the mixture models of the fragments are then aggregated to form the mixture model of the whole spectrum. We compare the elaborated algorithm to existing algorithms for peak detection and we demonstrate the improvements in peak detection efficiency obtained by using Gaussian mixture modeling. We also show applications of the elaborated algorithm to real proteomic datasets of low and high resolution. PMID:26230717
Regional-scale calculation of the LS factor using parallel processing
NASA Astrophysics Data System (ADS)
Liu, Kai; Tang, Guoan; Jiang, Ling; Zhu, A.-Xing; Yang, Jianyi; Song, Xiaodong
2015-05-01
With the increase of data resolution and the increasing application of USLE over large areas, the existing serial implementation of algorithms for computing the LS factor is becoming a bottleneck. In this paper, a parallel processing model based on the message passing interface (MPI) is presented for the calculation of the LS factor, so that massive datasets at a regional scale can be processed efficiently. The parallel model contains algorithms for calculating flow direction, flow accumulation, drainage network, slope, slope length and the LS factor. According to the existence of data dependence, the algorithms are divided into local algorithms and global algorithms. Parallel strategies are designed according to the algorithm characteristics, including a decomposition method for maintaining the integrity of the results, an optimized workflow for reducing the time spent exporting unnecessary intermediate data, and a buffer-communication-computation strategy for improving the communication efficiency. Experiments on a multi-node system show that the proposed parallel model allows efficient calculation of the LS factor at a regional scale with a massive dataset.
Tai, David; Fang, Jianwen
2012-08-27
The large sizes of today's chemical databases require efficient algorithms to perform similarity searches. It can be very time consuming to compare two large chemical databases. This paper seeks to build upon existing research efforts by describing a novel strategy for accelerating existing search algorithms for comparing large chemical collections. The quest for efficiency has focused on developing better indexing algorithms by creating heuristics for searching an individual chemical against a chemical library, detecting and eliminating needless similarity calculations. For comparing two chemical collections, these algorithms simply execute searches for each chemical in the query set sequentially. The strategy presented in this paper achieves a speedup over these algorithms by indexing the set of all query chemicals so that the redundant calculations that arise in the case of sequential searches are eliminated. We implement this novel algorithm in a similarity search program called Symmetric inDexing, or SymDex. SymDex shows over a 232% maximum speedup compared to the state-of-the-art single-query search algorithm on real data for various fingerprint lengths. Considerable speedup is seen even for batch searches where query set sizes are relatively small compared to typical database sizes. To the best of our knowledge, SymDex is the first search algorithm designed specifically for comparing chemical libraries. It can be adapted to most, if not all, existing indexing algorithms and shows potential for accelerating future similarity search algorithms for comparing chemical databases.
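One standard way to eliminate needless similarity calculations (a popcount bound, shown here as a hedged sketch rather than SymDex's actual index): for bit fingerprints a and b, Tanimoto(a, b) <= min(|a|,|b|)/max(|a|,|b|), so pairs whose popcounts already rule out the threshold are skipped without computing any intersection.

import numpy as np

def tanimoto(a, b):
    inter = np.count_nonzero(a & b)
    return inter / (np.count_nonzero(a) + np.count_nonzero(b) - inter)

def batch_search(query_fps, db_fps, threshold=0.7):
    q_cnt, d_cnt = query_fps.sum(axis=1), db_fps.sum(axis=1)
    hits = []
    for i, q in enumerate(query_fps):
        for j, d in enumerate(db_fps):
            lo, hi = sorted((q_cnt[i], d_cnt[j]))
            if hi == 0 or lo / hi < threshold:
                continue                      # bound: threshold unreachable, skip
            if tanimoto(q, d) >= threshold:
                hits.append((i, j))
    return hits

rng = np.random.default_rng(1)
queries = rng.integers(0, 2, size=(5, 256)).astype(bool)
database = rng.integers(0, 2, size=(1000, 256)).astype(bool)
print(len(batch_search(queries, database)), "hits")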
Biological network motif detection and evaluation
2011-01-01
Background: Molecular-level biological data can be assembled into system-level data as biological networks. Network motifs are defined as over-represented small connected subgraphs in networks, and they have been used for many biological applications. Since network motif discovery involves computationally challenging processes, previous algorithms have focused on computational efficiency. However, we believe that the biological quality of network motifs is also very important. Results: We define biological network motifs as biologically significant subgraphs; traditional network motifs are differentiated as structural network motifs in this paper. We develop five algorithms, namely, EDGEGO-BNM, EDGEBETWEENNESS-BNM, NMF-BNM, NMFGO-BNM and VOLTAGE-BNM, for efficient detection of biological network motifs, and introduce several evaluation measures, including motifs included in complexes, motifs included in functional modules, and GO term clustering score. Experimental results show that EDGEGO-BNM and EDGEBETWEENNESS-BNM perform better than existing algorithms, and all of our algorithms are applicable to finding structural network motifs as well. Conclusion: We provide new approaches to finding network motifs in biological networks. Our algorithms efficiently detect biological network motifs and further improve existing algorithms to find high quality structural network motifs, which would be impossible using existing algorithms. The performances of the algorithms are compared based on our new evaluation measures in biological contexts. We believe that our work gives some guidelines for network motif research in biological networks. PMID:22784624
The admissible portfolio selection problem with transaction costs and an improved PSO algorithm
NASA Astrophysics Data System (ADS)
Chen, Wei; Zhang, Wei-Guo
2010-05-01
In this paper, we discuss the portfolio selection problem with transaction costs under the assumption that there exist admissible errors on expected returns and risks of assets. We propose a new admissible efficient portfolio selection model and design an improved particle swarm optimization (PSO) algorithm because traditional optimization algorithms fail to work efficiently for our proposed problem. Finally, we offer a numerical example to illustrate the proposed effective approaches and compare the admissible portfolio efficient frontiers under different constraints.
Efficient option valuation of single and double barrier options
NASA Astrophysics Data System (ADS)
Kabaivanov, Stanimir; Milev, Mariyan; Koleva-Petkova, Dessislava; Vladev, Veselin
2017-12-01
In this paper we present an implementation of a pricing algorithm for single and double barrier options using the Mellin transform with Maximum Entropy Inversion, and assess its suitability for real-world applications. A detailed analysis of the applied algorithm is accompanied by an implementation in C++ that is then compared to existing solutions in terms of efficiency and computational power. We then compare the applied method with existing closed-form solutions and well known methods of pricing barrier options that are based on finite differences.
A High Fuel Consumption Efficiency Management Scheme for PHEVs Using an Adaptive Genetic Algorithm
Lee, Wah Ching; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei; Wu, Chung Kit; Chui, Kwok Tai; Lau, Wing Hong; Leung, Yat Wah
2015-01-01
A high fuel efficiency management scheme for plug-in hybrid electric vehicles (PHEVs) has been developed. In order to achieve fuel consumption reduction, an adaptive genetic algorithm scheme has been designed to adaptively manage the energy resource usage. The objective function of the genetic algorithm is implemented by designing a fuzzy logic controller which closely monitors and resembles the driving conditions and environment of PHEVs, thus trading off between petrol and electricity for optimal driving efficiency. Comparison between calculated results and published data shows that the efficiency achieved by the fuzzified genetic algorithm is 10% better than that of existing schemes. The developed scheme, if fully adopted, would help reduce over 600 tons of CO2 emissions worldwide every day. PMID:25587974
Learning Behavior Characterization with Multi-Feature, Hierarchical Activity Sequences
ERIC Educational Resources Information Center
Ye, Cheng; Segedy, James R.; Kinnebrew, John S.; Biswas, Gautam
2015-01-01
This paper discusses Multi-Feature Hierarchical Sequential Pattern Mining, MFH-SPAM, a novel algorithm that efficiently extracts patterns from students' learning activity sequences. This algorithm extends an existing sequential pattern mining algorithm by dynamically selecting the level of specificity for hierarchically-defined features…
Hwang, I-Shyan
2017-01-01
The K-coverage configuration that guarantees coverage of each location by at least K sensors is highly popular and is extensively used to monitor diversified applications in wireless sensor networks. Long network lifetime and high detection quality are the essentials of such K-covered sleep-scheduling algorithms. However, the existing sleep-scheduling algorithms either cause high cost or cannot preserve the detection quality effectively. In this paper, the Pre-Scheduling-based K-coverage Group Scheduling (PSKGS) and Self-Organized K-coverage Scheduling (SKS) algorithms are proposed to settle the problems in the existing sleep-scheduling algorithms. Simulation results show that our pre-scheduled-based KGS approach enhances the detection quality and network lifetime, whereas the self-organized-based SKS algorithm minimizes the computation and communication cost of the nodes and thereby is energy efficient. Besides, SKS outperforms PSKGS in terms of network lifetime and detection quality as it is self-organized. PMID:29257078
Efficient clustering aggregation based on data fragments.
Wu, Ou; Hu, Weiming; Maybank, Stephen J; Zhu, Mingliang; Li, Bing
2012-06-01
Clustering aggregation, known as clustering ensembles, has emerged as a powerful technique for combining different clustering results to obtain a single better clustering. Existing clustering aggregation algorithms are applied directly to data points, in what is referred to as the point-based approach. The algorithms are inefficient if the number of data points is large. We define an efficient approach for clustering aggregation based on data fragments. In this fragment-based approach, a data fragment is any subset of the data that is not split by any of the clustering results. To establish the theoretical bases of the proposed approach, we prove that clustering aggregation can be performed directly on data fragments under two widely used goodness measures for clustering aggregation taken from the literature. Three new clustering aggregation algorithms are described. The experimental results obtained using several public data sets show that the new algorithms have lower computational complexity than three well-known existing point-based clustering aggregation algorithms (Agglomerative, Furthest, and LocalSearch); nevertheless, the new algorithms do not sacrifice the accuracy.
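A minimal sketch of the fragment construction (the three aggregation algorithms themselves are not reproduced): since a fragment is a maximal set of points that no input clustering splits, it is exactly the set of points sharing the same tuple of labels across all clusterings.

from collections import defaultdict

def fragments(clusterings):
    # clusterings: list of label lists, all over the same n points
    frags = defaultdict(list)
    for point, labels in enumerate(zip(*clusterings)):
        frags[labels].append(point)
    return list(frags.values())

c1 = [0, 0, 0, 1, 1, 1, 2, 2]
c2 = [0, 0, 1, 1, 1, 2, 2, 2]
frs = fragments([c1, c2])
print(frs)                                        # [[0, 1], [2], [3, 4], [5], [6, 7]]
print(len(frs), "fragments instead of", len(c1), "points")

Aggregation can then operate on these (typically far fewer) fragments instead of on individual data points.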
Optimized atom position and coefficient coding for matching pursuit-based image compression.
Shoa, Alireza; Shirani, Shahram
2009-12-01
In this paper, we propose a new encoding algorithm for matching pursuit image coding. We show that coding performance is improved when correlations between atom positions and atom coefficients are both used in encoding. We find the optimum tradeoff between efficient atom position coding and efficient atom coefficient coding and optimize the encoder parameters. Our proposed algorithm outperforms the existing coding algorithms designed for matching pursuit image coding. Additionally, we show that our algorithm results in better rate distortion performance than JPEG 2000 at low bit rates.
TEAM: efficient two-locus epistasis tests in human genome-wide association study.
Zhang, Xiang; Huang, Shunping; Zou, Fei; Wang, Wei
2010-06-15
As a promising tool for identifying genetic markers underlying phenotypic differences, genome-wide association study (GWAS) has been extensively investigated in recent years. In GWAS, detecting epistasis (or gene-gene interaction) is preferable over single-locus study since many diseases are known to be complex traits. A brute force search is infeasible for epistasis detection at the genome-wide scale because of the intensive computational burden. Existing epistasis detection algorithms are designed for datasets consisting of homozygous markers and small sample sizes. In human studies, however, the genotypes may be heterozygous, and the number of individuals can be in the thousands. Thus, existing methods are not readily applicable to human datasets. In this article, we propose an efficient algorithm, TEAM, which significantly speeds up epistasis detection for human GWAS. Our algorithm is exhaustive, i.e. it does not ignore any epistatic interaction. Utilizing the minimum spanning tree structure, the algorithm incrementally updates the contingency tables for epistatic tests without scanning all individuals. Our algorithm has broader applicability and is more efficient than existing methods for large sample studies. It supports any statistical test that is based on contingency tables, and enables both family-wise error rate and false discovery rate controlling. Extensive experiments show that our algorithm only needs to examine a small portion of the individuals to update the contingency tables, and it achieves at least an order of magnitude speed up over the brute force approach.
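A minimal sketch of the kind of contingency-table test TEAM accelerates, for a single SNP pair (the exhaustive pair scan and the incremental, spanning-tree-based table updates are omitted; the data are synthetic): the joint genotype of the pair is cross-tabulated against case/control status and tested with a chi-square statistic.

import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 2000
snp_a = rng.integers(0, 3, n)        # genotypes coded 0/1/2
snp_b = rng.integers(0, 3, n)
pheno = rng.integers(0, 2, n)        # binary phenotype

# 9 x 2 contingency table: joint genotype (3 x 3 combinations) vs. case/control
table = np.zeros((9, 2), dtype=int)
for a, b, p in zip(snp_a, snp_b, pheno):
    table[3 * a + b, p] += 1

chi2, pval, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {pval:.3f}")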
NASA Astrophysics Data System (ADS)
Vecharynski, Eugene; Brabec, Jiri; Shao, Meiyue; Govind, Niranjan; Yang, Chao
2017-12-01
We present two efficient iterative algorithms for solving the linear response eigenvalue problem arising from the time dependent density functional theory. Although the matrix to be diagonalized is nonsymmetric, it has a special structure that can be exploited to save both memory and floating point operations. In particular, the nonsymmetric eigenvalue problem can be transformed into an eigenvalue problem that involves the product of two matrices M and K. We show that, because MK is self-adjoint with respect to the inner product induced by the matrix K, this product eigenvalue problem can be solved efficiently by a modified Davidson algorithm and a modified locally optimal block preconditioned conjugate gradient (LOBPCG) algorithm that make use of the K-inner product. The solution of the product eigenvalue problem yields one component of the eigenvector associated with the original eigenvalue problem. We show that the other component of the eigenvector can be easily recovered in an inexpensive postprocessing procedure. As a result, the algorithms we present here become more efficient than existing methods that try to approximate both components of the eigenvectors simultaneously. In particular, our numerical experiments demonstrate that the new algorithms presented here consistently outperform the existing state-of-the-art Davidson type solvers by a factor of two in both solution time and storage.
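A small numerical check of the key structural fact used above, with illustrative random matrices rather than a TDDFT problem (assuming M and K symmetric positive definite): MK is self-adjoint in the K-inner product <u, v>_K = u^T K v, so its spectrum can be obtained from an ordinary symmetric eigenvalue problem.

import numpy as np
from scipy.linalg import sqrtm, eigh, eigvals

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); M = A @ A.T + n * np.eye(n)   # symmetric positive definite
B = rng.standard_normal((n, n)); K = B @ B.T + n * np.eye(n)

x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = x @ K @ (M @ K @ y)          # <x, MK y>_K
rhs = (M @ K @ x) @ K @ y          # <MK x, y>_K
print(np.isclose(lhs, rhs))        # True: MK is self-adjoint w.r.t. the K-inner product

Khalf = np.real(sqrtm(K))
sym_eigs = eigh(Khalf @ M @ Khalf, eigvals_only=True)   # equivalent symmetric problem
prod_eigs = np.sort(eigvals(M @ K).real)
print(np.allclose(np.sort(sym_eigs), prod_eigs))        # True: same spectrum as MK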
Improved Ant Colony Clustering Algorithm and Its Performance Study
Gao, Wei
2016-01-01
Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533
Efficient greedy algorithms for economic manpower shift planning
NASA Astrophysics Data System (ADS)
Nearchou, A. C.; Giannikos, I. C.; Lagodimos, A. G.
2015-01-01
Consideration is given to the economic manpower shift planning (EMSP) problem, an NP-hard capacity planning problem appearing in various industrial settings including the packing stage of production in process industries and maintenance operations. EMSP aims to determine the manpower needed in each available workday shift of a given planning horizon so as to complete a set of independent jobs at minimum cost. Three greedy heuristics are presented for the EMSP solution. These practically constitute adaptations of an existing algorithm for a simplified version of EMSP which had shown excellent performance in terms of solution quality and speed. Experimentation shows that the new algorithms perform very well in comparison to the results obtained by both the CPLEX optimizer and an existing metaheuristic. Statistical analysis is deployed to rank the algorithms in terms of their solution quality and to identify the effects that critical planning factors may have on their relative efficiency.
Azad, Ariful; Buluç, Aydın
2016-05-16
We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, the cardinality of the matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithms using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200× speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs where our algorithms show good scaling on up to 16,384 cores.
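A dense, single-node sketch of the round-based, matrix-algebraic style of maximal matching (the paper's distributed sparse implementation is not reproduced): in every round each unmatched row proposes to one available column, column conflicts are resolved, and matched rows and columns are removed.

import numpy as np

def maximal_matching(adj):
    A = adj.copy().astype(bool)               # rows x cols adjacency
    row_match = -np.ones(A.shape[0], dtype=int)
    col_match = -np.ones(A.shape[1], dtype=int)
    while True:
        free_rows = np.where((row_match < 0) & A.any(axis=1))[0]
        if free_rows.size == 0:
            break
        proposals = {r: np.flatnonzero(A[r])[0] for r in free_rows}   # one proposal per free row
        winners = {}
        for r, c in proposals.items():
            winners.setdefault(c, r)           # a column accepts only one proposer
        for c, r in winners.items():
            row_match[r], col_match[c] = c, r
            A[r, :] = False                    # remove the matched row and column
            A[:, c] = False
    return row_match

adj = np.array([[1, 1, 0],
                [1, 0, 0],
                [0, 1, 1]])
print(maximal_matching(adj))   # [0, -1, 1]: maximal (row 1's only neighbour is taken), not necessarily maximum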
A Polynomial Time, Numerically Stable Integer Relation Algorithm
NASA Technical Reports Server (NTRS)
Ferguson, Helaman R. P.; Bailey, Daivd H.; Kutler, Paul (Technical Monitor)
1998-01-01
Let x = (x₁, x₂, ..., xₙ) be a vector of real numbers. x is said to possess an integer relation if there exist integers aᵢ, not all zero, such that a₁x₁ + a₂x₂ + ... + aₙxₙ = 0. Beginning in 1977 several algorithms (with proofs) have been discovered to recover the aᵢ given x. The most efficient of these existing integer relation algorithms (in terms of run time and the precision required of the input) has the drawback of being very unstable numerically. It often requires a numeric precision level in the thousands of digits to reliably recover relations in modest-sized test problems. We present here a new algorithm for finding integer relations, which we have named the "PSLQ" algorithm. It is proved in this paper that the PSLQ algorithm terminates with a relation in a number of iterations that is bounded by a polynomial in n. Because this algorithm employs a numerically stable matrix reduction procedure, it is free from the numerical difficulties that plague other integer relation algorithms. Furthermore, its stability admits an efficient implementation with lower run times on average than other algorithms currently in use. Finally, this stability can be used to prove that relation bounds obtained from computer runs using this algorithm are numerically accurate.
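A hedged usage example of integer relation detection, relying on mpmath's built-in PSLQ implementation rather than the paper's own code: given 30-digit values of log 2, log 3 and log 6, the relation log 2 + log 3 - log 6 = 0 is recovered.

from mpmath import mp, mpf, log, pslq

mp.dps = 30                                   # working precision in decimal digits
x = [log(2), log(3), log(6)]
relation = pslq(x)
print(relation)                               # [1, 1, -1] (up to overall sign)
residual = sum(a * xi for a, xi in zip(relation, x))
print(abs(residual) < mpf(10) ** -20)         # True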
Denni Algorithm An Enhanced Of SMS (Scan, Move and Sort) Algorithm
NASA Astrophysics Data System (ADS)
Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.
2017-12-01
Sorting has been a profound area for algorithmic researchers, and many resources are invested in suggesting more efficient sorting algorithms. For this purpose many existing sorting algorithms have been studied in terms of algorithmic complexity. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting has been considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications, algorithms often use sorting as a key subroutine, many essential techniques in algorithm design are represented in the body of sorting algorithms, and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one of the well-known algorithms that makes the process of sorting more economical and efficient is SMS (Scan, Move and Sort), an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm. The Denni algorithm is considered an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm and the results were promising.
Spatio-temporal colour correction of strongly degraded movies
NASA Astrophysics Data System (ADS)
Islam, A. B. M. Tariqul; Farup, Ivar
2011-01-01
The archives of motion pictures represent an important part of precious cultural heritage. Unfortunately, these cinematography collections are vulnerable to distortions such as colour fading, which is beyond the capability of photochemical restoration processes. Spatial colour algorithms such as Retinex and ACE provide a helpful tool in restoring strongly degraded colour films, but there are some challenges associated with these algorithms. We present an automatic colour correction technique for digital colour restoration of strongly degraded movie material. The method is based upon the existing STRESS algorithm. In order to cope with the problem of highly correlated colour channels, we implemented a preprocessing step in which saturation enhancement is performed in a PCA space. Spatial colour algorithms tend to emphasize all details in the images, including dust and scratches. Surprisingly, we found that the presence of these defects does not affect the behaviour of the colour correction algorithm. Although the STRESS algorithm is already in itself more efficient than traditional spatial colour algorithms, it is still computationally expensive. To speed it up further, we went beyond the spatial domain of the frames and extended the algorithm to the temporal domain. This way, we were able to achieve an 80 percent reduction of the computational time compared to processing every single frame individually. We performed two user experiments and found that the visual quality of the resulting frames was significantly better than with existing methods. Thus, our method outperforms the existing ones in terms of both visual quality and computational efficiency.
An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks
Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen
2016-01-01
The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. In the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional multi-layer SNN algorithms in terms of learning efficiency and parameter sensitivity, as demonstrated by the comprehensive experimental results in this paper. PMID:27044001
NASA Astrophysics Data System (ADS)
Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.
2017-08-01
Detecting regular and efficient cyclic models is a demanding activity for data analysts due to the unstructured, vigorous and enormous raw information produced from the web. Many existing approaches generate large candidate patterns in the presence of huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed by considering scalability and performance parameters. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), is used to find frequent sequential patterns from spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA is an algorithm that grows models from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because of fewer levels of database projection compared to existing approaches such as Prefixspan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, such as segments, sequences and individual symbols. ETMA exploits a partition-and-conquer method to find maximal patterns by using symbolic notations. Using this algorithm, we can mine cyclic models in full-series sequential patterns, including subsection series as well. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA only records time-series instances dynamically, in terms of character, series and section approaches respectively. Assessing the extent of the patterns and the efficiency of the reduction and retrieval techniques on synthetic and actual datasets remains an open and challenging mining problem. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining, and earthquake prediction applications. Extensive experimental outcomes illustrate that the algorithms outperform the ECLAT, STNR and MAFIA approaches in terms of efficiency and scalability.
A New Algorithm Using the Non-Dominated Tree to Improve Non-Dominated Sorting.
Gustavsson, Patrik; Syberfeldt, Anna
2018-01-01
Non-dominated sorting is a technique often used in evolutionary algorithms to determine the quality of solutions in a population. The most common algorithm is the Fast Non-dominated Sort (FNS). This algorithm, however, has the drawback that its performance deteriorates when the population size grows. The same drawback also applies to other non-dominated sorting algorithms such as the Efficient Non-dominated Sort with Binary Strategy (ENS-BS). An algorithm suggested to overcome this drawback is the Divide-and-Conquer Non-dominated Sort (DCNS), which works well on a limited number of objectives but deteriorates when the number of objectives grows. This article presents a new, more efficient algorithm called the Efficient Non-dominated Sort with Non-Dominated Tree (ENS-NDT). ENS-NDT is an extension of the ENS-BS algorithm and uses a novel Non-Dominated Tree (NDTree) to speed up the non-dominated sorting. ENS-NDT is able to handle large population sizes and a large number of objectives more efficiently than existing algorithms for non-dominated sorting. In the article, it is shown that with ENS-NDT the runtime of multi-objective optimization algorithms such as the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) can be substantially reduced.
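For reference, a plain Python sketch of the baseline Fast Non-dominated Sort mentioned above (ENS-NDT and its Non-Dominated Tree are not reproduced here); objectives are minimized and the test points are illustrative.

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def fast_non_dominated_sort(points):
    n = len(points)
    dominated_by_me = [[] for _ in range(n)]   # S_i: solutions dominated by i
    dom_count = [0] * n                        # number of solutions dominating i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by_me[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by_me[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

pts = [(1, 5), (2, 3), (3, 1), (2, 4), (4, 4), (5, 5)]
print(fast_non_dominated_sort(pts))   # [[0, 1, 2], [3], [4], [5]]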
DOT National Transportation Integrated Search
2012-12-01
Backcalculation of pavement moduli has been an intensively researched subject for more than four decades. Despite the existence of many backcalculation programs employing different backcalculation procedures and algorithms, accurate inverse of the la...
Apriori Versions Based on MapReduce for Mining Frequent Patterns on Big Data.
Luna, Jose Maria; Padillo, Francisco; Pechenizkiy, Mykola; Ventura, Sebastian
2017-09-27
Pattern mining is one of the most important tasks for extracting meaningful and useful information from raw data. This task aims to extract item-sets that represent any type of homogeneity and regularity in data. Although many efficient algorithms have been developed in this regard, the growing interest in data has caused the performance of existing pattern mining techniques to drop. The goal of this paper is to propose new efficient pattern mining algorithms to work on big data. To this aim, a series of algorithms based on the MapReduce framework and the Hadoop open-source implementation have been proposed. The proposed algorithms can be divided into three main groups. First, two algorithms [Apriori MapReduce (AprioriMR) and iterative AprioriMR] with no pruning strategy are proposed, which extract any existing item-set in data. Second, two algorithms (space pruning AprioriMR and top AprioriMR) that prune the search space by means of the well-known anti-monotone property are proposed. Finally, a last algorithm (maximal AprioriMR) is also proposed for mining condensed representations of frequent patterns. To test the performance of the proposed algorithms, a varied collection of big data datasets have been considered, comprising up to 3 · 10¹⁸ transactions and more than 5 million distinct single items. The experimental stage includes comparisons against highly efficient and well-known pattern mining algorithms. Results reveal the interest of applying MapReduce versions when complex problems are considered, and also the unsuitability of this paradigm when dealing with small data.
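A local, single-machine sketch of the MapReduce counting pattern behind the AprioriMR family (Hadoop, data partitioning and the pruning variants are omitted; the transactions are illustrative): mappers emit candidate itemsets per transaction, a reducer sums the counts, and itemsets below the minimum support are discarded.

from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"beer", "milk"},
    {"bread", "butter", "milk"},
]
min_support = 2

def mapper(transaction, size):
    # emit (itemset, 1) for every candidate itemset of the given size
    for itemset in combinations(sorted(transaction), size):
        yield itemset, 1

def reducer(pairs):
    counts = Counter()
    for itemset, one in pairs:
        counts[itemset] += one
    return {k: v for k, v in counts.items() if v >= min_support}

for size in (1, 2):
    emitted = [kv for t in transactions for kv in mapper(t, size)]
    print(f"frequent {size}-itemsets:", reducer(emitted))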
Efficiently computing exact geodesic loops within finite steps.
Xin, Shi-Qing; He, Ying; Fu, Chi-Wing
2012-06-01
Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resultant loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finitely many steps. Our proposed algorithm takes only O(k) space complexity and O(mk) time complexity (experimentally), where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to the existing geodesic curvature flow methods which compute an approximate geodesic loop within a predefined threshold, our method is exact and can be applied directly to triangular meshes without needing to solve any differential equation with a numerical solver; it can run at interactive speed, e.g., on the order of milliseconds, for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. Actually, our algorithm could run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.
Predictive Cache Modeling and Analysis
2011-11-01
metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed ... Cache Miss Minimization Technology. To efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended our ... it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing
An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals
ERIC Educational Resources Information Center
Verhelst, Norman D.
2008-01-01
Uniform sampling of binary matrices with fixed margins is known as a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
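A sketch of the standard swap ("checkerboard") move used in MCMC samplers for 0-1 matrices with fixed margins (the specific importance-sampling and MCMC variants studied in the article are not reproduced): two rows and two columns are picked at random, and the 2x2 submatrix is swapped whenever it is a checkerboard, which leaves all row and column sums unchanged.

import numpy as np

def swap_chain(M, steps, rng):
    M = M.copy()
    n_rows, n_cols = M.shape
    for _ in range(steps):
        r1, r2 = rng.choice(n_rows, size=2, replace=False)
        c1, c2 = rng.choice(n_cols, size=2, replace=False)
        sub = M[np.ix_([r1, r2], [c1, c2])]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            M[r1, c1], M[r1, c2] = M[r1, c2], M[r1, c1]   # flip the checkerboard
            M[r2, c1], M[r2, c2] = M[r2, c2], M[r2, c1]
    return M

rng = np.random.default_rng(0)
M0 = np.array([[1, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1]])
M1 = swap_chain(M0, steps=1000, rng=rng)
print(M0.sum(axis=1), M1.sum(axis=1))   # row sums preserved
print(M0.sum(axis=0), M1.sum(axis=0))   # column sums preserved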
NASA Astrophysics Data System (ADS)
Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena
2017-02-01
In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving multi-objective engineering design problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, namely, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb rulers (OGRs), at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with other existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization algorithms. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and other existing algorithms in generating OGRs for optical WDM systems. The PHMOBA algorithm has a higher convergence and success rate than the original MOBA. The efficiency improvement of the proposed PHMOBA in generating OGRs up to 20 marks, in terms of ruler length and total optical channel bandwidth (TBW), is 100%, whereas for the original MOBA it is 85%. Finally, the implications for further research are also discussed.
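A small helper sketch independent of the Bat algorithm itself: the feasibility check every OGR search has to perform is that all pairwise mark differences are distinct.

from itertools import combinations

def is_golomb_ruler(marks):
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

print(is_golomb_ruler([0, 1, 4, 9, 11]))   # True: a known optimal 5-mark ruler of length 11
print(is_golomb_ruler([0, 1, 2, 4]))       # False: the differences 1 and 2 both repeat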
Joint Power Charging and Routing in Wireless Rechargeable Sensor Networks.
Jia, Jie; Chen, Jian; Deng, Yansha; Wang, Xingwei; Aghvami, Abdol-Hamid
2017-10-09
The development of wireless power transfer (WPT) technology has inspired the transition from traditional battery-based wireless sensor networks (WSNs) towards wireless rechargeable sensor networks (WRSNs). While extensive efforts have been made to improve charging efficiency, little has been done for routing optimization. In this work, we present a joint optimization model to maximize both charging efficiency and routing structure. By analyzing the structure of the optimization model, we first decompose the problem and propose a heuristic algorithm to find the optimal charging efficiency for the predefined routing tree. Furthermore, by coding the many-to-one communication topology as an individual, we further propose to apply a genetic algorithm (GA) for the joint optimization of both routing and charging. The genetic operations, including tree-based recombination and mutation, are proposed to obtain a fast convergence. Our simulation results show that the heuristic algorithm reduces the number of resident locations and the total moving distance. We also show that our proposed algorithm achieves a higher charging efficiency compared with existing algorithms.
An effective and efficient compression algorithm for ECG signals with irregular periods.
Chou, Hsiao-Hsuan; Chen, Ying-Jui; Shiau, Yu-Chien; Kuo, Te-Son
2006-06-01
This paper presents an effective and efficient preprocessing algorithm for two-dimensional (2-D) electrocardiogram (ECG) compression to better compress irregular ECG signals by exploiting their inter- and intra-beat correlations. To better reveal the correlation structure, we first convert the ECG signal into a proper 2-D representation, or image. This involves a few steps, including QRS detection and alignment, period sorting, and length equalization. The resulting 2-D ECG representation is then ready to be compressed by an appropriate image compression algorithm. We choose the state-of-the-art JPEG2000 for its high efficiency and flexibility. In this way, the proposed algorithm is shown to outperform some existing arts in the literature by simultaneously achieving high compression ratio (CR), low percent root mean squared difference (PRD), low maximum error (MaxErr), and low standard deviation of errors (StdErr). In particular, because the proposed period sorting method rearranges the detected heartbeats into a smoother image that is easier to compress, the algorithm is insensitive to irregular ECG periods. Thus, either irregular ECG signals or QRS false-detection cases can be better compressed. This is a significant improvement over existing 2-D ECG compression methods. Moreover, this algorithm is not tied exclusively to JPEG2000. It can also be combined with other 2-D preprocessing methods or appropriate codecs to enhance compression performance in irregular ECG cases.
An Efficient Next Hop Selection Algorithm for Multi-Hop Body Area Networks
Ayatollahitafti, Vahid; Ngadi, Md Asri; Mohamad Sharif, Johan bin; Abdullahi, Mohammed
2016-01-01
Body Area Networks (BANs) consist of various sensors which gather a patient's vital signs and deliver them to doctors. One of the most significant challenges is the design of an energy-efficient next-hop selection algorithm that satisfies Quality of Service (QoS) requirements for different healthcare applications. In this paper, a novel efficient next-hop selection algorithm is proposed for multi-hop BANs. The algorithm jointly uses the minimum hop count and a link cost function in each node to choose the best next-hop node. The link cost function combines the residual energy, free buffer size, and link reliability of the neighboring nodes, and is used to balance energy consumption and to satisfy QoS requirements in terms of end-to-end delay and reliability. Extensive simulation experiments were performed to evaluate the efficiency of the proposed algorithm using the NS-2 simulator. Simulation results show that our proposed algorithm provides significant improvement in terms of energy consumption, number of packets forwarded, end-to-end delay and packet delivery ratio compared to the existing routing protocol. PMID:26771586
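A minimal sketch of the kind of next-hop rule described above follows; the particular weights and the reciprocal form of the cost function are assumptions, since the abstract does not give the exact expression.

from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    hop_count: int           # hops from this neighbor to the sink
    residual_energy: float   # energy remaining
    free_buffer: int         # free packet slots
    link_reliability: float  # delivery probability on the link, in (0, 1]

def link_cost(n: Neighbor, w=(1.0, 1.0, 1.0)):
    # Lower cost is better: penalize low energy, small buffers, weak links.
    # The reciprocal form and equal weights are illustrative assumptions.
    return (w[0] / max(n.residual_energy, 1e-9)
            + w[1] / max(n.free_buffer, 1)
            + w[2] / max(n.link_reliability, 1e-9))

def select_next_hop(neighbors):
    # Restrict to neighbors on a shortest path (minimum hop count),
    # then pick the one with the lowest link cost.
    best_hops = min(n.hop_count for n in neighbors)
    candidates = [n for n in neighbors if n.hop_count == best_hops]
    return min(candidates, key=link_cost)

nbrs = [Neighbor(1, 2, 0.8, 10, 0.95), Neighbor(2, 2, 0.3, 4, 0.99),
        Neighbor(3, 3, 0.9, 12, 0.90)]
print(select_next_hop(nbrs).node_id)   # -> 1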
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, a user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
EMILiO: a fast algorithm for genome-scale strain design.
Yang, Laurence; Cluett, William R; Mahadevan, Radhakrishnan
2011-05-01
Systems-level design of cell metabolism is becoming increasingly important for renewable production of fuels, chemicals, and drugs. Computational models are improving in the accuracy and scope of predictions, but are also growing in complexity. Consequently, efficient and scalable algorithms are increasingly important for strain design. Previous algorithms helped to consolidate the utility of computational modeling in this field. To meet intensifying demands for high-performance strains, both the number and variety of genetic manipulations involved in strain construction are increasing. Existing algorithms have experienced combinatorial increases in computational complexity when applied toward the design of such complex strains. Here, we present EMILiO, a new algorithm that increases the scope of strain design to include reactions with individually optimized fluxes. Unlike existing approaches that would experience an explosion in complexity to solve this problem, we efficiently generated numerous alternate strain designs producing succinate, l-glutamate and l-serine. This was enabled by successive linear programming, a technique new to the area of computational strain design. Copyright © 2011 Elsevier Inc. All rights reserved.
On the Complexity of Duplication-Transfer-Loss Reconciliation with Non-Binary Gene Trees.
Kordi, Misagh; Bansal, Mukul S
2017-01-01
Duplication-Transfer-Loss (DTL) reconciliation has emerged as a powerful technique for studying gene family evolution in the presence of horizontal gene transfer. DTL reconciliation takes as input a gene family phylogeny and the corresponding species phylogeny, and reconciles the two by postulating speciation, gene duplication, horizontal gene transfer, and gene loss events. Efficient algorithms exist for finding optimal DTL reconciliations when the gene tree is binary. However, gene trees are frequently non-binary. With such non-binary gene trees, the reconciliation problem seeks to find a binary resolution of the gene tree that minimizes the reconciliation cost. Given the prevalence of non-binary gene trees, many efficient algorithms have been developed for this problem in the context of the simpler Duplication-Loss (DL) reconciliation model. Yet, no efficient algorithms exist for DTL reconciliation with non-binary gene trees and the complexity of the problem remains unknown. In this work, we resolve this open question by showing that the problem is, in fact, NP-hard. Our reduction applies to both the dated and undated formulations of DTL reconciliation. By resolving this long-standing open problem, this work will spur the development of both exact and heuristic algorithms for this important problem.
Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan
2014-10-01
It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is not a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
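For illustration, the following sketch constructs a very sparse random projection matrix of the Achlioptas/Li type and applies it to feature vectors; the sparsity parameter, dimensions, and input vector are assumptions and this is not claimed to be the exact construction used in the paper.

import numpy as np

def sparse_measurement_matrix(n_out, n_in, s=None, rng=None):
    """Very sparse random projection: entries are +sqrt(s), 0, -sqrt(s)
    with probabilities 1/(2s), 1-1/s, 1/(2s)."""
    rng = np.random.default_rng(rng)
    s = s if s is not None else int(np.sqrt(n_in))   # illustrative choice
    probs = [1.0 / (2 * s), 1.0 - 1.0 / s, 1.0 / (2 * s)]
    values = np.array([np.sqrt(s), 0.0, -np.sqrt(s)])
    idx = rng.choice(3, size=(n_out, n_in), p=probs)
    return values[idx]

rng = np.random.default_rng(0)
R = sparse_measurement_matrix(50, 10_000, rng=rng)   # compress 10k-D features to 50-D
x = rng.standard_normal(10_000)                      # stand-in for a high-dimensional feature vector
v = R @ x                                            # compressed representation
print(R.shape, np.mean(R == 0.0))                    # the matrix is mostly zeros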
Matsubara, Takashi
2017-01-01
Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm and classifies input data encoded into spatio-temporal spike patterns. Even in supervised classification, and unlike existing algorithms, the algorithm requires no external spikes indicating the desired spike timings. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning. PMID:29209191
Sze, Sing-Hoi; Parrott, Jonathan J; Tarone, Aaron M
2017-12-06
While the continued development of high-throughput sequencing has facilitated studies of entire transcriptomes in non-model organisms, the incorporation of an increasing number of RNA-Seq libraries has made de novo transcriptome assembly difficult. Although algorithms that can assemble a large amount of RNA-Seq data are available, they are generally very memory-intensive and can only be used to construct small assemblies. We develop a divide-and-conquer strategy that allows these algorithms to be utilized, by subdividing a large RNA-Seq data set into small libraries. Each individual library is assembled independently by an existing algorithm, and a merging algorithm is developed to combine these assemblies by picking a subset of high-quality transcripts to form a large transcriptome. When compared to existing algorithms that return a single assembly directly, this strategy achieves accuracy comparable to or higher than that of memory-efficient algorithms that can process a large amount of RNA-Seq data, and accuracy comparable to or slightly lower than that of memory-intensive algorithms that can only be used to construct small assemblies. Our divide-and-conquer strategy allows memory-intensive de novo transcriptome assembly algorithms to be utilized to construct large assemblies.
Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data
CHEN, SHUAI; ZHAO, HONGWEI
2013-01-01
Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm that was originally used to explain the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has quite a small bias in many realistic settings. We conduct numerical studies to examine the finite-sample properties of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
Computationally efficient algorithm for Gaussian Process regression in case of structured samples
NASA Astrophysics Data System (ADS)
Belyaev, M.; Burnaev, E.; Kapushev, Y.
2016-04-01
Surrogate modeling is widely used in many engineering problems. Data sets often have a Cartesian product structure (for instance, factorial designs of experiments with missing points). In such cases the size of the data set can be very large, so one of the most popular approximation algorithms, Gaussian Process regression, can hardly be applied due to its computational complexity. In this paper a computationally efficient approach for constructing Gaussian Process regression for data sets with Cartesian product structure is presented. Efficiency is achieved by exploiting the special structure of the data set and operations with tensors. The proposed algorithm has lower computational and memory complexity than existing algorithms. In this work we also introduce a regularization procedure that takes into account the anisotropy of the data set and avoids degeneracy of the regression model.
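To illustrate the kind of structure exploitation involved, here is a minimal Kronecker-trick sketch for GP regression on a full 2-D grid with no missing points; the kernel choice, hyperparameters, and two-factor setting are assumptions, and this is not the authors' exact algorithm.

import numpy as np

def rbf(x, lengthscale):
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Grid-structured inputs: full Cartesian product x1 x x2.
x1 = np.linspace(0, 1, 40)
x2 = np.linspace(0, 1, 50)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
Y = np.sin(4 * X1) * np.cos(3 * X2) + 0.05 * np.random.default_rng(0).standard_normal(X1.shape)

K1, K2, noise = rbf(x1, 0.2), rbf(x2, 0.2), 0.05 ** 2
lam1, Q1 = np.linalg.eigh(K1)          # K1 = Q1 diag(lam1) Q1^T
lam2, Q2 = np.linalg.eigh(K2)

# Solve (K1 kron K2 + noise*I) alpha = vec(Y) without forming the big matrix,
# using kron(A, B) @ vec(Y) == vec(A @ Y @ B.T) for row-major vectorization.
T = Q1.T @ Y @ Q2
T /= lam1[:, None] * lam2[None, :] + noise
alpha = Q1 @ T @ Q2.T

# Posterior mean at the training grid: (K1 kron K2) alpha.
mean = K1 @ alpha @ K2.T
print(np.abs(mean - Y).max())          # small residual on this smooth surface

The full kernel matrix would be 2000 x 2000 here; the Kronecker factorization only ever touches the 40 x 40 and 50 x 50 factors, which is the kind of saving structured-grid GP methods rely on.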
Queue and stack sorting algorithm optimization and performance analysis
NASA Astrophysics Data System (ADS)
Qian, Mingzhu; Wang, Xiaobao
2018-04-01
Sorting is one of the basic operations in software development, and data structures courses cover many kinds of sorting algorithms. The performance of a sorting algorithm is directly related to the efficiency of the software, and a great deal of research continues to optimize sorting algorithms to make them as efficient as possible. Here the authors further study a sorting algorithm that combines a queue with stacks. The algorithm mainly relies on alternating operations on the queue and the stacks, exploiting their storage properties and thereby avoiding the large number of exchange or move operations needed in traditional sorts. Building on existing work, the improvement and optimization focus on reducing the time complexity. The experimental results show that the improvement is effective; the time complexity, space complexity and stability of the algorithm are also studied. The improved and optimized algorithm is more practical.
A cellular automata based FPGA realization of a new metaheuristic bat-inspired algorithm
NASA Astrophysics Data System (ADS)
Progias, Pavlos; Amanatiadis, Angelos A.; Spataro, William; Trunfio, Giuseppe A.; Sirakoulis, Georgios Ch.
2016-10-01
Optimization algorithms are often inspired by processes occurring in nature, such as animal behavioral patterns. The main concern with implementing such algorithms in software is the large amount of processing power they require. In contrast to software code, which can only perform calculations in a serial manner, an implementation in hardware, exploiting the inherent parallelism of single-purpose processors, can prove to be much more efficient in both speed and energy consumption. Furthermore, the use of Cellular Automata (CA) in such an implementation is efficient both as a model for natural processes and as a computational paradigm that maps well onto hardware. In this paper, we propose a VHDL implementation of a metaheuristic algorithm inspired by the echolocation behavior of bats. More specifically, the CA model is inspired by the metaheuristic algorithm proposed earlier in the literature, which can be considered at least as efficient as other existing optimization algorithms. The function of the FPGA implementation of our algorithm is explained in full detail and results of our simulations are also demonstrated.
General entanglement-assisted transformation for bipartite pure quantum states
NASA Astrophysics Data System (ADS)
Song, Wei; Huang, Yan; Liu, Nai-Le; Chen, Zeng-Bing
2007-01-01
We introduce the general catalysts for pure entanglement transformations under local operations and classical communications in such a way that we disregard the profit and loss of entanglement of the catalysts per se. As such, the possibilities of pure entanglement transformations are greatly expanded. We also design an efficient algorithm to detect whether a k × k general catalyst exists for a given entanglement transformation. This algorithm can also be exploited to witness the existence of standard catalysts.
SLIC superpixels compared to state-of-the-art superpixel methods.
Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine
2012-11-01
Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
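SLIC is widely implemented; for instance, a readily available implementation exists in scikit-image, and a brief usage sketch follows (the sample image and parameter values are illustrative assumptions).

import matplotlib.pyplot as plt
from skimage import data
from skimage.segmentation import slic, mark_boundaries

img = data.astronaut()                         # sample RGB image shipped with scikit-image
# n_segments is the approximate number of superpixels; compactness trades
# color proximity against spatial proximity.
segments = slic(img, n_segments=250, compactness=10)

plt.imshow(mark_boundaries(img, segments))
plt.axis("off")
plt.show()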
2012-01-01
Background Multi-target therapeutics has been shown to be effective for treating complex diseases, and currently, it is a common practice to combine multiple drugs to treat such diseases to optimize the therapeutic outcomes. However, considering the huge number of possible ways to mix multiple drugs at different concentrations, it is practically difficult to identify the optimal drug combination through exhaustive testing. Results In this paper, we propose a novel stochastic search algorithm, called the adaptive reference update (ARU) algorithm, that can provide an efficient and systematic way for optimizing multi-drug cocktails. The ARU algorithm iteratively updates the drug combination to improve its response, where the update is made by comparing the response of the current combination with that of a reference combination, based on which the beneficial update direction is predicted. The reference combination is continuously updated based on the drug response values observed in the past, thereby adapting to the underlying drug response function. To demonstrate the effectiveness of the proposed algorithm, we evaluated its performance based on various multi-dimensional drug functions and compared it with existing algorithms. Conclusions Simulation results show that the ARU algorithm significantly outperforms existing stochastic search algorithms, including the Gur Game algorithm. In fact, the ARU algorithm can more effectively identify potent drug combinations and it typically spends fewer iterations for finding effective combinations. Furthermore, the ARU algorithm is robust to random fluctuations and noise in the measured drug response, which makes the algorithm well-suited for practical drug optimization applications. PMID:23134742
An efficient non-dominated sorting method for evolutionary algorithms.
Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F
2008-01-01
We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN²) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
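For reference, a compact sketch of the standard O(MN²) fast non-dominated sorting used by NSGA-II follows; this is the baseline being improved, not the dominance-tree method of the paper, and objective values are assumed to be minimized.

import numpy as np

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return np.all(a <= b) and np.any(a < b)

def fast_non_dominated_sort(F):
    """F: (N, M) array of objective values (minimization). Returns list of fronts."""
    n = len(F)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    dom_count = np.zeros(n, dtype=int)      # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(F[i], F[j]):
                dominated_by[i].append(j)
            elif i != j and dominates(F[j], F[i]):
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]                       # drop the trailing empty front

F = np.array([[1, 5], [2, 2], [4, 1], [3, 3], [5, 5]])
print(fast_non_dominated_sort(F))            # [[0, 1, 2], [3], [4]]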
Duality quantum algorithm efficiently simulates open quantum systems
Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu
2016-01-01
Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d³), in contrast to O(d⁴) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
Compound Event Barrier Coverage in Wireless Sensor Networks under Multi-Constraint Conditions.
Zhuang, Yaoming; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi
2016-12-24
It is important to monitor compound events via barrier coverage in wireless sensor networks (WSNs). Compound event barrier coverage (CEBC) is a novel coverage problem. Unlike traditional coverage problems, the data for compound event barrier coverage come from different types of sensors, and the problem is subject to multiple constraints under the complex conditions found in real-world applications. The main objective of this paper is to design an efficient algorithm for complex conditions that can combine the compound event confidence. Moreover, a multiplier method based on an active-set strategy (ASMP) is proposed to optimize the multiple constraints in compound event barrier coverage. The algorithm can calculate the coverage ratio efficiently and allocate the sensor resources reasonably in compound event barrier coverage. The proposed algorithm simplifies complex problems to reduce the computational load of the network and improve network efficiency. The simulation results demonstrate that the proposed algorithm is more effective and efficient than existing methods, especially in the allocation of sensor resources.
Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework
Antonopoulos, Georgios C.; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko
2015-01-01
A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We address these shortcomings with novel tile unwrapping and merging algorithms as well as a framework that allows them to be combined in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms as well as previously existing ones were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We show that an appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984
An algorithm of discovering signatures from DNA databases on a computer cluster.
Lee, Hsiao Ping; Sheu, Tzu-Fang
2014-10-05
Signatures are short sequences that are unique and not similar to any other sequence in a database, and they can be used as the basis for identifying different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require the entire database to be loaded into memory, thus restricting the amount of data that they can process and making them unable to handle databases with large amounts of data. Moreover, those algorithms use sequential models and have slower discovery speeds, so their efficiency can be improved. In this research, we introduce a divide-and-conquer strategy for signature discovery and propose a parallel signature discovery algorithm on a computer cluster. The algorithm applies the divide-and-conquer strategy to overcome the limitation of existing algorithms in processing large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases, such as the human whole-genome EST database, which the existing algorithms could not process. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
A novel swarm intelligence algorithm for finding DNA motifs.
Lei, Chengwei; Ruan, Jianhua
2009-01-01
Discovering DNA motifs from co-expressed or co-regulated genes is an important step towards deciphering complex gene regulatory networks and understanding gene functions. Despite significant improvement in the last decade, it still remains one of the most challenging problems in computational molecular biology. In this work, we propose a novel motif finding algorithm that finds consensus patterns using a population-based stochastic optimisation technique called Particle Swarm Optimisation (PSO), which has been shown to be effective in optimising difficult multidimensional problems in continuous domains. We propose to use a word dissimilarity graph to remap the neighborhood structure of the solution space of DNA motifs, and propose a modification of the naive PSO algorithm to accommodate discrete variables. In order to improve efficiency, we also propose several strategies for escaping from local optima and for automatically determining the termination criteria. Experimental results on simulated challenge problems show that our method is both more efficient and more accurate than several existing algorithms. Applications to several sets of real promoter sequences also show that our approach is able to detect known transcription factor binding sites, and outperforms two of the most popular existing algorithms.
Conditional Random Field-Based Offline Map Matching for Indoor Environments
Bataineh, Safaa; Bahillo, Alfonso; Díez, Luis Enrique; Onieva, Enrique; Bataineh, Ikram
2016-01-01
In this paper, we present an offline map matching technique designed for indoor localization systems based on conditional random fields (CRF). The proposed algorithm can refine the results of existing indoor localization systems and match them with the map, using loose coupling between the existing localization system and the proposed map matching technique. The purpose of this research is to investigate the efficiency of using the CRF technique in offline map matching problems for different scenarios and parameters. The algorithm was applied to several real and simulated trajectories of different lengths. The results were then refined and matched with the map using the CRF algorithm. PMID:27537892
Storyline Visualization: A Compelling Way to Understand Patterns over Time and Space
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2017-10-16
Storyline visualization is a compelling way to understand patterns over time and space. Much effort has been spent developing efficient and aesthetically pleasing layout optimization algorithms. But what if those algorithms are optimizing the wrong things? To answer this question, we conducted a design study with different storyline layout algorithms. We found that layouts following our new design principles for storyline visualization outperform existing methods.
ROBNCA: robust network component analysis for recovering transcription factor activities.
Noor, Amina; Ahmad, Aitzaz; Serpedin, Erchin; Nounou, Mohamed; Nounou, Hazem
2013-10-01
Network component analysis (NCA) is an efficient method of reconstructing the transcription factor activity (TFA), which makes use of the gene expression data and prior information available about transcription factor (TF)-gene regulations. Most of the contemporary algorithms either exhibit the drawback of inconsistency and poor reliability, or suffer from prohibitive computational complexity. In addition, the existing algorithms do not possess the ability to counteract the presence of outliers in the microarray data. Hence, robust and computationally efficient algorithms are needed to enable practical applications. We propose ROBust Network Component Analysis (ROBNCA), a novel iterative algorithm that explicitly models the possible outliers in the microarray data. An attractive feature of the ROBNCA algorithm is the derivation of a closed form solution for estimating the connectivity matrix, which was not available in prior contributions. The ROBNCA algorithm is compared with FastNCA and the non-iterative NCA (NI-NCA). ROBNCA estimates the TF activity profiles as well as the TF-gene control strength matrix with a much higher degree of accuracy than FastNCA and NI-NCA, irrespective of varying noise, correlation and/or amount of outliers in case of synthetic data. The ROBNCA algorithm is also tested on Saccharomyces cerevisiae data and Escherichia coli data, and it is observed to outperform the existing algorithms. The run time of the ROBNCA algorithm is comparable with that of FastNCA, and is hundreds of times faster than NI-NCA. The ROBNCA software is available at http://people.tamu.edu/~amina/ROBNCA
An Efficient Conflict Detection Algorithm for Packet Filters
NASA Astrophysics Data System (ADS)
Lee, Chun-Liang; Lin, Guan-Yu; Chen, Yaw-Chung
Packet classification is essential for supporting advanced network services such as firewalls, quality-of-service (QoS), virtual private networks (VPN), and policy-based routing. The rules that routers use to classify packets are called packet filters. If two or more filters overlap, a conflict occurs and leads to ambiguity in packet classification. This study proposes an algorithm that can efficiently detect and resolve filter conflicts using tuple based search. The time complexity of the proposed algorithm is O(nW+s), and the space complexity is O(nW), where n is the number of filters, W is the number of bits in a header field, and s is the number of conflicts. This study uses the synthetic filter databases generated by ClassBench to evaluate the proposed algorithm. Simulation results show that the proposed algorithm can achieve better performance than existing conflict detection algorithms both in time and space, particularly for databases with large numbers of conflicts.
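As a point of comparison, a naive O(n²) pairwise conflict check over range-based filters might look like the following sketch; the field layout and the two example filters are assumptions, and the paper's tuple-based search is more efficient than this brute-force baseline.

from typing import Dict, Tuple

Filter = Dict[str, Tuple[int, int]]   # field name -> inclusive (low, high) range

def fields_overlap(f: Filter, g: Filter) -> bool:
    # Two filters can match the same packet only if every field range intersects.
    return all(f[k][0] <= g[k][1] and g[k][0] <= f[k][1] for k in f)

def is_subset(f: Filter, g: Filter) -> bool:
    # f is a subset of g if every range of f lies inside the corresponding range of g.
    return all(g[k][0] <= f[k][0] and f[k][1] <= g[k][1] for k in f)

def detect_conflicts(filters):
    """Report pairs that overlap without one containing the other (one common
    definition of a filter conflict)."""
    conflicts = []
    for i in range(len(filters)):
        for j in range(i + 1, len(filters)):
            f, g = filters[i], filters[j]
            if fields_overlap(f, g) and not is_subset(f, g) and not is_subset(g, f):
                conflicts.append((i, j))
    return conflicts

f0 = {"src": (0, 255), "dst": (10, 20), "dport": (80, 80)}
f1 = {"src": (100, 300), "dst": (0, 15), "dport": (80, 80)}
print(detect_conflicts([f0, f1]))   # [(0, 1)] -- partial overlap in src and dst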
Fragmenting networks by targeting collective influencers at a mesoscopic level.
Kobayashi, Teruyoshi; Masuda, Naoki
2016-11-25
A practical approach to protecting networks against epidemic processes such as spreading of infectious diseases, malware, and harmful viral information is to remove some influential nodes beforehand to fragment the network into small components. Because determining the optimal order to remove nodes is a computationally hard problem, various approximate algorithms have been proposed to efficiently fragment networks by sequential node removal. Morone and Makse proposed an algorithm employing the non-backtracking matrix of given networks, which outperforms various existing algorithms. In fact, many empirical networks have community structure, compromising the assumption of local tree-like structure on which the original algorithm is based. We develop an immunization algorithm by synergistically combining the Morone-Makse algorithm and coarse graining of the network in which we regard a community as a supernode. In this way, we aim to identify nodes that connect different communities at a reasonable computational cost. The proposed algorithm works more efficiently than the Morone-Makse and other algorithms on networks with community structure.
NASA Technical Reports Server (NTRS)
Dinar, N.
1978-01-01
Several aspects of multigrid methods are briefly described. The main subjects include the development of very efficient multigrid algorithms for systems of elliptic equations (Cauchy-Riemann, Stokes, Navier-Stokes), as well as the development of control and prediction tools (based on local mode Fourier analysis), used to analyze, check and improve these algorithms. Preliminary research on multigrid algorithms for time dependent parabolic equations is also described. Improvements in existing multigrid processes and algorithms for elliptic equations were studied.
A reconsideration of negative ratings for network-based recommendation
NASA Astrophysics Data System (ADS)
Hu, Liang; Ren, Liang; Lin, Wenbin
2018-01-01
Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
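For context, a compact sketch of the standard network-based inference (mass-diffusion / ProbS) scoring on a binary user-item bipartite network follows; the toy rating matrix is an assumption, and the negative-rating extension proposed in the paper is not included here.

import numpy as np

def nbi_scores(A, user):
    """Standard network-based inference (ProbS / mass diffusion).
    A: (n_users, n_items) binary matrix of positive ratings."""
    k_item = A.sum(axis=0)                    # item degrees
    k_user = A.sum(axis=1)                    # user degrees
    # Item-to-item resource transfer matrix: resource on an item spreads to
    # its users and then back to the items, normalized by the degrees.
    W = (A / np.maximum(k_user, 1)[:, None]).T @ (A / np.maximum(k_item, 1)[None, :])
    scores = W @ A[user]                      # final resource on each item
    scores[A[user] > 0] = -np.inf             # do not recommend items already collected
    return scores

A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
print(np.argmax(nbi_scores(A, user=0)))       # top recommended item index for user 0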
Hardware-efficient implementation of digital FIR filter using fast first-order moment algorithm
NASA Astrophysics Data System (ADS)
Cao, Li; Liu, Jianguo; Xiong, Jun; Zhang, Jing
2018-03-01
As the digital finite impulse response (FIR) filter can be transformed into the shift-add form of multiple small-sized first-order moments, based on the existing fast first-order moment algorithm, this paper presents a novel multiplier-less structure to calculate any number of sequential filtering results in parallel. The theoretical analysis on its hardware and time-complexities reveals that by appropriately setting the degree of parallelism and the decomposition factor of a fixed word width, the proposed structure may achieve better area-time efficiency than the existing two-dimensional (2-D) memoryless-based filter. To evaluate the performance concretely, the proposed designs for different taps along with the existing 2-D memoryless-based filters, are synthesized by Synopsys Design Compiler with 0.18-μm SMIC library. The comparisons show that the proposed design has less area-time complexity and power consumption when the number of filter taps is larger than 48.
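To illustrate the idea of rewriting an FIR filter as first-order moments, the sketch below groups taps by their (quantized) coefficient value, so each output sample becomes a sum of value-weighted partial sums rather than one multiplication per tap; the coefficient values are assumptions, and this is a functional illustration rather than the proposed hardware structure.

import numpy as np

def fir_via_first_order_moments(x, coeffs):
    """y[n] = sum_k coeffs[k] * x[n-k], computed by grouping taps that share
    the same integer coefficient value (shift-add / moment formulation)."""
    coeffs = np.asarray(coeffs, dtype=int)     # assume quantized integer taps
    n_taps = len(coeffs)
    x_pad = np.concatenate([np.zeros(n_taps - 1, dtype=x.dtype), x])
    y = np.zeros(len(x), dtype=float)
    groups = {}                                # coefficient value -> tap positions
    for k, c in enumerate(coeffs):
        if c != 0:
            groups.setdefault(int(c), []).append(k)
    for n in range(len(x)):
        window = x_pad[n:n + n_taps][::-1]     # x[n], x[n-1], ..., x[n-n_taps+1]
        # One multiplication per distinct coefficient value, not per tap.
        y[n] = sum(v * window[ks].sum() for v, ks in groups.items())
    return y

x = np.arange(8, dtype=float)
h = [3, 1, 3, 2, 1]                            # toy quantized filter taps
print(np.allclose(fir_via_first_order_moments(x, h), np.convolve(x, h)[:len(x)]))  # True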
An efficient reversible privacy-preserving data mining technology over data streams.
Lin, Chen-Yi; Kao, Yuan-Hung; Lee, Wei-Bin; Chen, Rong-Chang
2016-01-01
With the popularity of smart handheld devices and the emergence of cloud computing, users and companies can save various data, which may contain private data, to the cloud. Topics relating to data security have therefore received much attention. This study focuses on data stream environments and uses the concept of a sliding window to design a reversible privacy-preserving technology to process continuous data in real time, known as a continuous reversible privacy-preserving (CRP) algorithm. Data with CRP algorithm protection can be accurately recovered through a data recovery process. In addition, by using an embedded watermark, the integrity of the data can be verified. The results from the experiments show that, compared to existing algorithms, CRP is better at preserving knowledge and is more effective in terms of reducing information loss and privacy disclosure risk. In addition, it takes far less time for CRP to process continuous data than existing algorithms. As a result, CRP is confirmed as suitable for data stream environments and fulfills the requirements of being lightweight and energy-efficient for smart handheld devices.
Yu, Qiang; Wei, Dingbang; Huo, Hongwei
2018-06-18
Given a set of t n-length DNA sequences, q satisfying 0 < q ≤ 1, and l and d satisfying 0 ≤ d < l < n, the quorum planted motif search (qPMS) finds l-length strings that occur in at least qt input sequences with up to d mismatches and is mainly used to locate transcription factor binding sites in DNA sequences. Existing qPMS algorithms have been able to efficiently process small standard datasets (e.g., t = 20 and n = 600), but they are too time consuming to process large DNA datasets, such as ChIP-seq datasets that contain thousands of sequences or more. We analyze the effects of t and q on the time performance of qPMS algorithms and find that a large t or a small q causes a longer computation time. Based on this information, we improve the time performance of existing qPMS algorithms by selecting a sample sequence set D' with a small t and a large q from the large input dataset D and then executing qPMS algorithms on D'. A sample sequence selection algorithm named SamSelect is proposed. The experimental results on both simulated and real data show (1) that SamSelect can select D' efficiently and (2) that the qPMS algorithms executed on D' can find implanted or real motifs in a significantly shorter time than when executed on D. We improve the ability of existing qPMS algorithms to process large DNA datasets from the perspective of selecting high-quality sample sequence sets so that the qPMS algorithms can find motifs in a short time in the selected sample sequence set D', rather than take an unfeasibly long time to search the original sequence set D. Our motif discovery method is an approximate algorithm.
Time-aware service-classified spectrum defragmentation algorithm for flex-grid optical networks
NASA Astrophysics Data System (ADS)
Qiu, Yang; Xu, Jing
2018-01-01
By employing sophisticated routing and spectrum assignment (RSA) algorithms together with a finer spectrum granularity (the frequency slot) in resource allocation, flex-grid optical networks can accommodate diverse kinds of services with high spectrum-allocation flexibility and resource-utilization efficiency. However, the continuity and contiguity constraints in spectrum allocation may induce isolated, small-sized, unoccupied spectral blocks (known as spectrum fragments) in flex-grid optical networks. Although these spectrum fragments are left unoccupied, they can hardly be utilized directly by subsequent service requests because of their spectral characteristics and the constraints on spectrum allocation. In this way, spectrum fragments may exhaust the spectrum resources available for a coming service request and thus worsen networking performance. Many reactive defragmentation algorithms have therefore been proposed to handle fragmented spectrum resources by re-optimizing the routing paths and spectrum resources of existing services, but such re-optimization may disrupt the traffic of existing services and require extra components. By comparison, proactive defragmentation algorithms (e.g., fragmentation-aware algorithms) suppress spectrum fragments at their generation instead of handling already fragmented resources. Although these proactive algorithms induce no traffic disruption and require no extra components, they leave the generated spectrum fragments unhandled, which greatly limits their defragmentation efficiency. In this paper, by comprehensively considering the characteristics of both reactive and proactive defragmentation algorithms, we propose a time-aware service-classified (TASC) spectrum defragmentation algorithm that simultaneously employs proactive and reactive mechanisms to suppress spectrum fragments with awareness of service types and their duration times. By dividing the spectrum resources into several flexible groups according to service type, and by limiting both the spectrum allocation and the spectrum re-tuning of a given service to the spectrum group corresponding to its type, the proposed TASC algorithm can not only suppress fragment generation inside each spectrum group but also handle the fragments generated between two adjacent groups. In this way, the proposed TASC algorithm achieves higher defragmentation efficiency than both reactive and proactive algorithms. Additionally, since fragment generation is restrained between spectrum groups and the defragmentation procedure is limited to each spectrum group, the traffic disruption induced for existing services can be reduced. Moreover, in the defragmentation procedure the proposed TASC algorithm always re-tunes the spectrum resources of the service with the longest duration time first, which further reduces spectrum fragments because services with longer duration times are more likely to induce spectrum fragments than services with shorter duration times. The simulation results show that the proposed TASC defragmentation algorithm can significantly reduce the number of generated spectrum fragments while improving service blocking performance.
An Orthogonal Evolutionary Algorithm With Learning Automata for Multiobjective Optimization.
Dai, Cai; Wang, Yuping; Ye, Miao; Xue, Xingsi; Liu, Hailin
2016-12-01
Research on multiobjective optimization problems has become one of the hottest topics in intelligent computation. In order to improve the search efficiency of an evolutionary algorithm and maintain the diversity of solutions, in this paper, learning automata (LA) are first used for quantization orthogonal crossover (QOX), and a new fitness function based on decomposition is proposed to achieve these two purposes. Based on these, an orthogonal evolutionary algorithm with LA for complex multiobjective optimization problems with continuous variables is proposed. The experimental results show that in continuous states, the proposed algorithm is able to achieve accurate Pareto-optimal sets and wide Pareto-optimal fronts efficiently. Moreover, the comparison with several existing well-known algorithms: nondominated sorting genetic algorithm II, decomposition-based multiobjective evolutionary algorithm, decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes, multiobjective optimization by LA, and multiobjective immune algorithm with nondominated neighbor-based selection, on 15 multiobjective benchmark problems, shows that the proposed algorithm is able to find more accurate and evenly distributed Pareto-optimal fronts than the compared ones.
A Novel Latin Hypercube Algorithm via Translational Propagation
Pan, Guang; Ye, Pengcheng
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is directly related to the experimental designs used. Optimal Latin hypercube designs are frequently used and have been shown to have good space-filling and projective properties. However, the high cost of constructing them limits their use. In this paper, a methodology for creating novel Latin hypercube designs via translational propagation and a successive local enumeration algorithm (TPSLE) is developed without using formal optimization. The TPSLE algorithm is based on the idea that a near-optimal Latin hypercube design can be constructed by translationally propagating a simple initial block of a few points, generated by the SLE algorithm, as a building block. In fact, the TPSLE algorithm offers a balanced trade-off between efficiency and sampling performance. The proposed algorithm is compared to two existing algorithms and is found to be much more efficient in terms of computation time while having acceptable space-filling and projective properties. PMID:25276844
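For readers unfamiliar with the construction, a plain random Latin hypercube sampler is sketched below; this is the basic design being optimized, not the TPSLE method itself, and the dimensions and sample count are illustrative.

import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Random Latin hypercube design in the unit hypercube: each dimension is
    split into n_samples equal strata, and each stratum is sampled exactly once."""
    rng = np.random.default_rng(rng)
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        strata = rng.permutation(n_samples)              # which stratum each point uses
        samples[:, d] = (strata + rng.random(n_samples)) / n_samples
    return samples

X = latin_hypercube(10, 2, rng=0)
# Projective property: exactly one point falls in each of the 10 bins per axis.
print(np.sort((X[:, 0] * 10).astype(int)))   # [0 1 2 3 4 5 6 7 8 9]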
Bellman Ford algorithm - in Routing Information Protocol (RIP)
NASA Astrophysics Data System (ADS)
Krianto Sulaiman, Oris; Mahmud Siregar, Amir; Nasution, Khairuddin; Haramaini, Tasliyah
2018-04-01
A large-scale network needs routing that can handle a large number of users, and one solution for coping with such a network is to use a routing protocol. There are two types of routing: static and dynamic. Static routes are entered manually by the network administrator, while dynamic routes are formed automatically based on the existing network. Dynamic routing is efficient for large networks because routes are formed automatically. The Routing Information Protocol (RIP) is a dynamic routing protocol that uses the Bellman-Ford algorithm, which searches for the best path across the network by leveraging the cost of each link; with the Bellman-Ford algorithm, RIP can optimize existing networks.
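For reference, a minimal sketch of the Bellman-Ford relaxation that distance-vector protocols such as RIP are built on follows (RIP itself uses hop count as the metric); the example graph and link costs are assumptions.

def bellman_ford(n_nodes, edges, source):
    """edges: list of (u, v, cost) directed links. Returns shortest distances
    from source, or raises if a negative-cost cycle is reachable."""
    INF = float("inf")
    dist = [INF] * n_nodes
    dist[source] = 0
    for _ in range(n_nodes - 1):          # relax every edge up to n-1 times
        updated = False
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
                updated = True
        if not updated:
            break
    for u, v, cost in edges:              # one more pass detects negative cycles
        if dist[u] + cost < dist[v]:
            raise ValueError("negative-cost cycle reachable from source")
    return dist

links = [(0, 1, 1), (1, 2, 2), (0, 2, 5), (2, 3, 1), (1, 3, 6)]
print(bellman_ford(4, links, source=0))   # [0, 1, 3, 4]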
Efficient solution for finding Hamilton cycles in undirected graphs.
Alhalabi, Wadee; Kitanneh, Omar; Alharbi, Amira; Balfakih, Zain; Sarirete, Akila
2016-01-01
The Hamilton cycle problem is closely related to a series of famous problems and puzzles (the traveling salesman problem, the Icosian game) and, because it is NP-complete, it has been extensively studied with different algorithms for solving it. The most efficient algorithm is not known. In this paper, a necessary condition for an arbitrary undirected graph to have a Hamilton cycle is proposed. Based on this condition, a mathematical solution for this problem is developed and several proofs and an algorithmic approach are introduced. The algorithm is successfully implemented on many Hamiltonian and non-Hamiltonian graphs. This provides a new effective approach to solve a problem that is fundamental in graph theory and can influence the manner in which existing applications are used and improved.
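As a baseline for comparison, a straightforward backtracking search for a Hamilton cycle is sketched below; it is exponential in the worst case and is not the condition-based approach of the paper, and the example graph is an assumption.

def hamilton_cycle(adj):
    """adj: dict node -> set of neighbors (undirected). Returns one Hamilton
    cycle as a list of nodes, or None if none exists."""
    nodes = list(adj)
    n = len(nodes)
    start = nodes[0]
    path = [start]
    visited = {start}

    def extend():
        if len(path) == n:
            return start in adj[path[-1]]          # can we close the cycle?
        for nxt in adj[path[-1]]:
            if nxt not in visited:
                path.append(nxt)
                visited.add(nxt)
                if extend():
                    return True
                visited.remove(path.pop())
        return False

    return path + [start] if extend() else None

# 5-cycle with one chord: Hamiltonian.
g = {0: {1, 4}, 1: {0, 2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {0, 3}}
print(hamilton_cycle(g))   # e.g. [0, 1, 2, 3, 4, 0]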
Denoising of polychromatic CT images based on their own noise properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Ji Hye; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr
Purpose: Because of high diagnostic accuracy and fast scan time, computed tomography (CT) has been widely used in various clinical applications. Since the CT scan introduces radiation exposure to patients, however, dose reduction has recently been recognized as an important issue in CT imaging. Low-dose CT, though, increases noise in the image and thereby deteriorates the accuracy of diagnosis. In this paper, the authors develop an efficient denoising algorithm for low-dose CT images obtained using a polychromatic x-ray source. The algorithm is based on two steps: (i) estimation of space-variant noise statistics, which are uniquely determined according to the system geometry and scanned object, and (ii) subsequent novel conversion of the estimated noise to Gaussian noise so that an existing high-performance Gaussian noise filtering algorithm can be directly applied to CT images with non-Gaussian noise. Methods: For efficient polychromatic CT image denoising, the authors first reconstruct an image with the iterative maximum-likelihood polychromatic algorithm for CT to alleviate the beam-hardening problem. They then estimate the space-variant noise variance distribution on the image domain. Since many high-performance denoising algorithms are available for Gaussian noise, image denoising can become much more efficient if they can be used. Hence, the authors propose a novel conversion scheme to transform the estimated space-variant noise to near-Gaussian noise. In the suggested scheme, the authors first convert the image so that its mean and variance have a linear relationship, and then produce a Gaussian image via a variance stabilizing transform. The authors then apply a block matching 4D algorithm that is optimized for noise reduction of the Gaussian image, and reconvert the result to obtain a final denoised image. To examine the performance of the proposed method, an XCAT phantom simulation and a physical phantom experiment were conducted. Results: Both simulation and experimental results show that, unlike the existing denoising algorithms, the proposed algorithm can effectively reduce the noise over the whole region of CT images while preventing degradation of image resolution. Conclusions: To effectively denoise polychromatic low-dose CT images, a novel denoising algorithm is proposed. Because this algorithm is based on the noise statistics of a reconstructed polychromatic CT image, the spatially varying noise on the image is effectively reduced so that the denoised image has homogeneous quality over the image domain. Through a simulation and a real experiment, it is verified that the proposed algorithm delivers considerably better performance than the existing denoising algorithms.
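The core idea of step (ii) can be illustrated with a generic variance-stabilizing transform. The sketch below uses the Anscombe transform for Poisson-like noise and a plain Gaussian smoother as the stand-in denoiser; both are assumptions for illustration rather than the paper's CT-specific noise model or its block-matching filtering stage.

import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(x):
    # Maps Poisson-distributed data to approximately unit-variance Gaussian noise.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse (a bias-corrected inverse is used in practice).
    return (y / 2.0) ** 2 - 3.0 / 8.0

rng = np.random.default_rng(0)
clean = 50.0 + 30.0 * np.sin(np.linspace(0, 3 * np.pi, 256))[None, :] * np.ones((256, 1))
noisy = rng.poisson(clean).astype(float)       # signal-dependent (non-Gaussian) noise

stabilized = anscombe(noisy)                   # noise is now roughly Gaussian, variance ~1
denoised_vst = gaussian_filter(stabilized, sigma=2.0)   # any Gaussian denoiser fits here
denoised = inverse_anscombe(denoised_vst)

print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))  # error drops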
Zhang, Pan; Moore, Cristopher
2014-01-01
Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions, with almost the same modularity, that are poorly correlated with each other. It can also produce illusory ‘‘communities’’ in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian at finite temperature and using an efficient belief propagation algorithm to obtain the consensus of many partitions with high modularity, rather than looking for a single partition that maximizes it. We show analytically and numerically that the proposed algorithm works all of the way down to the detectability transition in networks generated by the stochastic block model. It also performs well on real-world networks, revealing large communities in some networks where previous work has claimed no communities exist. Finally we show that by applying our algorithm recursively, subdividing communities until no statistically significant subcommunities can be found, we can detect hierarchical structure in real-world networks more efficiently than previous methods. PMID:25489096
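As a point of reference for the quantity being optimized, the sketch below computes the standard Newman-Girvan modularity of a given partition from a dense adjacency matrix; it illustrates only the objective, not the belief propagation machinery of the paper, and the function name is hypothetical.

```python
import numpy as np

def modularity(adjacency, labels):
    """Newman-Girvan modularity Q of a partition (illustrative, dense form).

    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j),
    where k_i are node degrees and 2m is the total degree.
    """
    A = np.asarray(adjacency, dtype=float)
    labels = np.asarray(labels)
    k = A.sum(axis=1)                         # node degrees
    two_m = k.sum()                           # twice the number of edges
    same = labels[:, None] == labels[None, :]  # delta(c_i, c_j)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)
```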
Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.
Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan
2014-09-22
A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM), the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with the conventional methods, the SM algorithm reduces the two-dimensional (2-D) Fourier transforms (FTs) of four N*N arrays to 1.25-D FTs of one N*N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
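A minimal sketch of the spatial-multiplexing idea described above: each recorded hologram is multiplied by its own tilted plane wave and the products are summed into one complex function. The carrier frequencies, pixel pitch, and function names are hypothetical and chosen only for illustration.

```python
import numpy as np

def spatially_multiplex(holograms, tilts, pixel_pitch=1.0):
    """Combine several off-axis holograms into one function (illustrative).

    Each real-valued hologram is multiplied by a tilted plane wave
    exp(i*2*pi*(fx*x + fy*y)) with its own carrier frequencies (fx, fy),
    and the products are summed.  The tilt values are hypothetical
    parameters, not those used in the paper.
    """
    n = holograms[0].shape[0]
    y, x = np.mgrid[0:n, 0:n] * pixel_pitch
    combined = np.zeros((n, n), dtype=complex)
    for h, (fx, fy) in zip(holograms, tilts):
        combined += h * np.exp(1j * 2 * np.pi * (fx * x + fy * y))
    return combined
```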
NASA Astrophysics Data System (ADS)
Bolodurina, I. P.; Parfenov, D. I.
2017-10-01
The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structures of a virtual data center, including a level distribution model of the software-defined infrastructure of the virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for the containerization of virtual network functions in the virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
Job Scheduling with Efficient Resource Monitoring in Cloud Datacenter
Loganathan, Shyamala; Mukherjee, Saswati
2015-01-01
Cloud computing is an on-demand computing model, which uses virtualization technology to provide cloud resources to users in the form of virtual machines through the internet. Being an adaptable technology, cloud computing is an excellent alternative for organizations forming their own private clouds. Since resources are limited in these private clouds, maximizing resource utilization and guaranteeing service for the user are the ultimate goals. For that, efficient scheduling is needed. This research reports on an efficient data structure for resource management and a resource scheduling technique in a private cloud environment and discusses a cloud model. The proposed scheduling algorithm considers the types of jobs and the resource availability in its scheduling decision. Finally, we conducted simulations using CloudSim and compared our algorithm with other existing methods, like V-MCT and priority scheduling algorithms. PMID:26473166
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most existing feature selection algorithms do not allow interactive input from users during the feature selection optimization process. This study addresses the question by fixing a few user-input features in the finally selected feature subset and formulating these user-input features as constraints of a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with constraints drawn from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
Using learning automata to determine proper subset size in high-dimensional spaces
NASA Astrophysics Data System (ADS)
Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz
2017-03-01
In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Considering the difficulties of dimension reduction in high-dimensional spaces, FSLA's multi-objective functionality is to determine, in an efficient manner, a feature subset that leads to an appropriate tradeoff between the learning algorithm's accuracy and efficiency. First, using an existing weighting function, the feature list is sorted and subsets of the list of different sizes are selected for consideration. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm and estimates its fitness based on the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification. The results confirm its promising performance in attaining the identified goal.
Perfect blind restoration of images blurred by multiple filters: theory and efficient algorithms.
Harikumar, G; Bresler, Y
1999-01-01
We address the problem of restoring an image from its noisy convolutions with two or more unknown finite impulse response (FIR) filters. We develop theoretical results about the existence and uniqueness of solutions, and show that under some generically true assumptions, both the filters and the image can be determined exactly in the absence of noise, and stably estimated in its presence. We present efficient algorithms to estimate the blur functions and their sizes. These algorithms are of two types, subspace-based and likelihood-based, and are extensions of techniques proposed for the solution of the multichannel blind deconvolution problem in one dimension. We present memory and computation-efficient techniques to handle the very large matrices arising in the two-dimensional (2-D) case. Once the blur functions are determined, they are used in a multichannel deconvolution step to reconstruct the unknown image. The theoretical and practical implications of edge effects, and "weakly exciting" images are examined. Finally, the algorithms are demonstrated on synthetic and real data.
Vecharynski, Eugene; Yang, Chao; Pask, John E.
2015-02-25
Here, we present an iterative algorithm for computing an invariant subspace associated with the algebraically smallest eigenvalues of a large sparse or structured Hermitian matrix A. We are interested in the case in which the dimension of the invariant subspace is large (e.g., over several hundreds or thousands) even though it may still be small relative to the dimension of A. These problems arise from, for example, density functional theory (DFT) based electronic structure calculations for complex materials. The key feature of our algorithm is that it performs fewer Rayleigh–Ritz calculations compared to existing algorithms such as the locally optimal block preconditioned conjugate gradient or the Davidson algorithm. It is a block algorithm, and hence can take advantage of efficient BLAS3 operations and be implemented with multiple levels of concurrency. We discuss a number of practical issues that must be addressed in order to implement the algorithm efficiently on a high performance computer.
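For context, the sketch below shows one generic block subspace-iteration step with a Rayleigh-Ritz projection, the textbook building block whose cost the authors' algorithm is designed to reduce; it is not the authors' method, and the names are hypothetical.

```python
import numpy as np

def rayleigh_ritz_step(A, V):
    """One block subspace-iteration + Rayleigh-Ritz step (illustrative).

    A generic textbook building block for computing the smallest
    eigenpairs of a Hermitian matrix; the paper's algorithm is designed
    to perform *fewer* such Rayleigh-Ritz projections.
    """
    Q, _ = np.linalg.qr(V)              # orthonormalize the current block
    H = Q.conj().T @ (A @ Q)            # projected (Rayleigh quotient) matrix
    w, S = np.linalg.eigh(H)            # small dense eigenproblem
    return Q @ S, w                     # Ritz vectors and Ritz values
```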
NASA Technical Reports Server (NTRS)
Mitra, Debasis; Thomas, Ajai; Hemminger, Joseph; Sakowski, Barbara
2001-01-01
In this research we have developed an algorithm for constraint processing that utilizes relational algebraic operators. Van Beek and others have previously investigated this type of constraint processing within a relational algebraic framework, producing some unique results. Apart from providing new theoretical angles, this approach also gives the opportunity to use existing efficient implementations of relational database management systems as the underlying data structures for any relevant algorithm. Our algorithm enhances that framework. The algorithm is quite general in its current form. Weak heuristics (like forward checking) developed within the constraint-satisfaction problem (CSP) area could also be plugged easily into this algorithm for further efficiency enhancements. The algorithm as developed here is targeted toward a component-oriented modeling problem that we are currently working on, namely, the problem of interactive modeling for batch simulation of engineering systems (IMBSES). However, it could be adapted to many other CSP problems as well. The research addresses the algorithm and many aspects of the IMBSES problem that we are currently handling.
Rigorous RG Algorithms and Area Laws for Low Energy Eigenstates in 1D
NASA Astrophysics Data System (ADS)
Arad, Itai; Landau, Zeph; Vazirani, Umesh; Vidick, Thomas
2017-11-01
One of the central challenges in the study of quantum many-body systems is the complexity of simulating them on a classical computer. A recent advance (Landau et al. in Nat Phys, 2015) gave a polynomial time algorithm to compute a succinct classical description for unique ground states of gapped 1D quantum systems. Despite this progress many questions remained unsolved, including whether there exist efficient algorithms when the ground space is degenerate (and of polynomial dimension in the system size), or for the polynomially many lowest energy states, or even whether such states admit succinct classical descriptions or area laws. In this paper we give a new algorithm, based on a rigorously justified RG type transformation, for finding low energy states for 1D Hamiltonians acting on a chain of n particles. In the process we resolve some of the aforementioned open questions, including giving a polynomial time algorithm for poly(n) degenerate ground spaces and an n^{O(log n)} algorithm for the poly(n) lowest energy states (under a mild density condition). For these classes of systems the existence of a succinct classical description and area laws were not rigorously proved before this work. The algorithms are natural and efficient, and for the case of finding unique ground states for frustration-free Hamiltonians the running time is Õ(n M(n)), where M(n) is the time required to multiply two n × n matrices.
NASA Astrophysics Data System (ADS)
Zhuang, Wei; Mountrakis, Giorgos
2014-09-01
Large footprint waveform LiDAR sensors have been widely used for numerous airborne studies. Ground peak identification in a large footprint waveform is a significant bottleneck in exploiting the full potential of waveform datasets. In the current study, an accurate and computationally efficient algorithm was developed for ground peak identification, called the Filtering and Clustering Algorithm (FICA). The method was evaluated on Land, Vegetation, and Ice Sensor (LVIS) waveform datasets acquired over Central NY. FICA incorporates a set of multi-scale second derivative filters and a k-means clustering algorithm in order to avoid detecting false ground peaks. FICA was tested on five different land cover types (deciduous trees, coniferous trees, shrub, grass and developed area) and showed more accurate results when compared to existing algorithms. More specifically, compared with Gaussian decomposition (GD), the RMSE of ground peak identification by FICA was 2.82 m (5.29 m for GD) in deciduous plots, 3.25 m (4.57 m for GD) in coniferous plots, 2.63 m (2.83 m for GD) in shrub plots, 0.82 m (0.93 m for GD) in grass plots, and 0.70 m (0.51 m for GD) in plots of developed areas. FICA performance was also relatively consistent under various slope and canopy coverage (CC) conditions. In addition, FICA showed better computational efficiency compared to existing methods. FICA's major computational and accuracy advantage is a result of the adopted multi-scale signal processing procedures that concentrate on local portions of the signal, as opposed to the Gaussian decomposition that uses a curve-fitting strategy applied to the entire signal. The FICA algorithm is a good candidate for large-scale implementation on future space-borne waveform LiDAR sensors.
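The sketch below illustrates the two ingredients named in the abstract, multi-scale second-derivative filtering followed by k-means clustering of candidate peaks, on a one-dimensional waveform; the scales, cluster count, and feature choices are hypothetical and do not reproduce the FICA settings used for LVIS data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.cluster.vq import kmeans2

def candidate_ground_peaks(waveform, scales=(2, 4, 8), k=2):
    """Multi-scale second-derivative filtering plus k-means (illustrative).

    Local minima of the smoothed second derivative mark candidate peak
    centers; candidates are clustered by position and amplitude.  The
    scales and k are hypothetical, not the paper's settings.
    """
    w = np.asarray(waveform, dtype=float)
    responses = [gaussian_filter1d(w, s, order=2) for s in scales]
    candidates = set()
    for r in responses:
        idx = np.where((r[1:-1] < r[:-2]) & (r[1:-1] < r[2:]))[0] + 1
        candidates.update(idx.tolist())
    feats = np.array([[i, w[i]] for i in sorted(candidates)], dtype=float)
    if len(feats) < k:                       # too few candidates to cluster
        return feats, np.zeros(len(feats), dtype=int)
    _, labels = kmeans2(feats, k, minit='++')
    return feats, labels
```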
Fast template matching with polynomials.
Omachi, Shinichiro; Omachi, Masako
2007-08-01
Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image itself. The proposed algorithm is effective especially when the width and height of the template image differ from those of the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than that of existing methods.
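A minimal one-dimensional illustration of the polynomial-surrogate idea: a template is fitted once with Legendre polynomials, and the fitted coefficients can then be evaluated on grids of any width without touching the original pixels. The degree and function names are hypothetical, and the paper's algorithm operates on two-dimensional templates.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_fit_1d(signal, degree=8):
    """Least-squares Legendre fit of a 1-D template (illustrative).

    The fitted coefficients form a compact surrogate for the template;
    the degree is a hypothetical choice.
    """
    x = np.linspace(-1.0, 1.0, len(signal))
    return legendre.legfit(x, signal, degree)

def evaluate_at_width(coeffs, width):
    """Resample the fitted polynomial on a grid of a different width."""
    x_new = np.linspace(-1.0, 1.0, width)
    return legendre.legval(x_new, coeffs)
```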
On the evaluation of segmentation editing tools
Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.
2014-01-01
Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063
Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming
NASA Astrophysics Data System (ADS)
Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita
2018-03-01
We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and the Fredkin (controlled-SWAP) gate, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor, with a high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gates form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
An index-based algorithm for fast on-line query processing of latent semantic analysis
Zhang, Mingxi; Li, Pohan; Wang, Wei
2017-01-01
Latent Semantic Analysis (LSA) is widely used for finding the documents whose semantics are similar to a query of keywords. Although LSA yields promising similarity results, existing LSA algorithms involve many unnecessary operations in similarity computation and candidate checking during on-line query processing, which is expensive in terms of time cost and cannot respond to query requests efficiently, especially when the dataset becomes large. In this paper, we study the efficiency problem of on-line query processing for LSA, towards efficiently searching for the documents similar to a given query. We rewrite the similarity equation of LSA in terms of an intermediate value called the partial similarity, which is stored in a dedicated index called the partial index. To reduce the search space, we give an approximate form of the similarity equation and then develop an efficient algorithm for building the partial index, which skips partial similarities lower than a given threshold θ. Based on the partial index, we develop an efficient algorithm called ILSA for supporting fast on-line query processing. The given query is transformed into a pseudo document vector, and the similarities between the query and candidate documents are computed by accumulating the partial similarities obtained from the index nodes corresponding to non-zero entries in the pseudo document vector. Compared to the LSA algorithm, ILSA reduces the time cost of on-line query processing by pruning candidate documents that are not promising and skipping operations that contribute little to similarity scores. Extensive experiments through comparison with LSA have been done, which demonstrate the efficiency and effectiveness of our proposed algorithm. PMID:28520747
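A minimal sketch of the partial-index idea, assuming documents are already represented as LSA-space vectors: per-dimension partial similarities below a threshold are skipped when building the index, and a query accumulates scores only over its non-zero dimensions. All names and the threshold value are hypothetical, and this is not the authors' implementation.

```python
from collections import defaultdict

def build_partial_index(doc_vectors, threshold=0.0):
    """Inverted index of per-dimension partial similarities (illustrative).

    doc_vectors maps a document id to its LSA-space vector; entries whose
    absolute contribution falls below `threshold` are skipped, mimicking
    the pruning idea in the abstract.
    """
    index = defaultdict(list)
    for doc_id, vec in doc_vectors.items():
        for dim, value in enumerate(vec):
            if abs(value) > threshold:
                index[dim].append((doc_id, value))
    return index

def query(index, query_vec):
    """Accumulate similarities only over non-zero query dimensions."""
    scores = defaultdict(float)
    for dim, q in enumerate(query_vec):
        if q == 0.0:
            continue
        for doc_id, partial in index.get(dim, []):
            scores[doc_id] += q * partial
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```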
Efficient algorithms for computing a strong rank-revealing QR factorization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gu, M.; Eisenstat, S.C.
1996-07-01
Given an m × n matrix M with m ≥ n, it is shown that there exists a permutation Π and an integer k such that the QR factorization given by equation (1) reveals the numerical rank of M: the k × k upper-triangular matrix A_k is well conditioned, ‖C_k‖_2 is small, and B_k is linearly dependent on A_k with coefficients bounded by a low-degree polynomial in n. Existing rank-revealing QR (RRQR) algorithms are related to such factorizations and two algorithms are presented for computing them. The new algorithms are nearly as efficient as QR with column pivoting for most problems and take O(mn^2) floating-point operations in the worst case.
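For orientation, the sketch below estimates numerical rank with ordinary QR with column pivoting, the cheaper relative of the strong RRQR factorizations discussed above; the tolerance is hypothetical and the routine is not the algorithm of the paper.

```python
import numpy as np
from scipy.linalg import qr

def numerical_rank_via_pivoted_qr(M, tol=1e-10):
    """Estimate numerical rank from QR with column pivoting (illustrative).

    The magnitudes of the diagonal of R indicate how many columns carry
    significant information; the tolerance is a hypothetical choice.
    """
    _, R, piv = qr(M, mode='economic', pivoting=True)
    diag = np.abs(np.diag(R))
    rank = int(np.sum(diag > tol * diag[0]))
    return rank, piv
```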
Nash equilibrium and multi criterion aerodynamic optimization
NASA Astrophysics Data System (ADS)
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory, and in particular the Nash equilibrium (NE), has gained importance in solving multi-criterion optimization (MCO) engineering problems over the past decade. The solution of an MCO problem can be viewed as an NE under the concept of competitive games. This paper surveys and proposes four efficient algorithms for calculating an NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved based on a fixed-point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single-objective optimization problems. Two numerical examples are presented to verify the proposed algorithms. One is the optimization of mathematical functions to illustrate the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.
Simulation of biochemical reactions with time-dependent rates by the rejection-based algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento, Trento
We address the problem of simulating biochemical reaction networks with time-dependent rates and propose a new algorithm based on our rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)]. The computation for selecting next reaction firings by our time-dependent RSSA (tRSSA) is computationally efficient. Furthermore, the generated trajectory is exact by exploiting the rejection-based mechanism. We benchmark tRSSA on different biological systems with varying forms of reaction rates to demonstrate its applicability and efficiency. We reveal that for nontrivial cases, the selection of reaction firings in existing algorithms introduces approximations because the integration of reaction rates is very computationally demanding and simplifying assumptions are introduced. The selection of the next reaction firing by our approach is easier while preserving the exactness.
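The rejection idea can be illustrated on a single time-dependent rate with the classic thinning (Lewis/Ogata) scheme shown below: candidate firing times are proposed using a constant upper bound on the rate and accepted with probability rate(t)/bound. This is only a one-reaction illustration of the mechanism, not the authors' multi-reaction tRSSA implementation, and the names are hypothetical.

```python
import random

def next_firing_time(rate, rate_upper_bound, t0, t_end):
    """Sample the next firing time of a time-dependent rate by thinning.

    `rate` is a callable rate(t) bounded above by `rate_upper_bound` on
    [t0, t_end].  Candidates are drawn from the bounding homogeneous
    process and accepted with probability rate(t)/bound.
    """
    t = t0
    while t < t_end:
        t += random.expovariate(rate_upper_bound)   # candidate waiting time
        if t >= t_end:
            break
        if random.random() <= rate(t) / rate_upper_bound:
            return t                                # accepted firing time
    return None                                     # no firing before t_end
```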
Loeffler, Troy D; Sepehri, Aliasghar; Chen, Bin
2015-09-08
Reformulation of existing Monte Carlo algorithms used in the study of grand canonical systems has yielded massive improvements in efficiency. Here we present an energy biasing scheme designed to address targeting issues encountered in particle swap moves using sophisticated algorithms such as the Aggregation-Volume-Bias and Unbonding-Bonding methods. Specifically, this energy biasing scheme allows a particle to be inserted into (or removed from) a region that is more acceptable. As a result, this new method showed a several-fold increase in insertion/removal efficiency in addition to an accelerated rate of convergence for the thermodynamic properties of the system.
A novel iris localization algorithm using correlation filtering
NASA Astrophysics Data System (ADS)
Pohit, Mausumi; Sharma, Jitu
2015-06-01
Fast and efficient segmentation of the iris from eye images is a primary requirement for robust, database-independent iris recognition. In this paper we present a new algorithm for computing the inner and outer boundaries of the iris and locating the pupil centre. The pupil-iris boundary computation is based on a correlation filtering approach, whereas the iris-sclera boundary is determined through one-dimensional intensity mapping. The proposed approach is computationally less expensive than existing algorithms such as the Hough transform.
The efficiency of geophysical adjoint codes generated by automatic differentiation tools
NASA Astrophysics Data System (ADS)
Vlasenko, A. V.; Köhl, A.; Stammer, D.
2016-02-01
The accuracy of numerical models that describe complex physical or chemical processes depends on the choice of model parameters. Estimating an optimal set of parameters by optimization algorithms requires knowledge of the sensitivity of the process of interest to model parameters. Typically the sensitivity computation involves differentiation of the model, which can be performed by applying algorithmic differentiation (AD) tools to the underlying numerical code. However, existing AD tools differ substantially in design, legibility and computational efficiency. In this study we show that, for geophysical data assimilation problems of varying complexity, the performance of adjoint codes generated by the existing AD tools (i) Open_AD, (ii) Tapenade, (iii) NAGWare and (iv) Transformation of Algorithms in Fortran (TAF) can be vastly different. Based on simple test problems, we evaluate the efficiency of each AD tool with respect to computational speed, accuracy of the adjoint, the efficiency of memory usage, and the capability of each AD tool to handle modern FORTRAN 90-95 elements such as structures and pointers, which are new elements that either combine groups of variables or provide aliases to memory addresses, respectively. We show that, while operator overloading tools are the only ones suitable for modern codes written in object-oriented programming languages, their computational efficiency lags behind source transformation by orders of magnitude, rendering the application of these modern tools to practical assimilation problems prohibitive. In contrast, the application of source transformation tools appears to be the most efficient choice, allowing handling even large geophysical data assimilation problems. However, they can only be applied to numerical models written in earlier generations of programming languages. Our study indicates that applying existing AD tools to realistic geophysical problems faces limitations that urgently need to be solved to allow the continuous use of AD tools for solving geophysical problems on modern computer architectures.
Deist, T M; Gorissen, B L
2016-02-07
High-dose-rate brachytherapy is a tumor treatment method where a highly radioactive source is brought in close proximity to the tumor. In this paper we develop a simulated annealing algorithm to optimize the dwell times at preselected dwell positions to maximize tumor coverage under dose-volume constraints on the organs at risk. Compared to existing algorithms, our algorithm has advantages in terms of speed and objective value and does not require an expensive general purpose solver. Its success mainly depends on exploiting the efficiency of matrix multiplication and a careful selection of the neighboring states. In this paper we outline its details and make an in-depth comparison with existing methods using real patient data.
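As a toy illustration of the optimization setting, the sketch below anneals non-negative dwell times against a linear dose model with a plain least-squares objective; the paper's actual objective maximizes tumor coverage under dose-volume constraints, so the objective, all parameter values, and the names here are hypothetical.

```python
import numpy as np

def anneal_dwell_times(dose_matrix, target, steps=20000, t0=1.0, alpha=0.9995):
    """Toy simulated annealing over non-negative dwell times (illustrative).

    dose = dose_matrix @ dwell_times; the cost is a least-squares mismatch
    to a target dose, standing in for the paper's coverage objective.
    """
    rng = np.random.default_rng(0)
    t = np.zeros(dose_matrix.shape[1])
    cost = np.linalg.norm(dose_matrix @ t - target)
    temp = t0
    for _ in range(steps):
        cand = t.copy()
        i = rng.integers(len(t))
        cand[i] = max(0.0, cand[i] + rng.normal(scale=0.1))  # local move
        c = np.linalg.norm(dose_matrix @ cand - target)
        # Accept improvements always, worse states with Boltzmann probability.
        if c < cost or rng.random() < np.exp((cost - c) / temp):
            t, cost = cand, c
        temp *= alpha                                        # cooling schedule
    return t, cost
```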
Learning to Predict Combinatorial Structures
NASA Astrophysics Data System (ADS)
Vembu, Shankar
2009-12-01
The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.
Analysis of labor employment assessment on production machine to minimize time production
NASA Astrophysics Data System (ADS)
Hernawati, Tri; Suliawati; Sari Gumay, Vita
2018-03-01
Every company, whether in services or manufacturing, is always trying to improve the efficiency of its resource use. One resource that plays an important role is labor, and workers have different efficiency levels for different jobs. Problems concerning the optimal allocation of labor with different efficiency levels for different jobs are called assignment problems, a special case of linear programming. In this research, an analysis of labor assignment on production machines to minimize production time at PT PDM is carried out using the Hungarian algorithm. The aim of the research is to obtain the optimal assignment of labor to production machines so as to minimize production time. The results show that the existing labor assignment is not suitable, because its completion time is longer than that of the assignment obtained with the Hungarian algorithm. Applying the Hungarian algorithm yields a time saving of 16%.
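A minimal sketch of the assignment step, using the Hungarian-type solver available in SciPy on a made-up completion-time matrix; the matrix values and variable names are hypothetical and not the PT PDM data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical completion-time matrix: time_matrix[i, j] is the time
# worker i needs on machine j (made-up values for illustration only).
time_matrix = np.array([[14, 9, 12],
                        [11, 10, 15],
                        [13, 12, 8]])

rows, cols = linear_sum_assignment(time_matrix)   # Hungarian-type solver
print(list(zip(rows, cols)), time_matrix[rows, cols].sum())
```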
Fair and efficient network congestion control based on minority game
NASA Astrophysics Data System (ADS)
Wang, Zuxi; Wang, Wen; Hu, Hanping; Deng, Zhaozhang
2011-12-01
Low link utilization, RTT unfairness, and unfairness in multi-bottleneck networks are widespread problems in present network congestion control algorithms. Through an analogy between network congestion control and the "El Farol Bar" problem, we establish a congestion control model based on the minority game (MG) and then present a novel network congestion control algorithm based on this model. Simulation results indicate that the proposed algorithm achieves link utilization close to 100%, a zero packet loss rate, and a small queue size. In addition, RTT unfairness and multi-bottleneck unfairness are resolved, achieving max-min fairness in multi-bottleneck networks while efficiently weakening the "ping-pong" oscillation caused by global synchronization.
A new distributed systems scheduling algorithm: a swarm intelligence approach
NASA Astrophysics Data System (ADS)
Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi
2011-12-01
The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the distributed systems scheduling problem. To balance load efficiently, Artificial Bee Colony (ABC) has been applied as the local search in the proposed memetic algorithm. The proposed method has been compared to an existing memetic-based approach in which a Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
Xie, Xiurui; Qu, Hong; Yi, Zhang; Kurths, Jurgen
2017-06-01
The spiking neural network (SNN) is the third generation of neural networks and performs remarkably well in cognitive tasks such as pattern recognition. The temporal neural encoding mechanism found in the biological hippocampus enables SNNs to possess more powerful computation capability than networks with other encoding schemes. However, this temporal encoding approach requires neurons to process information serially in time, which reduces learning efficiency significantly. To keep the powerful computation capability of the temporal encoding mechanism and to overcome its low efficiency in the training of SNNs, a new training algorithm, the accurate synaptic-efficiency adjustment method, is proposed in this paper. Inspired by the selective attention mechanism of the primate visual system, our algorithm selects only the target spike times as attention areas and ignores the voltage states of the untargeted ones, resulting in a significant reduction of training time. Besides, our algorithm employs a cost function based on the voltage difference between the potential of the output neuron and the firing threshold of the SNN, instead of the traditional precise firing-time distance. A normalized spike-timing-dependent-plasticity learning window is applied to assign this error to different synapses to guide their training. Comprehensive simulations are conducted to investigate the learning properties of our algorithm, with input neurons emitting both single spikes and multiple spikes. Simulation results indicate that our algorithm achieves higher learning performance than other existing methods and achieves state-of-the-art efficiency in the training of SNNs.
Adaptive Load-Balancing Algorithms using Symmetric Broadcast Networks
NASA Technical Reports Server (NTRS)
Das, Sajal K.; Harvey, Daniel J.; Biswas, Rupak; Biegel, Bryan A. (Technical Monitor)
2002-01-01
In a distributed computing environment, it is important to ensure that the processor workloads are adequately balanced. Among numerous load-balancing algorithms, a unique approach due to Das and Prasad defines a symmetric broadcast network (SBN) that provides a robust communication pattern among the processors in a topology-independent manner. In this paper, we propose and analyze three efficient SBN-based dynamic load-balancing algorithms, and implement them on an SGI Origin2000. A thorough experimental study with Poisson-distributed synthetic loads demonstrates that our algorithms are effective in balancing system load. By optimizing completion time and idle time, the proposed algorithms are shown to compare favorably with several existing approaches.
ERGC: an efficient referential genome compression algorithm
Saha, Subrata; Rajasekaran, Sanguthevar
2015-01-01
Motivation: Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although a number of standard data compression algorithms exist, they are not efficient at compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems, known as reference-based genome compression. Results: We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. Contact: rajasek@engr.uconn.edu PMID:26139636
Cloud Computing-Based TagSNP Selection Algorithm for Human Genome Data
Hung, Che-Lun; Chen, Wen-Pei; Hua, Guan-Jie; Zheng, Huiru; Tsai, Suh-Jen Jane; Lin, Yaw-Ling
2015-01-01
Single nucleotide polymorphisms (SNPs) play a fundamental role in human genetic variation and are used in medical diagnostics, phylogeny construction, and drug design. They provide the highest-resolution genetic fingerprint for identifying disease associations and human features. Haplotypes are regions of linked genetic variants that are closely spaced on the genome and tend to be inherited together. Genetics research has revealed SNPs within certain haplotype blocks that introduce few distinct common haplotypes into most of the population. Haplotype block structures are used in association-based methods to map disease genes. In this paper, we propose an efficient algorithm for identifying haplotype blocks in the genome. In chromosomal haplotype data retrieved from the HapMap project website, the proposed algorithm identified longer haplotype blocks than an existing algorithm. To enhance its performance, we extended the proposed algorithm into a parallel algorithm that copies data in parallel via the Hadoop MapReduce framework. The proposed MapReduce-paralleled combinatorial algorithm performed well on real-world data obtained from the HapMap dataset; the improvement in computational efficiency was proportional to the number of processors used. PMID:25569088
Rough sets and Laplacian score based cost-sensitive feature selection
Yu, Shenglong; Zhao, Hong
2018-01-01
Cost-sensitive feature selection learning is an important preprocessing step in machine learning and data mining. Recently, most existing cost-sensitive feature selection algorithms are heuristic algorithms, which evaluate the importance of each feature individually and select features one by one. Obviously, these algorithms do not consider the relationship among features. In this paper, we propose a new algorithm for minimal cost feature selection called the rough sets and Laplacian score based cost-sensitive feature selection. The importance of each feature is evaluated by both rough sets and Laplacian score. Compared with heuristic algorithms, the proposed algorithm takes into consideration the relationship among features with locality preservation of Laplacian score. We select a feature subset with maximal feature importance and minimal cost when cost is undertaken in parallel, where the cost is given by three different distributions to simulate different applications. Different from existing cost-sensitive feature selection algorithms, our algorithm simultaneously selects out a predetermined number of “good” features. Extensive experimental results show that the approach is efficient and able to effectively obtain the minimum cost subset. In addition, the results of our method are more promising than the results of other cost-sensitive feature selection algorithms. PMID:29912884
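For reference, the sketch below computes the standard Laplacian score of each feature on a simple unweighted k-nearest-neighbor graph (lower scores indicate features that better preserve local structure); it covers only this one ingredient of the method, not the rough-set importance measure or the cost handling, and the neighborhood size is hypothetical.

```python
import numpy as np

def laplacian_scores(X, n_neighbors=5):
    """Laplacian score of each feature (illustrative, dense implementation).

    An unweighted k-NN graph is built over the samples; each feature is
    scored by how smoothly it varies over that graph after removing its
    degree-weighted mean.
    """
    n, d = X.shape
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(dist[i])[1:n_neighbors + 1]   # skip the point itself
        W[i, nn] = 1.0
    W = np.maximum(W, W.T)                            # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                         # graph Laplacian
    ones = np.ones(n)
    scores = np.empty(d)
    for j in range(d):
        f = X[:, j]
        f = f - (f @ D @ ones) / (ones @ D @ ones)    # remove weighted mean
        scores[j] = (f @ L @ f) / (f @ D @ f + 1e-12)
    return scores
```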
Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.
Lin, Lanny; Goodrich, Michael A
2014-12-01
During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environment elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-Hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic that uses a Gaussian mixture model to prioritize search subregions. The algorithms search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms outperform existing algorithms significantly and can yield efficient paths that yield payoffs near the optimal.
Efficient image compression algorithm for computer-animated images
NASA Astrophysics Data System (ADS)
Yfantis, Evangelos A.; Au, Matthew Y.; Miel, G.
1992-10-01
An image compression algorithm is described. The algorithm is an extension of the run-length image compression algorithm and its implementation is relatively easy. This algorithm was implemented and compared with other existing popular compression algorithms and with the Lempel-Ziv (LZ) coding. The Lempel-Ziv algorithm is available as a utility in the UNIX operating system and is also referred to as the UNIX uncompress. Sometimes our algorithm is best in terms of saving memory space, and sometimes one of the competing algorithms is best. The algorithm is lossless, and the intent is for the algorithm to be used in computer graphics animated images. Comparisons made with the LZ algorithm indicate that the decompression time using our algorithm is faster than that using the LZ algorithm. Once the data are in memory, a relatively simple and fast transformation is applied to uncompress the file.
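As a baseline for the extension described above, the sketch below shows plain run-length encoding and decoding of a one-dimensional pixel sequence; it is the textbook starting point only, not the paper's extended algorithm, and the function names are hypothetical.

```python
def run_length_encode(pixels):
    """Plain run-length encoding of a 1-D pixel sequence (illustrative)."""
    if not pixels:
        return []
    runs, current, count = [], pixels[0], 1
    for p in pixels[1:]:
        if p == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = p, 1
    runs.append((current, count))
    return runs

def run_length_decode(runs):
    """Inverse of run_length_encode."""
    return [value for value, count in runs for _ in range(count)]
```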
An improved conjugate gradient scheme to the solution of least squares SVM.
Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya
2005-03-01
The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of the LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for the LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparison with other existing algorithms.
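For context, the sketch below assembles and solves the standard LS-SVM dual linear system for classification, with a plain direct solve standing in for the letter's conjugate-gradient scheme; the kernel matrix is assumed to be given, and the regularization value and names are hypothetical.

```python
import numpy as np

def lssvm_train(K, y, gamma=1.0):
    """Solve the standard LS-SVM dual linear system directly (illustrative).

    [ 0      y^T            ] [b]       [0]
    [ y   Omega + I/gamma   ] [alpha] = [1]
    with Omega_ij = y_i * y_j * K_ij.  A direct solve stands in for the
    conjugate-gradient scheme of the letter; gamma is hypothetical.
    """
    n = len(y)
    Omega = np.outer(y, y) * K
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b and support values alpha
```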
ECS: efficient communication scheduling for underwater sensor networks.
Hong, Lu; Hong, Feng; Guo, Zhongwen; Li, Zhengbao
2011-01-01
TDMA protocols have attracted a lot of attention for underwater acoustic sensor networks (UWSNs), because of the unique characteristics of acoustic signal propagation such as great energy consumption in transmission, long propagation delay and long communication range. Previous TDMA protocols all allocated transmission time to nodes based on discrete time slots. This paper proposes an efficient continuous time scheduling TDMA protocol (ECS) for UWSNs, including the continuous time based and sender oriented conflict analysis model, the transmission moment allocation algorithm and the distributed topology maintenance algorithm. Simulation results confirm that ECS improves network throughput by 20% on average, compared to existing MAC protocols.
A Fast Deep Learning System Using GPU
2014-06-01
...widely used in data modeling until three decades later when efficient training algorithm for RBM is invented by Hinton [3] and the computing power is...be trained using most of optimization algorithms, such as BP, conjugate gradient descent (CGD) or Levenberg-Marquardt (LM). The advantage of this...
AMLSA Algorithm for Hybrid Precoding in Millimeter Wave MIMO Systems
NASA Astrophysics Data System (ADS)
Liu, Fulai; Sun, Zhenxing; Du, Ruiyan; Bai, Xiaoyu
2017-10-01
In this paper, an effective algorithm will be proposed for hybrid precoding in mmWave MIMO systems, referred to as alternating minimization algorithm with the least squares amendment (AMLSA algorithm). To be specific, for the fully-connected structure, the presented algorithm is exploited to minimize the classical objective function and obtain the hybrid precoding matrix. It introduces an orthogonal constraint to the digital precoding matrix which is amended subsequently by the least squares after obtaining its alternating minimization iterative result. Simulation results confirm that the achievable spectral efficiency of our proposed algorithm is better to some extent than that of the existing algorithm without the least squares amendment. Furthermore, the number of iterations is reduced slightly via improving the initialization procedure.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
Bi, Xia-An; Zhao, Junxia
2017-01-01
With the growth of computer network bandwidth, packet classification algorithms that can deal with large-scale rule sets are urgently needed. Among the existing algorithms, research on packet classification based on hierarchical tries has become an important branch of packet classification research because of its wide practical use. Although the hierarchical trie saves a large amount of storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper formalizes the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, it uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, it constructs a hierarchical trie based on the results of the expectation-maximization clustering. Finally, it conducts both simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms and analyzes the experimental results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm.
Order priors for Bayesian network discovery with an application to malware phylogeny
Oyen, Diane; Anderson, Blake; Sentz, Kari; ...
2017-09-15
Here, Bayesian networks have been used extensively to model and discover dependency relationships among sets of random variables. We learn Bayesian network structure with a combination of human knowledge about the partial ordering of variables and statistical inference of conditional dependencies from observed data. Our approach leverages complementary information from human knowledge and inference from observed data to produce networks that reflect human beliefs about the system as well as to fit the observed data. Applying prior beliefs about partial orderings of variables is an approach distinctly different from existing methods that incorporate prior beliefs about direct dependencies (or edges) in a Bayesian network. We provide an efficient implementation of the partial-order prior in a Bayesian structure discovery learning algorithm, as well as an edge prior, showing that both priors meet the local modularity requirement necessary for an efficient Bayesian discovery algorithm. In benchmark studies, the partial-order prior improves the accuracy of Bayesian network structure learning as well as the edge prior, even though order priors are more general. Our primary motivation is in characterizing the evolution of families of malware to aid cyber security analysts. For the problem of malware phylogeny discovery, we find that our algorithm, compared to existing malware phylogeny algorithms, more accurately discovers true dependencies that are missed by other algorithms.
Pal, Partha S; Kar, R; Mandal, D; Ghoshal, S P
2015-11-01
This paper presents an efficient approach to identify different stable and practically useful Hammerstein models, as well as an unstable nonlinear process along with its stable closed-loop counterpart, with the help of an evolutionary algorithm, the Colliding Bodies Optimization (CBO) algorithm. The performance measures of the CBO-based optimization approach, such as precision and accuracy, are justified by the minimum output mean square error (MSE), which signifies that the bias and variance in the output domain are also minimal. It is also observed that optimizing the output MSE in the presence of outliers consistently results in very close estimation of the output parameters, which justifies the general applicability of the CBO algorithm to the system identification problem and establishes the practical usefulness of the applied approach. The optimal MSE values, computational times, and MSE statistics are all found to be superior to those of other, similar stochastic-algorithm-based approaches reported in the recent literature, which establishes the robustness and efficiency of the applied CBO-based identification scheme.
Semi-supervised and unsupervised extreme learning machines.
Huang, Gao; Song, Shiji; Gupta, Jatinder N D; Wu, Cheng
2014-12-01
Extreme learning machines (ELMs) have proven to be efficient and effective learning mechanisms for pattern classification and regression. However, ELMs are primarily applied to supervised learning problems. Only a few existing research papers have used ELMs to explore unlabeled data. In this paper, we extend ELMs to both semi-supervised and unsupervised tasks based on manifold regularization, thus greatly expanding the applicability of ELMs. The key advantages of the proposed algorithms are as follows: 1) both the semi-supervised ELM (SS-ELM) and the unsupervised ELM (US-ELM) exhibit the learning capability and computational efficiency of ELMs; 2) both algorithms naturally handle multiclass classification or multicluster clustering; and 3) both algorithms are inductive and can handle unseen data at test time directly. Moreover, it is shown in this paper that all the supervised, semi-supervised, and unsupervised ELMs can actually be put into a unified framework. This provides new perspectives for understanding the mechanism of random feature mapping, which is the key concept in ELM theory. An empirical study on a wide range of data sets demonstrates that the proposed algorithms are competitive with state-of-the-art semi-supervised or unsupervised learning algorithms in terms of accuracy and efficiency.
He, Bo; Zhang, Shujing; Yan, Tianhong; Zhang, Tao; Liang, Yan; Zhang, Hongjin
2011-01-01
Mobile autonomous systems are very important for marine scientific investigation and military applications. Many algorithms have been studied to deal with the computational efficiency problem required for large scale simultaneous localization and mapping (SLAM) and its related accuracy and consistency. Among these methods, submap-based SLAM is one of the more effective. By combining the strengths of two popular mapping algorithms, the Rao-Blackwellised particle filter (RBPF) and the extended information filter (EIF), this paper presents combined SLAM, an efficient submap-based solution to the SLAM problem in a large scale environment. RBPF-SLAM is used to produce local maps, which are periodically fused into an EIF-SLAM algorithm. RBPF-SLAM can avoid linearization of the robot model during operation and provide robust data association, while EIF-SLAM can improve the overall computational speed and avoid the tendency of RBPF-SLAM to be over-confident. In order to further improve the computational speed in a real time environment, a binary-tree-based decision-making strategy is introduced. Simulation experiments show that the proposed combined SLAM algorithm significantly outperforms currently existing algorithms in terms of accuracy and consistency, as well as computing efficiency. Finally, the combined SLAM algorithm is experimentally validated in a real environment by using the Victoria Park dataset.
Uni10: an open-source library for tensor network algorithms
NASA Astrophysics Data System (ADS)
Kao, Ying-Jer; Hsieh, Yun-Da; Chen, Pochung
2015-09-01
We present an object-oriented open-source library for developing tensor network algorithms written in C++ called Uni10. With Uni10, users can build a symmetric tensor from a collection of bonds, while the bonds are constructed from a list of quantum numbers associated with different quantum states. It is easy to label and permute the indices of the tensors and access a block associated with a particular quantum number. Furthermore, a network class is used to describe arbitrary tensor network structures and to perform network contractions efficiently. We give an overview of the basic structure of the library and the hierarchy of the classes. We present examples of the construction of a spin-1 Heisenberg Hamiltonian and the implementation of the tensor renormalization group algorithm to illustrate the basic usage of the library. The library described here is particularly well suited for exploring and rapidly prototyping novel tensor network algorithms and for implementing highly efficient codes for existing algorithms.
Shah, Sohil Atul
2017-01-01
Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838
An Energy-Efficient Game-Theory-Based Spectrum Decision Scheme for Cognitive Radio Sensor Networks
Salim, Shelly; Moh, Sangman
2016-01-01
A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. In this paper, we propose an energy-efficient game-theory-based spectrum decision (EGSD) scheme for CRSNs to prolong the network lifetime. Note that energy efficiency is the most important design consideration in CRSNs because it determines the network lifetime. The central part of the EGSD scheme consists of two spectrum selection algorithms: random selection and game-theory-based selection. The EGSD scheme also includes a clustering algorithm, spectrum characterization with a Markov chain, and cluster member coordination. Our performance study shows that EGSD outperforms the existing popular framework in terms of network lifetime and coordination overhead. PMID:27376290
An Energy-Efficient Game-Theory-Based Spectrum Decision Scheme for Cognitive Radio Sensor Networks.
Salim, Shelly; Moh, Sangman
2016-06-30
A cognitive radio sensor network (CRSN) is a wireless sensor network in which sensor nodes are equipped with cognitive radio. In this paper, we propose an energy-efficient game-theory-based spectrum decision (EGSD) scheme for CRSNs to prolong the network lifetime. Note that energy efficiency is the most important design consideration in CRSNs because it determines the network lifetime. The central part of the EGSD scheme consists of two spectrum selection algorithms: random selection and game-theory-based selection. The EGSD scheme also includes a clustering algorithm, spectrum characterization with a Markov chain, and cluster member coordination. Our performance study shows that EGSD outperforms the existing popular framework in terms of network lifetime and coordination overhead.
Shi, Juanfei; Calveras, Anna; Cheng, Ye; Liu, Kai
2013-05-15
The extensive usage of wireless sensor networks (WSNs) has led to the development of many power- and energy-efficient routing protocols. Cooperative routing in WSNs can improve performance in these types of networks. In this paper we discuss the existing proposals and propose a routing algorithm for wireless sensor networks called Power Efficient Location-based Cooperative Routing with Transmission Power-upper-limit (PELCR-TP). The algorithm is based on the principle of minimum link power and aims to take advantage of node cooperation to make the links work well in WSNs with a low transmission power. In the proposed scheme, with a determined transmission power upper limit, nodes find the most appropriate next nodes and single-relay nodes with the proposed algorithm. Moreover, this proposal subtly avoids non-working nodes, because we add a Bad nodes Avoidance Strategy (BAS). Simulation results show that the proposed algorithm with BAS can significantly improve the performance in reducing the overall link power, enhancing the transmission success rate and decreasing the retransmission rate.
Shi, Juanfei; Calveras, Anna; Cheng, Ye; Liu, Kai
2013-01-01
The extensive usage of wireless sensor networks (WSNs) has led to the development of many power- and energy-efficient routing protocols. Cooperative routing in WSNs can improve performance in these types of networks. In this paper we discuss the existing proposals and propose a routing algorithm for wireless sensor networks called Power Efficient Location-based Cooperative Routing with Transmission Power-upper-limit (PELCR-TP). The algorithm is based on the principle of minimum link power and aims to take advantage of node cooperation to make the links work well in WSNs with a low transmission power. In the proposed scheme, with a determined transmission power upper limit, nodes find the most appropriate next nodes and single-relay nodes with the proposed algorithm. Moreover, this proposal subtly avoids non-working nodes, because we add a Bad nodes Avoidance Strategy (BAS). Simulation results show that the proposed algorithm with BAS can significantly improve the performance in reducing the overall link power, enhancing the transmission success rate and decreasing the retransmission rate. PMID:23676625
Performance of b-jet identification in the ATLAS experiment
Aad, G; Abbott, B; Abdallah, J; ...
2016-04-04
The identification of jets containing b hadrons is important for the physics programme of the ATLAS experiment at the Large Hadron Collider. Several algorithms to identify jets containing b hadrons are described, ranging from those based on the reconstruction of an inclusive secondary vertex or the presence of tracks with large impact parameters to combined tagging algorithms making use of multi-variate discriminants. An independent b-tagging algorithm based on the reconstruction of muons inside jets as well as the b-tagging algorithm used in the online trigger are also presented. The b-jet tagging efficiency, the c-jet tagging efficiency and the mistag rate for light flavour jets in data have been measured with a number of complementary methods. The calibration results are presented as scale factors defined as the ratio of the efficiency (or mistag rate) in data to that in simulation. In the case of b jets, where more than one calibration method exists, the results from the various analyses have been combined taking into account the statistical correlation as well as the correlation of the sources of systematic uncertainty.
BCM: toolkit for Bayesian analysis of Computational Models using samplers.
Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A
2016-10-21
Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.
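As a point of reference for what sampler-based posterior estimation involves, the sketch below implements a minimal random-walk Metropolis sampler in Python for a toy one-parameter model. It is an illustration only and makes no claim about BCM's interface or any of its eleven samplers; the model, prior and step size are assumptions chosen for the example.

```python
import numpy as np

def metropolis_hastings(log_posterior, x0, n_samples=5000, step=0.5, seed=0):
    """Generic random-walk Metropolis sampler (illustration only, not BCM's API)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    logp = log_posterior(x)
    samples = np.empty((n_samples, x.size))
    for i in range(n_samples):
        proposal = x + step * rng.standard_normal(x.size)
        logp_new = log_posterior(proposal)
        if np.log(rng.random()) < logp_new - logp:   # accept with probability min(1, ratio)
            x, logp = proposal, logp_new
        samples[i] = x
    return samples

# Toy model: posterior of a rate parameter k given noisy observations y = k*t + noise
t = np.linspace(0.0, 10.0, 20)
y = 0.7 * t + 0.3 * np.random.default_rng(1).standard_normal(t.size)

def log_posterior(theta):
    k = theta[0]
    log_prior = -0.5 * (k / 10.0) ** 2                    # broad Gaussian prior on k
    log_lik = -0.5 * np.sum((y - k * t) ** 2) / 0.3 ** 2  # Gaussian likelihood
    return log_prior + log_lik

chain = metropolis_hastings(log_posterior, x0=[0.0])
print("posterior mean of k:", chain[1000:].mean())
```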
Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting
2015-01-01
A Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside or around the human body that monitor vital signals, such as the Electroencephalogram (EEG), Photoplethysmography (PPG), Electrocardiogram (ECG), etc. Each sensor node in a BSN delivers major information; therefore, it is very important to provide data confidentiality and security. All existing approaches to securing BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume large amounts of energy, power and memory during data transmission. However, it is indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSNs. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) for a simple key generation process to secure the BSN. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), the Data Encryption Standard (DES) and Rivest Shamir Adleman (RSA). Simulation is performed in Matlab and the results suggest that the proposed algorithm is quite efficient in terms of transmission time utilization, average remaining energy and total power consumption. PMID:26131666
Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting
2015-06-26
A Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside or around the human body that monitor vital signals, such as the Electroencephalogram (EEG), Photoplethysmography (PPG), Electrocardiogram (ECG), etc. Each sensor node in a BSN delivers major information; therefore, it is very important to provide data confidentiality and security. All existing approaches to securing BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume large amounts of energy, power and memory during data transmission. However, it is indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSNs. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) for a simple key generation process to secure the BSN. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), the Data Encryption Standard (DES) and Rivest Shamir Adleman (RSA). Simulation is performed in Matlab and the results suggest that the proposed algorithm is quite efficient in terms of transmission time utilization, average remaining energy and total power consumption.
n-Dimensional Discrete Cat Map Generation Using Laplace Expansions.
Wu, Yue; Hua, Zhongyun; Zhou, Yicong
2016-11-01
Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method of n-dimensional (nD) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the nD Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates nD Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of nD Cat maps.
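To make the idea of a Cat map as an area-preserving integer permutation concrete, the sketch below applies the classic two-dimensional Arnold cat map to the pixel grid of a square image. It illustrates only the familiar 2D special case, not the paper's Laplace-expansion construction of nD Cat matrices.

```python
import numpy as np

def cat_map_permute(img, iterations=1):
    """Scramble a square image with the classic 2D cat map (x, y) -> (x + y, x + 2y) mod N.

    The underlying matrix [[1, 1], [1, 2]] has determinant 1, so the map is a
    bijection on the N x N pixel grid and the scrambling is reversible."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "square image required"
    out = img.copy()
    ys, xs = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for _ in range(iterations):
        new_x = (xs + ys) % n
        new_y = (xs + 2 * ys) % n
        scrambled = np.empty_like(out)
        scrambled[new_y, new_x] = out   # each source pixel lands on a unique target
        out = scrambled
    return out

print(cat_map_permute(np.arange(16).reshape(4, 4), iterations=2))
```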
A generalized Condat's algorithm of 1D total variation regularization
NASA Astrophysics Data System (ADS)
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2017-09-01
A common way to solve the denoising problem is to utilize total variation (TV) regularization. Many efficient numerical algorithms have been developed for solving the TV regularization problem. Condat described a fast direct algorithm to compute the processed 1D signal. There also exists a direct linear-time algorithm for 1D TV denoising, referred to as the taut string algorithm. Condat's algorithm is based on a dual problem to the 1D TV regularization. In this paper, we propose a variant of Condat's algorithm based on the direct 1D TV regularization problem. The usage of Condat's algorithm with the taut string approach leads to a clear geometric description of the extremal function. Computer simulation results are provided to illustrate the performance of the proposed algorithm for restoration of degraded signals.
Photon-efficient super-resolution laser radar
NASA Astrophysics Data System (ADS)
Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.
2017-08-01
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
Efficient method of image edge detection based on FSVM
NASA Astrophysics Data System (ADS)
Cai, Aiping; Xiong, Xiaomei
2013-07-01
For efficient detection of object edges in digital images, this paper studies traditional methods and an algorithm based on SVM. The analysis shows that the Canny edge detection algorithm suffers from pseudo-edges and poor anti-noise capability. In order to provide a reliable edge extraction method, a new detection algorithm based on FSVM is proposed. It contains several steps: first, the classification samples are trained and different membership functions are assigned to different samples. Then, a new training sample set is formed by increasing the punishment of some wrongly classified sub-samples, and the new FSVM classification model is used to train and test them. Finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and noise-added experiments show that the method has good anti-noise capability.
NASA Astrophysics Data System (ADS)
Work, Paul R.
1991-12-01
This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.
A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering
Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani
2012-01-01
Debye summation, which involves the summation of sinc functions of distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide angle X-ray scattering (SAXS/WAXS) and small angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations, unless an error bound derived in this paper is used. Our theoretical and computational results show orders of magnitude improvement in computation complexity over existing methods, while maintaining prescribed accuracy. PMID:22707386
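For context, the quadratic-cost baseline that the hierarchical method accelerates can be written down directly. The sketch below is a naive Python/NumPy implementation of the Debye sum over all atom pairs; the coordinates and unit form factors are made-up example data, and the paper's linear-time hierarchical expansion scheme is not reproduced here.

```python
import numpy as np

def debye_intensity(q_values, coords, form_factors):
    """Direct O(N^2) Debye sum: I(q) = sum_ij f_i f_j * sin(q r_ij) / (q r_ij)."""
    diffs = coords[:, None, :] - coords[None, :, :]
    r = np.linalg.norm(diffs, axis=-1)                 # N x N pairwise distances
    ff = np.outer(form_factors, form_factors)          # f_i * f_j
    intensities = []
    for q in q_values:
        x = q * r
        with np.errstate(invalid="ignore", divide="ignore"):
            sinc = np.where(x == 0.0, 1.0, np.sin(x) / x)   # sinc(0) = 1 on the diagonal
        intensities.append(np.sum(ff * sinc))
    return np.array(intensities)

# Toy example: 50 random "atoms" with unit form factors
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 20.0, size=(50, 3))
print(debye_intensity(np.linspace(0.01, 1.0, 10), coords, np.ones(50)))
```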
Topology design and performance analysis of an integrated communication network
NASA Technical Reports Server (NTRS)
Li, V. O. K.; Lam, Y. F.; Hou, T. C.; Yuen, J. H.
1985-01-01
A research study on the topology design and performance analysis for the Space Station Information System (SSIS) network is conducted. It begins with a survey of existing research efforts in network topology design. Then a new approach for topology design is presented. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. The algorithm for generating subsets is described in detail, and various aspects of the overall design procedure are discussed. Two more efficient versions of this algorithm (applicable in specific situations) are also given. Next, two important aspects of network performance analysis, network reliability and message delays, are discussed. A new model is introduced to study the reliability of a network with dependent failures. For message delays, a collection of formulas from existing research results is given to compute or estimate the delays of messages in a communication network without making the independence assumption. The design algorithm, coded in PASCAL, is included as an appendix.
Spectral unmixing of urban land cover using a generic library approach
NASA Astrophysics Data System (ADS)
Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben
2016-10-01
Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-) automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wayne F. Boyer; Gurdeep S. Hura
2005-09-01
The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be an NP-hard problem. In a DHC system, task execution time is dependent on the machine to which it is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of inter-dependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, requires less memory and fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
ERGC: an efficient referential genome compression algorithm.
Saha, Subrata; Rajasekaran, Sanguthevar
2015-11-01
Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exist a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes and can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip.
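To illustrate the general idea behind reference-based compression (representing a target sequence mostly as copy operations against a reference, plus literals for the differences), the toy sketch below uses Python's standard difflib matcher. It is not ERGC's algorithm or file format; the sequences and the copy/literal encoding are assumptions for the example.

```python
import difflib

def reference_encode(reference, target):
    """Toy reference-based encoding: represent `target` as copy/literal operations
    against `reference` (illustration only; ERGC uses its own scheme)."""
    ops = []
    matcher = difflib.SequenceMatcher(None, reference, target, autojunk=False)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2 - i1))        # offset and length in the reference
        else:
            ops.append(("literal", target[j1:j2]))   # bases not matched in the reference
    return ops

def reference_decode(reference, ops):
    parts = []
    for op in ops:
        if op[0] == "copy":
            _, start, length = op
            parts.append(reference[start:start + length])
        else:
            parts.append(op[1])
    return "".join(parts)

ref = "ACGTACGTTTGACCA"
tgt = "ACGTACGATTGACCAAA"
encoded = reference_encode(ref, tgt)
assert reference_decode(ref, encoded) == tgt
print(encoded)
```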
ECS: Efficient Communication Scheduling for Underwater Sensor Networks
Hong, Lu; Hong, Feng; Guo, Zhongwen; Li, Zhengbao
2011-01-01
TDMA protocols have attracted a lot of attention for underwater acoustic sensor networks (UWSNs), because of the unique characteristics of acoustic signal propagation such as great energy consumption in transmission, long propagation delay and long communication range. Previous TDMA protocols all allocated transmission time to nodes based on discrete time slots. This paper proposes an efficient continuous time scheduling TDMA protocol (ECS) for UWSNs, including the continuous time based and sender oriented conflict analysis model, the transmission moment allocation algorithm and the distributed topology maintenance algorithm. Simulation results confirm that ECS improves network throughput by 20% on average, compared to existing MAC protocols. PMID:22163775
Variational optimization algorithms for uniform matrix product states
NASA Astrophysics Data System (ADS)
Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.
2018-01-01
We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.
A space-efficient algorithm for local similarities.
Huang, X Q; Hardison, R C; Miller, W
1990-10-01
Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
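A minimal sketch of the space-saving idea, assuming simple match/mismatch/gap scores: the best local-similarity (Smith-Waterman) score can be computed keeping only two DP rows, so memory grows with the shorter sequence rather than with the product of the lengths. Recovering the aligned regions themselves in linear space requires the additional divide-and-conquer machinery described in the paper and is omitted here.

```python
def local_similarity_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment (Smith-Waterman) score using space proportional
    to the shorter sequence; the score only, not the alignment itself."""
    if len(b) > len(a):          # keep the DP row proportional to the shorter sequence
        a, b = b, a
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0] * (len(b) + 1)
        for j, cb in enumerate(b, start=1):
            score = match if ca == cb else mismatch
            curr[j] = max(0,
                          prev[j - 1] + score,   # diagonal: align ca with cb
                          prev[j] + gap,         # gap in b
                          curr[j - 1] + gap)     # gap in a
            best = max(best, curr[j])
        prev = curr
    return best

print(local_similarity_score("GGTTGACTA", "TGTTACGG"))
```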
A biomimetic algorithm for the improved detection of microarray features
NASA Astrophysics Data System (ADS)
Nicolau, Dan V., Jr.; Nicolau, Dan V.; Maini, Philip K.
2007-02-01
One of the major difficulties of microarray technology relates to the processing of large and, importantly, error-loaded images of the dots on the chip surface. Whatever the source of these errors, those obtained in the first stage of data acquisition, segmentation, are passed down to the subsequent processes, with deleterious results. As it has been demonstrated recently that biological systems have evolved algorithms that are mathematically efficient, this contribution attempts to test an algorithm that mimics a bacterial-"patented" strategy for the search of available space and nutrients in order to find, zero in on and eventually delimit the features present on the microarray surface.
Towards a hybrid energy efficient multi-tree-based optimized routing protocol for wireless networks.
Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan
2012-12-13
This paper considers the problem of designing power efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy efficient tree-based optimized routing protocol, based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to its neighbor that optimizes ratio of power cost over distance progress with landmark coordinates, among nodes that reduce landmark coordinates and do not increase distance in tree coordinates. If such a node does not exist, then forwarding is made to the neighbor that reduces tree-based distance only and optimizes power cost over tree distance progress ratio. We theoretically prove the packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, and only up to 30% additional power compared to centralized shortest weighted path algorithm.
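A rough sketch of the forwarding rule described above, with all names, cost functions and coordinate accessors assumed for illustration: among neighbours that make progress in landmark coordinates without losing ground in tree coordinates, pick the one with the best power-cost-per-progress ratio, and otherwise fall back to tree-coordinate progress only.

```python
def choose_next_hop(current, neighbors, dest, power_cost, landmark_dist, tree_dist):
    """Sketch of a HECTOR-style forwarding decision (names and cost model assumed)."""
    # Neighbours that reduce landmark distance without increasing tree distance
    candidates = [n for n in neighbors
                  if landmark_dist(n, dest) < landmark_dist(current, dest)
                  and tree_dist(n, dest) <= tree_dist(current, dest)]
    if candidates:
        return min(candidates, key=lambda n: power_cost(current, n) /
                   (landmark_dist(current, dest) - landmark_dist(n, dest)))
    # Fallback: progress in tree coordinates only, best power-per-progress ratio
    fallback = [n for n in neighbors if tree_dist(n, dest) < tree_dist(current, dest)]
    if fallback:
        return min(fallback, key=lambda n: power_cost(current, n) /
                   (tree_dist(current, dest) - tree_dist(n, dest)))
    return None   # no neighbour makes progress
```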
Towards a Hybrid Energy Efficient Multi-Tree-Based Optimized Routing Protocol for Wireless Networks
Mitton, Nathalie; Razafindralambo, Tahiry; Simplot-Ryl, David; Stojmenovic, Ivan
2012-01-01
This paper considers the problem of designing power efficient routing with guaranteed delivery for sensor networks with unknown geographic locations. We propose HECTOR, a hybrid energy efficient tree-based optimized routing protocol, based on two sets of virtual coordinates. One set is based on rooted tree coordinates, and the other is based on hop distances toward several landmarks. In HECTOR, the node currently holding the packet forwards it to its neighbor that optimizes ratio of power cost over distance progress with landmark coordinates, among nodes that reduce landmark coordinates and do not increase distance in tree coordinates. If such a node does not exist, then forwarding is made to the neighbor that reduces tree-based distance only and optimizes power cost over tree distance progress ratio. We theoretically prove the packet delivery and propose an extension based on the use of multiple trees. Our simulations show the superiority of our algorithm over existing alternatives while guaranteeing delivery, and only up to 30% additional power compared to centralized shortest weighted path algorithm. PMID:23443398
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
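The point that co-occurrence statistics reduce to sums, sums of squares and cross products over pixel pairs can be shown directly. The sketch below computes a few common texture measures for one pixel offset without ever materialising a co-occurrence matrix; the specific measures and offset are chosen for illustration rather than taken from the report.

```python
import numpy as np

def cooccurrence_stats(image, dy=0, dx=1):
    """Texture statistics computed directly from pixel pairs at offset (dy, dx),
    without building the co-occurrence matrix (illustrative sketch)."""
    img = np.asarray(image, dtype=float)
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]    # reference pixels
    b = img[dy:, dx:]                                  # neighbours at the given offset
    mean_a, mean_b = a.mean(), b.mean()
    contrast = np.mean((a - b) ** 2)                   # sum of squared differences
    covariance = np.mean(a * b) - mean_a * mean_b      # cross-product term
    homogeneity = np.mean(1.0 / (1.0 + np.abs(a - b)))
    return {"contrast": contrast, "covariance": covariance, "homogeneity": homogeneity}

rng = np.random.default_rng(0)
print(cooccurrence_stats(rng.integers(0, 16, size=(64, 64))))
```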
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2014-11-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud masks were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approach to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
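A minimal sketch of the naive Bayesian variant, under assumed features and bin counts: per-class histograms approximate P(feature | cloudy) and P(feature | clear) from truth data, and a pixel is classified by the product of the per-feature likelihoods times the class prior. The synthetic two-channel data below stand in for real MERIS/AATSR features.

```python
import numpy as np

def fit_naive_bayes_histograms(features, labels, bins=32, smoothing=1.0):
    """Per-feature histograms P(feature | class) for a naive Bayesian mask
    (illustrative sketch, not the paper's exact feature set)."""
    edges, hists = [], {0: [], 1: []}
    for f in features.T:
        e = np.histogram_bin_edges(f, bins=bins)
        edges.append(e)
        for c in (0, 1):
            h, _ = np.histogram(f[labels == c], bins=e)
            h = h + smoothing                      # Laplace smoothing of sparse bins
            hists[c].append(h / h.sum())
    prior = np.array([np.mean(labels == 0), np.mean(labels == 1)])
    return edges, hists, prior

def classify(features, edges, hists, prior):
    log_post = np.log(prior)[None, :].repeat(features.shape[0], axis=0)
    for i, f in enumerate(features.T):
        idx = np.clip(np.searchsorted(edges[i], f) - 1, 0, len(hists[0][i]) - 1)
        for c in (0, 1):
            log_post[:, c] += np.log(hists[c][i][idx])
    return np.argmax(log_post, axis=1)             # 1 = cloudy, 0 = clear

# Toy example with two synthetic "channels"
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 5000)
features = rng.normal(loc=labels[:, None] * 2.0, scale=1.0, size=(5000, 2))
edges, hists, prior = fit_naive_bayes_histograms(features, labels)
print("accuracy:", np.mean(classify(features, edges, hists, prior) == labels))
```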
Bayesian cloud detection for MERIS, AATSR, and their combination
NASA Astrophysics Data System (ADS)
Hollstein, A.; Fischer, J.; Carbajal Henken, C.; Preusker, R.
2015-04-01
A broad range of different Bayesian cloud detection schemes is applied to measurements from the Medium Resolution Imaging Spectrometer (MERIS), the Advanced Along-Track Scanning Radiometer (AATSR), and their combination. The cloud detection schemes were designed to be numerically efficient and suited for the processing of large amounts of data. Results from the classical and naive approach to Bayesian cloud masking are discussed for MERIS and AATSR as well as for their combination. A sensitivity study on the resolution of multidimensional histograms, which were post-processed by Gaussian smoothing, shows how theoretically insufficient amounts of truth data can be used to set up accurate classical Bayesian cloud masks. Sets of exploited features from single and derived channels are numerically optimized and results for naive and classical Bayesian cloud masks are presented. The application of the Bayesian approach is discussed in terms of reproducing existing algorithms, enhancing existing algorithms, increasing the robustness of existing algorithms, and setting up new classification schemes based on manually classified scenes.
Improved Ant Algorithms for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi
2014-01-01
Ant colony optimization (ACO) for software test case generation is a very popular domain in software testing engineering. However, the traditional ACO has flaws: early search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and precocity. This paper introduces improved ACO methods for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO), which is based on all three of the above methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve the search efficiency, restrain precocity, promote case coverage, and reduce the number of iterations. PMID:24883391
Hierarchical trie packet classification algorithm based on expectation-maximization clustering
Bi, Xia-an; Zhao, Junxia
2017-01-01
With the development of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are in urgent need. Among the existing algorithms, research on packet classification algorithms based on the hierarchical trie has become an important branch of packet classification research because of its wide practical use. Although the hierarchical trie is beneficial for saving large storage space, it has several shortcomings such as the existence of backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses the formalization method to deal with the packet classification problem by means of mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, and thereby diversified clusters are formed. Thirdly, this paper proposes a hierarchical trie based on the results of expectation-maximization clustering. Finally, this paper respectively conducts simulation experiments and real-environment experiments to compare the performances of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm. PMID:28704476
Efficient sequential and parallel algorithms for record linkage.
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
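A small sketch of the connected-components step mentioned above, using union-find: given pairs of records judged similar, the records are grouped into clusters. The pair list here is made up for illustration; the radix sorting, blocking and edit-distance computation of the full pipeline are not shown.

```python
def connected_components(num_records, similar_pairs):
    """Group records via union-find over a graph of similar pairs
    (a sketch of the connected-components step, not the full linkage pipeline)."""
    parent = list(range(num_records))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving keeps trees shallow
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx != ry:
            parent[ry] = rx

    for a, b in similar_pairs:
        union(a, b)

    clusters = {}
    for i in range(num_records):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# Records 0-1-2 linked by pairwise similarity, 3-4 linked, 5 alone
print(connected_components(6, [(0, 1), (1, 2), (3, 4)]))
```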
Modeling Group Interactions via Open Data Sources
2011-08-30
data. The state-of-the-art search engines are designed to support general query-specific search and are not suitable for finding disconnected online groups. The...groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ
2D-RBUC for efficient parallel compression of residuals
NASA Astrophysics Data System (ADS)
Đurđević, Đorđe M.; Tartalja, Igor I.
2018-02-01
In this paper, we present a method for lossless compression of residuals with an efficient SIMD parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed the data decompression algorithm for modern GPU architectures already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured up to 91%).
Nguyen, Andrew H; Molinero, Valeria
2015-07-23
Clathrate hydrates and ice I are the most abundant crystals of water. The study of their nucleation, growth, and decomposition using molecular simulations requires an accurate and efficient algorithm that distinguishes water molecules that belong to each of these crystals and the liquid phase. Existing algorithms identify ice or clathrates, but not both. This poses a challenge for cases in which ice and hydrate coexist, such as in the synthesis of clathrates from ice and the formation of ice from clathrates during self-preservation of methane hydrates. Here we present an efficient algorithm for the identification of clathrate hydrates, hexagonal ice, cubic ice, and liquid water in molecular simulations. CHILL+ uses the number of staggered and eclipsed water-water bonds to identify water molecules in cubic ice, hexagonal ice, and clathrate hydrate. CHILL+ is an extension of CHILL (Moore et al. Phys. Chem. Chem. Phys. 2010, 12, 4124-4134), which identifies hexagonal and cubic ice but not clathrates. In addition to the identification of hydrates, CHILL+ significantly improves the detection of hexagonal ice up to its melting point. We validate the use of CHILL+ for the identification of stacking faults in ice and the nucleation and growth of clathrate hydrates. To our knowledge, this is the first algorithm that allows for the simultaneous identification of ice and clathrate hydrates, and it does so in a way that is competitive with respect to existing methods used to identify any of these crystals.
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-01-01
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking; this is called star mapping. Software simulation and night-sky experiments are performed to validate the efficiency and reliability of the proposed method. PMID:28825684
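For readers unfamiliar with the filter itself, the sketch below shows one generic extended-Kalman-filter predict/update cycle plus a tiny constant-velocity usage example. The state, models and noise matrices are placeholders; the paper's actual quaternion-plus-angular-velocity state and star-position measurement model are not reproduced.

```python
import numpy as np

def ekf_step(x, P, f, F, h, H, Q, R, z):
    """One generic extended-Kalman-filter cycle (predict + update).

    f/h are the nonlinear process and measurement models, F/H their Jacobians
    evaluated at the current estimate; not the star sensor's actual model."""
    # Predict
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update with measurement z (e.g., predicted vs. observed star positions)
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new

# Tiny usage example: 1D constant-velocity model with a position measurement
dt = 0.1
f = lambda x: np.array([x[0] + dt * x[1], x[1]])
F = lambda x: np.array([[1.0, dt], [0.0, 1.0]])
h = lambda x: np.array([x[0]])
H = lambda x: np.array([[1.0, 0.0]])
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = ekf_step(x, P, f, F, h, H, Q=0.01 * np.eye(2), R=np.array([[0.1]]), z=np.array([0.12]))
print(x)
```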
An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors.
Li, Jian; Wei, Xinguo; Zhang, Guangjun
2017-08-21
Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up the tracking; this is called star mapping. Software simulation and night-sky experiments are performed to validate the efficiency and reliability of the proposed method.
A Novel Particle Swarm Optimization Algorithm for Global Optimization
Wang, Chun-Feng; Liu, Kui
2016-01-01
Particle Swarm Optimization (PSO) is a recently developed optimization method which has attracted the interest of researchers in various areas due to its simplicity and effectiveness, and many variants have been proposed. In this paper, a novel Particle Swarm Optimization algorithm is presented, in which the information of the best neighbor of each particle and the best particle of the entire population in the current iteration is considered. Meanwhile, to avoid premature convergence, an abandonment mechanism is used. Furthermore, to improve the global convergence speed of our algorithm, a chaotic search is adopted around the best solution of the current iteration. To verify the performance of our algorithm, standard test functions have been employed. The experimental results show that the algorithm is much more robust and efficient than some existing Particle Swarm Optimization algorithms. PMID:26955387
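As a baseline for comparison, the sketch below is a plain global-best PSO on a test function; the paper's neighbor-best term, abandonment mechanism and chaotic search around the current best are deliberately omitted, and all parameter values are illustrative assumptions.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic global-best PSO sketch (not the paper's improved variant)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()                # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best_x, best_val = pso(lambda p: np.sum(p ** 2), dim=5)   # sphere test function
print(best_x, best_val)
```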
A Benders based rolling horizon algorithm for a dynamic facility location problem
Marufuzzaman, Mohammad; Gedik, Ridvan; Roni, Mohammad S.
2016-06-28
This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm's efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.
Scaled Runge-Kutta algorithms for handling dense output
NASA Technical Reports Server (NTRS)
Horn, M. K.
1981-01-01
Low order Runge-Kutta algorithms are developed which determine the solution of a system of ordinary differential equations at any point within a given integration step, as well as at the end of each step. The scaled Runge-Kutta methods are designed to be used with existing Runge-Kutta formulas, using the derivative evaluations of these defining algorithms as the core of the system. For a slight increase in computing time, the solution may be generated within the integration step, improving the efficiency of the Runge-Kutta algorithms, since the step length need no longer be severely reduced to coincide with the desired output point. Scaled Runge-Kutta algorithms are presented for orders 3 through 5, along with accuracy comparisons between the defining algorithms and their scaled versions for a test problem.
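To show what dense output within a step looks like, the sketch below takes one classical RK4 step and then evaluates a cubic Hermite interpolant at any interior point using the step's endpoint values and derivatives. This is a generic stand-in, not the scaled Runge-Kutta coefficients derived in the paper.

```python
import numpy as np

def rk4_step_with_dense_output(f, t0, y0, h):
    """One classical RK4 step plus a cubic Hermite interpolant for points inside
    the step (a generic dense-output illustration, not the paper's scaled RK)."""
    k1 = f(t0, y0)
    k2 = f(t0 + h / 2, y0 + h / 2 * k1)
    k3 = f(t0 + h / 2, y0 + h / 2 * k2)
    k4 = f(t0 + h, y0 + h * k3)
    y1 = y0 + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    f1 = f(t0 + h, y1)                       # derivative at the end of the step

    def dense(theta):
        """Approximate y(t0 + theta*h) for 0 <= theta <= 1."""
        h00 = 2 * theta**3 - 3 * theta**2 + 1
        h10 = theta**3 - 2 * theta**2 + theta
        h01 = -2 * theta**3 + 3 * theta**2
        h11 = theta**3 - theta**2
        return h00 * y0 + h10 * h * k1 + h01 * y1 + h11 * h * f1

    return y1, dense

# Example: y' = -y, exact solution exp(-t); interpolate at the middle of the step
y1, dense = rk4_step_with_dense_output(lambda t, y: -y, 0.0, np.array([1.0]), 0.5)
print(dense(0.5), np.exp(-0.25))
```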
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
Ordinarily, the Job Shop Scheduling Problem (JSSP) is known to be an NP-hard problem whose uncertainty and complexity cannot be handled by a linear method. Thus, current studies on the JSSP concentrate mainly on applying different methods to improve the heuristics for optimizing the JSSP. However, there still exist many obstacles to efficient optimization of the JSSP, namely low efficiency and poor reliability, which can easily trap the optimization process of the JSSP in local optima. Therefore, to solve this problem, a study on an Ant Colony Optimization (ACO) algorithm combined with constraint handling tactics is carried out in this paper. The problem is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, which is performed by the constraint satisfying model; (2) satisfying the constraints by considering the consistency technology and the constraint spreading algorithm in order to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments performed on benchmark problems. Consequently, the results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
Line-drawing algorithms for parallel machines
NASA Technical Reports Server (NTRS)
Pang, Alex T.
1990-01-01
The fact that conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient codes is addressed. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if it is an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
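The distance-to-a-line alternative maps naturally onto data-parallel hardware because every pixel can evaluate its own test independently. A small NumPy sketch of that per-pixel test follows; the pixel-per-processor mapping of a real SIMD machine is only emulated here by array operations.

```python
import numpy as np

def draw_line_distance(shape, p0, p1, thickness=0.5):
    """Data-parallel line drawing: every pixel independently tests its distance
    to the line segment, which mirrors a pixel-per-processor SIMD mapping."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    length_sq = dx * dx + dy * dy
    # Projection parameter of each pixel onto the segment, clamped to [0, 1]
    t = np.clip(((xs - x0) * dx + (ys - y0) * dy) / length_sq, 0.0, 1.0)
    dist = np.hypot(xs - (x0 + t * dx), ys - (y0 + t * dy))
    return dist <= thickness        # boolean image: True where the pixel is "on"

img = draw_line_distance((16, 16), (1, 2), (14, 9))
print(img.astype(int))
```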
Pattern Discovery and Change Detection of Online Music Query Streams
NASA Astrophysics Data System (ADS)
Li, Hua-Fu
In this paper, an efficient stream mining algorithm, called FTP-stream (Frequent Temporal Pattern mining of streams), is proposed to find the frequent temporal patterns over melody sequence streams. In the framework of the proposed algorithm, an effective bit-sequence representation is used to reduce the time and memory needed to slide the windows. The FTP-stream algorithm can calculate the support threshold in only a single pass based on the concept of bit-sequence representation. It takes advantage of the "left" (shift) and "and" (bitwise AND) operations of the representation. Experiments show that the proposed algorithm only scans the music query stream once, and runs significantly faster and consumes less memory than existing algorithms, such as SWFI-stream and Moment.
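A simplified sketch of the bit-sequence idea, with the item encoding and window bookkeeping assumed for illustration: each item keeps one bit per window slot, sliding the window is a left shift masked to the window length, and support is the number of set bits. The real FTP-stream algorithm mines temporal patterns over melody sequences rather than the toy itemset stream used here.

```python
def sliding_window_support(stream, window_size, min_support):
    """Bit-sequence sketch of sliding-window support counting: one bit per window
    slot per item, updated with a left shift and a mask (illustration only)."""
    mask = (1 << window_size) - 1
    bits = {}                                        # item -> bit vector of occurrences
    frequent_history = []
    for transaction in stream:
        for item in bits:
            bits[item] = (bits[item] << 1) & mask    # slide the window for every item
        for item in transaction:
            bits[item] = bits.get(item, 0) | 1       # set the newest slot
        frequent = {item for item, b in bits.items()
                    if bin(b).count("1") >= min_support}
        frequent_history.append(frequent)
    return frequent_history

stream = [{"C", "E"}, {"C", "G"}, {"E"}, {"C", "E"}, {"G"}]
print(sliding_window_support(stream, window_size=3, min_support=2))
```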
Lee, Juyong; Lee, Jinhyuk; Sasaki, Takeshi N; Sasai, Masaki; Seok, Chaok; Lee, Jooyoung
2011-08-01
Ab initio protein structure prediction is a challenging problem that requires both an accurate energetic representation of a protein structure and an efficient conformational sampling method for successful protein modeling. In this article, we present an ab initio structure prediction method which combines a recently suggested novel way of fragment assembly, dynamic fragment assembly (DFA) and conformational space annealing (CSA) algorithm. In DFA, model structures are scored by continuous functions constructed based on short- and long-range structural restraint information from a fragment library. Here, DFA is represented by the full-atom model by CHARMM with the addition of the empirical potential of DFIRE. The relative contributions between various energy terms are optimized using linear programming. The conformational sampling was carried out with CSA algorithm, which can find low energy conformations more efficiently than simulated annealing used in the existing DFA study. The newly introduced DFA energy function and CSA sampling algorithm are implemented into CHARMM. Test results on 30 small single-domain proteins and 13 template-free modeling targets of the 8th Critical Assessment of protein Structure Prediction show that the current method provides comparable and complementary prediction results to existing top methods.
Complexity of the Quantum Adiabatic Algorithm
NASA Astrophysics Data System (ADS)
Hen, Itay
2013-03-01
The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms. Here, we discuss several aspects of the quantum adiabatic algorithm: We analyze the efficiency of the algorithm on several "hard" (NP) computational problems. Studying the size dependence of the typical minimum energy gap of the Hamiltonians of these problems using quantum Monte Carlo methods, we find that while for most problems the minimum gap decreases exponentially with the size of the problem, indicating that the QAA is not more efficient than existing classical search algorithms, for other problems there is evidence to suggest that the gap may be polynomial near the phase transition. We also discuss applications of the QAA to "real life" problems and how they can be implemented on currently available (albeit prototypical) quantum hardware such as "D-Wave One", which imposes serious restrictions as to which type of problems may be tested. Finally, we discuss different approaches to finding improved implementations of the algorithm, such as local adiabatic evolution, adaptive methods, local search in Hamiltonian space, and others.
NASA Astrophysics Data System (ADS)
Dash, Rajashree
2017-11-01
Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for a simpler and more efficient model with better prediction capability. In this paper, an evolutionary framework is proposed using an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets, USD/CAD, USD/CHF, and USD/JPY, accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and the particle swarm optimization algorithm. Analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rate prediction compared to the other models included in the study.
A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.
Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F
2018-03-01
Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance on both simulated and actual 2D images and 3D volumes. Execution times improve on those of the fastest sequential algorithm, and the speed-up increases with the number of threads, up to 64 threads.
DOGMA: A Disk-Oriented Graph Matching Algorithm for RDF Databases
NASA Astrophysics Data System (ADS)
Bröcheler, Matthias; Pugliese, Andrea; Subrahmanian, V. S.
RDF is an increasingly important paradigm for the representation of information on the Web. As RDF databases increase in size to approach tens of millions of triples, and as sophisticated graph matching queries expressible in languages like SPARQL become increasingly important, scalability becomes an issue. To date, there is no graph-based indexing method for RDF data where the index was designed in a way that makes it disk-resident. There is therefore a growing need for indexes that can operate efficiently when the index itself resides on disk. In this paper, we first propose the DOGMA index for fast subgraph matching on disk and then develop a basic algorithm to answer queries over this index. This algorithm is then significantly sped up via an optimized algorithm that uses efficient (but correct) pruning strategies when combined with two different extensions of the index. We have implemented a preliminary system and tested it against four existing RDF database systems developed by others. Our experiments show that our algorithm performs very well compared to these systems, with orders of magnitude improvements for complex graph queries.
Rapid Calculation of Max-Min Fair Rates for Multi-Commodity Flows in Fat-Tree Networks
Mollah, Md Atiqul; Yuan, Xin; Pakin, Scott; ...
2017-08-29
Max-min fairness is often used in the performance modeling of interconnection networks. Existing methods to compute max-min fair rates for multi-commodity flows have high complexity and are computationally infeasible for large networks. In this paper, we show that by considering topological features, this problem can be solved efficiently for the fat-tree topology that is widely used in data centers and high performance compute clusters. Several efficient new algorithms are developed for this problem, including a parallel algorithm that can take advantage of multi-core and shared-memory architectures. Using these algorithms, we demonstrate that it is possible to find the max-min fair rate allocation for multi-commodity flows in fat-tree networks that support tens of thousands of nodes. We evaluate the run-time performance of the proposed algorithms and show improvement in orders of magnitude over the previously best known method. Finally, we further demonstrate a new application of max-min fair rate allocation that is only computationally feasible using our new algorithms.
An image segmentation method based on fuzzy C-means clustering and Cuckoo search algorithm
NASA Astrophysics Data System (ADS)
Wang, Mingwei; Wan, Youchuan; Gao, Xianjun; Ye, Zhiwei; Chen, Maolin
2018-04-01
Image segmentation is a significant step in image analysis and machine vision. Many approaches have been presented for this task; among them, fuzzy C-means (FCM) clustering is one of the most widely used methods because of its high efficiency and its ability to handle the ambiguity of images. However, the success of FCM cannot be guaranteed because it is easily trapped in local optima. Cuckoo search (CS) is a novel evolutionary algorithm that has been tested on several optimization problems and shown to be highly efficient. Therefore, a new segmentation technique combining FCM with the CS algorithm is put forward in this paper. The proposed method has been evaluated on several images and compared, in terms of fitness value, with other existing FCM techniques such as genetic algorithm (GA) based FCM and particle swarm optimization (PSO) based FCM. Experimental results indicate that the proposed method is robust, adaptive and exhibits better performance than the other methods considered in the paper.
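For readers unfamiliar with the underlying clustering step, the following minimal sketch implements plain fuzzy C-means with the standard membership and centre updates; it is the objective a CS, GA, or PSO wrapper would optimise, not the paper's CS-enhanced variant, and the fuzzifier m = 2 and iteration count are conventional assumptions.

import numpy as np

def fcm(data, n_clusters, m=2.0, n_iter=50, seed=0):
    """Minimal standard fuzzy C-means: alternate membership and centre updates."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(n_iter):
        w = u ** m
        centres = (w.T @ data) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))           # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

pixels = np.random.rand(1000, 1)                 # e.g. grey-level intensities
centres, memberships = fcm(pixels, n_clusters=3)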
Real-time motion-based H.263+ frame rate control
NASA Astrophysics Data System (ADS)
Song, Hwangjun; Kim, JongWon; Kuo, C.-C. Jay
1998-12-01
Most existing H.263+ rate control algorithms, e.g. the one adopted in the test model near-term (TMN8), focus on macroblock layer rate control and low latency under the assumptions of a constant frame rate and a constant bit rate (CBR) channel. These algorithms do not accommodate transmission bandwidth fluctuation efficiently, and the resulting video quality can be degraded. In this work, we propose a new H.263+ rate control scheme which supports the variable bit rate (VBR) channel through the adjustment of the encoding frame rate and the quantization parameter. A fast algorithm for encoding frame rate control, based on the inherent motion information within a sliding window of the underlying video, is developed to efficiently pursue a good tradeoff between spatial and temporal quality. The proposed rate control algorithm also takes the time-varying bandwidth characteristic of the Internet into account and is able to accommodate the change accordingly. Experimental results are provided to demonstrate the superior performance of the proposed scheme.
Real-time depth camera tracking with geometrically stable weight algorithm
NASA Astrophysics Data System (ADS)
Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming
2017-03-01
We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in the pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation results. Our pipeline can be fully parallelized with a GPU and incorporated into current real-time depth camera tracking systems seamlessly. Second, we compare the state-of-the-art weighting algorithms and propose a weight degradation algorithm according to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler Shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D benchmark demonstrate that our camera tracking system achieves state-of-the-art results in both accuracy and efficiency.
NASA Astrophysics Data System (ADS)
Karami, Fahd; Ziad, Lamia; Sadik, Khadija
2017-12-01
In this paper, we focus on a numerical method for a problem called the Perona-Malik inequality, which we use for image denoising. This model is obtained as the limit of the Perona-Malik model and the p-Laplacian operator as p→ ∞. In Atlas et al. (Nonlinear Anal. Real World Appl. 18:57-68, 2014), the authors proved the existence and uniqueness of the solution of the proposed model. However, in their work they used an explicit numerical scheme for the approximated problem which is strongly dependent on the parameter p. To overcome this, we use an efficient algorithm that combines the classical additive operator splitting with a nonlinear relaxation algorithm. Finally, we present experimental results on image filtering which demonstrate the efficiency and effectiveness of our algorithm, and we compare it with the previous scheme presented in Atlas et al. (Nonlinear Anal. Real World Appl. 18:57-68, 2014).
Application of Biased Metropolis Algorithms: From protons to proteins
Bazavov, Alexei; Berg, Bernd A.; Zhou, Huan-Xiang
2015-01-01
We show that sampling with a biased Metropolis scheme is essentially equivalent to using the heatbath algorithm. However, the biased Metropolis method can also be applied when an efficient heatbath algorithm does not exist. This is first illustrated with an example from high energy physics (lattice gauge theory simulations). We then illustrate the Rugged Metropolis method, which is based on a similar biased updating scheme, but aims at very different applications. The goal of such applications is to locate the most likely configurations in a rugged free energy landscape, which is most relevant for simulations of biomolecules. PMID:26612967
NASA Astrophysics Data System (ADS)
Li, Dong-xia; Ye, Qian-wen
An out-of-band radiation suppression algorithm must be used efficiently in a broadband aeronautical communication system so as not to interfere with the operation of existing systems in the aviation L-band. After a brief introduction of the broadband aeronautical multi-carrier communication (B-AMC) system model, several sidelobe suppression techniques for orthogonal frequency division multiplexing (OFDM) systems are presented and analyzed in this paper in order to find a suitable algorithm for the B-AMC system. Simulation results show that raised-cosine windowing can suppress the out-of-band radiation of the B-AMC system effectively.
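A minimal sketch of raised-cosine windowing of an OFDM symbol is shown below; the FFT size, cyclic-extension lengths, and roll-off are arbitrary illustrative values and are not the B-AMC system parameters.

import numpy as np

def rc_windowed_ofdm_symbol(data_syms, n_fft=64, cp=16, rolloff=8):
    """Build one OFDM symbol, add a cyclic prefix/postfix, and taper the
    edges with a raised-cosine window so the spectrum decays faster."""
    freq = np.zeros(n_fft, dtype=complex)
    freq[:len(data_syms)] = data_syms                 # map symbols to subcarriers
    time = np.fft.ifft(freq)
    # cyclic extension: prefix + symbol + postfix
    ext = np.concatenate([time[-(cp + rolloff):], time, time[:rolloff]])
    # raised-cosine ramps applied only to the extra roll-off samples
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(rolloff) / rolloff))
    window = np.ones(len(ext))
    window[:rolloff] = ramp
    window[-rolloff:] = ramp[::-1]
    return ext * window

qpsk = (np.array([1, -1, 1, 1]) + 1j * np.array([1, 1, -1, 1])) / np.sqrt(2)
symbol = rc_windowed_ofdm_symbol(qpsk)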
Highly Efficient Compression Algorithms for Multichannel EEG.
Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda
2018-05-01
The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
Improved transition path sampling methods for simulation of rare events
NASA Astrophysics Data System (ADS)
Chopra, Manan; Malshe, Rohit; Reddy, Allam S.; de Pablo, J. J.
2008-04-01
The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.
Efficient parallel and out of core algorithms for constructing large bi-directed de Bruijn graphs.
Kundeti, Vamsi K; Rajasekaran, Sanguthevar; Dinh, Hieu; Vaughn, Matthew; Thapar, Vishal
2010-11-15
Assembling genomic sequences from a set of overlapping reads is one of the most fundamental problems in computational biology. Algorithms addressing the assembly problem fall into two broad categories, based on the data structures which they employ. The first class uses an overlap/string graph and the second type uses a de Bruijn graph. However, with the recent advances in short read sequencing technology, de Bruijn graph based algorithms seem to play a vital role in practice. Efficient algorithms for building these massive de Bruijn graphs are very essential in large sequencing projects based on short reads. In an earlier work, an O(n/p) time parallel algorithm has been given for this problem. Here n is the size of the input and p is the number of processors. This algorithm enumerates all possible bi-directed edges which can overlap with a node and ends up generating Θ(nΣ) messages (Σ being the size of the alphabet). In this paper we present a Θ(n/p) time parallel algorithm with a communication complexity that is equal to that of parallel sorting and is not sensitive to Σ. The generality of our algorithm makes it very easy to extend it even to the out-of-core model and in this case it has an optimal I/O complexity of Θ((n/B)·log(n/B)/log(M/B)) (M being the main memory size and B being the size of the disk block). We demonstrate the scalability of our parallel algorithm on an SGI/Altix computer. A comparison of our algorithm with the previous approaches reveals that our algorithm is faster, both asymptotically and practically. We demonstrate the scalability of our sequential out-of-core algorithm by comparing it with the algorithm used by VELVET to build the bi-directed de Bruijn graph. Our experiments reveal that our algorithm can build the graph with a constant amount of memory, which clearly outperforms VELVET. We also provide efficient algorithms for the bi-directed chain compaction problem. The bi-directed de Bruijn graph is a fundamental data structure for any sequence assembly program based on the Eulerian approach. Our algorithms for constructing bi-directed de Bruijn graphs are efficient in parallel and out-of-core settings. These algorithms can be used in building large scale bi-directed de Bruijn graphs. Furthermore, our algorithms do not employ any all-to-all communications in a parallel setting and perform better than the prior algorithms. Finally, our out-of-core algorithm is extremely memory efficient and can replace the existing graph construction algorithm in VELVET.
A memetic optimization algorithm for multi-constrained multicast routing in ad hoc networks
Ramadan, Rahab M; Gasser, Safa M; El-Mahallawy, Mohamed S; Hammad, Karim; El Bakly, Ahmed M
2018-01-01
A mobile ad hoc network is a conventional self-configuring network where the routing optimization problem—subject to various Quality-of-Service (QoS) constraints—represents a major challenge. Unlike previously proposed solutions, in this paper, we propose a memetic algorithm (MA) employing an adaptive mutation parameter, to solve the multicast routing problem with higher search ability and computational efficiency. The proposed algorithm utilizes an updated scheme, based on statistical analysis, to estimate the best values for all MA parameters and enhance MA performance. The numerical results show that the proposed MA improved the delay and jitter of the network, while reducing computational complexity as compared to existing algorithms. PMID:29509760
A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences.
Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen
2015-01-01
Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, because the problem is NP-complete, most existing methods are heuristic algorithms that are unable to reveal all OPSMs. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every two row sequences, so that no deep OPSM is missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to mine the frequent sequential patterns efficiently. Finally, experiments were implemented on gene and synthetic datasets. The results demonstrated the effectiveness and efficiency of this method.
Efficient sequential and parallel algorithms for record linkage
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Background and objective: Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods: Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results: Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions: We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
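The "link similar records, then take connected components" idea from the Methods section can be sketched with a small union-find, shown below; the similarity test here is a stand-in for the edit-distance comparison, and the quadratic pairwise loop ignores the radix-sorting and blocking steps the paper uses to keep the work tractable.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]   # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def link_records(records, similar):
    """Link every similar pair, then group record indices by their root."""
    uf = UnionFind(len(records))
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if similar(records[i], records[j]):
                uf.union(i, j)
    clusters = {}
    for i in range(len(records)):
        clusters.setdefault(uf.find(i), []).append(i)
    return list(clusters.values())

recs = ["John Smith 1980", "Jon Smith 1980", "Jane Doe 1975"]
print(link_records(recs, lambda a, b: a.split()[-1:] == b.split()[-1:] and a[0] == b[0]))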
Adaptive intercolor error prediction coder for lossless color (RGB) picture compression
NASA Astrophysics Data System (ADS)
Mann, Y.; Peretz, Y.; Mitchell, Harvey B.
2001-09-01
Most of the current lossless compression algorithms, including the new international baseline JPEG-LS algorithm, do not exploit the interspectral correlations that exist between the color planes of an input color picture. To improve the compression performance (i.e., lower the bit rate) it is necessary to exploit these correlations. A major concern is to find efficient methods for exploiting the correlations that, at the same time, are compatible with and can be incorporated into the JPEG-LS algorithm. One such algorithm is the method of intercolor error prediction (IEP), which, when used with the JPEG-LS algorithm, results on average in a reduction of 8% in the overall bit rate. We show how the IEP algorithm can be simply modified so that it nearly doubles the reduction in bit rate, to 15%.
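A heavily simplified sketch of the general inter-colour error prediction idea follows: each plane is predicted from its left neighbour, and the red and blue residuals are further predicted by the co-located green residual. The left-neighbour predictor is an illustrative stand-in for the JPEG-LS predictor, so this is not the paper's exact IEP scheme.

import numpy as np

def left_predict_residual(plane):
    """Residual of a simple left-neighbour predictor (first column kept as-is)."""
    pred = np.zeros_like(plane)
    pred[:, 1:] = plane[:, :-1]
    return plane.astype(int) - pred.astype(int)

def iep_residuals(r, g, b):
    """Code G residuals directly; for R and B, code the difference between
    their own residuals and the co-located G residuals, exploiting the
    inter-spectral correlation of the prediction errors."""
    eg = left_predict_residual(g)
    er = left_predict_residual(r) - eg
    eb = left_predict_residual(b) - eg
    return er, eg, eb

rgb = np.random.randint(0, 256, size=(8, 8, 3), dtype=np.uint8)
er, eg, eb = iep_residuals(rgb[..., 0], rgb[..., 1], rgb[..., 2])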
NASA Astrophysics Data System (ADS)
Ren, Feixiang; Huang, Jinsheng; Terauchi, Mutsuhiro; Jiang, Ruyi; Klette, Reinhard
A robust and efficient lane detection system is an essential component of Lane Departure Warning Systems, which are commonly used in many vision-based Driver Assistance Systems (DAS) in intelligent transportation. Various computation platforms have been proposed in the past few years for the implementation of driver assistance systems (e.g., PC, laptop, integrated chips, PlayStation, and so on). In this paper, we propose a new platform for the implementation of lane detection based on a mobile phone (the iPhone). Owing to the physical limitations of the iPhone with respect to memory and computing power, a simple and efficient lane detection algorithm using a Hough transform is developed and implemented on the iPhone, since existing algorithms developed for the PC platform are currently not suitable for mobile phone devices. Experiments with the lane detection algorithm are carried out both on the PC and on the iPhone.
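A compact Hough-transform line detector of the kind described above can be sketched in a few lines; the accumulator resolution, peak count, and the toy edge mask are illustrative assumptions rather than the iPhone implementation.

import numpy as np

def hough_lines(edge_mask, n_theta=180, peak_count=2):
    """Vote each edge pixel into a (rho, theta) accumulator and return the
    strongest lines; lane markings show up as the dominant peaks."""
    ys, xs = np.nonzero(edge_mask)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*edge_mask.shape)))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    peaks = np.argsort(acc, axis=None)[-peak_count:]
    rho_idx, theta_idx = np.unravel_index(peaks, acc.shape)
    return [(int(r) - diag, float(thetas[t])) for r, t in zip(rho_idx, theta_idx)]

mask = np.zeros((60, 60), dtype=bool)
mask[np.arange(60), np.arange(60)] = True          # a 45-degree "lane" edge
print(hough_lines(mask))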
An Efficient Solution Method for Multibody Systems with Loops Using Multiple Processors
NASA Technical Reports Server (NTRS)
Ghosh, Tushar K.; Nguyen, Luong A.; Quiocho, Leslie J.
2015-01-01
This paper describes a multibody dynamics algorithm formulated for parallel implementation on multiprocessor computing platforms using the divide-and-conquer approach. The system of interest is a general topology of rigid and elastic articulated bodies with or without loops. The algorithm divides the multibody system into a number of smaller sets of bodies in chain or tree structures, called "branches" at convenient joints called "connection points", and uses an Order-N (O (N)) approach to formulate the dynamics of each branch in terms of the unknown spatial connection forces. The equations of motion for the branches, leaving the connection forces as unknowns, are implemented in separate processors in parallel for computational efficiency, and the equations for all the unknown connection forces are synthesized and solved in one or several processors. The performances of two implementations of this divide-and-conquer algorithm in multiple processors are compared with an existing method implemented on a single processor.
Exact Algorithms for Duplication-Transfer-Loss Reconciliation with Non-Binary Gene Trees.
Kordi, Misagh; Bansal, Mukul S
2017-06-01
Duplication-Transfer-Loss (DTL) reconciliation is a powerful method for studying gene family evolution in the presence of horizontal gene transfer. DTL reconciliation seeks to reconcile gene trees with species trees by postulating speciation, duplication, transfer, and loss events. Efficient algorithms exist for finding optimal DTL reconciliations when the gene tree is binary. In practice, however, gene trees are often non-binary due to uncertainty in the gene tree topologies, and DTL reconciliation with non-binary gene trees is known to be NP-hard. In this paper, we present the first exact algorithms for DTL reconciliation with non-binary gene trees. Specifically, we (i) show that the DTL reconciliation problem for non-binary gene trees is fixed-parameter tractable in the maximum degree of the gene tree, (ii) present an exponential-time, but in-practice efficient, algorithm to track and enumerate all optimal binary resolutions of a non-binary input gene tree, and (iii) apply our algorithms to a large empirical data set of over 4700 gene trees from 100 species to study the impact of gene tree uncertainty on DTL-reconciliation and to demonstrate the applicability and utility of our algorithms. The new techniques and algorithms introduced in this paper will help biologists avoid incorrect evolutionary inferences caused by gene tree uncertainty.
NASA Astrophysics Data System (ADS)
Nikitin, I. A.; Sherstnev, V. S.; Sherstneva, A. I.; Botygin, I. A.
2017-02-01
The results of research on existing routing protocols in wireless networks and their main features are discussed in the paper. Based on the protocol data, routing protocols for wireless networks, including search routing algorithms and phone-directory exchange algorithms, are designed with the 'WiFi-Direct' technology. Algorithms without the IP protocol were designed, which increased their efficiency by working only with the MAC addresses of the devices. The developed algorithms are intended for mobile software engineering on the Android platform. Simpler algorithms and formats than those of the well-known routing protocols, together with the rejection of the IP protocol, make it possible to use the developed protocols on more primitive mobile devices. Implementation of the protocols in industry makes it possible to create data transmission networks among workstations and mobile robots without any access points.
SeqCompress: an algorithm for biological sequence compression.
Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan
2014-10-01
The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, so data volumes may eventually exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain compared to other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
Real-time image dehazing using local adaptive neighborhoods and dark-channel-prior
NASA Astrophysics Data System (ADS)
Valderrama, Jesus A.; Díaz-Ramírez, Víctor H.; Kober, Vitaly; Hernandez, Enrique
2015-09-01
A real-time algorithm for single-image dehazing is presented. The algorithm is based on the calculation of local neighborhoods of a hazed image inside a moving window. The local neighborhoods are constructed by computing rank-order statistics. Next, the dark-channel-prior approach is applied to the local neighborhoods to estimate the transmission function of the scene. With the suggested approach there is no need to apply a refining algorithm, such as soft matting, to the estimated transmission. To achieve high-rate signal processing, the proposed algorithm is implemented exploiting massive parallelism on a graphics processing unit (GPU). Computer simulations are carried out to test the performance of the proposed algorithm in terms of dehazing efficiency and speed of processing. These tests are performed using several synthetic and real images. The obtained results are analyzed and compared with those obtained with existing dehazing algorithms.
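The dark-channel-prior transmission estimate used in such a pipeline can be sketched as follows; the window size, the omega constant, and the simple brightest-dark-channel choice of atmospheric light are standard values from the dark-channel literature and stand in for the paper's rank-order-statistics neighbourhoods.

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, window=15):
    """Per-pixel minimum over the colour channels and a local window."""
    return minimum_filter(image.min(axis=2), size=window)

def estimate_transmission(image, window=15, omega=0.95):
    """Dark-channel-prior transmission estimate t = 1 - omega * dark(I / A),
    with the atmospheric light A taken at the brightest dark-channel pixel."""
    dark = dark_channel(image, window)
    idx = np.unravel_index(np.argmax(dark), dark.shape)
    A = image[idx].clip(min=1e-6)                 # atmospheric light (per channel)
    norm = image / A
    return 1.0 - omega * dark_channel(norm, window)

hazy = np.random.rand(64, 64, 3)                  # stand-in for a hazed image
t = estimate_transmission(hazy)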
Improved algorithms for estimating Total Alkalinity in Northern Gulf of Mexico
NASA Astrophysics Data System (ADS)
Devkota, M.; Dash, P.
2017-12-01
Ocean Acidification (OA) is one of the serious challenges that have significant impacts on the ocean. About 25% of anthropogenically generated CO2 is absorbed by the oceans, which decreases average ocean pH. This change has critical impacts on marine species, ocean ecology, and the associated economics. Thirty-five years of observation have shown that the rate of alteration of OA parameters varies geographically, with higher variations in the northern Gulf of Mexico (N-GoM). Several studies have suggested that the Mississippi River significantly affects the carbon dynamics of the N-GoM coastal ecosystem. Total Alkalinity (TA) algorithms developed for major ocean basins produce inaccurate estimations in this region. Hence, a local algorithm to estimate TA is needed for this region, one which incorporates the local effects of oceanographic processes and complex spatial influences. In situ data collected in the N-GoM region during the GOMECC-I and II cruises and the GISR cruises (G-1, 3, 5) from 2007 to 2013 were assimilated and used to evaluate the efficiency of the existing TA algorithm that uses Sea Surface Temperature (SST) and Sea Surface Salinity (SSS) as explanatory variables. To improve this algorithm, statistical analyses were first performed to improve its coefficients and functional form. Then, chlorophyll a (Chl-a) was included as an additional explanatory variable, alongside SST and SSS, in the multiple linear regression approach. Based on the average concentration of Chl-a over the last 15 years, the N-GoM was divided into two regions, and separate algorithms were developed for each. Finally, to address spatial non-stationarity, a Geographically Weighted Regression (GWR) algorithm was developed. The existing TA algorithm produced considerable bias, with larger bias in the coastal waters. Chl-a as an additional explanatory variable reduced the bias in the residuals and improved the algorithm efficiency; Chl-a acts as a proxy for the pronounced effects of the organic pump in coastal waters. The GWR algorithm provided a raster surface of coefficients and yielded an even more reliable algorithm for estimating TA with the least error, addressing the spatial non-stationarity of OA in the N-GoM, which apparently was not addressed in previously developed algorithms.
The high performance parallel algorithm for Unified Gas-Kinetic Scheme
NASA Astrophysics Data System (ADS)
Li, Shiyi; Li, Qibing; Fu, Song; Xu, Jinxiu
2016-11-01
A high performance parallel algorithm for UGKS is developed to simulate internal and external three-dimensional flows on arbitrary grid systems. The physical domain and the velocity domain are divided into different blocks and distributed according to a two-dimensional Cartesian topology, with intra-communicators in the physical domain for data exchange and other intra-communicators in the velocity domain for sum reduction of the moment integrals. Numerical results for three-dimensional cavity flow and flow past a sphere agree well with results from existing studies and validate the applicability of the algorithm. The scalability of the algorithm is tested on both small (1-16) and large (729-5832) processor counts. The measured speed-up ratio is nearly linear and the efficiency is thus around 1, which reveals the good scalability of the present algorithm.
Development and Evaluation of an Order-N Formulation for Multi-Flexible Body Space Systems
NASA Technical Reports Server (NTRS)
Ghosh, Tushar K.; Quiocho, Leslie J.
2013-01-01
This paper presents development of a generic recursive Order-N algorithm for systems with rigid and flexible bodies, in tree or closed-loop topology, with N being the number of bodies of the system. Simulation results are presented for several test cases to verify and evaluate the performance of the code compared to an existing efficient dense mass matrix-based code. The comparison brought out situations where Order-N or mass matrix-based algorithms could be useful.
Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin
2016-01-01
This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case when the solution has only one boundary layer is examined. An efficient method, the so-called Successive Complementary Expansion Method (SCEM), is used to obtain uniformly valid approximations to this kind of solution. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are found to be in good agreement with exact and existing solutions in the literature. The results confirm that SCEM has a superiority over other existing methods in terms of easy applicability and effectiveness.
Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo
2012-12-01
The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto graphics processing units (GPUs) has brought challenges to retaining the efficiency of this algorithm. In particular, a straightforward implementation of the original 3D-DDA algorithm produces a lot of branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves the performance of C/S dose calculation programs running on the GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) which are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42-2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on the GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
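The branch-removal idea can be illustrated with a hypothetical traversal step in which the axis selection is expressed through comparison and logic masks instead of an if/else chain; it is written in NumPy purely to show the arithmetic, not as the GPU kernel.

import numpy as np

def dda_step(t_max, t_delta, voxel, step):
    """One 3D-DDA traversal step with the axis choice expressed as
    comparison/logic masks (the kind of rewrite that reduces warp divergence)."""
    tx, ty, tz = t_max
    mx = (tx <= ty) & (tx <= tz)
    my = (~mx) & (ty <= tz)
    mz = (~mx) & (~my)
    mask = np.array([mx, my, mz], dtype=t_max.dtype)
    voxel = voxel + step * mask.astype(int)       # advance exactly one axis
    t_max = t_max + t_delta * mask                # push that axis' boundary out
    return t_max, voxel

# toy setup: ray direction (0.8, 0.5, 0.3) starting at the voxel origin
direction = np.array([0.8, 0.5, 0.3])
step = np.sign(direction).astype(int)
t_delta = 1.0 / np.abs(direction)
t_max = t_delta.copy()                            # distance to first boundary per axis
voxel = np.zeros(3, dtype=int)
for _ in range(5):
    t_max, voxel = dda_step(t_max, t_delta, voxel, step)
print(voxel)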
An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.
Yoon, Yourim; Kim, Yong-Hyuk
2013-10-01
Sensor networks have a lot of applications such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment and therefore, we need a more intelligent way for sensor deployment. We found that the phenotype space of the problem is a quotient space of the genotype space in a mathematical view. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithms could be further improved by combining with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice faster, but also showed significant performance improvement in quality.
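The Monte Carlo evaluation function mentioned above can be sketched as a simple hit-count over random sample points; the field size, sensing radius, and sample count below are illustrative assumptions, and the sample count is the knob that the paper gradually increases over generations.

import numpy as np

def coverage(sensor_xy, radius, field=(100.0, 100.0), n_samples=20000, seed=0):
    """Monte Carlo estimate of the fraction of the field covered by at
    least one sensor."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_samples, 2)) * np.asarray(field)
    d2 = ((pts[:, None, :] - sensor_xy[None, :, :]) ** 2).sum(axis=2)
    covered = (d2 <= radius ** 2).any(axis=1)
    return covered.mean()

sensors = np.random.rand(20, 2) * 100.0          # a candidate deployment (GA individual)
print(coverage(sensors, radius=12.0))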
Linear feature detection algorithm for astronomical surveys - I. Algorithm description
NASA Astrophysics Data System (ADS)
Bektešević, Dino; Vinković, Dejan
2017-11-01
Computer vision algorithms are powerful tools in astronomical image analyses, especially when automation of object detection and extraction is required. Modern object detection algorithms in astronomy are oriented towards detection of stars and galaxies, ignoring completely the detection of existing linear features. With the emergence of wide-field sky surveys, linear features attract scientific interest as possible trails of fast flybys of near-Earth asteroids and meteors. In this work, we describe a new linear feature detection algorithm designed specifically for implementation in big data astronomy. The algorithm combines a series of algorithmic steps that first remove other objects (stars and galaxies) from the image and then enhance the line to enable more efficient line detection with the Hough algorithm. The rate of false positives is greatly reduced thanks to a step that replaces possible line segments with rectangles and then compares lines fitted to the rectangles with the lines obtained directly from the image. The speed of the algorithm and its applicability in astronomical surveys are also discussed.
A splay tree-based approach for efficient resource location in P2P networks.
Zhou, Wei; Tan, Zilong; Yao, Shaowen; Wang, Shipu
2014-01-01
Resource location in structured P2P systems has a critical influence on system performance. Existing analytical studies of the Chord protocol have shown some potential improvements in performance. In this paper a new splay tree-based Chord structure called SChord is proposed to improve the efficiency of locating resources. We consider a novel implementation of the Chord finger table (routing table) based on the splay tree. This approach extends the Chord finger table with additional routing entries. An adaptive routing algorithm is proposed for the implementation, and it can be shown that the hop count is significantly reduced without introducing any other protocol overheads. We analyze the hop count of the adaptive routing algorithm, as compared to Chord variants, and demonstrate sharp upper and lower bounds for both worst-case and average-case settings. In addition, we theoretically analyze the hop reduction in SChord and derive the fact that SChord can significantly reduce the routing hops compared to Chord. Several simulations are presented to evaluate the performance of the algorithm and support our analytical findings. The simulation results show the efficiency of SChord.
GAMBIT: A Parameterless Model-Based Evolutionary Algorithm for Mixed-Integer Problems.
Sadowski, Krzysztof L; Thierens, Dirk; Bosman, Peter A N
2018-01-01
Learning and exploiting problem structure is one of the key challenges in optimization. This is especially important for black-box optimization (BBO) where prior structural knowledge of a problem is not available. Existing model-based Evolutionary Algorithms (EAs) are very efficient at learning structure in both the discrete, and in the continuous domain. In this article, discrete and continuous model-building mechanisms are integrated for the Mixed-Integer (MI) domain, comprising discrete and continuous variables. We revisit a recently introduced model-based evolutionary algorithm for the MI domain, the Genetic Algorithm for Model-Based mixed-Integer opTimization (GAMBIT). We extend GAMBIT with a parameterless scheme that allows for practical use of the algorithm without the need to explicitly specify any parameters. We furthermore contrast GAMBIT with other model-based alternatives. The ultimate goal of processing mixed dependences explicitly in GAMBIT is also addressed by introducing a new mechanism for the explicit exploitation of mixed dependences. We find that processing mixed dependences with this novel mechanism allows for more efficient optimization. We further contrast the parameterless GAMBIT with Mixed-Integer Evolution Strategies (MIES) and other state-of-the-art MI optimization algorithms from the General Algebraic Modeling System (GAMS) commercial algorithm suite on problems with and without constraints, and show that GAMBIT is capable of solving problems where variable dependences prevent many algorithms from successfully optimizing them.
A robust multilevel simultaneous eigenvalue solver
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1993-01-01
Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat appropriately these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh Ritz projection, combined with a technique, the backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q^2 N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrodinger type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
A pruning algorithm for Meta-blocking based on cumulative weight
NASA Astrophysics Data System (ADS)
Zhang, Fulin; Gao, Zhipeng; Niu, Kun
2017-08-01
Entity Resolution is an important process in data cleaning and data integration. It usually employs a blocking method to avoid quadratic-complexity work when scaling to large data sets. Meta-blocking performs better in the context of highly heterogeneous information spaces, yet its precision and efficiency still have room for improvement. In this paper, we present a new pruning algorithm for Meta-blocking. It achieves higher precision than the existing WEP algorithm at a small cost in recall, and in addition it reduces the runtime of the blocking process. We evaluate the proposed method on five real-world data sets.
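For context, the following hedged sketch shows Meta-blocking-style edge weighting (number of shared blocks) followed by a WEP-like mean-weight pruning rule; it illustrates the baseline that a cumulative-weight pruning scheme would improve on, not the new algorithm itself, and the toy blocks are invented.

from collections import defaultdict

def build_edges(blocks):
    """Records sharing a block become an edge; its weight counts common blocks."""
    weights = defaultdict(int)
    for members in blocks.values():
        for i, a in enumerate(members):
            for b in members[i + 1:]:
                weights[(min(a, b), max(a, b))] += 1
    return weights

def prune(weights):
    """WEP-like rule: keep only edges whose weight reaches the mean weight."""
    mean_w = sum(weights.values()) / len(weights)
    return {e: w for e, w in weights.items() if w >= mean_w}

blocks = {"smith": [1, 2, 3], "1980": [1, 2], "london": [2, 3, 4]}
print(prune(build_edges(blocks)))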
An Extended Deterministic Dendritic Cell Algorithm for Dynamic Job Shop Scheduling
NASA Astrophysics Data System (ADS)
Qiu, X. N.; Lau, H. Y. K.
The problem of job shop scheduling in a dynamic environment where random perturbation exists in the system is studied. In this paper, an extended deterministic Dendritic Cell Algorithm (dDCA) is proposed to solve such a dynamic Job Shop Scheduling Problem (JSSP) where unexpected events occurred randomly. This algorithm is designed based on dDCA and makes improvements by considering all types of signals and the magnitude of the output values. To evaluate this algorithm, ten benchmark problems are chosen and different kinds of disturbances are injected randomly. The results show that the algorithm performs competitively as it is capable of triggering the rescheduling process optimally with much less run time for deciding the rescheduling action. As such, the proposed algorithm is able to minimize the rescheduling times under the defined objective and to keep the scheduling process stable and efficient.
SAR correlation technique - An algorithm for processing data with large range walk
NASA Technical Reports Server (NTRS)
Jin, M.; Wu, C.
1983-01-01
This paper presents an algorithm for synthetic aperture radar (SAR) azimuth correlation with extraneously large range migration effect which can not be accommodated by the existing frequency domain interpolation approach used in current SEASAT SAR processing. A mathematical model is first provided for the SAR point-target response in both the space (or time) and the frequency domain. A simple and efficient processing algorithm derived from the hybrid algorithm is then given. This processing algorithm enables azimuth correlation by two steps. The first step is a secondary range compression to handle the dispersion of the spectra of the azimuth response along range. The second step is the well-known frequency domain range migration correction approach for the azimuth compression. This secondary range compression can be processed simultaneously with range pulse compression. Simulation results provided here indicate that this processing algorithm yields a satisfactory compressed impulse response for SAR data with large range migration.
Information-based management mode based on value network analysis for livestock enterprises
NASA Astrophysics Data System (ADS)
Liu, Haoqi; Lee, Changhoon; Han, Mingming; Su, Zhongbin; Padigala, Varshinee Anu; Shen, Weizheng
2018-01-01
With the development of computer and IT technologies, enterprise management has gradually become information-based management. However, due to poor technical competence and non-uniform management, most breeding enterprises show a lack of organisation in data collection and management. In addition, low levels of efficiency result in increasing production costs. This paper adopts 'struts2' in order to construct an information-based management system for standardised and normalised management of the production process in beef cattle breeding enterprises. We present a radio-frequency identification system by studying multiple-tag anti-collision via a dynamic grouping ALOHA algorithm. This algorithm is based on the existing ALOHA algorithm and uses improved dynamic grouping, which is characterised by a high throughput rate. The new algorithm can reach a throughput 42% higher than that of the general ALOHA algorithm. With a change in the number of tags, the system throughput is relatively stable.
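The intuition behind dynamic grouping can be seen from the expected efficiency of framed-slotted ALOHA: a slot succeeds when exactly one tag picks it, so keeping each group's size close to the frame length keeps efficiency near its 1/e optimum. The short sketch below only evaluates that closed-form expression and is not the paper's improved algorithm.

def slot_success_prob(n_tags, n_slots):
    """Probability that a given slot holds exactly one of n_tags replies."""
    return (n_tags / n_slots) * (1 - 1 / n_slots) ** (n_tags - 1)

for frame in (16, 32, 64, 128):
    print(frame, round(slot_success_prob(64, frame), 3))   # best near frame == tag count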
Online Community Detection for Large Complex Networks
Pan, Gang; Zhang, Wangsheng; Wu, Zhaohui; Li, Shijian
2014-01-01
Complex networks describe a wide range of systems in nature and society. To understand complex networks, it is crucial to investigate their community structure. In this paper, we develop an online community detection algorithm with linear time complexity for large complex networks. Our algorithm processes a network edge by edge in the order that the network is fed to the algorithm. If a new edge is added, it just updates the existing community structure in constant time, and does not need to re-compute the whole network. Therefore, it can efficiently process large networks in real time. Our algorithm optimizes expected modularity instead of modularity at each step to avoid poor performance. The experiments are carried out using 11 public data sets, and are measured by two criteria, modularity and NMI (Normalized Mutual Information). The results show that our algorithm's running time is less than the commonly used Louvain algorithm while it gives competitive performance. PMID:25061683
Global Contrast Based Salient Region Detection.
Cheng, Ming-Ming; Mitra, Niloy J; Huang, Xiaolei; Torr, Philip H S; Hu, Shi-Min
2015-03-01
Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatial weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Despite such noisy internet images, where the saliency regions are ambiguous, our saliency guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N
2015-04-28
Genetic regulatory networks are the key to understanding biochemical systems. One condition of the genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks will help biologists to identify determinant and stable factors. Existing methods identify attractors based on a random initial state or the entire state space simultaneously. They cannot identify fixed-length attractors directly. Their time complexity increases exponentially with respect to the number and length of attractors. This study used bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed-length attractors, which is more suitable for large Boolean networks and networks with numerous attractors. After comparison using the tool BooleNet, empirical experiments involving biochemical systems demonstrated the feasibility and efficiency of our approach.
PCA-LBG-based algorithms for VQ codebook generation
NASA Astrophysics Data System (ADS)
Tsai, Jinn-Tsong; Yang, Po-Yuan
2015-04-01
Vector quantisation (VQ) codebooks are generated by combining principal component analysis (PCA) algorithms with Linde-Buzo-Gray (LBG) algorithms. All training vectors are grouped according to the projected values of the principal components. The PCA-LBG-based algorithms include (1) PCA-LBG-Median, which selects the median vector of each group, (2) PCA-LBG-Centroid, which adopts the centroid vector of each group, and (3) PCA-LBG-Random, which randomly selects a vector of each group. The LBG algorithm finds a codebook based on the better vectors sent to an initial codebook by the PCA. The PCA performs an orthogonal transformation to convert a set of potentially correlated variables into a set of variables that are not linearly correlated. Because the orthogonal transformation efficiently distinguishes test image vectors, the proposed PCA-LBG-based algorithm is expected to outperform conventional algorithms in designing VQ codebooks. The experimental results confirm that the proposed PCA-LBG-based algorithms indeed obtain better results compared to existing methods reported in the literature.
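A compact sketch of the PCA-LBG-Centroid variant may make the pipeline concrete: project the training vectors onto the first principal component, split them into K groups by that ordering, seed the codebook with the group centroids, and refine with LBG (k-means style) updates. The function name and the equal-size grouping rule are assumptions, not the paper's exact procedure.

```python
import numpy as np

def pca_lbg_centroid(train, K, n_iter=10):
    """train: (N, d) training vectors; returns a (K, d) codebook."""
    X = train - train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ Vt[0]                        # scores on the first principal component
    groups = np.array_split(np.argsort(proj), K)
    codebook = np.array([train[g].mean(axis=0) for g in groups])
    for _ in range(n_iter):                 # LBG refinement
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for k in range(K):
            members = train[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook
```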
Jiang, Peng; Liu, Shuai; Liu, Jun; Wu, Feng; Zhang, Le
2016-07-14
Most of the existing node depth-adjustment deployment algorithms for underwater wireless sensor networks (UWSNs) just consider how to optimize network coverage and connectivity rate. However, these studies do not discuss full network connectivity, while optimization of network energy efficiency and network reliability are vital topics for UWSN deployment. Therefore, in this study, a depth-adjustment deployment algorithm based on two-dimensional (2D) convex hull and spanning tree (NDACS) for UWSNs is proposed. First, the proposed algorithm uses the geometric characteristics of a 2D convex hull and empty circle to find the optimal location of a sleep node and activate it, minimizes the network coverage overlaps of the 2D plane, and then increases the coverage rate until the first layer coverage threshold is reached. Second, the sink node acts as a root node of all active nodes on the 2D convex hull and then forms a small spanning tree gradually. Finally, the depth-adjustment strategy based on time marker is used to achieve the three-dimensional overall network deployment. Compared with existing depth-adjustment deployment algorithms, the simulation results show that the NDACS algorithm can maintain full network connectivity with high network coverage rate, as well as improved network average node degree, thus increasing network reliability.
Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.
Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich
2016-01-01
We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Qyyum, Muhammad Abdul; Long, Nguyen Van Duc; Minh, Le Quang; Lee, Moonyong
2018-01-01
Design optimization of the single mixed refrigerant (SMR) natural gas liquefaction (LNG) process involves highly non-linear interactions between decision variables, constraints, and the objective function. These non-linear interactions lead to irreversibility, which deteriorates the energy efficiency of the LNG process. In this study, a simple and highly efficient hybrid modified coordinate descent (HMCD) algorithm was proposed to cope with the optimization of the natural gas liquefaction process. The single mixed refrigerant process was modeled in Aspen Hysys® and then connected to a Microsoft Visual Studio environment. The proposed optimization algorithm provided an improved result compared to the other existing methodologies in finding the optimal condition of the complex mixed refrigerant natural gas liquefaction process. By applying the proposed optimization algorithm, the SMR process can be designed with a specific compression power of 0.2555 kW, which is equivalent to a 44.3% energy saving compared to the base case. Furthermore, the coefficient of performance (COP) can be enhanced by up to 34.7% compared to the base case. The proposed optimization algorithm provides a deep understanding of the optimization of the liquefaction process from both technical and numerical perspectives. In addition, the HMCD algorithm can be employed for any mixed refrigerant based liquefaction process in the natural gas industry.
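To give a feel for the coordinate-descent family to which the HMCD algorithm belongs, here is a generic bound-constrained coordinate-descent sketch. It is not the paper's hybrid method and has no link to Aspen Hysys; the `objective` and `bounds` inputs and the step-shrinking rule are illustrative assumptions.

```python
def coordinate_descent(objective, x0, bounds, step=0.1, shrink=0.5, tol=1e-6, max_sweeps=100):
    """Minimize objective(x) by probing one decision variable at a time
    within its bounds, shrinking the step once no coordinate improves."""
    x = list(x0)
    best = objective(x)
    for _ in range(max_sweeps):
        improved = False
        for i, (lo, hi) in enumerate(bounds):
            for delta in (step, -step):
                trial = list(x)
                trial[i] = min(hi, max(lo, trial[i] + delta * (hi - lo)))
                val = objective(trial)
                if val < best - tol:
                    x, best, improved = trial, val, True
        if not improved:
            step *= shrink          # refine the search once progress stalls
            if step < tol:
                break
    return x, best
```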
Node fingerprinting: an efficient heuristic for aligning biological networks.
Radu, Alex; Charleston, Michael
2014-10-01
With the continuing increase in availability of biological data and improvements to biological models, biological network analysis has become a promising area of research. An emerging technique for the analysis of biological networks is through network alignment. Network alignment has been used to calculate genetic distance, similarities between regulatory structures, and the effect of external forces on gene expression, and to depict conditional activity of expression modules in cancer. Network alignment is algorithmically complex, and therefore we must rely on heuristics, ideally as efficient and accurate as possible. The majority of current techniques for network alignment rely on precomputed information, such as with protein sequence alignment, or on tunable network alignment parameters, which may introduce an increased computational overhead. Our presented algorithm, which we call Node Fingerprinting (NF), is appropriate for performing global pairwise network alignment without precomputation or tuning, can be fully parallelized, and is able to quickly compute an accurate alignment between two biological networks. It has performed as well as or better than existing algorithms on biological and simulated data, and with fewer computational resources. The algorithmic validation performed demonstrates the low computational resource requirements of NF.
Biyikli, Emre; To, Albert C.
2015-01-01
A new topology optimization method called the Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and is also efficient and accurate at the same time. It is implemented into two MATLAB programs to solve the stress constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows comparable efficiency and accuracy with an existing optimality criteria method which computes sensitivities. Also, the PTO stress constrained algorithm and minimum compliance algorithm are compared by feeding output from one algorithm to the other in an alternative manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future works are discussed. The computer programs are self-contained and publicly shared in the website www.ptomethod.org. PMID:26678849
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
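The core idea of progressive sampling can be sketched independently of the Bayesian-optimization machinery. The toy selector below evaluates every candidate configuration on a small sample, discards the worse half, doubles the sample, and repeats; `train_and_score`, the halving ratio, and the starting sample size are hypothetical and not taken from the paper.

```python
def progressive_selection(configs, train_and_score, data, start=1000, grow=2, keep=0.5):
    """train_and_score(config, sample) is assumed to return an error rate
    (lower is better). Successively larger samples weed out weak candidates
    cheaply before any full-data training happens."""
    candidates = list(configs)
    n = start
    while len(candidates) > 1 and n <= len(data):
        sample = data[:n]
        candidates.sort(key=lambda c: train_and_score(c, sample))
        candidates = candidates[:max(1, int(len(candidates) * keep))]
        n *= grow
    return candidates[0]
```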
Wu, Yufeng
2012-03-01
Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution© 2011 The Society for the Study of Evolution.
Hamiltonian Monte Carlo acceleration using surrogate functions with random bases.
Zhang, Cheng; Shahbaba, Babak; Zhao, Hongkai
2017-11-01
For big data analysis, high computational cost for Bayesian methods often limits their applications in practice. In recent years, there have been many attempts to improve computational efficiency of Bayesian inference. Here we propose an efficient and scalable computational technique for a state-of-the-art Markov chain Monte Carlo methods, namely, Hamiltonian Monte Carlo. The key idea is to explore and exploit the structure and regularity in parameter space for the underlying probabilistic model to construct an effective approximation of its geometric properties. To this end, we build a surrogate function to approximate the target distribution using properly chosen random bases and an efficient optimization process. The resulting method provides a flexible, scalable, and efficient sampling algorithm, which converges to the correct target distribution. We show that by choosing the basis functions and optimization process differently, our method can be related to other approaches for the construction of surrogate functions such as generalized additive models or Gaussian process models. Experiments based on simulated and real data show that our approach leads to substantially more efficient sampling algorithms compared to existing state-of-the-art methods.
A lightweight QRS detector for single lead ECG signals using a max-min difference algorithm.
Pandit, Diptangshu; Zhang, Li; Liu, Chengyu; Chattopadhyay, Samiran; Aslam, Nauman; Lim, Chee Peng
2017-06-01
Detection of the R-peak pertaining to the QRS complex of an ECG signal plays an important role for the diagnosis of a patient's heart condition. To accurately identify the QRS locations from the acquired raw ECG signals, we need to handle a number of challenges, which include noise, baseline wander, varying peak amplitudes, and signal abnormality. This research aims to address these challenges by developing an efficient lightweight algorithm for QRS (i.e., R-peak) detection from raw ECG signals. A lightweight real-time sliding window-based Max-Min Difference (MMD) algorithm for QRS detection from Lead II ECG signals is proposed. Targeting the best trade-off between computational efficiency and detection accuracy, the proposed algorithm consists of five key steps for QRS detection, namely, baseline correction, MMD curve generation, dynamic threshold computation, R-peak detection, and error correction. Five annotated databases from Physionet are used for evaluating the proposed algorithm in R-peak detection. Integrated with a feature extraction technique and a neural network classifier, the proposed QRS detection algorithm has also been extended to undertake normal and abnormal heartbeat detection from ECG signals. The proposed algorithm exhibits a high degree of robustness in QRS detection and achieves an average sensitivity of 99.62% and an average positive predictivity of 99.67%. Its performance compares favorably with those from the existing state-of-the-art models reported in the literature. In regard to normal and abnormal heartbeat detection, the proposed QRS detection algorithm in combination with the feature extraction technique and neural network classifier achieves an overall accuracy rate of 93.44% based on an empirical evaluation using the MIT-BIH Arrhythmia data set with 10-fold cross validation. In comparison with other related studies, the proposed algorithm offers a lightweight adaptive alternative for R-peak detection with good computational efficiency. The empirical results indicate that it not only yields a high accuracy rate in QRS detection, but also exhibits efficient computational complexity of the order of O(n), where n is the length of an ECG signal. Copyright © 2017 Elsevier B.V. All rights reserved.
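An illustrative max-min-difference style detector is sketched below. It is not the published MMD pipeline: the window length, the single global threshold (the paper uses a dynamic one), and the omission of baseline and error correction are simplifying assumptions.

```python
import numpy as np

def mmd_r_peaks(ecg, fs, win_s=0.10, refractory_s=0.25, thr_ratio=0.5):
    """Compute the local max-min range in a sliding window, threshold it,
    and keep at most one R-peak per refractory period."""
    w = max(2, int(win_s * fs))
    refractory = int(refractory_s * fs)
    ecg = np.asarray(ecg, dtype=float)
    mmd = np.array([ecg[i:i + w].max() - ecg[i:i + w].min()
                    for i in range(len(ecg) - w)])
    thr = thr_ratio * mmd.max()
    peaks, last = [], -refractory
    for i in np.flatnonzero(mmd > thr):
        if i - last >= refractory:
            peaks.append(i + int(np.argmax(ecg[i:i + w])))  # refine to the R-peak sample
            last = i
    return peaks
```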
DNABIT Compress - Genome compression algorithm.
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-22
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
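To illustrate the basic idea of bit-code compression for DNA (not the full DNABIT Compress scheme, which assigns richer codes to exact and reverse repeats), a plain 2-bit-per-base packer might look like the following; the 2-bit table and the padding rule are conventional choices, not the paper's.

```python
# Map each base to two bits and pack four bases per byte.
CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}

def pack(seq):
    """Return (packed_bytes, original_length) for an A/C/G/T string."""
    out, buf, nbits = bytearray(), 0, 0
    for base in seq.upper():
        buf = (buf << 2) | CODE[base]
        nbits += 2
        if nbits == 8:
            out.append(buf)
            buf, nbits = 0, 0
    if nbits:
        out.append(buf << (8 - nbits))   # pad the final byte with zeros
    return bytes(out), len(seq)
```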
A hybrid algorithm for speckle noise reduction of ultrasound images.
Singh, Karamjeet; Ranade, Sukhjeet Kaur; Singh, Chandan
2017-09-01
Medical images are contaminated by multiplicative speckle noise, which significantly reduces the contrast of ultrasound images and has a negative effect on various image interpretation tasks. In this paper, we propose a hybrid denoising approach which combines both local and nonlocal information in an efficient manner. The proposed hybrid algorithm consists of three stages: in the first stage, local statistics in the form of a guided filter are used to reduce the effect of speckle noise initially. Then, an improved speckle reducing bilateral filter (SRBF) is developed to further reduce the speckle noise from the medical images. Finally, to reconstruct the diffused edges we use an efficient post-processing technique which jointly considers the advantages of both the bilateral and nonlocal means (NLM) filters for the attenuation of speckle noise. The performance of the proposed hybrid algorithm is evaluated on synthetic, simulated and real ultrasound images. The experiments conducted on various test images demonstrate that our proposed hybrid approach outperforms various traditional speckle reduction approaches, including the recently proposed NLM and optimized Bayesian-based NLM. The results of various quantitative and qualitative measures, and visual inspection of denoised synthetic and real ultrasound images, demonstrate that the proposed hybrid algorithm has strong denoising capability and is able to preserve fine image details such as the edge of a lesion better than previously developed methods for speckle noise reduction. The denoising and edge preserving capability of the hybrid algorithm is far better than existing traditional and recently proposed speckle reduction (SR) filters. The success of the proposed algorithm would help lay the foundation for hybrid algorithms for denoising of ultrasound images. Copyright © 2017 Elsevier B.V. All rights reserved.
Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation
NASA Astrophysics Data System (ADS)
Zhuang, Wei
Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important in calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation and Ice Sensor. Compared to existing ground peak identification algorithms, FICA was tested in different land cover type plots and showed improved accuracy in ground detections of the vegetation plots and similar accuracy in developed area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process, and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fitted the shrub canopy reflection and detected the ground peak by investigating the residual signal, which was generated by deducting a Gaussian fitting function from the raw waveform. After the deduction, the overlapping ground peak was identified as the local maximum of the residual signal. In addition, an applicability model was built for determining waveforms where the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition. They incorporated both waveform intensity, represented by the area covered by a Gaussian function, and its associated height, which was the centroid of the Gaussian function. By considering signal reflection of different vegetation layers, the developed metrics obtained better estimation accuracy in aboveground biomass when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, the dissertation investigated various techniques for large footprint waveform LiDAR processing for detecting the ground peak and estimating biomass. The novel techniques developed in this dissertation showed better performance than existing methods or metrics.
Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.
Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho
2017-09-18
In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
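Since the last step relies on Dinkelbach's method for fractional programming, a generic sketch of that iteration may be useful. It is not tied to the paper's beamforming problem: `numerator`, `denominator`, and `solve_inner` (assumed to return a maximizer of the parametric subproblem, e.g., via a convex solver) are placeholders.

```python
def dinkelbach(numerator, denominator, solve_inner, lam0=0.0, tol=1e-6, max_iter=50):
    """Maximize numerator(x) / denominator(x) by solving a sequence of
    parametric subproblems max_x numerator(x) - lam * denominator(x)."""
    lam = lam0
    x = solve_inner(lam)
    for _ in range(max_iter):
        f = numerator(x) - lam * denominator(x)
        if abs(f) < tol:            # converged: lam is the optimal ratio
            break
        lam = numerator(x) / denominator(x)
        x = solve_inner(lam)
    return x, lam
```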
Improving HVAC operational efficiency in small-and medium-size commercial buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert
Small- and medium-size (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and consume over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability, no monitoring, or failure management. Therefore, many of these buildings are operated inefficiently and consume excess energy. SMBs typically use packaged rooftop units (RTUs) that are controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of existing commercial building stock in the United States for many reasons, chief among them being to mitigate the climate change impacts. Studies have shown that managing set points and schedules of the RTUs will result in up to 20% energy and cost savings. Another problem associated with RTUs is short cycling, when an RTU goes through ON and OFF cycles too frequently. Excessive cycling can lead to excessive wear and to premature failure of the compressor or its components. Also, short cycling can result in a significantly decreased average efficiency (up to 10%), even if there are no physical failures in the equipment. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs thereby leading to persistent building operations can significantly increase the operational efficiency of the SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. The work reported in this paper describes two algorithms for detecting the zone set point temperature and RTU cycling rate that can be deployed on the low-cost infrastructure. These algorithms only require the zone temperature data for detection. The algorithms have been tested and validated using field data from a number of RTUs from six buildings in different climate locations. Overall, the algorithms were successful in detecting the set points and ON/OFF cycles accurately using the peak detection technique. The paper describes the two algorithms, results from testing the algorithms using field data, how the algorithms can be used to improve SMBs efficiency, and presents related conclusions.
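Because both algorithms are described as peak detection on zone temperature alone, a hedged sketch of cycle counting along those lines is shown below. It is not the deployed code; the prominence value, the sampling assumptions, and the function name are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

def count_rtu_cycles(zone_temp, prominence=0.3):
    """Count RTU ON/OFF cycles from zone-temperature oscillations: each
    sufficiently prominent local temperature maximum is taken as one
    heating/cooling cycle. A real deployment would also need the sampling
    interval to convert the count into a cycling rate."""
    temps = np.asarray(zone_temp, dtype=float)
    peaks, _ = find_peaks(temps, prominence=prominence)
    return len(peaks)
```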
Negative Difference Resistance and Its Application to Construct Boolean Logic Circuits
NASA Astrophysics Data System (ADS)
Nikodem, Maciej; Bawiec, Marek A.; Surmacz, Tomasz R.
Electronic circuits based on nanodevices and quantum effect are the future of logic circuits design. Today's technology allows constructing resonant tunneling diodes, quantum cellular automata and nanowires/nanoribbons that are the elementary components of threshold gates. However, synthesizing a threshold circuit for an arbitrary logic function is still a challenging task where no efficient algorithms exist. This paper focuses on Generalised Threshold Gates (GTG), giving the overview of threshold circuit synthesis methods and presenting an algorithm that considerably simplifies the task in case of GTG circuits.
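For readers unfamiliar with threshold logic, the smallest building block can be sketched in a few lines. Generalised threshold gates extend this idea; the example below (a 3-input majority gate) is illustrative and not drawn from the paper.

```python
def threshold_gate(inputs, weights, threshold):
    """Linear threshold gate: output 1 iff the weighted input sum reaches
    the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Example: 3-input majority realised as a threshold gate.
assert threshold_gate([1, 0, 1], [1, 1, 1], 2) == 1
assert threshold_gate([1, 0, 0], [1, 1, 1], 2) == 0
```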
Gao, Ying; Wkram, Chris Hadri; Duan, Jiajie; Chou, Jarong
2015-01-01
In order to prolong the network lifetime, energy-efficient protocols adapted to the features of wireless sensor networks should be used. This paper explores in depth the nature of heterogeneous wireless sensor networks, and finally proposes an algorithm to address the problem of energy-efficient clustering in heterogeneous networks. The proposed algorithm selects cluster heads according to the degree of energy attenuation during network operation and the degree of the candidate nodes' effective coverage of the whole network, so as to obtain an even energy consumption over the whole network in situations with a high degree of coverage. Simulation results show that the proposed clustering protocol has better adaptability to heterogeneous environments than existing clustering algorithms in prolonging the network lifetime. PMID:26690440
Learning Time-Varying Coverage Functions
Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le
2015-01-01
Coverage functions are an important class of discrete functions that capture the law of diminishing returns arising naturally from applications in social network analysis, machine learning, and algorithmic game theory. In this paper, we propose a new problem of learning time-varying coverage functions, and develop a novel parametrization of these functions using random features. Based on the connection between time-varying coverage functions and counting processes, we also propose an efficient parameter learning algorithm based on likelihood maximization, and provide a sample complexity analysis. We applied our algorithm to the influence function estimation problem in information diffusion in social networks, and show that with few assumptions about the diffusion processes, our algorithm is able to estimate influence significantly more accurately than existing approaches on both synthetic and real world data. PMID:25960624
Huang, Yu; Guo, Feng; Li, Yongling; Liu, Yufeng
2015-01-01
Parameter estimation for fractional-order chaotic systems is an important issue in fractional-order chaotic control and synchronization and could be essentially formulated as a multidimensional optimization problem. A novel algorithm called quantum parallel particle swarm optimization (QPPSO) is proposed to solve the parameter estimation for fractional-order chaotic systems. The parallel characteristic of quantum computing is used in QPPSO. This characteristic increases the calculation of each generation exponentially. The behavior of particles in quantum space is restrained by the quantum evolution equation, which consists of the current rotation angle, individual optimal quantum rotation angle, and global optimal quantum rotation angle. Numerical simulation based on several typical fractional-order systems and comparisons with some typical existing algorithms show the effectiveness and efficiency of the proposed algorithm. PMID:25603158
Analysis of Community Detection Algorithms for Large Scale Cyber Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mane, Prachita; Shanbhag, Sunanda; Kamath, Tanmayee
The aim of this project is to use existing community detection algorithms on an IP network dataset to create supernodes within the network. This study compares the performance of different algorithms on the network in terms of running time. The paper begins with an introduction to the concept of clustering and community detection followed by the research question that the team aimed to address. Further the paper describes the graph metrics that were considered in order to shortlist algorithms followed by a brief explanation of each algorithm with respect to the graph metric on which it is based. The next section in the paper describes the methodology used by the team in order to run the algorithms and determine which algorithm is most efficient with respect to running time. Finally, the last section of the paper includes the results obtained by the team and a conclusion based on those results as well as future work.
Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing
2017-01-01
This paper presents an efficient and precise imaging algorithm for the large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high order phase coupling along the range and azimuth dimensions. This coupling problem causes defocusing along the range and azimuth dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, which is based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process the large bandwidth sliding spotlight SAR data. It is proven that great improvements of the focus depth and imaging accuracy are obtained via the GCS-BAS algorithm. PMID:28555057
Improved detection of soma location and morphology in fluorescence microscopy images of neurons.
Kayasandik, Cihan Bilge; Labate, Demetrio
2016-12-01
Automated detection and segmentation of somas in fluorescent images of neurons is a major goal in quantitative studies of neuronal networks, including applications of high-content-screenings where it is required to quantify multiple morphological properties of neurons. Despite recent advances in image processing targeted to neurobiological applications, existing algorithms of soma detection are often unreliable, especially when processing fluorescence image stacks of neuronal cultures. In this paper, we introduce an innovative algorithm for the detection and extraction of somas in fluorescent images of networks of cultured neurons where somas and other structures exist in the same fluorescent channel. Our method relies on a new geometrical descriptor called Directional Ratio and a collection of multiscale orientable filters to quantify the level of local isotropy in an image. To optimize the application of this approach, we introduce a new construction of multiscale anisotropic filters that is implemented by separable convolution. Extensive numerical experiments using 2D and 3D confocal images show that our automated algorithm reliably detects somas, accurately segments them, and separates contiguous ones. We include a detailed comparison with state-of-the-art existing methods to demonstrate that our algorithm is extremely competitive in terms of accuracy, reliability and computational efficiency. Our algorithm will facilitate the development of automated platforms for high content neuron image processing. A Matlab code is released open-source and freely available to the scientific community. Copyright © 2016 Elsevier B.V. All rights reserved.
Model Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.
Improving the Numerical Stability of Fast Matrix Multiplication
Ballard, Grey; Benson, Austin R.; Druinsky, Alex; ...
2016-10-04
Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve the accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. In conclusion, we benchmark performance and demonstrate our improved numerical accuracy.
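As a concrete reference point for the class of algorithms discussed, here is a plain Strassen recursion in NumPy. It assumes square matrices whose dimension is a power of two and falls back to the classical product below a cutoff; it illustrates the recursion structure only and does not include the stability improvements studied in the paper.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Textbook Strassen multiplication for n x n matrices with n a power of two."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B                       # classical product at small sizes
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```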
Tanyimboh, Tiku T; Seyoum, Alemtsehay G
2016-12-01
This article investigates the computational efficiency of constraint handling in multi-objective evolutionary optimization algorithms for water distribution systems. The methodology investigated here encourages the co-existence and simultaneous development, including crossbreeding, of subpopulations of cost-effective feasible and infeasible solutions based on Pareto dominance. This yields a boundary search approach that also promotes diversity in the gene pool throughout the progress of the optimization by exploiting the full spectrum of non-dominated infeasible solutions. The relative effectiveness of small and moderate population sizes with respect to the number of decision variables is investigated also. The results reveal the optimization algorithm to be efficient, stable and robust. It found optimal and near-optimal solutions reliably and efficiently. The real-world system based optimization problem involved multiple variable head supply nodes, 29 fire-fighting flows, extended period simulation and multiple demand categories including water loss. The least cost solutions found satisfied the flow and pressure requirements consistently. The best solutions achieved indicative savings of 48.1% and 48.2% based on the cost of the pipes in the existing network, for populations of 200 and 1000, respectively. The population of 1000 achieved slightly better results overall. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
3D video coding: an overview of present and upcoming standards
NASA Astrophysics Data System (ADS)
Merkle, Philipp; Müller, Karsten; Wiegand, Thomas
2010-07-01
An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats consist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.
APRON: A Cellular Processor Array Simulation and Hardware Design Tool
NASA Astrophysics Data System (ADS)
Barr, David R. W.; Dudek, Piotr
2009-12-01
We present a software environment for the efficient simulation of cellular processor arrays (CPAs). This software (APRON) is used to explore algorithms that are designed for massively parallel fine-grained processor arrays, topographic multilayer neural networks, vision chips with SIMD processor arrays, and related architectures. The software uses a highly optimised core combined with a flexible compiler to provide the user with tools for the design of new processor array hardware architectures and the emulation of existing devices. We present performance benchmarks for the software processor array implemented on standard commodity microprocessors. APRON can be configured to use additional processing hardware if necessary and can be used as a complete graphical user interface and development environment for new or existing CPA systems, allowing more users to develop algorithms for CPA systems.
A fast efficient implicit scheme for the gasdynamic equations using a matrix reduction technique
NASA Technical Reports Server (NTRS)
Barth, T. J.; Steger, J. L.
1985-01-01
An efficient implicit finite-difference algorithm for the gasdynamic equations utilizing matrix reduction techniques is presented. A significant reduction in arithmetic operations is achieved without loss of the stability characteristics or generality found in the Beam and Warming approximate factorization algorithm. Steady-state solutions to the conservative Euler equations in generalized coordinates are obtained for transonic flows and used to show that the method offers computational advantages over the conventional Beam and Warming scheme. Existing Beam and Warming codes can be retrofitted with minimal effort. The theoretical extension of the matrix reduction technique to the full Navier-Stokes equations in Cartesian coordinates is presented in detail. Linear stability, using a Fourier stability analysis, is demonstrated and discussed for the one-dimensional Euler equations.
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space partition algorithm of the nearest neighbor search process, and the numerical accuracy is further enhanced by a local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified with flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.
Scaling deep learning on GPU and knights landing clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Buluc, Aydin; Demmel, James
Training neural networks has become a big bottleneck. For example, training the ImageNet dataset on one Nvidia K20 GPU needs 21 days. To speed up the training process, current deep learning systems heavily rely on hardware accelerators. However, these accelerators have limited on-chip memory compared with CPUs. We use both self-host Intel Knights Landing (KNL) clusters and multi-GPU clusters as our target platforms. From the algorithm aspect, we focus on Elastic Averaging SGD (EASGD) to design algorithms for HPC clusters. We redesign four efficient algorithms for HPC systems to improve EASGD's poor scaling on clusters. Async EASGD, Async MEASGD, and Hogwild EASGD are faster than their existing counterpart methods (Async SGD, Async MSGD, and Hogwild SGD) in all comparisons. Sync EASGD achieves 5.3X speedup over original EASGD on the same platform. We achieve 91.5% weak scaling efficiency on 4253 KNL cores, which is higher than the state-of-the-art implementation.
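The EASGD update rule itself is simple and can be sketched independently of the HPC implementation. The synchronous round below is illustrative: each worker takes an SGD step that is also pulled toward a shared center variable, and the center drifts toward the worker average; the learning-rate and elasticity values are placeholders, and the distributed communication that dominates the paper's contribution is omitted.

```python
def easgd_round(workers, center, grads, lr=0.01, alpha=0.001):
    """One synchronous EASGD round. workers: list of parameter vectors
    (floats or NumPy arrays); grads: matching list of gradients."""
    new_workers = []
    for w, g in zip(workers, grads):
        # local SGD step plus an elastic pull toward the center variable
        new_workers.append(w - lr * g - alpha * (w - center))
    # the center moves toward the average of the workers
    center = center + alpha * sum(w - center for w in workers)
    return new_workers, center
```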
MultiNest: Efficient and Robust Bayesian Inference
NASA Astrophysics Data System (ADS)
Feroz, F.; Hobson, M. P.; Bridges, M.
2011-09-01
We present further development and the first public release of our multimodal nested sampling algorithm, called MultiNest. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson (2008), which itself significantly outperformed existing MCMC techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MultiNest algorithm is demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla LambdaCDM model to include spatial curvature and a varying equation of state for dark energy. The MultiNest software is fully parallelized using MPI and includes an interface to CosmoMC. It will also be released as part of the SuperBayeS package, for the analysis of supersymmetric theories of particle physics, at this http URL.
Huang, Shuai; Li, Jing; Ye, Jieping; Fleisher, Adam; Chen, Kewei; Wu, Teresa; Reiman, Eric
2013-06-01
Structure learning of Bayesian Networks (BNs) is an important topic in machine learning. Driven by modern applications in genetics and brain sciences, accurate and efficient learning of large-scale BN structures from high-dimensional data becomes a challenging problem. To tackle this challenge, we propose a Sparse Bayesian Network (SBN) structure learning algorithm that employs a novel formulation involving one L1-norm penalty term to impose sparsity and another penalty term to ensure that the learned BN is a Directed Acyclic Graph--a required property of BNs. Through both theoretical analysis and extensive experiments on 11 moderate and large benchmark networks with various sample sizes, we show that SBN leads to improved learning accuracy, scalability, and efficiency as compared with 10 existing popular BN learning algorithms. We apply SBN to a real-world application of brain connectivity modeling for Alzheimer's disease (AD) and reveal findings that could lead to advancements in AD research.
NASA Astrophysics Data System (ADS)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
Zhou, Hong; Zhou, Michael; Li, Daisy; Manthey, Joseph; Lioutikova, Ekaterina; Wang, Hong; Zeng, Xiao
2017-11-17
The beauty and power of the genome editing mechanism, the CRISPR Cas9 endonuclease system, lies in the fact that it is RNA-programmable, such that Cas9 can be guided to any genomic locus complementary to a 20-nt RNA, the single guide RNA (sgRNA), to cleave double stranded DNA, allowing the introduction of wanted mutations. Unfortunately, it has been reported repeatedly that the sgRNA can also guide Cas9 to off-target sites where the DNA sequence is homologous to the sgRNA. Using the human genome and Streptococcus pyogenes Cas9 (SpCas9) as an example, this article mathematically analyzed the probabilities of off-target homologies of sgRNAs and discovered that for a large genome such as the human genome, potential off-target homologies are inevitable in sgRNA selection. A highly efficient computational algorithm was developed for whole genome sgRNA design and off-target homology searches. By means of a dynamically constructed sequence-indexed database and a simplified sequence alignment method, this algorithm achieves very high efficiency while guaranteeing the identification of all existing potential off-target homologies. Via this algorithm, 1,876,775 sgRNAs were designed for the 19,153 human mRNA genes and only two sgRNAs were found to be free of off-target homology. By means of the novel and efficient sgRNA homology search algorithm introduced in this article, genome wide sgRNA design and off-target analysis were conducted and the results confirmed the mathematical analysis that for an sgRNA sequence, it is almost impossible to escape potential off-target homologies. Future innovations in the CRISPR Cas9 gene editing technology need to focus on how to eliminate the Cas9 off-target activity.
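The sequence-indexed lookup idea can be sketched as a k-mer hash of PAM-adjacent 20-mers followed by a mismatch count against each indexed site. This is a simplified stand-in for the article's algorithm: the NGG PAM handling, the exhaustive mismatch scan, and all function names are assumptions for illustration.

```python
from collections import defaultdict

def build_index(genome, k=20):
    """Index every k-mer that is immediately followed by an NGG PAM (SpCas9)."""
    index = defaultdict(list)
    for i in range(len(genome) - k - 2):
        if genome[i + k + 1 : i + k + 3] == "GG":   # the 'N G G' PAM
            index[genome[i : i + k]].append(i)
    return index

def off_target_hits(index, sgrna, max_mismatches=3):
    """Return genome positions whose indexed protospacer differs from the
    20-nt guide by at most max_mismatches bases."""
    hits = []
    for site, positions in index.items():
        mismatches = sum(a != b for a, b in zip(sgrna, site))
        if mismatches <= max_mismatches:
            hits.extend(positions)
    return hits
```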
Energy-Efficient Deadline-Aware Data-Gathering Scheme Using Multiple Mobile Data Collectors.
Dasgupta, Rumpa; Yoon, Seokhoon
2017-04-01
In wireless sensor networks, the data collected by sensors are usually forwarded to the sink through multi-hop forwarding. However, multi-hop forwarding can be inefficient due to the energy hole problem and high communications overhead. Moreover, when the monitored area is large and the number of sensors is small, sensors cannot send the data via multi-hop forwarding due to the lack of network connectivity. In order to address those problems of multi-hop forwarding, in this paper, we consider a data collection scheme that uses mobile data collectors (MDCs), which visit sensors and collect data from them. Due to the recent breakthroughs in wireless power transfer technology, MDCs can also be used to recharge the sensors to keep them from draining their energy. In MDC-based data-gathering schemes, a big challenge is how to find the MDCs' traveling paths in a balanced way, such that their energy consumption is minimized and the packet-delay constraint is satisfied. Therefore, in this paper, we aim at finding the MDCs' paths, taking energy efficiency and delay constraints into account. We first define an optimization problem, named the delay-constrained energy minimization (DCEM) problem, to find the paths for MDCs. An integer linear programming problem is formulated to find the optimal solution. We also propose a two-phase path-selection algorithm to efficiently solve the DCEM problem. Simulations are performed to compare the performance of the proposed algorithms with two heuristic algorithms for the vehicle routing problem under various scenarios. The simulation results show that the proposed algorithms can outperform existing algorithms in terms of energy efficiency and packet delay.
A hybrid framework for coupling arbitrary summation-by-parts schemes on general meshes
NASA Astrophysics Data System (ADS)
Lundquist, Tomas; Malan, Arnaud; Nordström, Jan
2018-06-01
We develop a general interface procedure to couple both structured and unstructured parts of a hybrid mesh in a non-collocated, multi-block fashion. The target is to gain optimal computational efficiency in fluid dynamics simulations involving complex geometries. While guaranteeing stability, the proposed procedure is optimized for accuracy and requires minimal algorithmic modifications to already existing schemes. Initial numerical investigations confirm considerable efficiency gains compared to non-hybrid calculations of up to an order of magnitude.
Spectral matching technology for light-emitting diode-based jaundice photodynamic therapy device
NASA Astrophysics Data System (ADS)
Gan, Ru-ting; Guo, Zhen-ning; Lin, Jie-ben
2015-02-01
The objective of this paper is to obtain the spectrum of a light-emitting diode (LED)-based jaundice photodynamic therapy device (JPTD); the in vivo absorption spectrum of bilirubin was regarded as the target spectrum. According to spectral constructing theory, a simple genetic algorithm was first proposed in this study as the spectral matching algorithm. The optimal combination ratios of the LEDs were obtained, and the required number of LEDs was then calculated. Meanwhile, the algorithm was compared with existing spectral matching algorithms. The results show that this algorithm runs faster with higher efficiency, the switching time consumed is 2.06 s, and the fitted spectrum is very similar to the target spectrum, with a 98.15% matching degree. Thus, the blue LED-based JPTD can replace the traditional blue fluorescent tube, and the spectral matching technology put forward here can be applied to light source spectral matching for jaundice photodynamic therapy and other medical phototherapy.
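To convey the spectral-construction idea (a weighted sum of LED spectra fitted to a target spectrum), the following minimal sketch uses non-negative least squares instead of the paper's genetic algorithm; the Gaussian LED model, peak wavelengths, and target spectrum are synthetic assumptions for illustration only.

```python
# Illustrative spectral matching: find non-negative weights for a set of LED
# spectra so their sum approximates a target spectrum. The paper uses a genetic
# algorithm; NNLS is shown here only to convey the idea. All spectra are synthetic.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(400, 520, 121)                    # nm
def led(peak, width=12.0):                                  # simple Gaussian LED model
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

leds = np.column_stack([led(p) for p in (430, 450, 470, 490)])
target = led(458, 20.0)                                     # stand-in for bilirubin absorption

weights, _ = nnls(leds, target)                             # optimal combination ratios
fit = leds @ weights
matching_degree = 100 * (1 - np.linalg.norm(fit - target) / np.linalg.norm(target))
print(np.round(weights, 3), round(matching_degree, 2))
```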
Non-Convex Sparse and Low-Rank Based Robust Subspace Segmentation for Data Mining.
Cheng, Wenlong; Zhao, Mingbo; Xiong, Naixue; Chui, Kwok Tai
2017-07-15
Parsimony, including sparsity and low-rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l1-norm or nuclear norm constraints. However, the obtained results by convex optimization are usually suboptimal to solutions of original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm has been proposed by integrating lp-norm and Schatten p-norm constraints. Our so-obtained affinity graph can better capture local geometrical structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is revealed to be more effective and robust compared to five existing algorithms.
Joint demosaicking and zooming using moderate spectral correlation and consistent edge map
NASA Astrophysics Data System (ADS)
Zhou, Dengwen; Dong, Weiming; Chen, Wengang
2014-07-01
The recently published joint demosaicking and zooming algorithms for single-sensor digital cameras all overfit the popular Kodak test images, which have been found to have higher spectral correlation than typical color images. Their performance may therefore degrade significantly on other datasets, such as the McMaster test images, which have weak spectral correlation. A new joint demosaicking and zooming algorithm is proposed for the Bayer color filter array (CFA) pattern, in which the edge direction information (edge map) extracted from the raw CFA data is used consistently in demosaicking and zooming. It also makes moderate use of the spectral correlation between color planes. The experimental results confirm that the proposed algorithm produces an excellent performance on both the Kodak and McMaster datasets in terms of both subjective and objective measures. Our algorithm also has high computational efficiency. It provides a better tradeoff among adaptability, performance, and computational cost compared to the existing algorithms.
Multiparty Quantum Key Agreement Based on Quantum Search Algorithm
Cao, Hao; Ma, Wenping
2017-01-01
Quantum key agreement is an important topic in which the shared key must be negotiated equally by all participants, and any nontrivial subset of participants cannot fully determine the shared key. To date, the embedding modes of the subkey in all previously proposed quantum key agreement protocols are based on either BB84 or entangled states. Research on quantum key agreement protocols based on quantum search algorithms is still lacking. In this paper, on the basis of investigating the properties of quantum search algorithms, we propose the first quantum key agreement protocol whose subkey embedding mode is based on a quantum search algorithm known as Grover's algorithm. A novel example of a five-party protocol is presented. The efficiency analysis shows that our protocol is superior to existing MQKA protocols. Furthermore, it is secure against both external and internal attacks. PMID:28332610
The Ordered Clustered Travelling Salesman Problem: A Hybrid Genetic Algorithm
Ahmed, Zakir Hussain
2014-01-01
The ordered clustered travelling salesman problem is a variation of the usual travelling salesman problem in which a set of vertices (except the starting vertex) of the network is divided into some prespecified clusters. The objective is to find the least cost Hamiltonian tour in which vertices of any cluster are visited contiguously and the clusters are visited in the prespecified order. The problem is NP-hard, and it arises in practical transportation and sequencing problems. This paper develops a hybrid genetic algorithm using sequential constructive crossover, 2-opt search, and a local search for obtaining a heuristic solution to the problem. The efficiency of the algorithm has been examined against two existing algorithms for some asymmetric and symmetric TSPLIB instances of various sizes. The computational results show that the proposed algorithm is very effective in terms of solution quality and computational time. Finally, we present solutions to some more symmetric TSPLIB instances. PMID:24701148
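As a point of reference for one component of the hybrid algorithm, the sketch below shows a plain 2-opt local search on an unconstrained tour; it does not reproduce the cluster-ordering constraints or the sequential constructive crossover described in the abstract.

```python
# Minimal 2-opt local search on a tour (illustrative only; not the paper's
# full hybrid genetic algorithm for the ordered clustered TSP).
def tour_cost(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                # reverse the segment tour[i:j+1]; keep it if the tour gets shorter
                candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                if tour_cost(candidate, dist) < tour_cost(tour, dist):
                    tour, improved = candidate, True
    return tour

dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
print(two_opt([0, 1, 2, 3], dist))
```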
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
Ramanujam, Nedunchelian; Kaliappan, Manivannan
2016-01-01
Nowadays, automatic multidocument text summarization systems can successfully retrieve the summary sentences from the input documents, but they have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new timestamp approach combined with a Naïve Bayesian classification approach for multidocument text summarization. The timestamp gives the summary an ordered look, which achieves a coherent-looking summary. It extracts the more relevant information from the multiple documents. A scoring strategy is also used to calculate scores for the words to obtain the word frequency. Linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm. The timestamp procedure is also applied to the MEAD algorithm and the results are compared with those of the proposed method. The results show that the proposed method takes less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
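To make the word-frequency scoring step concrete, the toy sketch below scores sentences by the average corpus frequency of their words and extracts the top-ranked ones; it is a generic illustration of frequency-based extraction and does not implement the paper's timestamp ordering or Naïve Bayesian classification.

```python
# Toy frequency-based extractive scoring (illustrative only; not the paper's
# timestamp + Naive Bayes summarizer).
import re
from collections import Counter

def summarize(documents, n_sentences=2):
    sentences = [s.strip() for doc in documents
                 for s in re.split(r'(?<=[.!?])\s+', doc) if s.strip()]
    words = Counter(w.lower() for s in sentences for w in re.findall(r'\w+', s))
    def score(s):
        tokens = re.findall(r'\w+', s.lower())
        return sum(words[t] for t in tokens) / max(len(tokens), 1)
    return sorted(sentences, key=score, reverse=True)[:n_sentences]

docs = ["Solar power adoption is growing. Storage costs are falling.",
        "Grid storage helps solar power. Policy also matters."]
print(summarize(docs))
```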
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.
2017-01-01
Astronomical radio signals are subjected to phase dispersion while traveling through the interstellar medium. To optimally detect a short-duration signal within a frequency band, we have to precisely compensate for the unknown pulse dispersion, which is a computationally demanding task. We present the "fast dispersion measure transform" algorithm for optimal detection of such signals. Our algorithm has a low theoretical complexity of 2 Nf Nt + Nt NΔ log2(Nf), where Nf, Nt, and NΔ are the numbers of frequency bins, time bins, and dispersion measure bins, respectively. Unlike previously suggested fast algorithms, our algorithm conserves the sensitivity of brute-force dedispersion. Our tests indicate that this algorithm, running on a standard desktop computer and implemented in a high-level programming language, is already faster than the state-of-the-art dedispersion codes running on graphical processing units (GPUs). We also present a variant of the algorithm that can be efficiently implemented on GPUs. The latter algorithm's computation and data-transport requirements are similar to those of a two-dimensional fast Fourier transform, indicating that incoherent dedispersion can now be considered a nonissue while planning future surveys. We further present a fast algorithm for sensitive detection of pulses shorter than the dispersive smearing limits of incoherent dedispersion. In typical cases, this algorithm is orders of magnitude faster than enumerating dispersion measures and coherently dedispersing by convolution. We analyze the computational complexity of pulsed signal searches by radio interferometers. We conclude that, using our suggested algorithms, maximally sensitive blind searches for dispersed pulses are feasible using existing facilities. We provide an implementation of these algorithms in Python and MATLAB.
Kohlmayer, Florian; Prasser, Fabian; Kuhn, Klaus A
2015-12-01
With the ARX data anonymization tool, structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter method significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that may be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adapts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models which are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Mamouras, Konstantinos; Raghothaman, Mukund; Alur, Rajeev; Ives, Zachary G; Khanna, Sanjeev
2017-06-01
Real-time decision making in emerging IoT applications typically relies on computing quantitative summaries of large data streams in an efficient and incremental manner. To simplify the task of programming the desired logic, we propose StreamQRE, which provides natural and high-level constructs for processing streaming data. Our language has a novel integration of linguistic constructs from two distinct programming paradigms: streaming extensions of relational query languages and quantitative extensions of regular expressions. The former allows the programmer to employ relational constructs to partition the input data by keys and to integrate data streams from different sources, while the latter can be used to exploit the logical hierarchy in the input stream for modular specifications. We first present the core language with a small set of combinators, formal semantics, and a decidable type system. We then show how to express a number of common patterns with illustrative examples. Our compilation algorithm translates the high-level query into a streaming algorithm with precise complexity bounds on per-item processing time and total memory footprint. We also show how to integrate approximation algorithms into our framework. We report on an implementation in Java, and evaluate it with respect to existing high-performance engines for processing streaming data. Our experimental evaluation shows that (1) StreamQRE allows more natural and succinct specification of queries compared to existing frameworks, (2) the throughput of our implementation is higher than comparable systems (for example, two-to-four times greater than RxJava), and (3) the approximation algorithms supported by our implementation can lead to substantial memory savings.
Mehranfar, Adele; Ghadiri, Nasser; Kouhsar, Morteza; Golshani, Ashkan
2017-09-01
Detecting protein complexes is an important task in analyzing protein interaction networks. Although many algorithms predict protein complexes in different ways, surveys of interaction networks indicate that about 50% of detected interactions are false positives. Consequently, the accuracy of existing methods needs to be improved. In this paper we propose a novel algorithm to detect protein complexes in 'noisy' protein interaction data. First, we integrate several biological data sources to determine the reliability of each interaction and determine more accurate weights for the interactions. A data fusion component is used for this step, based on the interval type-2 fuzzy voter, which provides an efficient combination of the information sources. This fusion component detects errors and diminishes their effect on the detection of protein complexes. In the first step, reliability scores are thus assigned to every interaction in the network. In the second step, we propose a general protein complex detection algorithm by exploiting and adopting the strong points of other algorithms and existing hypotheses regarding real complexes. Finally, the proposed method was applied to the yeast interaction datasets for predicting the interactions. The results show that our framework has a better performance regarding precision and F-measure than the existing approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.
Efficient methods for overlapping group lasso.
Yuan, Lei; Liu, Jun; Ye, Jieping
2013-09-01
The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the lq norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
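For context, the sketch below shows the closed-form proximal operator of the non-overlapping group Lasso penalty (block soft-thresholding); this closed form no longer applies when groups overlap, which is precisely why the paper computes the proximal operator through a smooth convex dual problem. The example groups and regularization value are illustrative.

```python
# Proximal operator of the non-overlapping group Lasso penalty, i.e. block
# soft-thresholding (illustrative baseline; the overlapping case treated in the
# paper requires a different, dual-based computation).
import numpy as np

def prox_group_lasso(v, groups, lam):
    """argmin_x 0.5*||x - v||^2 + lam * sum_g ||x_g||_2, with non-overlapping groups."""
    x = np.zeros_like(v)
    for g in groups:                      # g is a list of indices
        norm = np.linalg.norm(v[g])
        if norm > lam:
            x[g] = (1.0 - lam / norm) * v[g]
    return x

v = np.array([3.0, -4.0, 0.1, 0.2])
print(prox_group_lasso(v, [[0, 1], [2, 3]], lam=1.0))   # second group is zeroed out
```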
Theoretical Bounds of Direct Binary Search Halftoning.
Liao, Jan-Ray
2015-11-01
Direct binary search (DBS) produces images of the best quality among halftoning algorithms. The reason is that it minimizes the total squared perceived error instead of using heuristic approaches. The search for the optimal solution involves two operations: (1) toggle and (2) swap. Both operations try to find the binary states for each pixel that minimize the total squared perceived error. This error energy minimization leads to a conjecture that the absolute value of the filtered error after DBS converges is bounded by half of the peak value of the autocorrelation filter. However, a proof of the bound's existence had not yet been found. In this paper, we present a proof showing that the bound exists as conjectured, under the condition that at least one swap occurs after toggle converges. The theoretical analysis also indicates that a swap with a pixel farther away from the center of the autocorrelation filter results in a tighter bound. Therefore, we propose a new DBS algorithm which considers toggle and swap separately, and the swap operations are considered in order from the edge to the center of the filter. Experimental results show that the new algorithm is more efficient than the previous algorithm and can produce halftoned images of the same quality as the previous algorithm.
Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Jian; Hamidouche, Khaled; Zheng, Jie
2015-08-05
Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. The k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm by taking advantage of scalable programming models. To improve the performance of k-NN in a large-scale environment with an InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systematic evaluation and analysis on typical workloads. The hybrid designs leverage one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on the k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training the KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for a small workload with balanced communication and computation. Experiments with varied numbers of cores show that our design can maintain good scalability.
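For readers unfamiliar with the underlying classifier, the minimal serial sketch below shows plain k-NN classification; the contribution of the paper is the MPI+OpenSHMEM parallelization and communication/computation overlap, which a single-process example cannot demonstrate.

```python
# Plain serial k-NN classification (reference only; not the hybrid parallel design).
import numpy as np
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    d = np.linalg.norm(train_x - query, axis=1)     # Euclidean distances to training points
    nearest = np.argsort(d)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

train_x = np.array([[0, 0], [0, 1], [5, 5], [6, 5]])
train_y = np.array(['a', 'a', 'b', 'b'])
print(knn_predict(train_x, train_y, np.array([0.2, 0.4])))
```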
NASA Astrophysics Data System (ADS)
Chakraborty, Tamal; Saha Misra, Iti
2016-03-01
Secondary Users (SUs) in a Cognitive Radio Network (CRN) face unpredictable interruptions in transmission due to the random arrival of Primary Users (PUs), leading to spectrum handoff or dropping instances. An efficient spectrum handoff algorithm, thus, becomes one of the indispensable components in CRN, especially for real-time communication like Voice over IP (VoIP). In this regard, this paper investigates the effects of spectrum handoff on the Quality of Service (QoS) for VoIP traffic in CRN, and proposes a real-time spectrum handoff algorithm in two phases. The first phase (VAST-VoIP based Adaptive Sensing and Transmission) adaptively varies the channel sensing and transmission durations to perform intelligent dropping decisions. The second phase (ProReact-Proactive and Reactive Handoff) deploys efficient channel selection mechanisms during spectrum handoff for resuming communication. Extensive performance analysis in analytical and simulation models confirms a decrease in spectrum handoff delay for VoIP SUs by more than 40% and 60%, compared to existing proactive and reactive algorithms, respectively and ensures a minimum 10% reduction in call-dropping probability with respect to the previous works in this domain. The effective SU transmission duration is also maximized under the proposed algorithm, thereby making it suitable for successful VoIP communication.
A Game Theory Algorithm for Intra-Cluster Data Aggregation in a Vehicular Ad Hoc Network
Chen, Yuzhong; Weng, Shining; Guo, Wenzhong; Xiong, Naixue
2016-01-01
Vehicular ad hoc networks (VANETs) have an important role in urban management and planning. The effective integration of vehicle information in VANETs is critical to traffic analysis, large-scale vehicle route planning and intelligent transportation scheduling. However, given the limitations in the precision of the output information of a single sensor and the difficulty of information sharing among various sensors in a highly dynamic VANET, effectively performing data aggregation in VANETs remains a challenge. Moreover, current studies have mainly focused on data aggregation in large-scale environments but have rarely discussed the issue of intra-cluster data aggregation in VANETs. In this study, we propose a multi-player game theory algorithm for intra-cluster data aggregation in VANETs by analyzing the competitive and cooperative relationships among sensor nodes. Several sensor-centric metrics are proposed to measure the data redundancy and stability of a cluster. We then study the utility function to achieve efficient intra-cluster data aggregation by considering both data redundancy and cluster stability. In particular, we prove the existence of a unique Nash equilibrium in the game model, and conduct extensive experiments to validate the proposed algorithm. Results demonstrate that the proposed algorithm has advantages over typical data aggregation algorithms in both accuracy and efficiency. PMID:26907272
Energy-efficient routing, modulation and spectrum allocation in elastic optical networks
NASA Astrophysics Data System (ADS)
Tan, Yanxia; Gu, Rentao; Ji, Yuefeng
2017-07-01
With tremendous growth in bandwidth demand, the energy consumption problem in elastic optical networks (EONs) has become a hot topic of wide concern. The sliceable bandwidth-variable transponder in EON, which can transmit/receive multiple optical flows, was recently proposed to improve a transponder's flexibility and save energy. In this paper, energy-efficient routing, modulation and spectrum allocation (EE-RMSA) in EONs with sliceable bandwidth-variable transponders is studied. To decrease the energy consumption, we develop a Mixed Integer Linear Programming (MILP) model with a corresponding EE-RMSA algorithm for EONs. The MILP model jointly considers the modulation format and optical grooming in the process of routing and spectrum allocation with the objective of minimizing the energy consumption. With the help of genetic operators, the EE-RMSA algorithm iteratively optimizes the feasible routing path, modulation format and spectrum resource solutions by exploring the whole search space. In order to save energy, an optical-layer grooming strategy is designed to transmit the lightpath requests. Finally, simulation results verify that the proposed scheme is able to reduce the energy consumption of the network while maintaining the blocking probability (BP) performance, compared with the existing First-Fit-KSP, Iterative Flipping and EAMGSP algorithms, especially in large network topologies. Our results also demonstrate that the proposed EE-RMSA algorithm achieves almost the same performance as MILP on an 8-node network.
NASA Astrophysics Data System (ADS)
Kreinovich, Vladik; Longpre, Luc; Starks, Scott A.; Xiang, Gang; Beck, Jan; Kandathi, Raj; Nayak, Asis; Ferson, Scott; Hajagos, Janos
2007-02-01
In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t)--e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit (DL). We must, therefore, modify the existing statistical algorithms to process such interval data. Such a modification is also necessary to process data from statistical databases, where, in order to maintain privacy, we only keep interval ranges instead of the actual numeric data (e.g., a salary range instead of the actual salary). Most resulting computational problems are NP-hard--which means, crudely speaking, that in general, no computationally efficient algorithm can solve all particular cases of the corresponding problem. In this paper, we overview practical situations in which computationally efficient algorithms exist: e.g., situations when measurements are very accurate, or when all the measurements are done with one (or few) instruments. As a case study, we consider a practical problem from bioinformatics: to discover the genetic difference between the cancer cells and the healthy cells, we must process the measurements results and find the concentrations c and h of a given gene in cancer and in healthy cells. This is a particular case of a general situation in which, to estimate states or parameters which are not directly accessible by measurements, we must solve a system of equations in which coefficients are only known with interval uncertainty. We show that in general, this problem is NP-hard, and we describe new efficient algorithms for solving this problem in practically important situations.
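As a small concrete instance of the interval-data setting above, the sketch below computes the exact range of the sample mean when each measurement is only known to lie in an interval (for example, a non-detect lies in [0, DL]); the mean is monotone in each variable, so its bounds are easy, whereas statistics such as the variance can be much harder, which is where the NP-hardness discussed above appears. The detection limit and measurement values are made up for illustration.

```python
# Bounds on the sample mean under interval uncertainty: take all lower endpoints
# for the lower bound and all upper endpoints for the upper bound.
def mean_bounds(intervals):
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return lo, hi

detection_limit = 0.5
measurements = [(0.0, detection_limit), (1.2, 1.2), (0.8, 0.8), (0.0, detection_limit)]
print(mean_bounds(measurements))   # lower and upper bounds on the average pollution level
```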
NASA Astrophysics Data System (ADS)
Sun, Xiuqiao; Wang, Jian
2018-07-01
Freeway service patrol (FSP) is considered to be an effective method for incident management and can help transportation agency decision-makers alter existing route coverage and fleet allocation. This paper investigates the FSP problem of patrol routing design and fleet allocation, with the objective of minimizing the overall average incident response time. While the simulated annealing (SA) algorithm and its improvements have been applied to solve this problem, they often become trapped in local optima. Moreover, the issue of searching efficiency remains to be further addressed. In this paper, we employ the genetic algorithm (GA) and SA to solve the FSP problem. To maintain population diversity and avoid premature convergence, a niche strategy is incorporated into the traditional genetic algorithm. We also employ an elitist strategy to speed up the convergence. Numerical experiments have been conducted with the help of the Sioux Falls network. Results show that the GA slightly outperforms the dual-based greedy (DBG) algorithm, the very large-scale neighborhood searching (VLNS) algorithm, the SA algorithm and the scenario algorithm.
Real time target allocation in cooperative unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Kudleppanavar, Ganesh
The prolific development of Unmanned Aerial Vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitations of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real time target allocation algorithm that considers the available stored energy in each vehicle while minimizing the communication between UAVs. The algorithm utilizes a nearest neighbor search algorithm to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and Xbee wireless modules shows the capability of the algorithm to execute efficiently on hardware with minimal computational complexity.
Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.
Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong
2017-09-01
An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from a camera raw image. Recently, variance stabilization transforms have been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise of camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed to find similar blocks by the use of a type-2 fuzzy logic system (FLS). Then, these similar blocks are averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed denoising algorithm effectively improves the performance of image denoising. Furthermore, the average performance of the proposed method is better than those of two state-of-the-art image denoising algorithms in subjective and objective measures.
An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.
Chen, Lei; Wei, Guoyuan; Shen, Zhenyao
2015-10-21
To address the computationally intensive and technically complex problem of nonpoint source pollution control, the traditional genetic algorithm was modified into an auto-adaptive pattern, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. Although conceptually simple and comprehensive, the proposed algorithm searches automatically for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved due to the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
NASA Astrophysics Data System (ADS)
Mahalakshmi; Murugesan, R.
2018-04-01
This paper addresses the minimization of the total cost of greenhouse gas (GHG) efficiency in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of GHG emission of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields better results than the others.
NASA Astrophysics Data System (ADS)
Singh, Puja; Prakash, Shashi
2017-07-01
Hybrid wireless-optical broadband access network (WOBAN), or Fiber-Wireless (FiWi), is the integration of a wireless access network and an optical network. This hybrid multi-domain network adopts the advantages of the wireless and optical domains and serves the demand of technology savvy users. FiWi exhibits the properties of cost effectiveness, robustness, flexibility, high capacity, reliability and self organization. The Optical Network Unit (ONU) placement problem in FiWi contributes to simplifying the network design and enhances the performance in terms of cost efficiency and increased throughput. Several individual-based algorithms, such as Simulated Annealing (SA), Tabu Search, etc., have been suggested for ONU placement, but these algorithms suffer from premature convergence (trapping in a local optimum). The present research work undertakes the deployment of FiWi and proposes a novel nature-inspired heuristic paradigm called the Moth-Flame optimization (MFO) algorithm for multiple optical network units' placement. MFO is a population based algorithm, and population-based algorithms are better at avoiding local optima. The simulation results are compared with the existing Greedy and Simulated Annealing algorithms to optimize the position of the ONUs. To the best of our knowledge, the MFO algorithm has been used for the first time in this domain; moreover, it has been able to provide very promising and competitive results. The performance of the MFO algorithm has been analyzed by varying the 'b' parameter. The MFO algorithm results in faster convergence than the existing Greedy and SA strategies and returns a lower value of the overall cost function. The results also exhibit the dependence of the objective function on the distribution of wireless users.
RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.
Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F
2016-11-01
Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted l1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal l1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting the most sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal, and hence excludes the incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly lower computational complexity compared to existing greedy recovery algorithms. It is even superior to l1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
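For orientation, the sketch below implements a basic orthogonal matching pursuit (OMP), the kind of greedy recovery baseline that RMP is compared against; it is not the authors' RMP, which differs in how many correlation values are selected per iteration and in pruning wrongly selected atoms. The dimensions and sparsity level are illustrative.

```python
# Basic orthogonal matching pursuit (greedy baseline for sparse recovery;
# not the paper's Reduced-set Matching Pursuit).
import numpy as np

def omp(A, y, sparsity):
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))          # most correlated column
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs               # update the residual
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60); x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
print(np.round(omp(A, A @ x_true, 3)[[3, 17, 42]], 3))       # recovers the three coefficients
```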
Near real-time, on-the-move software PED using VPEF
NASA Astrophysics Data System (ADS)
Green, Kevin; Geyer, Chris; Burnette, Chris; Agarwal, Sanjeev; Swett, Bruce; Phan, Chung; Deterline, Diane
2015-05-01
The scope of the Micro-Cloud for Operational, Vehicle-Based EO-IR Reconnaissance System (MOVERS) development effort, managed by the Night Vision and Electronic Sensors Directorate (NVESD), is to develop, integrate, and demonstrate new sensor technologies and algorithms that improve improvised device/mine detection using efficient and effective exploitation and fusion of sensor data and target cues from existing and future Route Clearance Package (RCP) sensor systems. Unfortunately, the majority of forward looking Full Motion Video (FMV) and computer vision processing, exploitation, and dissemination (PED) algorithms are often developed using proprietary, incompatible software. This makes the insertion of new algorithms difficult due to the lack of standardized processing chains. In order to overcome these limitations, EOIR developed the Government off-the-shelf (GOTS) Video Processing and Exploitation Framework (VPEF) to be able to provide standardized interfaces (e.g., input/output video formats, sensor metadata, and detected objects) for exploitation software and to rapidly integrate and test computer vision algorithms. EOIR developed a vehicle-based computing framework within the MOVERS and integrated it with VPEF. VPEF was further enhanced for automated processing, detection, and publishing of detections in near real-time, thus improving the efficiency and effectiveness of RCP sensor systems.
Protein Sequence Classification with Improved Extreme Learning Machine Algorithms
2014-01-01
Precisely classifying a protein sequence from a large biological protein sequence database plays an important role in developing competitive pharmacological products. Conventional methods, which compare the unseen sequence with all the identified protein sequences and return the category index of the protein with the highest similarity score, are usually time-consuming. Therefore, it is urgent and necessary to build an efficient protein sequence classification system. In this paper, we study the performance of protein sequence classification using single hidden layer feedforward networks (SLFNs). The recent efficient extreme learning machine (ELM) and its variants are utilized as the training algorithms. The optimally pruned ELM is first employed for protein sequence classification in this paper. To further enhance the performance, an ensemble based SLFN structure is constructed where multiple SLFNs with the same number of hidden nodes and the same activation function are used as ensembles. For each ensemble, the same training algorithm is adopted. The final category index is derived using the majority voting method. Two approaches, namely the basic ELM and the OP-ELM, are adopted for the ensemble based SLFNs. The performance is analyzed and compared with several existing methods using datasets obtained from the Protein Information Resource center. The experimental results show the superiority of the proposed algorithms. PMID:24795876
NASA Astrophysics Data System (ADS)
Zhang, Jiangjiang; Lin, Guang; Li, Weixuan; Wu, Laosheng; Zeng, Lingzao
2018-03-01
Ensemble smoother (ES) has been widely used in inverse modeling of hydrologic systems. However, for problems where the distribution of model parameters is multimodal, using ES directly would be problematic. One popular solution is to use a clustering algorithm to identify each mode and update the clusters with ES separately. However, this strategy may not be very efficient when the dimension of parameter space is high or the number of modes is large. Alternatively, we propose in this paper a very simple and efficient algorithm, i.e., the iterative local updating ensemble smoother (ILUES), to explore multimodal distributions of model parameters in nonlinear hydrologic systems. The ILUES algorithm works by updating local ensembles of each sample with ES to explore possible multimodal distributions. To achieve satisfactory data matches in nonlinear problems, we adopt an iterative form of ES to assimilate the measurements multiple times. Numerical cases involving nonlinearity and multimodality are tested to illustrate the performance of the proposed method. It is shown that overall the ILUES algorithm can well quantify the parametric uncertainties of complex hydrologic models, no matter whether the multimodal distribution exists.
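For reference, the sketch below shows a single standard ensemble-smoother update in its Kalman-type form; the paper's ILUES applies such updates to local ensembles around each sample and iterates the assimilation, which this sketch does not reproduce. The toy forward model, observation values, and error level are illustrative assumptions.

```python
# One standard ensemble-smoother update step (illustrative; not the ILUES scheme).
import numpy as np

def es_update(X, Y, d_obs, obs_err_std, rng):
    """X: (n_par, n_ens) parameter ensemble; Y: (n_obs, n_ens) simulated observations."""
    n_obs, n_ens = Y.shape
    R = np.diag(np.full(n_obs, obs_err_std ** 2))                  # observation error covariance
    D = d_obs[:, None] + obs_err_std * rng.standard_normal((n_obs, n_ens))  # perturbed observations
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Cxy = Xc @ Yc.T / (n_ens - 1)
    Cyy = Yc @ Yc.T / (n_ens - 1)
    K = Cxy @ np.linalg.inv(Cyy + R)                               # Kalman-type gain
    return X + K @ (D - Y)

rng = np.random.default_rng(1)
X = rng.standard_normal((2, 100))                                  # prior ensemble of 2 parameters
forward = lambda x: np.vstack([x[0] + x[1], x[0] * x[1]])          # toy forward model
X_post = es_update(X, forward(X), d_obs=np.array([1.0, 0.2]), obs_err_std=0.1, rng=rng)
print(X_post.mean(axis=1))
```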
Robust and efficient fiducial tracking for augmented reality in HD-laparoscopic video streams
NASA Astrophysics Data System (ADS)
Mueller, M.; Groch, A.; Baumhauer, M.; Maier-Hein, L.; Teber, D.; Rassweiler, J.; Meinzer, H.-P.; Wegner, In.
2012-02-01
Augmented Reality (AR) is a convenient way of porting information from medical images into the surgical field of view and can deliver valuable assistance to the surgeon, especially in laparoscopic procedures. In addition, high definition (HD) laparoscopic video devices are a great improvement over the previously used low resolution equipment. However, in AR applications that rely on real-time detection of fiducials from video streams, the demand for efficient image processing has increased due to the introduction of HD devices. We present an algorithm based on the well-known Conditional Density Propagation (CONDENSATION) algorithm which can satisfy these new demands. By incorporating a prediction around an already existing and robust segmentation algorithm, we can speed up the whole procedure while leaving the robustness of the fiducial segmentation untouched. For evaluation purposes we tested the algorithm on recordings from real interventions, allowing for a meaningful interpretation of the results. Our results show that we can accelerate the segmentation by a factor of 3.5 on average. Moreover, the prediction information can be used to compensate for fiducials that are temporarily occluded or out of scope, providing greater stability.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brabec, Jiri; Lin, Lin; Shao, Meiyue
We present a special symmetric Lanczos algorithm and a kernel polynomial method (KPM) for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework in the product form. In contrast to existing algorithms, the new algorithms are based on reformulating the original non-Hermitian eigenvalue problem as a product eigenvalue problem and the observation that the product eigenvalue problem is self-adjoint with respect to an appropriately chosen inner product. This allows a simple symmetric Lanczos algorithm to be used to compute the desired absorption spectrum. The use of a symmetric Lanczos algorithm only requires half of the memory compared with the nonsymmetric variant of the Lanczos algorithm. The symmetric Lanczos algorithm is also numerically more stable than the nonsymmetric version. The KPM algorithm is also presented as a low-memory alternative to the Lanczos approach, but the algorithm may require more matrix-vector multiplications in practice. We discuss the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost. Applications to a set of small and medium-sized molecules are also presented.
DNABIT Compress – Genome compression algorithm
Rajarajeswari, Pothuraju; Apparao, Allam
2011-01-01
Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio lower than 1.72 bits/base. PMID:21383923
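To show why bit-level coding approaches 2 bits per base, the sketch below packs a DNA string with the standard 2-bits-per-base code and unpacks it again; it does not reproduce the paper's additional codes for exact and reverse repeats, which are what push DNABIT Compress below 2 bits/base.

```python
# Standard 2-bits-per-base packing of a DNA string (baseline illustration only;
# not the DNABIT Compress repeat-aware codebook).
CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
BASE = 'ACGT'

def pack(seq):
    bits = 0
    for b in seq:
        bits = (bits << 2) | CODE[b]
    return bits, len(seq)

def unpack(bits, n):
    out = []
    for _ in range(n):
        out.append(BASE[bits & 0b11])
        bits >>= 2
    return ''.join(reversed(out))

packed, n = pack("GATTACA")
print(packed, unpack(packed, n))   # round-trips the original sequence
```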
Scalable Parallel Density-based Clustering and Applications
NASA Astrophysics Data System (ADS)
Patwary, Mostofa Ali
2014-04-01
Recently, density-based clustering algorithms (DBSCAN and OPTICS) have gotten significant attention of the scientific community due to their unique capability of discovering arbitrary shaped clusters and eliminating noise data. These algorithms have several applications, which require high performance computing, including finding halos and subhalos (clusters) from massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelization of these algorithms are extremely challenging as they exhibit inherent sequential data access order, unbalanced workload resulting in low parallel efficiency. To break the data access sequentiality and to achieve high parallelism, we develop new parallel algorithms, both for DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups up to 27.5 on 40 cores on shared memory architecture and speedups up to 5,765 using 8,192 cores on distributed memory architecture. In our experiments, we found that while achieving the scalability, our algorithms produce clustering results with comparable quality to the classical algorithms.
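As background for the parallel algorithms discussed above, the sketch below is a minimal single-threaded DBSCAN: it labels core points, expands clusters through density-reachable neighbours, and leaves isolated points as noise. It deliberately ignores the paper's contribution (the graph-based, connected-components style parallel reformulation), and the eps/min_pts values are illustrative.

```python
# Minimal serial DBSCAN (reference only; the paper's parallel reformulation is
# not reproduced here).
import numpy as np

def dbscan(points, eps, min_pts):
    n = len(points)
    labels = np.full(n, -1)                       # -1 = noise / unassigned
    neighbors = [np.where(np.linalg.norm(points - p, axis=1) <= eps)[0] for p in points]
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbors[i]) < min_pts:
            continue                              # skip assigned points and non-core points
        labels[i] = cluster
        stack = list(neighbors[i])
        while stack:                              # expand the cluster from core points
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbors[j]) >= min_pts:
                    stack.extend(neighbors[j])
        cluster += 1
    return labels

pts = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [10, 10]])
print(dbscan(pts, eps=0.5, min_pts=2))            # two clusters and one noise point
```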
NASA Astrophysics Data System (ADS)
Miorelli, Roberto; Reboud, Christophe
2018-04-01
Pulsed Eddy Current Testing (PECT) is a popular Non-Destructive Testing (NDT) technique for applications such as corrosion monitoring in the oil and gas industry, or rivet inspection in the aeronautic area. Its particularity is to use a transient excitation, which allows more information to be retrieved from the piece than conventional harmonic ECT, in a simpler and cheaper way than multi-frequency ECT setups. Efficient modeling tools prove, as usual, very useful for optimizing experimental sensors and devices or evaluating their performance. This paper proposes an efficient simulation of PECT signals based on standard time harmonic solvers and the use of an Adaptive Sparse Grid (ASG) algorithm. An adaptive sampling of the ECT signal spectrum is performed with this algorithm, then the complete spectrum is interpolated from this sparse representation and PECT signals are finally synthesized by means of an inverse Fourier transform. Simulation results corresponding to existing industrial configurations are presented and the performance of the strategy is discussed by comparison to reference results.
NASA Astrophysics Data System (ADS)
Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua
2018-03-01
By exploiting compressive sensing (CS), channel estimation methods can reduce the number of pilots and improve spectrum efficiency. This work explores channel estimation and pilot design with the help of block-structured CS in massive MIMO systems. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed superimposed pilot design and the BSAMP algorithm provide better channel estimation than existing methods.
a Gross Error Elimination Method for Point Cloud Data Based on Kd-Tree
NASA Astrophysics Data System (ADS)
Kang, Q.; Huang, G.; Yang, S.
2018-04-01
Point cloud data has become one of the most widely used data sources in the field of remote sensing. Key steps of point cloud pre-processing focus on gross error elimination and quality control. Owing to the volume of point cloud data, existing gross error elimination methods consume massive resources in both space and time. This paper employs a new method that builds a Kd-tree index, searches with a k-nearest neighbor algorithm, and applies an appropriate threshold to judge whether a target point is an outlier. Experimental results show that the proposed algorithm removes gross errors from point cloud data while decreasing memory consumption and improving efficiency.
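The abstract does not give the exact decision rule, so the sketch below assumes a common choice: flag a point whose mean distance to its k nearest neighbors exceeds a global mean-plus-n-sigma threshold. The parameters k and n_sigma are hypothetical, and the kd-tree comes from SciPy rather than the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_gross_errors(points, k=8, n_sigma=3.0):
    """Kd-tree based outlier screening sketch for point clouds."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)      # first neighbor is the point itself
    mean_knn = dists[:, 1:].mean(axis=1)
    thresh = mean_knn.mean() + n_sigma * mean_knn.std()
    keep = mean_knn <= thresh
    return points[keep], keep

pts = np.vstack([np.random.rand(1000, 3), [[10.0, 10.0, 10.0]]])   # one gross error
cleaned, mask = remove_gross_errors(pts)
print(len(pts) - len(cleaned), "point(s) removed")
```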
The minimal residual QR-factorization algorithm for reliably solving subset regression problems
NASA Technical Reports Server (NTRS)
Verhaegen, M. H.
1987-01-01
A new algorithm to solve subset regression problems is described, called the minimal residual QR factorization algorithm (MRQR). This scheme performs a QR factorization with a new column pivoting strategy. Basically, this strategy is based on the change in the residual of the least squares problem. Furthermore, it is demonstrated that this basic scheme might be extended in a numerically efficient way to combine the advantages of existing numerical procedures, such as the singular value decomposition, with those of more classical statistical procedures, such as stepwise regression. This extension is presented as an advisory expert system that guides the user in solving the subset regression problem. The advantages of the new procedure are highlighted by a numerical example.
Towards Online Multiresolution Community Detection in Large-Scale Networks
Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim
2011-01-01
The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. The accuracy of many existing methods depends on a priori assumptions about network properties and on predefined parameters. In this paper, we introduce a new quality function for local communities and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex as well as communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
Anatomical medial surfaces with efficient resolution of branches singularities.
Gil, Debora; Vera, Sergio; Borràs, Agnés; Andaluz, Albert; González Ballester, Miguel A
2017-01-01
Medial surfaces are powerful tools for shape description, but their use has been limited due to the sensitivity of existing methods to branching artifacts. Medial branching artifacts are associated with perturbations of the object boundary rather than with geometric features. Such instability is a main obstacle for a confident application in shape recognition and description. Medial branches correspond to singularities of the medial surface and, thus, they are problematic for existing morphological and energy-based algorithms. In this paper, we use algebraic geometry concepts in an energy-based approach to compute a medial surface presenting a stable branching topology. We also present an efficient GPU-CPU implementation using standard image processing tools. We show the method's computational efficiency and quality on a custom-made synthetic database. Finally, we present some results on a medical imaging application for localization of abdominal pathologies. Copyright © 2016 Elsevier B.V. All rights reserved.
Research on parallel algorithm for sequential pattern mining
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, having extended to new data sources such as the Web and advanced science fields such as DNA analysis. The data of sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics synthetically. According to the traits mentioned above and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets applying the frequent concept and search space partition theory, and the second task is to build frequent sequences using a depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces the access time and improves the mining efficiency. Based on a random data generation procedure and different information structures, this paper simulates the SPP algorithm in a concrete parallel environment and also implements the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
Multiprocessing on supercomputers for computational aerodynamics
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Mehta, Unmeel B.
1991-01-01
Little use is made of the multiple processors available on current supercomputers (computers with a theoretical peak performance capability equal to 100 MFLOPS or more) to improve turnaround time in computational aerodynamics. The productivity of a computer user is directly related to this turnaround time. In a time-sharing environment, such an improvement in turnaround time is achieved when multiple processors are used efficiently to execute an algorithm. The concept of multiple instructions and multiple data (MIMD) is applied through multitasking via a strategy that requires relatively minor modifications to an existing code for a single processor. This approach maps the available memory to multiple processors, exploiting the C-Fortran-Unix interface. The existing code is mapped without the need for developing a new algorithm. The procedure for building a code utilizing this approach is automated with the Unix stream editor.
Time and Space Efficient Algorithms for Two-Party Authenticated Data Structures
NASA Astrophysics Data System (ADS)
Papamanthou, Charalampos; Tamassia, Roberto
Authentication is increasingly relevant to data management. Data is being outsourced to untrusted servers and clients want to securely update and query their data. For example, in database outsourcing, a client's database is stored and maintained by an untrusted server. Also, in simple storage systems, clients can store very large amounts of data but at the same time, they want to assure their integrity when they retrieve them. In this paper, we present a model and protocol for two-party authentication of data structures. Namely, a client outsources its data structure and verifies that the answers to the queries have not been tampered with. We provide efficient algorithms to securely outsource a skip list with logarithmic time overhead at the server and client and logarithmic communication cost, thus providing an efficient authentication primitive for outsourced data, both structured (e.g., relational databases) and semi-structured (e.g., XML documents). In our technique, the client stores only a constant amount of space, which is optimal. Our two-party authentication framework can be deployed on top of existing storage applications, thus providing an efficient authentication service. Finally, we present experimental results that demonstrate the practical efficiency and scalability of our scheme.
Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm
Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang
2012-01-01
Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm from gene expression profiles has its own advantages and disadvantages. In particular, the effectiveness and efficiency of previous algorithms are not high enough. In this work, we propose a novel inference algorithm from gene expression data based on a differential equation model. In this algorithm, two methods are combined for inferring GRNs. Before reconstructing GRNs, a singular value decomposition method is used to decompose the gene expression data, determine the algorithm's solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a gravitation field algorithm is modified to infer GRNs, optimize the criteria of the differential equation model, and search for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and the results were used for comparison with the proposed algorithm. A genetic algorithm and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirm the effectiveness of our algorithm, which significantly outperforms previous algorithms. PMID:23226565
Efficient quantum circuits for dense circulant and circulant like operators
Zhou, S. S.
2017-01-01
Circulant matrices are an important family of operators, which have a wide range of applications in science and engineering-related fields. They are, in general, non-sparse and non-unitary. In this paper, we present efficient quantum circuits to implement circulant operators using fewer resources and with lower complexity than existing methods. Moreover, our quantum circuits can be readily extended to the implementation of Toeplitz, Hankel and block circulant matrices. Efficient quantum algorithms to implement the inverses and products of circulant operators are also provided, and an example application in solving the equation of motion for cyclic systems is discussed. PMID:28572988
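As classical background for why circulant operators admit fast implementations (they are diagonalized by the DFT, so applying one reduces to FFTs), the following sketch is offered; it is not the quantum circuit construction of the paper, and the first column c and vector x are placeholders.

```python
import numpy as np
from scipy.linalg import circulant

def circulant_apply(c, x):
    """Classically apply a circulant matrix (first column c) to x, using
    C x = F^H diag(F c) F x, i.e. circular convolution via the FFT."""
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))

c = np.array([4.0, 1.0, 0.0, 2.0])          # defines the circulant
x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(circulant(c) @ x, circulant_apply(c, x)))   # True
```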
On recursive least-squares filtering algorithms and implementations. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Hsieh, Shih-Fu
1990-01-01
In many real-time signal processing applications, fast and numerically stable algorithms for solving least-squares problems are necessary and important. In particular, under non-stationary conditions, these algorithms must be able to adapt themselves to reflect the changes in the system and take appropriate adjustments to achieve optimum performance. Among existing algorithms, the QR-decomposition (QRD)-based recursive least-squares (RLS) methods have been shown to be useful and effective for adaptive signal processing. In order to increase the speed of processing and achieve a high throughput rate, many algorithms are being vectorized and/or pipelined to facilitate high degrees of parallelism. A time-recursive formulation of RLS filtering employing block QRD is considered first. Several methods, including a new non-continuous windowing scheme based on selectively rejecting contaminated data, were investigated for adaptive processing. Based on systolic triarrays, many other forms of systolic arrays are shown to be capable of implementing different algorithms. Various updating and downdating systolic algorithms and architectures for RLS filtering are examined and compared in detail, including the Householder reflector, the Gram-Schmidt procedure, and Givens rotation. A unified approach encompassing existing square-root-free algorithms is also proposed. For the sinusoidal spectrum estimation problem, a judicious method of separating the noise from the signal is of great interest. Various truncated QR methods are proposed for this purpose and compared to the truncated SVD method. Computer simulations provided for detailed comparisons show the effectiveness of these methods. This thesis deals with fundamental issues of numerical stability, computational efficiency, adaptivity, and VLSI implementation for RLS filtering problems. In all, various new and modified algorithms and architectures are proposed and analyzed; the significance of each new method depends crucially on the specific application.
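A hedged, textbook-style sketch of the Givens-rotation update at the heart of QRD-based RLS filtering is shown below; it is a plain sequential form, not the systolic-array formulations studied in the thesis, and the forgetting factor lam is an assumed parameter.

```python
import numpy as np

def qrd_rls_update(R, z, x, d, lam=0.99):
    """Rotate the new regressor row x and desired sample d into the
    exponentially weighted triangular factor R and rotated RHS z."""
    n = R.shape[0]
    R = np.sqrt(lam) * R.copy()
    z = np.sqrt(lam) * z.copy()
    x = x.astype(float).copy()
    d = float(d)
    for i in range(n):
        r = np.hypot(R[i, i], x[i])
        if r == 0.0:
            continue
        c, s = R[i, i] / r, x[i] / r               # Givens rotation annihilating x[i]
        R[i, i:], x[i:] = c * R[i, i:] + s * x[i:], -s * R[i, i:] + c * x[i:]
        z[i], d = c * z[i] + s * d, -s * z[i] + c * d
    return R, z                                     # weights solve R w = z

R, z = np.zeros((3, 3)), np.zeros(3)
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
for _ in range(200):
    x = rng.standard_normal(3)
    R, z = qrd_rls_update(R, z, x, x @ w_true)
print(np.linalg.solve(R, z))                        # close to w_true
```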
Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy
NASA Astrophysics Data System (ADS)
Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo
2014-01-01
In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
Peng, Hui; Zheng, Yi; Blumenstein, Michael; Tao, Dacheng; Li, Jinyan
2018-04-16
CRISPR/Cas9 is a widely used genome editing tool. A prediction problem of great interest for this system is how to select optimal single guide RNAs (sgRNAs) such that the cleavage efficiency is high while the off-target effect is low. This work proposes a two-step averaging method (TSAM) for the regression of cleavage efficiencies of a set of sgRNAs by averaging the predicted efficiency scores of a boosting algorithm and those of a support vector machine (SVM). We also propose to use profiled Markov properties as novel features to capture the global characteristics of sgRNAs. These new features are combined with the outstanding features ranked by the boosting algorithm for the training of the SVM regressor. TSAM improved the mean Spearman correlation coefficients compared with the state-of-the-art performance on benchmark datasets containing thousands of human, mouse and zebrafish sgRNAs. Our method can also be converted to make binary distinctions between efficient and inefficient sgRNAs with superior performance to existing methods. The analysis reveals that highly efficient sgRNAs have a lower melting temperature at the middle of the spacer, cut at parts of the genome closer to the 5'-end and contain more 'A' but less 'G' compared with inefficient ones. Comprehensive further analysis also demonstrates that our tool can predict an sgRNA's cutting efficiency with consistently good performance whether it is expressed from a U6 promoter in cells or from a T7 promoter in vitro. An online tool is available at http://www.aai-bioinfo.com/CRISPR/. Python and Matlab source codes are freely available at https://github.com/penn-hui/TSAM. Jinyan.Li@uts.edu.au. Supplementary data are available at Bioinformatics online.
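A minimal sketch of the two-step averaging idea using scikit-learn regressors follows; the feature extraction, the profiled Markov features and the placeholder data X, y are not from the paper, only the averaging of a boosting regressor and an SVM regressor is illustrated.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X, y = rng.standard_normal((500, 40)), rng.random(500)   # placeholder sgRNA features/scores

boost = GradientBoostingRegressor().fit(X, y)
svm = SVR(kernel="rbf").fit(X, y)

def tsam_like_score(X_new):
    """Average the two regressors' predicted efficiency scores."""
    return 0.5 * (boost.predict(X_new) + svm.predict(X_new))

print(tsam_like_score(X[:3]))
```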
Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks
Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo
2012-01-01
Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190
Label-based routing for a family of small-world Farey graphs.
Zhai, Yinhu; Wang, Yinhe
2016-05-11
We introduce an informative labelling method for vertices in a family of Farey graphs, and deduce a routing algorithm on all the shortest paths between any two vertices in Farey graphs. The label of a vertex is composed of its precise location in the graph and the exact time at which it was linked to the graph. All the shortest path routings between any pair of vertices, whose number is exactly the product of two Fibonacci numbers, are determined only by their labels, and the time complexity of the algorithm is O(n). It is the first algorithm to figure out all the shortest paths between any pair of vertices in this kind of deterministic graph. For Farey networks, the existence of an efficient routing protocol is of interest to design practical communication algorithms in relation to dynamical processes (including synchronization and structural controllability) and also to understand the underlying mechanisms that have shaped their particular structure.
Improving Electronic Sensor Reliability by Robust Outlier Screening
Moreno-Lizaranzu, Manuel J.; Cuesta, Federico
2013-01-01
Electronic sensors are widely used in different application areas, and in some of them, such as automotive or medical equipment, they must perform with an extremely low defect rate. Increasing reliability is paramount. Outlier detection algorithms are a key component in screening latent defects and decreasing the number of customer quality incidents (CQIs). This paper focuses on new spatial algorithms (Good Die in a Bad Cluster with Statistical Bins (GDBC SB) and Bad Bin in a Bad Cluster (BBBC)) and an advanced outlier screening method, called Robust Dynamic Part Averaging Testing (RDPAT), as well as two practical improvements, which significantly enhance existing algorithms. Those methods have been used in production in Freescale® Semiconductor probe factories around the world for several years. Moreover, a study was conducted with production data of 289,080 dice with 26 CQIs to determine and compare the efficiency and effectiveness of all these algorithms in identifying CQIs. PMID:24113682
Penalty dynamic programming algorithm for dim targets detection in sensor systems.
Huang, Dayu; Xue, Anke; Guo, Yunfei
2012-01-01
In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performances of tracking techniques are used as a feedback to the detection part. The feedback is constructed by a penalty term in the merit function, and the penalty term is a function of the possible target state estimation, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and it can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from one target or clutter is proposed to minimize track separation. Thus, the algorithm can be used in the multi-target situation with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations.
Dexter, Alex; Race, Alan M; Steven, Rory T; Barnes, Jennifer R; Hulme, Heather; Goodwin, Richard J A; Styles, Iain B; Bunch, Josephine
2017-11-07
Clustering is widely used in MSI to segment anatomical features and differentiate tissue types, but existing approaches are both CPU and memory-intensive, limiting their application to small, single data sets. We propose a new approach that uses a graph-based algorithm with a two-phase sampling method that overcomes this limitation. We demonstrate the algorithm on a range of sample types and show that it can segment anatomical features that are not identified using commonly employed algorithms in MSI, and we validate our results on synthetic MSI data. We show that the algorithm is robust to fluctuations in data quality by successfully clustering data with a designed-in variance using data acquired with varying laser fluence. Finally, we show that this method is capable of generating accurate segmentations of large MSI data sets acquired on the newest generation of MSI instruments and evaluate these results by comparison with histopathology.
Autoregressive statistical pattern recognition algorithms for damage detection in civil structures
NASA Astrophysics Data System (ADS)
Yao, Ruigen; Pakzad, Shamim N.
2012-08-01
Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi-degree-of-freedom system are generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of the proposed algorithms.
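The kind of autoregressive features mentioned above can be sketched as follows; this is a minimal illustration assuming a least-squares AR fit, and the model order, spectra-based features and resampling thresholds of the paper are omitted.

```python
import numpy as np

def ar_features(signal, order=10, n_lags=5):
    """Fit an AR(order) model by least squares and return the coefficients
    plus the first few residual autocorrelations as a damage-sensitive feature."""
    X = np.column_stack([signal[order - k - 1:-k - 1] for k in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coeffs
    resid = (resid - resid.mean()) / resid.std()
    acf = [np.mean(resid[:-k] * resid[k:]) for k in range(1, n_lags + 1)]
    return np.concatenate([coeffs, acf])

sig = np.sin(0.2 * np.arange(2000)) + 0.1 * np.random.randn(2000)
print(ar_features(sig).round(3))
```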
Yanagisawa, Keisuke; Komine, Shunta; Kubota, Rikuto; Ohue, Masahito; Akiyama, Yutaka
2018-06-01
The need to accelerate large-scale protein-ligand docking in virtual screening against a huge compound database led researchers to propose a strategy that entails memorizing the evaluation result of the partial structure of a compound and reusing it to evaluate other compounds. However, the previous method required frequent disk accesses, resulting in insufficient acceleration. Thus, more efficient memory usage can be expected to lead to further acceleration, and optimal memory usage could be achieved by solving the minimum cost flow problem. In this research, we propose a fast algorithm for the minimum cost flow problem utilizing the characteristics of the graph generated for this problem as constraints. The proposed algorithm, which optimized memory usage, was approximately seven times faster compared to existing minimum cost flow algorithms. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
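For readers unfamiliar with the problem class, the following is a generic minimum cost flow instance solved with networkx; the toy graph is hypothetical, and the paper's specialized algorithm exploits structural constraints of its docking-specific graphs that this general-purpose solver does not.

```python
import networkx as nx

# Generic min-cost-flow illustration: node 'demand' < 0 means supply, > 0 means demand.
G = nx.DiGraph()
G.add_node("s", demand=-4)
G.add_node("t", demand=4)
G.add_edge("s", "a", capacity=3, weight=1)
G.add_edge("s", "b", capacity=2, weight=2)
G.add_edge("a", "t", capacity=3, weight=1)
G.add_edge("b", "t", capacity=2, weight=1)

flow = nx.min_cost_flow(G)                      # dict of dicts: flow per edge
print(flow, nx.cost_of_flow(G, flow))
```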
STEME: A Robust, Accurate Motif Finder for Large Data Sets
Reid, John E.; Wernisch, Lorenz
2014-01-01
Motif finding is a difficult problem that has been studied for over 20 years. Some older popular motif finders are not suitable for analysis of the large data sets generated by next-generation sequencing. We recently published an efficient approximation (STEME) to the EM algorithm that is at the core of many motif finders such as MEME. This approximation allows the EM algorithm to be applied to large data sets. In this work we describe several efficient extensions to STEME that are based on the MEME algorithm. Together with the original STEME EM approximation, these extensions make STEME a fully-fledged motif finder with similar properties to MEME. We discuss the difficulty of objectively comparing motif finders. We show that STEME performs comparably to existing prominent discriminative motif finders, DREME and Trawler, on 13 sets of transcription factor binding data in mouse ES cells. We demonstrate the ability of STEME to find long degenerate motifs which these discriminative motif finders do not find. As part of our method, we extend an earlier method due to Nagarajan et al. for the efficient calculation of motif E-values. STEME's source code is available under an open source license and STEME is available via a web interface. PMID:24625410
Multiscale Macromolecular Simulation: Role of Evolving Ensembles
Singharoy, A.; Joshi, H.; Ortoleva, P.J.
2013-01-01
Multiscale analysis provides an algorithm for the efficient simulation of macromolecular assemblies. This algorithm involves the coevolution of a quasiequilibrium probability density of atomic configurations and the Langevin dynamics of spatial coarse-grained variables denoted order parameters (OPs) characterizing nanoscale system features. In practice, implementation of the probability density involves the generation of constant OP ensembles of atomic configurations. Such ensembles are used to construct thermal forces and diffusion factors that mediate the stochastic OP dynamics. Generation of all-atom ensembles at every Langevin timestep is computationally expensive. Here, multiscale computation for macromolecular systems is made more efficient by a method that self-consistently folds in ensembles of all-atom configurations constructed in an earlier step, history, of the Langevin evolution. This procedure accounts for the temporal evolution of these ensembles, accurately providing thermal forces and diffusions. It is shown that the efficiency and accuracy of the OP-based simulations are increased via the integration of this historical information. Accuracy improves with the square root of the number of historical timesteps included in the calculation. As a result, CPU usage can be decreased by a factor of 3-8 without loss of accuracy. The algorithm is implemented into our existing force-field based multiscale simulation platform and demonstrated via the structural dynamics of viral capsomers. PMID:22978601
Efficient heuristics for maximum common substructure search.
Englert, Péter; Kovács, Péter
2015-05-26
Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.
Tang, Rongnian; Chen, Xupeng; Li, Chuang
2018-05-01
Near-infrared spectroscopy is an efficient, low-cost technology that has potential as an accurate method for detecting the nitrogen content of natural rubber leaves. The successive projections algorithm (SPA) is a widely used variable selection method for multivariate calibration, which uses projection operations to select a variable subset with minimum multi-collinearity. However, due to the fluctuation of correlation between variables, high collinearity may still exist among non-adjacent variables of the subset obtained by basic SPA. Based on an analysis of the correlation matrix of the spectral data, this paper proposes a correlation-based SPA (CB-SPA) that applies the successive projections algorithm in regions with consistent correlation. The results show that CB-SPA selects variable subsets with more valuable variables and less multi-collinearity. Meanwhile, models established on the CB-SPA subset outperform basic SPA subsets in predicting nitrogen content in terms of both cross-validation and external prediction. Moreover, CB-SPA is assured to be more efficient, as the time cost of its selection procedure is one-twelfth that of the basic SPA.
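A sketch of the basic SPA selection loop that CB-SPA builds on follows; it is a hedged illustration (not the correlation-based CB-SPA variant proposed here), and the starting column and number of selected variables are assumed inputs.

```python
import numpy as np

def spa(X, n_select, start=0):
    """Basic successive projections algorithm: repeatedly project the remaining
    columns onto the orthogonal complement of the last selected column and pick
    the column with the largest projected norm (the least collinear candidate)."""
    P = X.astype(float).copy()
    selected = [start]
    for _ in range(n_select - 1):
        x = P[:, selected[-1]]
        P = P - np.outer(x, x @ P) / (x @ x)     # deflate by the last selection
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0                    # never re-select a column
        selected.append(int(np.argmax(norms)))
    return selected

X = np.random.rand(60, 120)                       # 60 spectra x 120 wavelengths
print(spa(X, n_select=6))
```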
A decentralized training algorithm for Echo State Networks in distributed big data applications.
Scardapane, Simone; Wang, Dianhui; Panella, Massimo
2016-06-01
The current big data deluge requires innovative solutions for performing efficient inference on large, heterogeneous amounts of information. Apart from the known challenges deriving from high volume and velocity, real-world big data applications may impose additional technological constraints, including the need for a fully decentralized training architecture. While several alternatives exist for training feed-forward neural networks in such a distributed setting, less attention has been devoted to the case of decentralized training of recurrent neural networks (RNNs). In this paper, we propose such an algorithm for a class of RNNs known as Echo State Networks. The algorithm is based on the well-known Alternating Direction Method of Multipliers optimization procedure. It is formulated only in terms of local exchanges between neighboring agents, without reliance on a coordinating node. Additionally, it does not require the communication of training patterns, which is a crucial component in realistic big data implementations. Experimental results on large scale artificial datasets show that it compares favorably with a fully centralized implementation, in terms of speed, efficiency and generalization accuracy. Copyright © 2015 Elsevier Ltd. All rights reserved.
IDEAL: Images Across Domains, Experiments, Algorithms and Learning
NASA Astrophysics Data System (ADS)
Ushizima, Daniela M.; Bale, Hrishikesh A.; Bethel, E. Wes; Ercius, Peter; Helms, Brett A.; Krishnan, Harinarayan; Grinberg, Lea T.; Haranczyk, Maciej; Macdowell, Alastair A.; Odziomek, Katarzyna; Parkinson, Dilworth Y.; Perciano, Talita; Ritchie, Robert O.; Yang, Chao
2016-11-01
Research across science domains is increasingly reliant on image-centric data. Software tools are in high demand to uncover relevant, but hidden, information in digital images, such as those coming from faster next generation high-throughput imaging platforms. The challenge is to analyze the data torrent generated by the advanced instruments efficiently, and provide insights such as measurements for decision-making. In this paper, we overview work performed by an interdisciplinary team of computational and materials scientists, aimed at designing software applications and coordinating research efforts connecting (1) emerging algorithms for dealing with large and complex datasets; (2) data analysis methods with emphasis on pattern recognition and machine learning; and (3) advances in evolving computer architectures. Engineering tools around these efforts accelerate the analyses of image-based recordings, improve reusability and reproducibility, scale scientific procedures by reducing time between experiments, increase efficiency, and open opportunities for more users of the imaging facilities. This paper describes our algorithms and software tools, showing results across image scales, demonstrating how our framework plays a role in improving image understanding for quality control of existing materials and discovery of new compounds.
Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi
2017-03-01
The existence of low-SNR regions and rapid phase variations poses challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, whereas the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three-dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared this proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Adaptive Gaussian mixture models for pre-screening in GPR data
NASA Astrophysics Data System (ADS)
Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.
2011-06-01
Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antennae arrays, advanced feature extraction and classification can only be performed on a small subset of data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms to process all of the data generated by the antennae array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain a high probability of detection, but can permit a false alarm rate which might be higher than the total system requirements. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well-suited to pre-screening in GPR data due to its computational efficiency, non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of an adaptive GMM-based approach for anomaly detection from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
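A simplified, one-dimensional sketch in the spirit of the online k-means approximation to an adaptive GMM alluded to above follows; k, alpha and n_sigma are assumed parameters, and real GPR pre-screening operates on multi-dimensional features rather than scalar samples.

```python
import numpy as np

class OnlineGMM:
    """Online k-means style approximation to an adaptive GMM: each sample
    updates its closest component; samples far from all components are flagged."""
    def __init__(self, k=3, alpha=0.05, n_sigma=2.5):
        self.mu = np.linspace(-1.0, 1.0, k)
        self.var = np.ones(k)
        self.w = np.full(k, 1.0 / k)
        self.alpha, self.n_sigma = alpha, n_sigma

    def update(self, x):
        d = np.abs(x - self.mu) / np.sqrt(self.var)
        j = int(np.argmin(d))
        anomalous = d[j] > self.n_sigma
        if anomalous:                       # replace the weakest component
            j = int(np.argmin(self.w))
            self.mu[j], self.var[j] = x, 1.0
        else:                               # exponential forgetting update
            self.mu[j] += self.alpha * (x - self.mu[j])
            self.var[j] += self.alpha * ((x - self.mu[j]) ** 2 - self.var[j])
        self.w *= (1 - self.alpha)
        self.w[j] += self.alpha
        return anomalous

model = OnlineGMM()
flags = [model.update(x) for x in np.random.randn(500)]
print(sum(flags), "samples flagged as anomalous")
```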
Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range
He, Chenlong; Feng, Zuren; Ren, Zhigang
2018-02-03
For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham's Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm called the Boundary Scan (BS) algorithm has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of the algorithms used to solve the problem of this kind. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm.
Kruskal-Wallis-based computationally efficient feature selection for face recognition.
Ali Khan, Sajid; Hussain, Ayyaz; Basit, Abdul; Akram, Sheeraz
2014-01-01
Face recognition plays an important role in today's technological world, and face recognition applications are attaining much more importance. Most of the existing work uses frontal face images to classify a face image; however, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of the features are redundant and do not contribute to representing the face. In order to eliminate those redundant features, a computationally efficient algorithm is used to select the more discriminative face features. The extracted features are then passed to the classification step, in which different classifiers are ensembled to enhance the recognition accuracy rate, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images and the results are compared with existing techniques.
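A minimal sketch of ranking features by the Kruskal-Wallis H statistic with SciPy follows; n_keep is an assumed parameter, the data are placeholders, and the paper's full pipeline (feature extraction and classifier ensembling) is not shown.

```python
import numpy as np
from scipy.stats import kruskal

def kruskal_wallis_rank(X, y, n_keep=50):
    """Rank features by the Kruskal-Wallis H statistic across classes and
    keep the most discriminative ones."""
    classes = np.unique(y)
    scores = np.array([
        kruskal(*[X[y == c, j] for c in classes]).statistic
        for j in range(X.shape[1])
    ])
    return np.argsort(scores)[::-1][:n_keep]      # indices of selected features

X = np.random.randn(120, 200); y = np.repeat([0, 1, 2], 40)
X[:, 0] += 3 * y                                  # make feature 0 informative
print(kruskal_wallis_rank(X, y, n_keep=5))
```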
Multiprocessor architecture: Synthesis and evaluation
NASA Technical Reports Server (NTRS)
Standley, Hilda M.
1990-01-01
Multiprocessor computer architecture evaluation for structural computations is the focus of the research effort described. Results obtained are expected to lead to more efficient use of existing architectures and to suggest designs for new, application specific, architectures. The brief descriptions given outline a number of related efforts directed toward this purpose. The difficulty in analyzing an existing architecture or in designing a new computer architecture lies in the fact that the performance of a particular architecture, within the context of a given application, is determined by a number of factors. These include, but are not limited to, the efficiency of the computation algorithm, the programming language and support environment, the quality of the program written in the programming language, the multiplicity of the processing elements, the characteristics of the individual processing elements, the interconnection network connecting processors and non-local memories, and the shared memory organization covering the spectrum from no shared memory (all local memory) to one global access memory. These performance determiners may be loosely classified as being software or hardware related. This distinction is not clear or even appropriate in many cases. The effect of the choice of algorithm is ignored by assuming that the algorithm is specified as given. Effort directed toward the removal of the effect of the programming language and program resulted in the design of a high-level parallel programming language. Two characteristics of the fundamental structure of the architecture (memory organization and interconnection network) are examined.
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.
Dhar, Amrit; Minin, Vladimir N
2017-05-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.
Lepre, Jorge; Rice, J Jeremy; Tu, Yuhai; Stolovitzky, Gustavo
2004-05-01
Despite the growing literature devoted to finding differentially expressed genes in assays probing different tissue types, little attention has been paid to the combinatorial nature of feature selection inherent to large, high-dimensional gene expression datasets. New flexible data analysis approaches capable of searching relevant subgroups of genes and experiments are needed to understand multivariate associations of gene expression patterns with observed phenotypes. We present in detail a deterministic algorithm to discover patterns of multivariate gene associations in gene expression data. The patterns discovered are differential with respect to a control dataset. The algorithm is exhaustive and efficient, reporting all existing patterns that fit a given input parameter set while avoiding enumeration of the entire pattern space. The value of the pattern discovery approach is demonstrated by finding a set of genes that differentiate between two types of lymphoma. Moreover, these genes are found to behave consistently in an independent dataset produced in a different laboratory using different arrays, thus validating the genes selected using our algorithm. We show that the genes deemed significant in terms of their multivariate statistics will be missed using other methods. Our set of pattern discovery algorithms including a user interface is distributed as a package called Genes@Work. This package is freely available to non-commercial users and can be downloaded from our website (http://www.research.ibm.com/FunGen).
Resolution-Adaptive Hybrid MIMO Architectures for Millimeter Wave Communications
NASA Astrophysics Data System (ADS)
Choi, Jinseok; Evans, Brian L.; Gatherer, Alan
2017-12-01
In this paper, we propose a hybrid analog-digital beamforming architecture with resolution-adaptive ADCs for millimeter wave (mmWave) receivers with large antenna arrays. We adopt array response vectors for the analog combiners and derive ADC bit-allocation (BA) solutions in closed form. The BA solutions reveal that the optimal number of ADC bits is logarithmically proportional to the RF chain's signal-to-noise ratio raised to the 1/3 power. Using the solutions, two proposed BA algorithms minimize the mean square quantization error of received analog signals under a total ADC power constraint. Contributions of this paper include 1) ADC bit-allocation algorithms to improve communication performance of a hybrid MIMO receiver, 2) approximation of the capacity with the BA algorithm as a function of channels, and 3) a worst-case analysis of the ergodic rate of the proposed MIMO receiver that quantifies system tradeoffs and serves as the lower bound. Simulation results demonstrate that the BA algorithms outperform a fixed-ADC approach in both spectral and energy efficiency, and validate the capacity and ergodic rate formula. For a power constraint equivalent to that of fixed 4-bit ADCs, the revised BA algorithm makes the quantization error negligible while achieving 22% better energy efficiency. Having negligible quantization error allows existing state-of-the-art digital beamformers to be readily applied to the proposed system.
Fast Katz and Commuters: Efficient Estimation of Social Relatedness in Large Networks
NASA Astrophysics Data System (ADS)
Esfandiar, Pooya; Bonchi, Francesco; Gleich, David F.; Greif, Chen; Lakshmanan, Laks V. S.; On, Byung-Won
Motivated by social network data mining problems such as link prediction and collaborative filtering, significant research effort has been devoted to computing topological measures including the Katz score and the commute time. Existing approaches typically approximate all pairwise relationships simultaneously. In this paper, we are interested in computing: the score for a single pair of nodes, and the top-k nodes with the best scores from a given source node. For the pairwise problem, we apply an iterative algorithm that computes upper and lower bounds for the measures we seek. This algorithm exploits a relationship between the Lanczos process and a quadrature rule. For the top-k problem, we propose an algorithm that only accesses a small portion of the graph and is related to techniques used in personalized PageRank computing. To test the scalability and accuracy of our algorithms we experiment with three real-world networks and find that these algorithms run in milliseconds to seconds without any preprocessing.
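For orientation, a naive truncated-power-series sketch of the single-pair Katz score targeted above is given below; it converges only when beta is below the reciprocal of the spectral norm of A and is far less refined than the Lanczos/quadrature bounds of the paper. The graph and parameters are placeholders.

```python
import numpy as np

def katz_pair(A, i, j, beta=0.005, n_terms=20):
    """Single-pair Katz score K_ij = sum_k beta^k (A^k)_ij via a truncated
    power series, using only matrix-vector products."""
    e_j = np.zeros(A.shape[0]); e_j[j] = 1.0
    v, score = e_j.copy(), 0.0
    for k in range(1, n_terms + 1):
        v = A @ v                      # v = A^k e_j
        score += beta ** k * v[i]
    return score

A = (np.random.rand(100, 100) < 0.05).astype(float)
A = np.maximum(A, A.T)                 # toy undirected adjacency matrix
print(katz_pair(A, 3, 7))
```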
Multi-task feature selection in microarray data by binary integer programming.
Lan, Liang; Vucetic, Slobodan
2013-12-20
A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
Algorithms for constructing optimal paths and statistical analysis of passenger traffic
NASA Astrophysics Data System (ADS)
Trofimov, S. P.; Druzhinina, N. G.; Trofimova, O. G.
2018-01-01
Several existing information systems for urban passenger transport (UPT) are considered, and the authors' UPT network model is presented. The model offers passengers a new service: the best path from one stop to another at a specified time. The algorithm and its software implementation for finding the optimal path are presented; the algorithm uses the current UPT schedule. The article also describes an algorithm for statistical analysis of trip payments made with electronic E-cards, which yields the density of passenger traffic during the day. This density is independent of the network topology and the UPT schedules. The resulting traffic-flow density supports a number of practical tasks, in particular forecasting overcrowding of passenger transport during rush hours, quantitatively comparing transport networks with different topologies, and constructing the best UPT timetable. The efficiency of the proposed integrated approach is demonstrated on a model town of arbitrary dimensions.
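A hedged sketch of the kind of schedule-based earliest-arrival search such a service could use, written as a Dijkstra-style label-setting loop over timetabled departures; the timetable layout and field names are assumptions, not the authors' implementation.

```python
import heapq

def earliest_arrival(departures, origin, destination, start_time):
    """departures: dict stop -> list of (depart_time, arrive_time, next_stop, route).
    Returns (arrival_time, legs) for an earliest-arrival path, or None."""
    best = {origin: start_time}
    queue = [(start_time, origin, [])]
    while queue:
        t, stop, legs = heapq.heappop(queue)
        if stop == destination:
            return t, legs
        if t > best.get(stop, float("inf")):
            continue                          # stale label
        for dep, arr, nxt, route in departures.get(stop, []):
            if dep >= t and arr < best.get(nxt, float("inf")):
                best[nxt] = arr
                heapq.heappush(queue, (arr, nxt, legs + [(stop, nxt, route, dep, arr)]))
    return None

# Toy timetable: times are minutes after midnight
timetable = {
    "A": [(480, 490, "B", "bus 1"), (485, 500, "C", "bus 2")],
    "B": [(495, 505, "C", "bus 3")],
}
print(earliest_arrival(timetable, "A", "C", 478))
```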
An efficient data structure for three-dimensional vertex-based finite volume method
NASA Astrophysics Data System (ADS)
Akkurt, Semih; Sahin, Mehmet
2017-11-01
A vertex-based three-dimensional finite volume algorithm has been developed using an edge-based data structure. The mesh data structure of the algorithm is similar to ones that exist in the literature; however, the data structures are redesigned and simplified to fit the requirements of the vertex-based finite volume method. To increase cache efficiency, the data access patterns of the vertex-based finite volume method are investigated, and the data are packed/allocated so that related items lie close to each other in memory. The present data structure is not limited to tetrahedra; arbitrary polyhedra are also supported in the mesh without any additional effort. Furthermore, the data structure supports adaptive refinement and coarsening. For the implicit and parallel implementation of the FVM algorithm, the PETSc and MPI libraries are employed. The performance and accuracy of the present algorithm are tested on classical benchmark problems by comparing CPU times against open-source alternatives.
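A toy sketch of the edge-based idea: a single loop over edges computes one flux per dual face and scatters it to the two end vertices; the normals, the scalar flux function and the array layout are illustrative assumptions, not the paper's actual data structure.

```python
import numpy as np

# Edge-based mesh data: one record per edge, holding its two vertex ids and the
# area-weighted dual-face normal associated with that edge (assumed values).
edge_nodes   = np.array([[0, 1], [1, 2], [2, 0]])
edge_normals = np.array([[0.5, 0.0], [0.0, 0.5], [-0.5, -0.5]])

def accumulate_residuals(u, edge_nodes, edge_normals):
    """Vertex-centered finite-volume residual: loop over edges once,
    compute a simple central flux, and scatter it to both end vertices."""
    res = np.zeros_like(u)
    for (a, b), n in zip(edge_nodes, edge_normals):
        flux = 0.5 * (u[a] + u[b]) * n.sum()   # toy scalar flux through the dual face
        res[a] -= flux                          # outflow from vertex a
        res[b] += flux                          # inflow to vertex b
    return res

u = np.array([1.0, 2.0, 3.0])
print(accumulate_residuals(u, edge_nodes, edge_normals))
```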
Low complexity 1D IDCT for 16-bit parallel architectures
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2007-09-01
This paper shows that, using the Loeffler, Ligtenberg, and Moschytz factorization of the one-dimensional (1-D) 8-point IDCT [2] as a fast approximation of the Discrete Cosine Transform (DCT) and using only 16-bit numbers, it is possible to create an IEEE 1180-1990 compliant, multiplierless algorithm with low computational complexity. Because of its structure, the algorithm is efficiently implemented on parallel high-performance architectures, and its low complexity also makes it suitable for a wide range of other architectures. An additional constraint on this work was the requirement of compliance with the existing MPEG standards. Low hardware implementation complexity and low resource usage were also part of the design criteria. The implementation is likewise compliant with the precision requirements described in the MPEG IDCT precision specification ISO/IEC 23002-1. The complexity analysis extends the simple count of shifts and adds for multiplierless algorithms by including additional operations in the complexity measure, to better describe the actual cost of the transform implementation.
Ren, Shanshan; Bertels, Koen; Al-Ars, Zaid
2018-01-01
GATK HaplotypeCaller (HC) is a popular variant caller that is widely used to identify variants in complex genomes. However, its high variant-detection accuracy comes at the cost of long execution times. In GATK HC, the pair-HMMs forward algorithm accounts for a large percentage of the total execution time. This article proposes to accelerate the pair-HMMs forward algorithm on graphics processing units (GPUs) to improve the performance of GATK HC. It presents several GPU-based implementations of the pair-HMMs forward algorithm and analyzes their performance bottlenecks on an NVIDIA Tesla K40 card with various data sets. Based on these results and the characteristics of GATK HC, we identify the GPU-based implementations with the highest performance for the analyzed data sets. Experimental results show that the GPU-based implementations of the pair-HMMs forward algorithm achieve a speedup of up to 5.47× over existing GPU-based implementations.
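A much-simplified CPU reference of the pair-HMM forward recursion (the match/insert/delete dynamic program) that such GPU implementations accelerate; the transition and emission constants, and the free-start convention, are placeholders rather than GATK's values.

```python
import numpy as np

def pair_hmm_forward(read, hap, p_match=0.99, gap_open=0.01, gap_ext=0.1, eps=0.01):
    """Simplified pair-HMM forward pass returning the total alignment likelihood.
    M/I/D are the match, insertion and deletion state matrices."""
    n, m = len(read), len(hap)
    M = np.zeros((n + 1, m + 1))
    I = np.zeros((n + 1, m + 1))
    D = np.zeros((n + 1, m + 1))
    M[0, :] = 1.0 / m   # free start anywhere along the haplotype (toy convention)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            emit = (1 - eps) if read[i - 1] == hap[j - 1] else eps
            M[i, j] = emit * (p_match * M[i - 1, j - 1]
                              + (1 - gap_ext) * (I[i - 1, j - 1] + D[i - 1, j - 1]))
            I[i, j] = gap_open * M[i - 1, j] + gap_ext * I[i - 1, j]
            D[i, j] = gap_open * M[i, j - 1] + gap_ext * D[i, j - 1]
    # Sum over all end positions on the haplotype
    return M[n, :].sum() + I[n, :].sum()

print(pair_hmm_forward("ACGT", "ACGTT"))
```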
Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds
NASA Astrophysics Data System (ADS)
Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni
2012-09-01
Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains to provide wireless communications services on demand. In particular, each new user session request requires the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources in SDR cloud data centers and the numerous session requests at certain hours of the day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools to evaluate different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupation and that a tradeoff exists between cluster size and algorithm complexity.
Leveraging disjoint communities for detecting overlapping community structure
NASA Astrophysics Data System (ADS)
Chakraborty, Tanmoy
2015-05-01
Network communities represent mesoscopic structure for understanding the organization of real-world networks, where nodes often belong to multiple communities and form an overlapping community structure. Because finding the exact boundaries of such overlapping communities is non-trivial, the problem is challenging, and a huge effort has been devoted to detecting overlapping communities in networks. In this paper, we present PVOC (Permanence based Vertex-replication algorithm for Overlapping Community detection), a two-stage framework for detecting overlapping community structure. We build on the observation that the non-overlapping community structure detected by a standard disjoint community detection algorithm closely resembles the actual overlapping community structure, except in the overlapping parts. Based on this observation, we posit that there is perhaps no need to build yet another overlapping community detection algorithm; instead, one can efficiently post-process the output of any existing disjoint community detection algorithm to obtain the required overlapping structure. We propose a post-processing technique that, combined with any existing disjoint community detection algorithm, processes each vertex using a new vertex-based metric, called permanence, and thereby identifies overlapping candidates together with their community memberships. Experimental results on both synthetic and large real-world networks show that PVOC significantly outperforms six state-of-the-art overlapping community detection algorithms in terms of the similarity of its output to the ground-truth structure. Thus our framework not only finds meaningful overlapping communities but also removes the need to keep building new overlapping community detection algorithms.
Yue, Lei; Guan, Zailin; Saif, Ullah; Zhang, Fei; Wang, Hao
2016-01-01
Group scheduling is significant for efficient and cost-effective production systems. However, setup times exist between groups, and they need to be reduced by sequencing the groups efficiently. The current research focuses on a sequence-dependent group scheduling problem with the aim of simultaneously minimizing the makespan and the total weighted tardiness. In most production scheduling problems, job processing times are assumed to be fixed; in practice, however, the actual processing time of jobs may be reduced due to the "learning effect". The integration of sequence-dependent group scheduling with learning effects has rarely been considered in the literature. Therefore, this research considers a single-machine group scheduling problem with sequence-dependent setup times and learning effects simultaneously. A novel hybrid Pareto artificial bee colony algorithm (HPABC) incorporating some steps of a genetic algorithm is proposed to obtain Pareto solutions for this problem. Five different sizes of test problems (small, small-medium, medium, large-medium, large) are solved using the proposed HPABC, and the Taguchi method is used to tune its effective parameters for each problem category. The performance of HPABC is compared with three well-known multi-objective optimization algorithms: the improved strength Pareto evolutionary algorithm (SPEA2), the non-dominated sorting genetic algorithm II (NSGAII), and particle swarm optimization (PSO). Results indicate that HPABC outperforms SPEA2, NSGAII and PSO, giving better Pareto optimal solutions in terms of diversity and quality for almost all instances across the different problem sizes.
A Computationally Efficient Visual Saliency Algorithm Suitable for an Analog CMOS Implementation.
D'Angelo, Robert; Wood, Richard; Lowry, Nathan; Freifeld, Geremy; Huang, Haiyao; Salthouse, Christopher D; Hollosi, Brent; Muresan, Matthew; Uy, Wes; Tran, Nhut; Chery, Armand; Poppe, Dorothy C; Sonkusale, Sameer
2018-06-27
Computer vision algorithms are often limited in their application by the large amount of data that must be processed. Mammalian vision systems mitigate this high bandwidth requirement by prioritizing certain regions of the visual field with neural circuits that select the most salient regions. This work introduces a novel and computationally efficient visual saliency algorithm for performing this neuromorphic attention-based data reduction. The proposed algorithm has the added advantage that it is compatible with an analog CMOS design while still achieving comparable performance to existing state-of-the-art saliency algorithms. This compatibility allows for direct integration with the analog-to-digital conversion circuitry present in CMOS image sensors. This integration leads to power savings in the converter by quantizing only the salient pixels. Further system-level power savings are gained by reducing the amount of data that must be transmitted and processed in the digital domain. The analog CMOS compatible formulation relies on a pulse width (i.e., time mode) encoding of the pixel data that is compatible with pulse-mode imagers and slope based converters often used in imager designs. This letter begins by discussing this time-mode encoding for implementing neuromorphic architectures. Next, the proposed algorithm is derived. Hardware-oriented optimizations and modifications to this algorithm are proposed and discussed. Next, a metric for quantifying saliency accuracy is proposed, and simulation results of this metric are presented. Finally, an analog synthesis approach for a time-mode architecture is outlined, and postsynthesis transistor-level simulations that demonstrate functionality of an implementation in a modern CMOS process are discussed.
ProMotE: an efficient algorithm for counting independent motifs in uncertain network topologies.
Ren, Yuanfang; Sarkar, Aisharjya; Kahveci, Tamer
2018-06-26
Identifying motifs in biological networks is essential in uncovering key functions served by these networks. Finding non-overlapping motif instances is however a computationally challenging task. The fact that biological interactions are uncertain events further complicates the problem, as it makes the existence of an embedding of a given motif an uncertain event as well. In this paper, we develop a novel method, ProMotE (Probabilistic Motif Embedding), to count non-overlapping embeddings of a given motif in probabilistic networks. We utilize a polynomial model to capture the uncertainty. We develop three strategies to scale our algorithm to large networks. Our experiments demonstrate that our method scales to large networks in practical time with high accuracy where existing methods fail. Moreover, our experiments on cancer and degenerative disease networks show that our method helps in uncovering key functional characteristics of biological networks.
Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, B being also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of relevant matrices, and the associated program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be most significantly economical in comparison to similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems, involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
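A hedged dense-matrix sketch of the two ingredients named above: counting eigenvalues below a shift from the inertia of A - sigma*B (the Sturm-sequence idea) and refining one eigenpair by inverse iteration. It ignores the banded storage the original program exploits and assumes 1x1 pivots in the LDL^T factorization.

```python
import numpy as np
from scipy.linalg import ldl, lu_factor, lu_solve

def count_eigs_below(A, B, sigma):
    """Sturm-sequence style count: the number of eigenvalues of A q = lambda B q
    below sigma equals the number of negative pivots of A - sigma*B."""
    _, d, _ = ldl(A - sigma * B)
    return int(np.sum(np.diag(d) < 0))   # assumes only 1x1 pivot blocks

def inverse_iteration(A, B, sigma, iters=50):
    """Refine the eigenpair closest to the shift sigma."""
    lu = lu_factor(A - sigma * B)
    q = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(iters):
        q = lu_solve(lu, B @ q)
        q /= np.linalg.norm(q)
    lam = (q @ A @ q) / (q @ B @ q)       # Rayleigh quotient
    return lam, q

A = np.diag([2.0, 5.0, 9.0]); B = np.eye(3)
print(count_eigs_below(A, B, 6.0))        # -> 2 eigenvalues below the shift
print(inverse_iteration(A, B, 6.0)[0])    # -> close to 5.0
```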
A network of spiking neurons for computing sparse representations in an energy efficient way
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B.
2013-01-01
Computing sparse redundant representations is an important problem both in applied mathematics and neuroscience. In many applications, this problem must be solved in an energy efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating via low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, such operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We compare the numerical performance of HDA with existing algorithms and show that in the asymptotic regime the representation error of HDA decays with time, t, as 1/t. We show that HDA is stable against time-varying noise, specifically, the representation error decays as 1/t for Gaussian white noise. PMID:22920853
NASA Astrophysics Data System (ADS)
Sriwana, I. K.; Marie, I. A.; Mangala, D.
2017-12-01
Kencana Gemilang, Co. is an electronics company in the manufacturing sector that manufactures and assembles household electronic products such as rice cookers, fans, irons and blenders. The company faces the issue of not reaching the established production target on MCM products line 1. This study aimed to calculate line efficiencies, delay times, and initial line smoothness indexes. The research was carried out by drawing a precedence diagram and gathering time data for each work element, followed by examination and calculation of standard times, and then line balancing using the Moodie Young method and a genetic algorithm. The calculations yielded a better line balance than the existing initial conditions, i.e. an improvement in line efficiency of 18.39%, a decrease in balance delay of 28.39%, and a decrease of the smoothness index of 23.85%.
A network of spiking neurons for computing sparse representations in an energy-efficient way.
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B
2012-11-01
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for gaussian white noise.
Empty tracks optimization based on Z-Map model
NASA Astrophysics Data System (ADS)
Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao
2017-12-01
Parts with many features produce many empty (non-cutting) tracks during machining, and if these tracks are not optimized the machining efficiency is seriously affected. In this paper, the characteristics of empty tracks are studied in detail and, building on an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and the Z-Map simulation technique is used to analyze the ordering constraints between the unit segments. The empty-stroke optimization problem is thereby transformed into a TSP with sequence constraints, which is then solved with a genetic algorithm. This optimization method handles not only simple structural parts but also complex ones, effectively planning the empty tracks and greatly improving machining efficiency.
NASA Technical Reports Server (NTRS)
Janich, Karl W.
2005-01-01
The At-Least version of the Generalized Minimum Spanning Tree Problem (L-GMST) is a problem in which the optimal solution connects all defined clusters of nodes in a given network at minimum cost. The L-GMST is NP-hard; therefore, metaheuristic algorithms have been used to find reasonable solutions, as opposed to computationally feasible exact algorithms, which many believe do not exist for such a problem. One such metaheuristic uses a swarm-intelligent Ant Colony System (ACS) algorithm, in which agents converge on a solution by weighing local heuristics, such as the shortest available path and the number of agents that recently used a given path. However, in a network using a solution derived from the ACS algorithm, some nodes may move to different clusters and cause small changes in the network makeup. Rerunning the algorithm from scratch would be inefficient relative to the small size of these changes, so a genetic algorithm seeded with the top few solutions found by the ACS algorithm is proposed to quickly and efficiently adapt the network to them.
Unified commutation-pruning technique for efficient computation of composite DFTs
NASA Astrophysics Data System (ADS)
Castro-Palazuelos, David E.; Medina-Melendrez, Modesto Gpe.; Torres-Roman, Deni L.; Shkvarko, Yuriy V.
2015-12-01
An efficient computation of a composite-length discrete Fourier transform (DFT), as well as a fast Fourier transform (FFT), of both time and space data sequences in uncertain (non-sparse or sparse) computational scenarios requires specific processing algorithms. Traditional algorithms typically employ pruning methods without any commutations, which prevents them from attaining their potential computational efficiency. In this paper, we propose an alternative unified approach with automatic commutations between three computational modalities aimed at efficient computation of pruned DFTs adapted for variable composite lengths of non-sparse input-output data. The first modality is a direct computation of a composite-length DFT, the second employs the second-order recursive filtering method, and the third performs the new pruned decomposed transform. The pruned decomposed transform algorithm performs decimation in the time or space (DIT) data acquisition domain and then decimation in frequency (DIF). The unified combination of these three algorithms is referred to as the DFTCOMM technique. Based on treating the combinational hypotheses-testing optimization problem of preferable allocations between all feasible commuting-pruning modalities, we have found the globally optimal solution to the pruning problem, which always requires fewer or, at most, the same number of arithmetic operations as any other feasible modality; in this sense the DFTCOMM method outperforms the existing competing pruning techniques in the attainable savings in the number of required arithmetic operations. Finally, we provide a comparison of DFTCOMM with the recently developed sparse fast Fourier transform (SFFT) algorithmic family. We show that, in sensing scenarios with sparse or non-sparse data Fourier spectra, the DFTCOMM technique is robust against such model uncertainties, in the sense of insensitivity to sparsity/non-sparsity restrictions and to the variability of the operating parameters.
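As a generic illustration of pruning (not the DFTCOMM commutation logic), the sketch below computes only the requested DFT bins and sums only over the non-zero input samples; bin choices and signal are made up for the example.

```python
import numpy as np

def pruned_dft(x, wanted_bins):
    """Compute only the requested DFT bins, summing only over non-zero inputs.
    Cost is O(len(nonzero) * len(wanted_bins)) instead of O(N log N) or O(N^2)."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    nz = np.flatnonzero(x)
    out = {}
    for k in wanted_bins:
        out[k] = np.sum(x[nz] * np.exp(-2j * np.pi * k * nz / N))
    return out

# Sparse input, few output bins: compare against the full FFT
x = np.zeros(64); x[[3, 10, 40]] = [1.0, -2.0, 0.5]
partial = pruned_dft(x, wanted_bins=[0, 5, 17])
full = np.fft.fft(x)
print(all(np.isclose(partial[k], full[k]) for k in partial))
```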
NASA Astrophysics Data System (ADS)
Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.
2015-12-01
We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms, including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms are unique among most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and sending a new candidate solution for evaluation. One key advance in this study is the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early, prior to completely simulating the model calibration period for example, when intermediate results indicate that the candidate solution is so poor that it will have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computing platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers within a model-independent calibration software package called Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multiple-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
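A hedged sketch of one serial DDS candidate-generation step, using the standard shrinking neighborhood schedule, together with an assumed form of a pre-emptable objective that stops accumulating terms once it cannot beat the current best; parameter names and the pre-emption rule are illustrative, not the Ostrich implementation.

```python
import numpy as np

def dds_candidate(x_best, x_min, x_max, i, m, r=0.2, rng=None):
    """One Dynamically Dimensioned Search step: perturb a randomly chosen,
    progressively smaller subset of decision variables around the current best."""
    rng = rng or np.random.default_rng()
    n = len(x_best)
    p_select = 1.0 - np.log(i) / np.log(m)    # selection probability shrinks with evaluation i of m
    mask = rng.random(n) < p_select
    if not mask.any():
        mask[rng.integers(n)] = True          # always perturb at least one variable
    x_new = x_best.copy()
    sigma = r * (x_max - x_min)               # r = 0.2 is the commonly used neighborhood fraction
    x_new[mask] += rng.normal(0.0, sigma[mask])
    return np.clip(x_new, x_min, x_max)

def preemptable_objective(x, best_so_far, terms):
    """Assumed pre-emption form for a minimization objective: accumulate it term by
    term and stop early once the partial sum already exceeds the best found so far."""
    total = 0.0
    for term in terms:
        total += term(x)
        if total > best_so_far:               # cannot improve -> pre-empt the model run
            return total
    return total
```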
Kalman filter tracking on parallel architectures
NASA Astrophysics Data System (ADS)
Cerati, G.; Elmer, P.; Krutelyov, S.; Lantz, S.; Lefebvre, M.; McDermott, K.; Riley, D.; Tadel, M.; Wittich, P.; Wurthwein, F.; Yagil, A.
2017-10-01
We report on the progress of our studies towards a Kalman filter track reconstruction algorithm with optimal performance on manycore architectures. The combinatorial structure of these algorithms is not immediately compatible with an efficient SIMD (or SIMT) implementation; the challenge for us is to recast the existing software so it can readily generate hundreds of shared-memory threads that exploit the underlying instruction set of modern processors. We show how the data and associated tasks can be organized in a way that is conducive to both multithreading and vectorization. We demonstrate very good performance on Intel Xeon and Xeon Phi architectures, as well as promising first results on Nvidia GPUs.
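A small sketch of the data-layout idea: updating many track candidates in lockstep with batched linear algebra, the same structure-of-arrays organization that makes SIMD/SIMT execution efficient; the state dimension, measurement model and matrices are illustrative.

```python
import numpy as np

def batched_kalman_update(x, P, z, H, R):
    """Vectorized Kalman measurement update for a batch of track candidates.
    x: (B, n) states, P: (B, n, n) covariances, z: (B, m) measurements."""
    y = z - np.einsum("ij,bj->bi", H, x)                        # innovations
    S = np.einsum("ij,bjk,lk->bil", H, P, H) + R                # innovation covariances
    K = np.einsum("bij,kj,bkl->bil", P, H, np.linalg.inv(S))    # Kalman gains
    x_new = x + np.einsum("bij,bj->bi", K, y)
    P_new = P - np.einsum("bij,jk,bkl->bil", K, H, P)
    return x_new, P_new

B, n, m = 1000, 4, 2                         # 1000 tracks, 4 state vars, 2 measured
H = np.hstack([np.eye(m), np.zeros((m, n - m))])
R = 0.1 * np.eye(m)
x = np.zeros((B, n)); P = np.tile(np.eye(n), (B, 1, 1))
z = np.random.default_rng(0).normal(size=(B, m))
x_new, P_new = batched_kalman_update(x, P, z, H, R)
print(x_new.shape, P_new.shape)
```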
A fast ergodic algorithm for generating ensembles of equilateral random polygons
NASA Astrophysics Data System (ADS)
Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.
2009-03-01
Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem; currently there are no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic, in the sense that any given equilateral random polygon can be generated by it, and we show that the time needed to generate an equilateral random polygon of length n is linear in n. These two properties make this algorithm a significant improvement over existing generating methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.
Gog, Simon; Bader, Martin
2008-10-01
The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.
Inversion of 2-D DC resistivity data using rapid optimization and minimal complexity neural network
NASA Astrophysics Data System (ADS)
Singh, U. K.; Tiwari, R. K.; Singh, S. B.
2010-02-01
The backpropagation (BP) artificial neural network (ANN) optimization technique based on the steepest descent algorithm is known for its poor performance and does not ensure global convergence. Nonlinear and complex DC resistivity data require an efficient ANN model and more intensive optimization procedures for better results and interpretations. Improvements in the computational ANN modeling process are described, with the goals of enhancing the optimization process and reducing ANN model complexity. Well-established optimization methods, such as the radial basis algorithm (RBA) and the Levenberg-Marquardt algorithm (LMA), have frequently been used to deal with the complexity and nonlinearity of such geophysical records. We examine here the efficiency of trained LMA and RB networks using 2-D synthetic resistivity data and then apply them to actual field vertical electrical resistivity sounding (VES) data collected from the Puga Valley, Jammu and Kashmir, India. The resulting ANN resistivity reconstructions are compared with the results of existing inversion approaches and are in good agreement. The depths and resistivity structures obtained by the ANN methods also correlate well with known drilling results and geologic boundaries. The above ANN algorithms prove to be robust and could be used for fast estimation of resistive structures for other complex earth models as well.
Low-complex energy-aware image communication in visual sensor networks
NASA Astrophysics Data System (ADS)
Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran
2013-10-01
A low-complexity, low-bit-rate, energy-efficient image compression algorithm is presented, explicitly designed for resource-constrained visual sensor networks used in surveillance, battlefield and habitat monitoring, where voluminous image data must be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy efficient and extremely fast, since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without any floating-point operations. Experiments are performed on Atmel ATmega128 and MSP430 processors to measure the resulting energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT. The algorithm consumes only 6% of the energy needed by the Independent JPEG Group (fast) version and suits embedded systems requiring low power consumption. The proposed scheme is unique in that it significantly enhances the lifetime of the camera sensor node and the network without any need for the distributed processing traditionally required in existing algorithms.
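A hedged sketch of the two ingredients named in the abstract: computing only a low-frequency zone of the 8x8 DCT and Rice-coding the quantized coefficients. The zone size, quantizer and Rice parameter are illustrative, and this floating-point version is not the authors' integer-only "enhanced complementary" variant.

```python
import numpy as np

def zonal_dct(block, zone=3):
    """Compute only the top-left zone x zone low-frequency 2-D DCT-II coefficients
    of an 8x8 block, skipping the rest (the energy-aware 'zonal' idea)."""
    N = block.shape[0]
    n = np.arange(N)
    coeffs = np.zeros((zone, zone))
    for u in range(zone):
        for v in range(zone):
            basis = np.outer(np.cos((2 * n + 1) * u * np.pi / (2 * N)),
                             np.cos((2 * n + 1) * v * np.pi / (2 * N)))
            cu = np.sqrt(1 / N) if u == 0 else np.sqrt(2 / N)
            cv = np.sqrt(1 / N) if v == 0 else np.sqrt(2 / N)
            coeffs[u, v] = cu * cv * np.sum(block * basis)
    return coeffs

def rice_encode(values, k=2):
    """Golomb-Rice code: unary quotient + k-bit remainder, after zigzag-mapping
    signed values to non-negative integers."""
    bits = []
    for v in values:
        v = int(v)
        u = 2 * v if v >= 0 else -2 * v - 1        # signed -> unsigned mapping
        q, r = u >> k, u & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, f"0{k}b"))
    return "".join(bits)

block = np.clip(np.random.default_rng(0).normal(128, 20, (8, 8)), 0, 255)
coeffs = zonal_dct(block - 128)                     # center pixels before the transform
print(rice_encode(np.round(coeffs.flatten()).astype(int)))
```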
SEMIPARAMETRIC EFFICIENT ESTIMATION FOR SHARED-FRAILTY MODELS WITH DOUBLY-CENSORED CLUSTERED DATA
Wang, Jane-Ling
2018-01-01
In this paper, we investigate frailty models for clustered survival data that are subject to both left- and right-censoring, termed “doubly-censored data”. This model extends the current survival literature by broadening the application of frailty models from right-censoring to the more complicated situation with additional left censoring. Our approach is motivated by a recent Hepatitis B study in which the sample consists of families. We adopt a likelihood approach aimed at the nonparametric maximum likelihood estimators (NPMLE). A new algorithm is proposed, which not only works well for clustered data but also improves over existing algorithms for independent, doubly-censored data, a special case in which the frailty variable is a constant equal to one. This special case is well known to be a computational challenge due to the left-censoring feature of the data. The new algorithm not only resolves this challenge but also accommodates the additional frailty variable effectively. Asymptotic properties of the NPMLE are established, along with the semi-parametric efficiency of the NPMLE for the finite-dimensional parameters. The consistency of bootstrap estimators for the standard errors of the NPMLE is also discussed. We conducted simulations to illustrate the numerical performance and robustness of the proposed algorithm, which is also applied to the Hepatitis B data. PMID:29527068
Fast kinematic ray tracing of first- and later-arriving global seismic phases
NASA Astrophysics Data System (ADS)
Bijwaard, Harmen; Spakman, Wim
1999-11-01
We have developed a ray tracing algorithm that traces first- and later-arriving global seismic phases precisely (traveltime errors of the order of 0.1 s), and with great computational efficiency (15 rays per second). To achieve this, we have extended and adapted two existing ray tracing techniques: a graph method and a perturbation method. The two resulting algorithms are able to trace (critically) refracted, (multiply) reflected, some diffracted (Pdiff), and (multiply) converted seismic phases in a 3-D spherical geometry, thus including the largest part of seismic phases that are commonly observed on seismograms. We have tested and compared the two methods in 2-D and 3-D Cartesian and spherical models, for which both algorithms have yielded precise paths and traveltimes. These tests indicate that only the perturbation method is computationally efficient enough to perform 3-D ray tracing on global data sets of several million phases. To demonstrate its potential for non-linear tomography, we have applied the ray perturbation algorithm to a data set of 7.6 million P and pP phases used by Bijwaard et al. (1998) for linearized tomography. This showed that the expected heterogeneity within the Earth's mantle leads to significant non-linear effects on traveltimes for 10 per cent of the applied phases.
Unmanned Aerial Vehicles for Alien Plant Species Detection and Monitoring
NASA Astrophysics Data System (ADS)
Dvořák, P.; Müllerová, J.; Bartaloš, T.; Brůna, J.
2015-08-01
Invasive species spread rapidly and their eradication is difficult. New methods enabling fast and efficient monitoring are urgently needed for their successful control. Remote sensing can improve early detection of invading plants and make their management more efficient and less expensive. In an ongoing project in the Czech Republic, we aim at developing innovative methods of mapping invasive plant species (semi-automatic detection algorithms) by using purposely designed unmanned aircraft (UAV). We examine possibilities for detection of two tree and two herb invasive species. Our aim is to establish a fast, repeatable and efficient computer-assisted method of timely monitoring, reducing the costs of extensive field campaigns. To find the best detection algorithm we test various classification approaches (object-based, pixel-based and hybrid). Thanks to its flexibility and low cost, the UAV enables assessing the effect of phenological stage and spatial resolution, and is most suitable for monitoring the efficiency of eradication efforts. However, several challenges exist in UAV application, such as geometric and radiometric distortions, the high amount of data to be processed, and legal constraints on UAV flight missions over urban areas (which are often highly invaded). The newly proposed UAV approach shall serve invasive species researchers, management practitioners and policy makers.
On Maximizing the Throughput of Packet Transmission under Energy Constraints.
Wu, Weiwei; Dai, Guangli; Li, Yan; Shan, Feng
2018-06-23
More and more Internet of Things (IoT) wireless devices have been providing ubiquitous services in recent years. Since most of these devices are powered by batteries, a fundamental trade-off to be addressed is between the energy consumed and the data throughput achieved in wireless data transmission. By exploiting the rate-adaptive capabilities of wireless devices, most existing works on energy-efficient data transmission design rate-adaptive transmission policies to maximize the number of transmitted data bits under the energy constraints of the devices. Such solutions, however, cannot be applied to scenarios where data packets have individual deadlines and only integrally transmitted packets count. Thus, this paper introduces a notion of weighted throughput, which measures the total value of the data packets that are successfully and integrally transmitted before their own deadlines. By designing efficient rate-adaptive transmission policies, this paper aims to make the best use of the energy and maximize the weighted throughput. More challenging, but of practical significance, we consider the fading effect of wireless channels in both offline and online scenarios. In the offline scenario, we develop an algorithm that computes the optimal solution in pseudo-polynomial time, which is the best one can hope for since the problem undertaken is NP-hard. In the online scenario, we propose an efficient heuristic algorithm based on optimal properties derived for the optimal offline solution. Simulation results validate the efficiency of the proposed algorithm.
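A simplified pseudo-polynomial dynamic program capturing the "only integrally transmitted packets count" aspect under a discrete energy budget; it deliberately ignores deadlines and channel fading, which the paper's offline algorithm handles.

```python
def max_weighted_throughput(packets, energy_budget):
    """packets: list of (energy_cost, value); energy_budget: integer energy units.
    Classic 0/1 knapsack DP, pseudo-polynomial in O(n * energy_budget)."""
    dp = [0] * (energy_budget + 1)
    for cost, value in packets:
        for e in range(energy_budget, cost - 1, -1):   # backwards: each packet used at most once
            dp[e] = max(dp[e], dp[e - cost] + value)
    return dp[energy_budget]

# Three packets competing for 10 energy units: best choice is the 7- and 9-value packets
print(max_weighted_throughput([(4, 7), (5, 8), (6, 9)], 10))   # -> 16
```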
A conjugate gradient method with descent properties under strong Wolfe line search
NASA Astrophysics Data System (ADS)
Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is one of the optimization methods most often used in practical applications. The numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to existing CG methods is evaluated by testing them all on a set of test functions in MATLAB. Performance is measured in terms of the number of iterations and CPU time under a strong Wolfe line search. Overall, the new method performs efficiently and comparably to other well-known methods.
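A hedged sketch of a nonlinear CG loop driven by SciPy's strong-Wolfe line search; the beta update shown is the standard PR+ rule, standing in for the paper's new coefficient, which the abstract does not give.

```python
import numpy as np
from scipy.optimize import line_search

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=200):
    x, g = x0.astype(float), grad(x0)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha, *_ = line_search(f, grad, x, d, gfk=g)   # strong Wolfe line search
        if alpha is None:
            alpha = 1e-4                                # fall back to a tiny step
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)  # PR+ coefficient (placeholder)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Rosenbrock test function, a typical entry in CG benchmark sets
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
print(nonlinear_cg(rosen, rosen_grad, np.array([-1.2, 1.0])))
```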
Computationally efficient algorithm for high sampling-frequency operation of active noise control
NASA Astrophysics Data System (ADS)
Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati
2015-05-01
In high sampling-frequency operation of an active noise control (ANC) system, the secondary-path estimate and the ANC filter become very long, which increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to the computation of large transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long ANC filters are divided into a number of equal partitions that are suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of the proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is considerably reduced compared to the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) pair, giving the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. A computational complexity analysis for different filter orders and partition sizes is presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
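For reference, a hedged sketch of the baseline time-domain FXLMS update whose cost the partitioned frequency-domain algorithms reduce; the filter lengths, step size and toy primary/secondary paths are illustrative assumptions.

```python
import numpy as np

def fxlms(reference, disturbance, sec_path_est, L=64, mu=1e-3):
    """Baseline time-domain FXLMS: the reference is filtered through the
    secondary-path estimate, and that filtered signal drives the LMS update."""
    w = np.zeros(L)                         # adaptive ANC filter taps
    x_hist = np.zeros(L)                    # reference samples seen by w
    xs_hist = np.zeros(len(sec_path_est))   # reference samples seen by the path estimate
    fx_hist = np.zeros(L)                   # filtered-reference samples for the update
    y_hist = np.zeros(len(sec_path_est))    # control samples propagating to the sensor
    errors = np.zeros(len(reference))
    for n in range(len(reference)):
        x_hist = np.roll(x_hist, 1); x_hist[0] = reference[n]
        xs_hist = np.roll(xs_hist, 1); xs_hist[0] = reference[n]
        y = w @ x_hist                                        # anti-noise output
        y_hist = np.roll(y_hist, 1); y_hist[0] = y
        errors[n] = disturbance[n] - sec_path_est @ y_hist    # residual at the error mic
        fx = sec_path_est @ xs_hist                           # filtered-x sample
        fx_hist = np.roll(fx_hist, 1); fx_hist[0] = fx
        w += mu * errors[n] * fx_hist                         # FXLMS weight update
    return errors

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.0, 0.5, 0.3], mode="full")[:5000]       # toy primary path
e = fxlms(x, d, sec_path_est=np.array([1.0, 0.2]))
print(abs(e[:100]).mean(), abs(e[-100:]).mean())              # error magnitude should shrink
```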
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
NASA Astrophysics Data System (ADS)
Růžek, B.; Kolář, P.
2009-04-01
The solution of inverse problems is an important task in geophysics. The amount of data is continuously increasing, modeling methods are improving, and computing facilities are advancing rapidly, so the development of new and efficient algorithms and computer codes for both forward and inverse modeling remains relevant. ANNIT contributes to this effort as a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and its response by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and the inverse mapping, F and G, are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D and M are sought within which the forward mapping F is sufficiently smooth for the inverse mapping G to exist, (ii) a numerical approximation of G in the subspaces {pD, pM} is constructed, and (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are identified by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping uses three methods: (a) linear regression, (b) the Radial Basis Function Network technique, and (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models; archived models are re-used where appropriate, minimizing the number of forward evaluations. ANNIT is implemented in both MATLAB and SCILAB. Numerical tests show good performance of the algorithm. Both versions and their documentation are available on the Internet and can be downloaded freely. The goal of this presentation is to offer the algorithm and computer codes to anyone interested in the solution of inverse problems.
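A hedged sketch of steps (ii)-(iii) using SciPy's RBFInterpolator as a stand-in for ANNIT's own RBF-network and Kriging approximators: fit an approximation of the inverse mapping d -> p on a population of already evaluated models, then predict a candidate solution; the forward model here is a toy.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def forward(p):
    """Toy forward problem F(p) = d standing in for a real geophysical model."""
    return np.array([p[0]**2 + p[1], np.sin(p[0]) + 0.5 * p[1]])

# (i) Population of evaluated models covering a subspace of the model space
rng = np.random.default_rng(0)
population = rng.uniform(-1.0, 1.0, size=(200, 2))
responses = np.array([forward(p) for p in population])

# (ii) Numerical approximation of the inverse mapping d -> p on this subspace
inverse_approx = RBFInterpolator(responses, population, kernel="thin_plate_spline")

# (iii) Predict a candidate solution for observed data, then check it with the forward model
d_obs = forward(np.array([0.3, -0.4]))
p_candidate = inverse_approx(d_obs[None, :])[0]
print(p_candidate, forward(p_candidate))
```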
Smart Bandwidth Assignation in an Underlay Cellular Network for Internet of Vehicles.
de la Iglesia, Idoia; Hernandez-Jayo, Unai; Osaba, Eneko; Carballedo, Roberto
2017-09-27
The evolution of the IoT (Internet of Things) paradigm applied to new scenarios such as VANETs (Vehicular Ad Hoc Networks) has gained momentum in recent years. Both academia and industry have launched advanced studies of the IoV (Internet of Vehicles), understood as an ecosystem where different types of users (vehicles, elements of the infrastructure, pedestrians) are connected. How to efficiently share the available radio resources among the different types of eligible users is one of the important issues to be addressed. This paper briefly analyzes various concepts presented so far in the literature and proposes an enhanced algorithm for ensuring a robust co-existence of the aforementioned system users. To that end, it introduces an underlay RRM (Radio Resource Management) methodology capable of (1) improving cellular spectral efficiency while having minimal impact on cellular communications and (2) ensuring the different QoS (Quality of Service) requirements of ITS (Intelligent Transportation Systems) applications. Simulation results, in which we compare the proposed algorithm to two other RRM schemes, show the promising spectral efficiency performance of the proposed RRM methodology.
Hybrid ARQ Scheme with Autonomous Retransmission for Multicasting in Wireless Sensor Networks.
Jung, Young-Ho; Choi, Jihoon
2017-02-25
A new hybrid automatic repeat request (HARQ) scheme for multicast service in wireless sensor networks is proposed in this study. In the proposed algorithm, the HARQ operation is combined with an autonomous retransmission method that ensures a data packet is transmitted irrespective of whether or not the packet has been successfully decoded at the receivers. The optimal number of autonomous retransmissions is determined to ensure maximum spectral efficiency, and a practical method that adjusts the number of autonomous retransmissions under realistic conditions is developed. Simulation results show that the proposed method achieves higher spectral efficiency than existing HARQ techniques.
Zhang, Xuejun; Lei, Jiaxing
2015-01-01
Considering reducing the airspace congestion and the flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using the real traffic data from China air route network and daily flight plans demonstrate that the proposed approach can improve the solution quality effectively, showing superiority to the existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm as well as other parallel evolution algorithms with different migration topology. PMID:26180840
Algorithm for protecting light-trees in survivable mesh wavelength-division-multiplexing networks
NASA Astrophysics Data System (ADS)
Luo, Hongbin; Li, Lemin; Yu, Hongfang
2006-12-01
Wavelength-division-multiplexing (WDM) technology is expected to facilitate bandwidth-intensive multicast applications such as high-definition television. A single fiber cut in a WDM mesh network, however, can disrupt the dissemination of information to several destinations on a light-tree based multicast session. Thus it is imperative to protect multicast sessions by reserving redundant resources. We propose a novel and efficient algorithm for protecting light-trees in survivable WDM mesh networks. The algorithm is called segment-based protection with sister node first (SSNF), whose basic idea is to protect a light-tree using a set of backup segments with a higher priority to protect the segments from a branch point to its children (sister nodes). The SSNF algorithm differs from the segment protection scheme proposed in the literature in how the segments are identified and protected. Our objective is to minimize the network resources used for protecting each primary light-tree such that the blocking probability can be minimized. To verify the effectiveness of the SSNF algorithm, we conduct extensive simulation experiments. The simulation results demonstrate that the SSNF algorithm outperforms existing algorithms for the same problem.
NASA Astrophysics Data System (ADS)
Tomaro, Robert F.
1998-07-01
The present research is aimed at developing a higher-order, spatially accurate scheme for both steady and unsteady flow simulations using unstructured meshes. The resulting scheme must work on a variety of general problems to ensure the creation of a flexible, reliable and accurate aerodynamic analysis tool. To calculate the flow around complex configurations, unstructured grids and the associated flow solvers have been developed. Efficient simulations require minimal use of computer memory and computational time. Unstructured flow solvers typically require more computer memory than structured flow solvers due to the indirect addressing of the cells. The approach taken in the present research was to modify an existing three-dimensional unstructured flow solver, first to decrease the computational time required for a solution and then to increase the spatial accuracy. The terms required to simulate flow involving non-stationary grids were also implemented. First, an implicit solution algorithm was implemented to replace the existing explicit procedure. Several test cases, including internal and external, inviscid and viscous, two-dimensional, three-dimensional and axisymmetric problems, were simulated for comparison between the explicit and implicit solution procedures. The increased efficiency and robustness of the modified code due to the implicit algorithm were demonstrated. Two unsteady test cases, a plunging airfoil and a wing undergoing bending and torsion, were simulated using the implicit algorithm modified to include the terms required for a moving and/or deforming grid. Secondly, a higher than second-order spatially accurate scheme was developed and implemented into the baseline code. Third- and fourth-order spatially accurate schemes were implemented and tested. The original dissipation was modified to include higher-order terms and adapted near shock waves to limit pre- and post-shock oscillations. The unsteady cases were repeated using the higher-order spatially accurate code, and the new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented, creating a basis for a state-of-the-art aerodynamic analysis tool.
Distribution system model calibration with big data from AMI and PV inverters
Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...
2016-03-03
Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
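A toy illustration (not the paper's algorithm) of estimating secondary-circuit series parameters from AMI-style measurements by linear least squares, using the approximate drop relation dV ~ (P*R + Q*X)/V; the measurement model, true values and noise level are assumptions.

```python
import numpy as np

# Assumed true secondary-circuit parameters (per-unit toy values)
R_true, X_true = 0.04, 0.02

rng = np.random.default_rng(1)
P = rng.uniform(0.2, 1.5, 500)            # AMI active power readings
Q = rng.uniform(0.0, 0.6, 500)            # AMI reactive power readings
V_src = 1.0                               # feeder-side voltage magnitude

# Linearized drop model: V_meter ~ V_src - (P*R + Q*X)/V_src, plus meter noise
V_meter = V_src - (P * R_true + Q * X_true) / V_src + rng.normal(0, 1e-4, P.size)

# Least-squares fit of [R, X] from the observed voltage drops
A = np.column_stack([P, Q]) / V_src
b = V_src - V_meter
R_est, X_est = np.linalg.lstsq(A, b, rcond=None)[0]
print(R_est, X_est)    # should be close to 0.04 and 0.02
```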
Gaur, Pallavi; Chaturvedi, Anoop
2017-07-22
Clustering patterns and motifs give immense information about any biological data. This paper describes an application of machine learning algorithms for clustering and candidate motif detection in miRNAs derived from exosomes. Recent progress in exosome research, and in exosomal miRNAs in particular, has given rise to much bioinformatics-based work. Information on the clustering pattern and candidate motifs of miRNAs of exosomal origin helps in analyzing existing, as well as newly discovered, miRNAs within exosomes. Along with obtaining clustering patterns and candidate motifs in exosomal miRNAs, this work also illustrates how the machine learning algorithms can be efficiently used and executed on various programming languages/platforms. The data were clustered and sequence candidate motifs were detected successfully, and the results were compared and validated with available web tools such as 'BLASTN' and the 'MEME suite'. The machine learning algorithms for the aforementioned objectives were applied successfully. This work thus demonstrates the utility of machine learning algorithms and language platforms for clustering and candidate motif detection in exosomal miRNAs. With this information, deeper insight can be gained in analyses of newly discovered miRNAs in exosomes, which are considered circulating biomarkers. In addition, the execution of machine learning algorithms on various language platforms gives users more flexibility to try multiple iterations according to their requirements. This approach can be applied to other biological data-mining tasks as well.
Active link selection for efficient semi-supervised community detection
NASA Astrophysics Data System (ADS)
Yang, Liang; Jin, Di; Wang, Xiao; Cao, Xiaochun
2015-03-01
Several semi-supervised community detection algorithms have been proposed recently to improve the performance of traditional topology-based methods. However, most of them focus on how to integrate supervised information with topology information; few of them pay attention to which information is critical for performance improvement. This leads to large amounts of demand for supervised information, which is expensive or difficult to obtain in most fields. For this problem we propose an active link selection framework, that is we actively select the most uncertain and informative links for human labeling for the efficient utilization of the supervised information. We also disconnect the most likely inter-community edges to further improve the efficiency. Our main idea is that, by connecting uncertain nodes to their community hubs and disconnecting the inter-community edges, one can sharpen the block structure of adjacency matrix more efficiently than randomly labeling links as the existing methods did. Experiments on both synthetic and real networks demonstrate that our new approach significantly outperforms the existing methods in terms of the efficiency of using supervised information. It needs ~13% of the supervised information to achieve a performance similar to that of the original semi-supervised approaches.
Active link selection for efficient semi-supervised community detection
Yang, Liang; Jin, Di; Wang, Xiao; Cao, Xiaochun
2015-01-01
Several semi-supervised community detection algorithms have been proposed recently to improve the performance of traditional topology-based methods. However, most of them focus on how to integrate supervised information with topology information; few of them pay attention to which information is critical for performance improvement. This leads to large amounts of demand for supervised information, which is expensive or difficult to obtain in most fields. For this problem we propose an active link selection framework, that is we actively select the most uncertain and informative links for human labeling for the efficient utilization of the supervised information. We also disconnect the most likely inter-community edges to further improve the efficiency. Our main idea is that, by connecting uncertain nodes to their community hubs and disconnecting the inter-community edges, one can sharpen the block structure of adjacency matrix more efficiently than randomly labeling links as the existing methods did. Experiments on both synthetic and real networks demonstrate that our new approach significantly outperforms the existing methods in terms of the efficiency of using supervised information. It needs ~13% of the supervised information to achieve a performance similar to that of the original semi-supervised approaches. PMID:25761385
Embedded algorithms within an FPGA-based system to process nonlinear time series data
NASA Astrophysics Data System (ADS)
Jones, Jonathan D.; Pei, Jin-Song; Tull, Monte P.
2008-03-01
This paper presents some preliminary results of an ongoing project. A pattern classification algorithm is being developed and embedded into a Field-Programmable Gate Array (FPGA) and microprocessor-based data processing core in this project. The goal is to enable and optimize the functionality of onboard data processing of nonlinear, nonstationary data for smart wireless sensing in structural health monitoring. Compared with traditional microprocessor-based systems, fast growing FPGA technology offers a more powerful, efficient, and flexible hardware platform including on-site (field-programmable) reconfiguration capability of hardware. An existing nonlinear identification algorithm is used as the baseline in this study. The implementation within a hardware-based system is presented in this paper, detailing the design requirements, validation, tradeoffs, optimization, and challenges in embedding this algorithm. An off-the-shelf high-level abstraction tool along with the Matlab/Simulink environment is utilized to program the FPGA, rather than coding the hardware description language (HDL) manually. The implementation is validated by comparing the simulation results with those from Matlab. In particular, the Hilbert Transform is embedded into the FPGA hardware and applied to the baseline algorithm as the centerpiece in processing nonlinear time histories and extracting instantaneous features of nonstationary dynamic data. The selection of proper numerical methods for the hardware execution of the selected identification algorithm and consideration of the fixed-point representation are elaborated. Other challenges include the issues of the timing in the hardware execution cycle of the design, resource consumption, approximation accuracy, and user flexibility of input data types limited by the simplicity of this preliminary design. Future work includes making an FPGA and microprocessor operate together to embed a further developed algorithm that yields better computational and power efficiency.
A Comparison of Genetic Programming Variants for Hyper-Heuristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, Sean
Modern society is faced with ever more complex problems, many of which can be formulated as generate-and-test optimization problems. General-purpose optimization algorithms are not well suited for real-world scenarios where many instances of the same problem class need to be repeatedly and efficiently solved, such as routing vehicles over highways with constantly changing traffic flows, because they are not targeted to a particular scenario. Hyper-heuristics automate the design of algorithms to create a custom algorithm for a particular scenario. Hyper-heuristics typically employ Genetic Programming (GP) and this project has investigated the relationship between the choice of GP and performance in Hyper-heuristics. Results are presented demonstrating the existence of problems for which there is a statistically significant performance differential between the use of different types of GP.
Adjoint-Based Algorithms for Adaptation and Design Optimizations on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.
2006-01-01
Schemes based on discrete adjoint algorithms present several exciting opportunities for significantly advancing the current state of the art in computational fluid dynamics. Such methods provide an extremely efficient means for obtaining discretely consistent sensitivity information for hundreds of design variables, opening the door to rigorous, automated design optimization of complex aerospace configuration using the Navier-Stokes equation. Moreover, the discrete adjoint formulation provides a mathematically rigorous foundation for mesh adaptation and systematic reduction of spatial discretization error. Error estimates are also an inherent by-product of an adjoint-based approach, valuable information that is virtually non-existent in today's large-scale CFD simulations. An overview of the adjoint-based algorithm work at NASA Langley Research Center is presented, with examples demonstrating the potential impact on complex computational problems related to design optimization as well as mesh adaptation.
Path generation algorithm for UML graphic modeling of aerospace test software
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Chen, Chao
2018-03-01
Traditionally, aerospace software testing engineers rely on their own work experience and on communication with the software developers to describe the software under test and to write test cases by hand, which is time-consuming, inefficient and prone to gaps. Using the high-reliability MBT tool developed by our company, a single modeling pass can automatically generate the test case documents, which is efficient and accurate. Accurately expressing requirements in a UML model depends on path generation, but existing path generation algorithms are either too simple, unable to combine branch paths and loop paths into complete paths, or too cumbersome, generating path arrangements that are meaningless and superfluous for aerospace software testing. Drawing on our project experience, we developed a path generation algorithm tailored to UML graphical descriptions of aerospace test software.
Reproducibility of graph metrics of human brain structural networks.
Duda, Jeffrey T; Cook, Philip A; Gee, James C
2014-01-01
Recent interest in human brain connectivity has led to the application of graph theoretical analysis to human brain structural networks, in particular white matter connectivity inferred from diffusion imaging and fiber tractography. While these methods have been used to study a variety of patient populations, there has been less examination of the reproducibility of these methods. A number of tractography algorithms exist and many of these are known to be sensitive to user-selected parameters. The methods used to derive a connectivity matrix from fiber tractography output may also influence the resulting graph metrics. Here we examine how these algorithm and parameter choices influence the reproducibility of proposed graph metrics on a publicly available test-retest dataset consisting of 21 healthy adults. The dice coefficient is used to examine topological similarity of constant density subgraphs both within and between subjects. Seven graph metrics are examined here: mean clustering coefficient, characteristic path length, largest connected component size, assortativity, global efficiency, local efficiency, and rich club coefficient. The reproducibility of these network summary measures is examined using the intraclass correlation coefficient (ICC). Graph curves are created by treating the graph metrics as functions of a parameter such as graph density. Functional data analysis techniques are used to examine differences in graph measures that result from the choice of fiber tracking algorithm. The graph metrics consistently showed good levels of reproducibility as measured with ICC, with the exception of some instability at low graph density levels. The global and local efficiency measures were the most robust to the choice of fiber tracking algorithm.
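Illustrative sketch related to the abstract above (not the authors' code): several of the graph metrics mentioned, clustering coefficient, global efficiency, and local efficiency, can be computed as functions of graph density by thresholding a connectivity matrix and using networkx. The random matrix, thresholds, and metric selection below are assumptions standing in for a tractography-derived connectivity matrix.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 30
W = rng.random((n, n))
W = (W + W.T) / 2            # symmetric "connectivity" matrix
np.fill_diagonal(W, 0.0)

def threshold_to_density(W, density):
    """Keep the strongest edges so the graph has the requested density."""
    n = W.shape[0]
    iu = np.triu_indices(n, k=1)
    weights = W[iu]
    k = int(round(density * len(weights)))       # number of edges to keep
    if k == 0:
        return nx.empty_graph(n)
    cutoff = np.sort(weights)[-k]
    A = (W >= cutoff).astype(float)
    np.fill_diagonal(A, 0.0)
    return nx.from_numpy_array(A)

for density in (0.1, 0.2, 0.3, 0.4):
    G = threshold_to_density(W, density)
    print(f"density={density:.1f}",
          f"clustering={nx.average_clustering(G):.3f}",
          f"global_eff={nx.global_efficiency(G):.3f}",
          f"local_eff={nx.local_efficiency(G):.3f}")
```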
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Woohyun; Katipamula, Srinivas; Lutes, Robert G.
Small- and medium-sized (<100,000 sf) commercial buildings (SMBs) represent over 95% of the U.S. commercial building stock and account for over 60% of total site energy consumption. Many of these buildings use rudimentary controls that are mostly manual, with limited scheduling capability and no monitoring or failure management. As a result, many of these buildings are operated inefficiently and consume excess energy. SMBs typically utilize packaged rooftop units (RTUs) that are each controlled by an individual thermostat. There is increased urgency to improve the operating efficiency of the existing commercial building stock in the U.S. for many reasons, chief among them mitigating climate change impacts. Studies have shown that managing set points and schedules of the RTUs can yield up to 20% energy and cost savings. Another problem associated with RTUs is short-cycling, where an RTU goes through ON and OFF cycles too frequently. Excessive cycling causes excessive wear and can lead to premature failure of the compressor or its components, and it can reduce average efficiency significantly (by up to 10%) even when there are no physical failures in the equipment. SMBs also use time-of-day scheduling to start the RTUs before the building is occupied and shut them off when it is unoccupied. Ensuring correct use of the zone set points and eliminating frequent cycling of RTUs, thereby leading to persistent building operations, can significantly increase the operational efficiency of SMBs. A growing trend is to use low-cost control infrastructure that can enable scalable and cost-effective intelligent building operations. This report describes three algorithms, deployable on such low-cost infrastructure, for detecting the zone set point temperature, the RTU cycling rate, and the occupancy schedule. These algorithms require only zone temperature data. The algorithms have been tested and validated using field data from a number of RTUs in six buildings in different climate locations. Overall, the algorithms successfully detected the set points and ON/OFF cycles using a peak detection technique, and the occupancy schedule using a symbolic aggregate approximation technique. The report describes the three algorithms, presents results from testing them with field data, explains how they can be used to improve SMB efficiency, and draws related conclusions.
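A minimal sketch of the peak-detection idea, assuming synthetic zone-temperature data: RTU ON/OFF cycling shows up as a sawtooth around the set point, so counting local peaks per hour gives a rough cycling rate and the mean gives a crude set-point estimate. This only illustrates the concept; the report's actual peak-detection and symbolic aggregate approximation algorithms are more elaborate.

```python
import numpy as np
from scipy.signal import find_peaks

minutes = np.arange(0, 8 * 60)                       # 8 hours, 1-min samples
setpoint = 22.0
zone_temp = setpoint + 0.5 * np.sin(2 * np.pi * minutes / 20)   # ~20-min cycles
zone_temp += 0.05 * np.random.default_rng(1).normal(size=minutes.size)

peaks, _ = find_peaks(zone_temp, prominence=0.3)     # one peak per ON/OFF cycle
cycles_per_hour = len(peaks) / (minutes.size / 60)
estimated_setpoint = zone_temp.mean()                # crude set-point estimate

print(f"estimated set point ~ {estimated_setpoint:.1f} C")
print(f"cycling rate ~ {cycles_per_hour:.1f} cycles/hour")
```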
Use of parallel computing in mass processing of laser data
NASA Astrophysics Data System (ADS)
Będkowski, J.; Bratuś, R.; Prochaska, M.; Rzonca, A.
2015-12-01
The first part of the paper includes a description of the rules used to generate the algorithm needed for the purpose of parallel computing and also discusses the origins of the idea of research on the use of graphics processors in large scale processing of laser scanning data. The next part of the paper includes the results of an efficiency assessment performed for an array of different processing options, all of which were substantially accelerated with parallel computing. The processing options were divided into the generation of orthophotos using point clouds, coloring of point clouds, transformations, and the generation of a regular grid, as well as advanced processes such as the detection of planes and edges, point cloud classification, and the analysis of data for the purpose of quality control. Most algorithms had to be formulated from scratch in the context of the requirements of parallel computing. A few of the algorithms were based on existing technology developed by the Dephos Software Company and then adapted to parallel computing in the course of this research study. Processing time was determined for each process employed for a typical quantity of data processed, which helped confirm the high efficiency of the solutions proposed and the applicability of parallel computing to the processing of laser scanning data. The high efficiency of parallel computing yields new opportunities in the creation and organization of processing methods for laser scanning data.
Efficient traffic grooming in SONET/WDM BLSR Networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awwal, A S; Billah, A B; Wang, B
2004-04-02
In this paper, we study traffic grooming in SONET/WDM BLSR networks under the uniform all-to-all traffic model with an objective to reduce total network costs (wavelength and electronic multiplexing costs), in particular, to minimize the number of ADMs while using the optimal number of wavelengths. We derive a new tighter lower bound for the number of wavelengths when the number of nodes is a multiple of 4. We show that this lower bound is achievable. All previous ADM lower bounds except perhaps that in were derived under the assumption that the magnitude of the traffic streams (r) is one unit (r = 1) with respect to the wavelength capacity granularity g. We then derive new, more general and tighter lower bounds for the number of ADMs subject to the constraint that the optimal number of wavelengths is used, and propose heuristic algorithms (circle construction algorithm and circle grooming algorithm) that try to minimize the number of ADMs while using the optimal number of wavelengths in BLSR networks. Both the bounds and algorithms are applicable to any value of r and for different wavelength granularity g. Performance evaluation shows that wherever applicable, our lower bounds are at least as good as existing bounds and are much tighter than existing ones in many cases. Our proposed heuristic grooming algorithms perform very well with traffic streams of larger magnitude. The resulting number of ADMs required is very close to the corresponding lower bounds derived in this paper.
2016-01-01
Motivation: Gene tree represents the evolutionary history of gene lineages that originate from multiple related populations. Under the multispecies coalescent model, lineages may coalesce outside the species (population) boundary. Given a species tree (with branch lengths), the gene tree probability is the probability of observing a specific gene tree topology under the multispecies coalescent model. There are two existing algorithms for computing the exact gene tree probability. The first algorithm is due to Degnan and Salter, where they enumerate all the so-called coalescent histories for the given species tree and the gene tree topology. Their algorithm runs in exponential time in the number of gene lineages in general. The second algorithm is the STELLS algorithm (2012), which is usually faster but also runs in exponential time in almost all the cases. Results: In this article, we present a new algorithm, called CompactCH, for computing the exact gene tree probability. This new algorithm is based on the notion of compact coalescent histories: multiple coalescent histories are represented by a single compact coalescent history. The key advantage of our new algorithm is that it runs in polynomial time in the number of gene lineages if the number of populations is fixed to be a constant. The new algorithm is more efficient than the STELLS algorithm both in theory and in practice when the number of populations is small and there are multiple gene lineages from each population. As an application, we show that CompactCH can be applied in the inference of population tree (i.e. the population divergence history) from population haplotypes. Simulation results show that the CompactCH algorithm enables efficient and accurate inference of population trees with much more haplotypes than a previous approach. Availability: The CompactCH algorithm is implemented in the STELLS software package, which is available for download at http://www.engr.uconn.edu/ywu/STELLS.html. Contact: ywu@engr.uconn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307621
NASA Astrophysics Data System (ADS)
Asgari, Shadnaz
Recent developments in the integrated circuits and wireless communications not only open up many possibilities but also introduce challenging issues for the collaborative processing of signals for source localization and beamforming in an energy-constrained distributed sensor network. In signal processing, various sensor array processing algorithms and concepts have been adopted, but must be further tailored to match the communication and computational constraints. Sometimes the constraints are such that none of the existing algorithms would be an efficient option for the defined problem and as the result; the necessity of developing a new algorithm becomes undeniable. In this dissertation, we present the theoretical and the practical issues of Direction-Of-Arrival (DOA) estimation and source localization using the Approximate-Maximum-Likelihood (AML) algorithm for different scenarios. We first investigate a robust algorithm design for coherent source DOA estimation in a limited reverberant environment. Then, we provide a least-square (LS) solution for source localization based on our newly proposed virtual array model. In another scenario, we consider the determination of the location of a disturbance source which emits both wideband acoustic and seismic signals. We devise an enhanced AML algorithm to process the data collected at the acoustic sensors. For processing the seismic signals, two distinct algorithms are investigated to determine the DOAs. Then, we consider a basic algorithm for fusion of the results yielded by the acoustic and seismic arrays. We also investigate the theoretical and practical issues of DOA estimation in a three-dimensional (3D) scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. In this dissertation, for each scenario, efficient numerical implementations of the corresponding AML algorithm are derived and applied into a real-time sensor network testbed. Extensive simulations as well as experimental results are presented to verify the effectiveness of the proposed algorithms.
Confidence Sharing: An Economic Strategy for Efficient Information Flows in Animal Groups
Korman, Amos; Greenwald, Efrat; Feinerman, Ofer
2014-01-01
Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidences for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performances. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication. PMID:25275649
Confidence sharing: an economic strategy for efficient information flows in animal groups.
Korman, Amos; Greenwald, Efrat; Feinerman, Ofer
2014-10-01
Social animals may share information to obtain a more complete and accurate picture of their surroundings. However, physical constraints on communication limit the flow of information between interacting individuals in a way that can cause an accumulation of errors and deteriorated collective behaviors. Here, we theoretically study a general model of information sharing within animal groups. We take an algorithmic perspective to identify efficient communication schemes that are, nevertheless, economic in terms of communication, memory and individual internal computation. We present a simple and natural algorithm in which each agent compresses all information it has gathered into a single parameter that represents its confidence in its behavior. Confidence is communicated between agents by means of active signaling. We motivate this model by novel and existing empirical evidences for confidence sharing in animal groups. We rigorously show that this algorithm competes extremely well with the best possible algorithm that operates without any computational constraints. We also show that this algorithm is minimal, in the sense that further reduction in communication may significantly reduce performances. Our proofs rely on the Cramér-Rao bound and on our definition of a Fisher Channel Capacity. We use these concepts to quantify information flows within the group which are then used to obtain lower bounds on collective performance. The abstract nature of our model makes it rigorously solvable and its conclusions highly general. Indeed, our results suggest confidence sharing as a central notion in the context of animal communication.
A strategy for recovering continuous behavioral telemetry data from Pacific walruses
Fischbach, Anthony S.; Jay, Chadwick V.
2016-01-01
Tracking animal behavior and movement with telemetry sensors can offer substantial insights required for conservation. Yet, the value of data collected by animal-borne telemetry systems is limited by bandwidth constraints. To understand the response of Pacific walruses (Odobenus rosmarus divergens) to rapid changes in sea ice availability, we required continuous geospatial chronologies of foraging behavior. Satellite telemetry offered the only practical means to systematically collect such data; however, data transmission constraints of satellite data-collection systems limited the data volume that could be acquired. Although algorithms exist for reducing sensor data volumes for efficient transmission, none could meet our requirements. Consequently, we developed an algorithm for classifying hourly foraging behavior status aboard a tag with limited processing power. We found a 98% correspondence of our algorithm's classification with a test classification based on time–depth data recovered and characterized through multivariate analysis in a separate study. We then applied our algorithm within a telemetry system that relied on remotely deployed satellite tags. Data collected by these tags from Pacific walruses across their range during 2007–2015 demonstrated the consistency of foraging behavior collected by this strategy with data collected by data logging tags; and demonstrated the ability to collect geospatial behavioral chronologies with minimal missing data where recovery of data logging tags is precluded. Our strategy for developing a telemetry system may be applicable to any study requiring intelligent algorithms to continuously monitor behavior, and then compress those data into meaningful information that can be efficiently transmitted.
Yang, Liang; Jin, Di; He, Dongxiao; Fu, Huazhu; Cao, Xiaochun; Fogelman-Soulie, Francoise
2017-03-29
Due to the importance of community structure in understanding network and a surge of interest aroused on community detectability, how to improve the community identification performance with pairwise prior information becomes a hot topic. However, most existing semi-supervised community detection algorithms only focus on improving the accuracy but ignore the impacts of priors on speeding detection. Besides, they always require to tune additional parameters and cannot guarantee pairwise constraints. To address these drawbacks, we propose a general, high-speed, effective and parameter-free semi-supervised community detection framework. By constructing the indivisible super-nodes according to the connected subgraph of the must-link constraints and by forming the weighted super-edge based on network topology and cannot-link constraints, our new framework transforms the original network into an equivalent but much smaller Super-Network. Super-Network perfectly ensures the must-link constraints and effectively encodes cannot-link constraints. Furthermore, the time complexity of super-network construction process is linear in the original network size, which makes it efficient. Meanwhile, since the constructed super-network is much smaller than the original one, any existing community detection algorithm is much faster when using our framework. Besides, the overall process will not introduce any additional parameters, making it more practical.
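Hedged sketch of the super-network construction described above: must-link constraints are collapsed into super-nodes via connected components, and super-edges aggregate the original topology, with cannot-link pairs discouraged. The karate-club example, the aggregation rule, and the penalty factor are illustrative assumptions, not the paper's exact weighting scheme.

```python
import networkx as nx

G = nx.karate_club_graph()
must_link = [(0, 1), (1, 2), (32, 33)]
cannot_link = [(0, 33)]

# Super-nodes = connected components of the must-link graph.
ml = nx.Graph(must_link)
ml.add_nodes_from(G.nodes)
node_to_super = {}
for sid, comp in enumerate(nx.connected_components(ml)):
    for v in comp:
        node_to_super[v] = sid

# Super-edges aggregate original edge counts between super-nodes.
S = nx.Graph()
S.add_nodes_from(set(node_to_super.values()))
for u, v in G.edges:
    su, sv = node_to_super[u], node_to_super[v]
    if su != sv:
        w = S.get_edge_data(su, sv, {"weight": 0})["weight"]
        S.add_edge(su, sv, weight=w + 1)

# Discourage merging across cannot-link pairs (illustrative penalty).
for u, v in cannot_link:
    su, sv = node_to_super[u], node_to_super[v]
    if S.has_edge(su, sv):
        S[su][sv]["weight"] *= 0.1

print(G.number_of_nodes(), "nodes ->", S.number_of_nodes(), "super-nodes")
```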
NASA Technical Reports Server (NTRS)
Woodard, Mark; Rohrbaugh, Dave
1995-01-01
The Advanced Composition Explorer (ACE) spacecraft is designed to fly in a spin-stabilized attitude. The spacecraft will carry two attitude sensors - a digital fine Sun sensor and a charge coupled device (CCD) star tracker - to allow ground-based determination of the spacecraft attitude and spin rate. Part of the processing that must be performed on the CCD star tracker data is the star identification. Star data received from the spacecraft must be matched with star information in the SKYMAP catalog to determine exactly which stars the sensor is tracking. This information, along with the Sun vector measured by the Sun sensor, is used to determine the spacecraft attitude. Several existing star identification (star ID) systems were examined to determine whether they could be modified for use on the ACE mission. Star ID systems which exist for three-axis stabilized spacecraft tend to be complex in nature and many require fairly good knowledge of the spacecraft attitude, making their use for ACE excessive. Star ID systems used for spinners carrying traditional slit star sensors would have to be modified to model the CCD star tracker. The ACE star ID algorithm must also be robust, in that it will be able to correctly identify stars even though the attitude is not known to a high degree of accuracy, and must be very efficient to allow real-time star identification. The paper presents the star ID algorithm that was developed for ACE. Results from prototype testing are also presented to demonstrate the efficiency, accuracy, and robustness of the algorithm.
NASA Astrophysics Data System (ADS)
Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai
2016-03-01
Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work along with advanced total variation (TV) regularization for fan-beam sparse-view CT. The proposition of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as with the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
A Lightweight White-Box Symmetric Encryption Algorithm against Node Capture for WSNs †
Shi, Yang; Wei, Wujing; He, Zongjian
2015-01-01
Wireless Sensor Networks (WSNs) are often deployed in hostile environments and, thus, nodes can be potentially captured by an adversary. This is a typical white-box attack context, i.e., the adversary may have total visibility of the implementation of the build-in cryptosystem and full control over its execution platform. Handling white-box attacks in a WSN scenario is a challenging task. Existing encryption algorithms for white-box attack contexts require large memory footprint and, hence, are not applicable for wireless sensor networks scenarios. As a countermeasure against the threat in this context, in this paper, we propose a class of lightweight secure implementations of the symmetric encryption algorithm SMS4. The basic idea of our approach is to merge several steps of the round function of SMS4 into table lookups, blended by randomly generated mixing bijections. Therefore, the size of the implementations are significantly reduced while keeping the same security efficiency. The security and efficiency of the proposed solutions are theoretically analyzed. Evaluation shows our solutions satisfy the requirement of sensor nodes in terms of limited memory size and low computational costs. PMID:26007737
A-VCI: A flexible method to efficiently compute vibrational spectra
NASA Astrophysics Data System (ADS)
Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2017-06-01
The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate up to date computation; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation that exists today on such systems.
A-VCI: A flexible method to efficiently compute vibrational spectra.
Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2017-06-07
The adaptive vibrational configuration interaction algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm -1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate up to date computation; we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm -1 is the most accurate computation that exists today on such systems.
MULTINEST: an efficient and robust Bayesian inference tool for cosmology and particle physics
NASA Astrophysics Data System (ADS)
Feroz, F.; Hobson, M. P.; Bridges, M.
2009-10-01
We present further development and the first public release of our multimodal nested sampling algorithm, called MULTINEST. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson, which itself significantly outperformed existing Markov chain Monte Carlo techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MULTINEST algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla Λ cold dark matter model to include spatial curvature and a varying equation of state for dark energy. The MULTINEST software, which is fully parallelized using MPI and includes an interface to COSMOMC, is available at http://www.mrao.cam.ac.uk/software/multinest/. It will also be released as part of the SUPERBAYES package, for the analysis of supersymmetric theories of particle physics, at http://www.superbayes.org.
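A toy nested-sampling loop is sketched below to illustrate the evidence accumulation that MULTINEST performs far more efficiently with multimodal ellipsoidal sampling. The 1-D Gaussian likelihood, uniform prior, live-point count, and the naive rejection step used to draw above the likelihood threshold are all simplified assumptions, not the MULTINEST implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def loglike(x):
    return -0.5 * ((x - 0.3) / 0.05) ** 2   # unnormalized Gaussian

n_live, n_iter = 200, 1200
live = rng.uniform(0, 1, n_live)            # uniform prior on [0, 1]
live_logL = loglike(live)
logZ, logX_prev = -np.inf, 0.0

for i in range(1, n_iter + 1):
    worst = np.argmin(live_logL)
    logL_star = live_logL[worst]
    logX = -i / n_live                      # expected log prior volume
    logw = np.log(np.exp(logX_prev) - np.exp(logX))
    logZ = np.logaddexp(logZ, logw + logL_star)
    logX_prev = logX
    # Replace the worst point with a new prior draw above the threshold
    # (simple rejection sampling; MULTINEST uses ellipsoidal decomposition).
    while True:
        x_new = rng.uniform(0, 1)
        if loglike(x_new) > logL_star:
            break
    live[worst], live_logL[worst] = x_new, loglike(x_new)

print("log-evidence estimate:", logZ)
# Analytic value for comparison: log(0.05 * sqrt(2*pi)) ~ -2.08
```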
A ripple-spreading genetic algorithm for the aircraft sequencing problem.
Hu, Xiao-Bing; Di Paolo, Ezequiel A
2011-01-01
When genetic algorithms (GAs) are applied to combinatorial problems, permutation representations are usually adopted. As a result, such GAs are often confronted with feasibility and memory-efficiency problems. With the aircraft sequencing problem (ASP) as a study case, this paper reports on a novel binary-representation-based GA scheme for combinatorial problems. Unlike existing GAs for the ASP, which typically use permutation representations based on aircraft landing order, the new GA introduces a novel ripple-spreading model which transforms the original landing-order-based ASP solutions into value-based ones. In the new scheme, arriving aircraft are projected as points into an artificial space. A deterministic method inspired by the natural phenomenon of ripple-spreading on liquid surfaces is developed, which uses a few parameters as input to connect points on this space to form a landing sequence. A traditional GA, free of feasibility and memory-efficiency problems, can then be used to evolve the ripple-spreading related parameters in order to find an optimal sequence. Since the ripple-spreading model is the centerpiece of the new algorithm, it is called the ripple-spreading GA (RSGA). The advantages of the proposed RSGA are illustrated by extensive comparative studies for the case of the ASP.
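A hedged sketch of the ripple-spreading idea in the abstract above: aircraft are placed as points in an artificial plane, a ripple expanding from an evolvable epicentre reaches them in some order, and that order becomes the landing sequence, so a standard real-coded GA can evolve the ripple parameters instead of a permutation. The distance-ordering decoder, the coordinates, and the fitness function below are invented simplifications of the full RSGA.

```python
import numpy as np

rng = np.random.default_rng(3)
n_aircraft = 8
coords = rng.uniform(0, 10, size=(n_aircraft, 2))        # artificial-space positions
target_times = np.sort(rng.uniform(0, 60, n_aircraft))   # preferred landing times

def decode(epicentre):
    """Ripple-spreading decoder: landing order = order of ripple arrival."""
    dist = np.linalg.norm(coords - epicentre, axis=1)
    return np.argsort(dist)

def fitness(epicentre):
    order = decode(epicentre)
    slots = np.arange(n_aircraft) * 5.0                   # 5-minute landing slots
    return -np.sum(np.abs(target_times[order] - slots))   # minimize total deviation

# Minimal (mu+lambda)-style GA over the two epicentre coordinates.
pop = rng.uniform(0, 10, size=(20, 2))
for gen in range(50):
    children = pop + rng.normal(scale=0.5, size=pop.shape)   # Gaussian mutation
    pool = np.vstack([pop, children])
    scores = np.array([fitness(ind) for ind in pool])
    pop = pool[np.argsort(scores)[-20:]]                     # keep the best 20

best = pop[-1]
print("best epicentre:", best, "landing sequence:", decode(best))
```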
SemiBoost: boosting for semi-supervised learning.
Mallapragada, Pavan Kumar; Jin, Rong; Jain, Anil K; Liu, Yi
2009-11-01
Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this as the Semi-supervised improvement problem, to distinguish the proposed approach from the existing approaches. We design a metasemi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed as SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation by the iterative boosting algorithm, and 3) exploiting both manifold and cluster assumption in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to the state-of-the-art semi-supervised learning algorithms.
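Simplified sketch of the "wrap an existing supervised learner" idea from the abstract above: a self-training loop repeatedly trains the base classifier and promotes its most confident unlabeled predictions to the labeled set. The actual SemiBoost objective additionally uses a pairwise similarity matrix and boosting-style example weights, which are omitted here; the data set, base learner, and confidence threshold are assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(400, noise=0.2, random_state=0)
rng = np.random.default_rng(0)
labeled = rng.choice(len(X), size=20, replace=False)
is_labeled = np.zeros(len(X), dtype=bool)
is_labeled[labeled] = True
y_work = np.where(is_labeled, y, -1)          # -1 marks "unlabeled"

for it in range(10):
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X[is_labeled], y_work[is_labeled])
    proba = clf.predict_proba(X)
    conf = proba.max(axis=1)
    # Promote the most confident unlabeled points (threshold is an assumption).
    promote = (~is_labeled) & (conf > 0.95)
    if not promote.any():
        break
    y_work[promote] = clf.classes_[proba.argmax(axis=1)][promote]
    is_labeled |= promote

print("accuracy on all points:", (clf.predict(X) == y).mean())
```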
Saturation: An efficient iteration strategy for symbolic state-space generation
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Luettgen, Gerald; Siminiceanu, Radu; Bushnell, Dennis M. (Technical Monitor)
2001-01-01
This paper presents a novel algorithm for generating state spaces of asynchronous systems using Multi-valued Decision Diagrams. In contrast to related work, the next-state function of a system is not encoded as a single Boolean function, but as cross-products of integer functions. This permits the application of various iteration strategies to build a system's state space. In particular, this paper introduces a new elegant strategy, called saturation, and implements it in the tool SMART. On top of usually performing several orders of magnitude faster than existing BDD-based state-space generators, the algorithm's required peak memory is often close to the nal memory needed for storing the overall state spaces.
Optimal Solution for an Engineering Applications Using Modified Artificial Immune System
NASA Astrophysics Data System (ADS)
Padmanabhan, S.; Chandrasekaran, M.; Ganesan, S.; patan, Mahamed Naveed Khan; Navakanth, Polina
2017-03-01
Engineering optimization plays an essential role in several engineering application areas such as process design, product design, re-engineering and new product development. In engineering, a good solution is obtained by comparing a number of different candidate solutions using prior problem knowledge. Optimization algorithms provide systematic and efficient ways of constructing and comparing new design solutions in order to arrive at an optimal design, improving solution efficiency and achieving the best design outcome. In this paper, a new evolutionary Modified Artificial Immune System (MAIS) algorithm is used to optimize an engineering application of gear drive design. The results are compared with an existing design.
Solution of quadratic matrix equations for free vibration analysis of structures.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
An efficient digital computer procedure and the related numerical algorithm are presented herein for the solution of quadratic matrix equations associated with free vibration analysis of structures. Such a procedure enables accurate and economical analysis of natural frequencies and associated modes of discretized structures. The numerically stable algorithm is based on the Sturm sequence method, which fully exploits the banded form of associated stiffness and mass matrices. The related computer program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be substantially more accurate and economical than other existing procedures of such analysis. Numerical examples are presented for two structures - a cantilever beam and a semicircular arch.
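For contrast with the Sturm-sequence procedure in the abstract above, the sketch below shows a general-purpose way to solve a damped free-vibration quadratic eigenvalue problem (lambda^2 M + lambda C + K) x = 0 by companion linearization with dense LAPACK routines. The toy matrices are placeholders, not the cantilever-beam or arch models, and this is not the paper's banded, Sturm-sequence-based method.

```python
import numpy as np
from scipy.linalg import eig

n = 4
K = np.diag([4.0, 3.0, 2.0, 1.0])           # stiffness
M = np.eye(n)                                # mass
C = 0.05 * K                                 # light proportional damping

# Linearize: [ 0   I ] [x ]           [ I  0 ] [x ]
#            [-K  -C ] [lx] = lambda * [ 0  M ] [lx]
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
B = np.block([[np.eye(n), np.zeros((n, n))], [np.zeros((n, n)), M]])

eigvals, _ = eig(A, B)
# Eigenvalues come in conjugate pairs; keep one damped frequency per pair.
freqs = np.sort(np.abs(eigvals.imag))[::2]
print("approximate damped natural frequencies (rad/s):", freqs)
```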
Polynomial-Time Algorithms for Building a Consensus MUL-Tree
Cui, Yun; Jansson, Jesper
2012-01-01
A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host–parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists. PMID:22963134
Handling Dynamic Weights in Weighted Frequent Pattern Mining
NASA Astrophysics Data System (ADS)
Ahmed, Chowdhury Farhan; Tanbeer, Syed Khairuzzaman; Jeong, Byeong-Soo; Lee, Young-Koo
Even though weighted frequent pattern (WFP) mining is more effective than traditional frequent pattern mining because it can consider different semantic significances (weights) of items, existing WFP algorithms assume that each item has a fixed weight. But in real world scenarios, the weight (price or significance) of an item can vary with time. Reflecting these changes in item weight is necessary in several mining applications, such as retail market data analysis and web click stream analysis. In this paper, we introduce the concept of a dynamic weight for each item, and propose an algorithm, DWFPM (dynamic weighted frequent pattern mining), that makes use of this concept. Our algorithm can address situations where the weight (price or significance) of an item varies dynamically. It exploits a pattern growth mining technique to avoid the level-wise candidate set generation-and-test methodology. Furthermore, it requires only one database scan, so it is eligible for use in stream data mining. An extensive performance analysis shows that our algorithm is efficient and scalable for WFP mining using dynamic weights.
CNN universal machine as classificaton platform: an art-like clustering algorithm.
Bálya, David
2003-12-01
Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors keeping the advantages of the ART networks, such as robust, plastic and fault-tolerant behaviors. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new class creation. The algorithm is extended for supervised classification. The presented binary feature vector classification is implemented on the existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance after 100% accuracy on the training set.
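Minimal ART-style clustering sketch related to the abstract above, for binary feature vectors: each input is matched against stored prototypes; if the best match clears the vigilance threshold the prototype is updated, otherwise a new class is created automatically. The CNN-UM mapping in the paper implements this with analogic template operations, which are not modeled here; the match rule, vigilance value, and data are assumptions.

```python
import numpy as np

def art_cluster(vectors, vigilance=0.7):
    prototypes, labels = [], []
    for v in vectors:
        best, best_score = None, -1.0
        for i, p in enumerate(prototypes):
            score = np.sum(v & p) / max(np.sum(v), 1)   # match = |v AND p| / |v|
            if score > best_score:
                best, best_score = i, score
        if best is not None and best_score >= vigilance:
            prototypes[best] = prototypes[best] & v      # fast-learning update
            labels.append(best)
        else:
            prototypes.append(v.copy())                  # automatic new class
            labels.append(len(prototypes) - 1)
    return labels, prototypes

data = np.array([[1, 1, 1, 0, 0],
                 [1, 1, 0, 0, 0],
                 [0, 0, 1, 1, 1],
                 [0, 0, 0, 1, 1]], dtype=int)
labels, protos = art_cluster(data, vigilance=0.6)
print("class assignments:", labels)
```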
A mixed-mode traffic assignment model with new time-flow impedance function
NASA Astrophysics Data System (ADS)
Lin, Gui-Hua; Hu, Yu; Zou, Yuan-Yang
2018-01-01
Recently, with the wide adoption of electric vehicles, transportation network has shown different characteristics and been further developed. In this paper, we present a new time-flow impedance function, which may be more realistic than the existing time-flow impedance functions. Based on this new impedance function, we present an optimization model for a mixed-mode traffic network in which battery electric vehicles (BEVs) and gasoline vehicles (GVs) are chosen. We suggest two approaches to handle the model: One is to use the interior point (IP) algorithm and the other is to employ the sequential quadratic programming (SQP) algorithm. Three numerical examples are presented to illustrate the efficiency of these approaches. In particular, our numerical results show that more travelers prefer to choosing BEVs when the distance limit of BEVs is long enough and the unit operating cost of GVs is higher than that of BEVs, and the SQP algorithm is faster than the IP algorithm.
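Hedged sketch of the SQP approach mentioned above, using SciPy's SLSQP solver on a tiny two-route assignment problem. The two-link network, the BPR-style impedance, the system-optimal objective, and the demand value are placeholder assumptions; the paper's new time-flow impedance function and BEV/GV mode split are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

demand = 100.0                      # total vehicles on the O-D pair
t0 = np.array([10.0, 15.0])         # free-flow times of the two routes
cap = np.array([60.0, 80.0])        # route capacities

def impedance(x):
    return t0 * (1.0 + 0.15 * (x / cap) ** 4)     # BPR-style placeholder

def system_travel_time(x):
    return float(np.sum(x * impedance(x)))        # system-optimal objective

cons = ({"type": "eq", "fun": lambda x: np.sum(x) - demand},)
bounds = [(0.0, demand), (0.0, demand)]
res = minimize(system_travel_time, x0=[demand / 2, demand / 2],
               method="SLSQP", bounds=bounds, constraints=cons)

print("route flows:", res.x, "total travel time:", res.fun)
```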
A genetic algorithm approach in interface and surface structure optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Jian
The thesis is divided into two parts. In the first part a global optimization method is developed for interface and surface structure optimization. Two prototype systems are chosen to be studied. One is Si[001] symmetric tilted grain boundaries and the other is Ag/Au induced Si(111) surfaces. It is found that the Genetic Algorithm is very efficient in finding lowest energy structures in both cases. Not only can existing experimental structures be reproduced, but many new structures can also be predicted using the Genetic Algorithm. Thus it is shown that the Genetic Algorithm is an extremely powerful tool for materials structure prediction. The second part of the thesis is devoted to the explanation of an experimental observation of thermal radiation from three-dimensional tungsten photonic crystal structures. The experimental results seemed astounding and confusing, yet the theoretical models in the paper revealed the physical insight behind the phenomena and reproduced the experimental results well.
Efficient Delaunay Tessellation through K-D Tree Decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Peterka, Tom
Delaunay tessellations are fundamental data structures in computational geometry. They are important in data analysis, where they can represent the geometry of a point set or approximate its density. The algorithms for computing these tessellations at scale perform poorly when the input data is unbalanced. We investigate the use of k-d trees to evenly distribute points among processes and compare two strategies for picking split points between domain regions. Because the resulting point distributions no longer satisfy the assumptions of existing parallel Delaunay algorithms, we develop a new parallel algorithm that adapts to its input and prove its correctness. We evaluate the new algorithm using two late-stage cosmology datasets. The new running times are up to 50 times faster using the k-d tree decomposition compared with a regular grid decomposition. Moreover, in the unbalanced data sets, decomposing the domain into a k-d tree is up to five times faster than decomposing it into a regular grid.
A simple, remote, video based breathing monitor.
Regev, Nir; Wulich, Dov
2017-07-01
Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist on the market, some with vital-signs monitoring capabilities, but none of them remote. This paper presents a simple, yet efficient, real-time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well-known Lucas-Kanade algorithm on a frame-by-frame basis. A generalized likelihood ratio test is then utilized on each of the many interest points to detect which is moving in a harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
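Hedged sketch of the spectral-estimation step only: Pisarenko harmonic decomposition recovers the frequency of a single sinusoid (the breathing rhythm) from the noise-subspace eigenvector of a 3x3 autocorrelation matrix. The optical-flow tracking, GLRT harmonic test, and multi-point fusion from the paper are not shown; the synthetic signal, its 2 Hz sample rate, and the noise level are assumptions.

```python
import numpy as np

fs = 2.0                                 # assumed downsampled tracking rate (Hz)
t = np.arange(0, 120, 1 / fs)
breath_hz = 0.25                         # 15 breaths per minute
x = np.sin(2 * np.pi * breath_hz * t)
x += 0.3 * np.random.default_rng(0).normal(size=t.size)

def autocorr(x, lag):
    return np.mean(x[:len(x) - lag] * x[lag:])

# 3x3 autocorrelation matrix for one real sinusoid in white noise.
R = np.array([[autocorr(x, abs(i - j)) for j in range(3)] for i in range(3)])
w, V = np.linalg.eigh(R)
v = V[:, 0]                              # eigenvector of the smallest eigenvalue
roots = np.roots(v)                      # zeros of the eigenfilter v0 + v1 z^-1 + v2 z^-2
freq = abs(np.angle(roots[0])) * fs / (2 * np.pi)
print(f"estimated breathing rate: {freq * 60:.1f} breaths per minute")
```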
Pinning impulsive control algorithms for complex network
NASA Astrophysics Data System (ADS)
Sun, Wen; Lü, Jinhu; Chen, Shihua; Yu, Xinghuo
2014-03-01
In this paper, we further investigate the synchronization of complex dynamical network via pinning control in which a selection of nodes are controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only the impulsive signals at discrete time instants, which may greatly improve the communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex network and another for non-strongly connected complex network. It is suggested that in the strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.
Polynomial-time algorithms for building a consensus MUL-tree.
Cui, Yun; Jansson, Jesper; Sung, Wing-Kin
2012-09-01
A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host-parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists.
Penalty Dynamic Programming Algorithm for Dim Targets Detection in Sensor Systems
Huang, Dayu; Xue, Anke; Guo, Yunfei
2012-01-01
In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performances of tracking techniques are used as a feedback to the detection part. The feedback is constructed by a penalty term in the merit function, and the penalty term is a function of the possible target state estimation, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and it can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from one target or clutter is proposed to minimize track separation. Thus, the algorithm can be used in the multi-target situation with unknown target numbers. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations. PMID:22666074
Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David
2013-01-01
Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work TV regularization problem is addressed by ADAL which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix vector multiplications and a linear time shrinkage operation. Comparison of experimental results indicate that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
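A minimal sketch of the per-iteration building blocks mentioned above for a TV-regularized problem: forward differences (the discrete gradient in TV), and the linear-time shrinkage (isotropic soft-thresholding) step applied to the gradient field. This is not the authors' ADAL solver, only the operators such a solver iterates between; the toy image and threshold are assumptions.

```python
import numpy as np

def grad(u):
    """Forward-difference gradients along x and y (Neumann boundary)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def shrink(gx, gy, tau):
    """Isotropic soft-thresholding of the gradient field (the shrinkage step)."""
    mag = np.sqrt(gx ** 2 + gy ** 2)
    scale = np.maximum(mag - tau, 0) / np.maximum(mag, 1e-12)
    return gx * scale, gy * scale

u = np.zeros((64, 64))
u[16:48, 16:48] = 1.0                                     # toy piecewise-constant image
u_noisy = u + 0.1 * np.random.default_rng(0).normal(size=u.shape)

gx, gy = grad(u_noisy)
sx, sy = shrink(gx, gy, tau=0.05)
print("TV before shrinkage:", np.sum(np.hypot(gx, gy)))
print("TV after  shrinkage:", np.sum(np.hypot(sx, sy)))
```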
Memetic Algorithm-Based Multi-Objective Coverage Optimization for Wireless Sensor Networks
Chen, Zhi; Li, Shuai; Yue, Wenjing
2014-01-01
Maintaining effective coverage and extending the network lifetime as much as possible has become one of the most critical issues in the coverage of WSNs. In this paper, we propose a multi-objective coverage optimization algorithm for WSNs, namely MOCADMA, which models the coverage control of WSNs as the multi-objective optimization problem. MOCADMA uses a memetic algorithm with a dynamic local search strategy to optimize the coverage of WSNs and achieve the objectives such as high network coverage, effective node utilization and more residual energy. In MOCADMA, the alternative solutions are represented as the chromosomes in matrix form, and the optimal solutions are selected through numerous iterations of the evolution process, including selection, crossover, mutation, local enhancement, and fitness evaluation. The experiment and evaluation results show MOCADMA can have good capabilities in maintaining the sensing coverage, achieve higher network coverage while improving the energy efficiency and effectively prolonging the network lifetime, and have a significant improvement over some existing algorithms. PMID:25360579
Multi-label spacecraft electrical signal classification method based on DBN and random forest
Li, Ke; Yu, Nan; Li, Pengfei; Song, Shimin; Wu, Yalei; Li, Yang; Liu, Meng
2017-01-01
In spacecraft electrical signal characteristic data, there exists a large amount of data with high-dimensional features, high computational complexity, and low identification rates, which causes great difficulty in the fault diagnosis of spacecraft electronic load systems. This paper proposes a feature extraction method based on deep belief networks (DBN) and a classification method based on the random forest (RF) algorithm. The proposed approach mainly employs a multi-layer neural network to reduce the dimension of the original data before classification is applied. Firstly, wavelet denoising is used to pre-process the data. Secondly, the deep belief network is used to reduce the feature dimension and improve the classification rate for the electrical characteristic data. Finally, the random forest algorithm is used to classify the data, and the results are compared with other algorithms. The experimental results show that, compared with other algorithms, the proposed method shows excellent performance in terms of accuracy, computational efficiency, and stability in addressing spacecraft electrical signal data. PMID:28486479
Optimal integer resolution for attitude determination using global positioning system signals
NASA Technical Reports Server (NTRS)
Crassidis, John L.; Markley, F. Landis; Lightsey, E. Glenn
1998-01-01
In this paper, a new motion-based algorithm for GPS integer ambiguity resolution is derived. The first step of this algorithm converts the reference sightline vectors into body frame vectors. This is accomplished by an optimal vectorized transformation of the phase difference measurements. The result of this transformation leads to the conversion of the integer ambiguities to vectorized biases. This essentially converts the problem to the familiar magnetometer-bias determination problem, for which an optimal and efficient solution exists. Also, the formulation in this paper is re-derived to provide a sequential estimate, so that a suitable stopping condition can be found during the vehicle motion. The advantages of the new algorithm include: it does not require an a-priori estimate of the vehicle's attitude; it provides an inherent integrity check using a covariance-type expression; and it can sequentially estimate the ambiguities during the vehicle motion. The only disadvantage of the new algorithm is that it requires at least three non-coplanar baselines. The performance of the new algorithm is tested on a dynamic hardware simulator.
Model-Checking with Edge-Valued Decision Diagrams
NASA Technical Reports Server (NTRS)
Roux, Pierre; Siminiceanu, Radu I.
2010-01-01
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library along with state-of-the-art algorithms for building the transition relation and the state space of discrete state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds of the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention to represent in one formalism the best techniques available at the moment across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
Li, Xinya; Deng, Zhiqun Daniel; Rauchenstein, Lynn T.; ...
2016-04-01
Locating the position of fixed or mobile sources (i.e., transmitters) based on received measurements from sensors is an important research area that is attracting much research interest. In this paper, we present localization algorithms using time of arrivals (TOA) and time difference of arrivals (TDOA) to achieve high accuracy under line-of-sight conditions. The circular (TOA) and hyperbolic (TDOA) location systems both use nonlinear equations that relate the locations of the sensors and tracked objects. These nonlinear equations can develop accuracy challenges because of the existence of measurement errors and efficiency challenges that lead to high computational burdens. Least squares-based and maximum likelihood-based algorithms have become the most popular categories of location estimators. We also summarize the advantages and disadvantages of various positioning algorithms. By improving measurement techniques and localization algorithms, localization applications can be extended into the signal-processing-related domains of radar, sonar, the Global Positioning System, wireless sensor networks, underwater animal tracking, mobile communications, and multimedia.
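For the circular (TOA) case, a hedged sketch of a nonlinear least-squares position estimate is shown below; the sensor layout, propagation speed, and noise level are synthetic assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import least_squares

c = 1500.0                                    # propagation speed (illustrative, e.g. m/s underwater)
sensors = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.]])
true_pos = np.array([37.0, 62.0])
toa = np.linalg.norm(sensors - true_pos, axis=1) / c
toa += np.random.normal(0, 1e-5, toa.shape)   # measurement noise

def residuals(p):
    # Circular (TOA) model: predicted ranges minus measured ranges.
    return np.linalg.norm(sensors - p, axis=1) / c - toa

est = least_squares(residuals, x0=np.array([50.0, 50.0])).x
print("estimated source position:", est)
```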
Formulating face verification with semidefinite programming.
Yan, Shuicheng; Liu, Jianzhuang; Tang, Xiaoou; Huang, Thomas S
2007-11-01
This paper presents a unified solution to three unsolved problems existing in face verification with subspace learning techniques: selection of verification threshold, automatic determination of subspace dimension, and deducing feature fusing weights. In contrast to previous algorithms which search for the projection matrix directly, our new algorithm investigates a similarity metric matrix (SMM). With a certain verification threshold, this matrix is learned by a semidefinite programming approach, along with the constraints of the kindred pairs with similarity larger than the threshold, and inhomogeneous pairs with similarity smaller than the threshold. Then, the subspace dimension and the feature fusing weights are simultaneously inferred from the singular value decomposition of the derived SMM. In addition, the weighted and tensor extensions are proposed to further improve the algorithmic effectiveness and efficiency, respectively. Essentially, the verification is conducted within an affine subspace in this new algorithm and is, hence, called the affine subspace for verification (ASV). Extensive experiments show that the ASV can achieve encouraging face verification accuracy in comparison to other subspace algorithms, even without the need to explore any parameters.
Motion-adaptive spatio-temporal regularization for accelerated dynamic MRI.
Asif, M Salman; Hamilton, Lei; Brummer, Marijn; Romberg, Justin
2013-09-01
Accelerated magnetic resonance imaging techniques reduce signal acquisition time by undersampling k-space. A fundamental problem in accelerated magnetic resonance imaging is the recovery of quality images from undersampled k-space data. Current state-of-the-art recovery algorithms exploit the spatial and temporal structures in underlying images to improve the reconstruction quality. In recent years, compressed sensing theory has helped formulate mathematical principles and conditions that ensure recovery of (structured) sparse signals from undersampled, incoherent measurements. In this article, a new recovery algorithm, motion-adaptive spatio-temporal regularization, is presented that uses spatial and temporal structured sparsity of MR images in the compressed sensing framework to recover dynamic MR images from highly undersampled k-space data. In contrast to existing algorithms, our proposed algorithm models temporal sparsity using motion-adaptive linear transformations between neighboring images. The efficiency of motion-adaptive spatio-temporal regularization is demonstrated with experiments on cardiac magnetic resonance imaging for a range of reduction factors. Results are also compared with k-t FOCUSS with motion estimation and compensation, another recently proposed recovery algorithm for dynamic magnetic resonance imaging. Copyright © 2012 Wiley Periodicals, Inc.
Loeffler, Troy David; Chan, Henry; Narayanan, Badri; Cherukara, Mathew J; Gray, Stephen K; Sankaranarayanan, Subramanian K R S
2018-06-20
Coarse-grained molecular dynamics (MD) simulations represent a powerful approach to simulate longer time scale and larger length scale phenomena than those accessible to all-atom models. The gain in efficiency, however, comes at the cost of atomistic details. The reverse transformation, also known as back-mapping, of coarse-grained beads into their atomistic constituents represents a major challenge. Most existing approaches are limited to specific molecules or specific force fields and often rely on running a long atomistic MD simulation of the back-mapped configuration to arrive at an optimal solution. Such approaches are problematic when dealing with systems with high diffusion barriers. Here, we introduce a new extension of the configurational-bias Monte Carlo (CBMC) algorithm, which we term the crystalline configurational-bias Monte Carlo (C-CBMC) algorithm, that allows rapid and efficient conversion of a coarse-grained model back into its atomistic representation. Although the method is generic, we use a coarse-grained water model as a representative example and demonstrate the back-mapping or reverse transformation for model systems ranging from the ice-liquid water interface to amorphous and crystalline ice configurations. A series of simulations using the TIP4P/Ice model are performed to compare the new C-CBMC method to several other standard Monte Carlo and molecular dynamics based back-mapping techniques. In all cases, the C-CBMC algorithm is able to find optimal hydrogen-bonded configurations many thousands of evaluations/steps sooner than the other methods compared within this paper. For crystalline ice structures such as hexagonal, cubic, and cubic-hexagonal stacking-disordered structures, the C-CBMC was able to find structures that were between 0.05 and 0.1 eV/water molecule lower in energy than the ground-state energies predicted by the other methods. Detailed analysis of the atomistic structures shows significantly better global hydrogen positioning when contrasted with the existing simpler back-mapping methods. Our results demonstrate the efficiency and efficacy of our new back-mapping approach, especially for crystalline systems where simple force-field based relaxations have a tendency to get trapped in local minima.
Tactical Synthesis Of Efficient Global Search Algorithms
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2009-01-01
Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to the tactics used for determining indefinite integrals in calculus; that is, they suggest possible ways to attack the problem.
NASA Astrophysics Data System (ADS)
Iyer, Sridhar
2016-12-01
The ever-increasing global Internet traffic will inevitably lead to a serious upgrade of the current optical networks' capacity. The legacy infrastructure can be enhanced not only by increasing the capacity but also by adopting advanced modulation formats with increased spectral efficiency at higher data rates. In a transparent mixed-line-rate (MLR) optical network, different line rates, on different wavelengths, can coexist on the same fiber. Migration to data rates higher than 10 Gbps requires the implementation of phase modulation schemes. However, the co-existing on-off keying (OOK) channels cause critical physical layer impairments (PLIs) to the phase-modulated channels, mainly due to cross-phase modulation (XPM), which in turn limits the network's performance. In order to mitigate this effect, a more sophisticated PLI-Routing and Wavelength Assignment (PLI-RWA) scheme needs to be adopted. In this paper, we investigate the critical impairment for each data rate and the way it affects the quality of transmission (QoT). In view of the aforementioned, we present a novel dynamic PLI-RWA algorithm for MLR optical networks. The proposed algorithm is compared through simulations with the shortest-path and minimum-hop routing schemes. The simulation results show that the performance of the proposed algorithm is better than that of the existing schemes.
Efficient volumetric estimation from plenoptic data
NASA Astrophysics Data System (ADS)
Anglin, Paul; Reeves, Stanley J.; Thurow, Brian S.
2013-03-01
The commercial release of the Lytro camera, and the greater availability of plenoptic imaging systems in general, have given the image processing community cost-effective tools for light-field imaging. While this data is most commonly used to generate planar images at arbitrary focal depths, reconstruction of volumetric fields is also possible. Similarly, deconvolution is a technique that is conventionally used in planar image reconstruction, or deblurring, algorithms. However, when leveraged with the ability of a light-field camera to quickly reproduce multiple focal planes within an imaged volume, deconvolution offers a computationally efficient method of volumetric reconstruction. Related research has shown that light-field imaging systems in conjunction with tomographic reconstruction techniques are also capable of estimating the imaged volume and have been successfully applied to particle image velocimetry (PIV). However, while tomographic volumetric estimation through algorithms such as multiplicative algebraic reconstruction techniques (MART) has proven to be highly accurate, it is computationally intensive. In this paper, the reconstruction problem is shown to be solvable by deconvolution. Deconvolution offers significant improvement in computational efficiency through the use of fast Fourier transforms (FFTs) when compared to other tomographic methods. This work describes a deconvolution algorithm designed to reconstruct a 3-D particle field from simulated plenoptic data. A 3-D extension of existing 2-D FFT-based refocusing techniques is presented to further improve efficiency when computing object focal stacks and system point spread functions (PSF). Reconstruction artifacts are identified; their underlying source and methods of mitigation are explored where possible, and reconstructions of simulated particle fields are provided.
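A hedged sketch of FFT-based deconvolution applied to a toy particle field illustrates why the approach is cheap compared with iterative tomographic methods; the Gaussian PSF and the Wiener-style damping term eps are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, eps=1e-3):
    """Frequency-domain deconvolution: divide by the PSF spectrum, with a
    damping term to avoid amplifying noise where |H| is small."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

# Toy example: blur a sparse "particle field" with a Gaussian PSF, then invert.
field = np.zeros((128, 128))
field[40, 40] = field[90, 70] = 1.0
y, x = np.mgrid[-8:9, -8:9]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(psf, s=field.shape)))
recovered = wiener_deconvolve(blurred, psf)
```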
An Efficient Statistical Computation Technique for Health Care Big Data using R
NASA Astrophysics Data System (ADS)
Sushma Rani, N.; Srinivasa Rao, P., Dr; Parimala, P.
2017-08-01
Due to changes in living conditions and other factors, many critical health-related problems are arising. Diagnosing a problem at an earlier stage increases the chances of survival and fast recovery, which reduces the recovery time and the cost associated with treatment. One such medical issue is cancer, and breast cancer has been identified as the second leading cause of cancer death. If detected at an early stage it can be cured. Once a patient is found to have a breast tumor, it should be classified as cancerous or non-cancerous. The paper therefore uses the k-nearest neighbors (KNN) algorithm, one of the simplest, instance-based machine learning algorithms, to classify the data. Day-to-day new records are added, which leads to an increase in the data to be classified, and this tends toward a big data problem. The algorithm is implemented in R, which is the most popular platform for applying machine learning algorithms to statistical computing. Experimentation is conducted using various classification evaluation metrics for various values of k. The results show that the KNN algorithm outperforms existing models.
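Although the paper implements KNN in R, the same classification-and-evaluation loop can be sketched in a few lines; the sketch below uses Python with scikit-learn's built-in Wisconsin breast-cancer dataset and accuracy as the metric, which are stand-ins for whatever data and evaluation metrics the authors actually used.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (3, 5, 7, 9):                       # evaluate several neighborhood sizes
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(k, accuracy_score(y_test, clf.predict(X_test)))
```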
Clustering algorithm for determining community structure in large networks
NASA Astrophysics Data System (ADS)
Pujol, Josep M.; Béjar, Javier; Delgado, Jordi
2006-07-01
We propose an algorithm to find the community structure in complex networks based on the combination of spectral analysis and modularity optimization. The clustering produced by our algorithm is as accurate as the best algorithms in the literature on modularity optimization; however, the main asset of the algorithm is its efficiency. The best match for our algorithm is Newman's fast algorithm, which is the reference algorithm for clustering in large networks due to its efficiency. When both algorithms are compared, our algorithm outperforms the fast algorithm in both efficiency and accuracy of the clustering, in terms of modularity. Thus, the results suggest that the proposed algorithm is a good choice for analyzing the community structure of medium and large networks in the range of tens to hundreds of thousands of vertices.
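For reference, the modularity being optimized is the standard Newman-Girvan quantity; a small self-contained computation of Q for a given partition is sketched below (this is only the objective, not the paper's spectral clustering procedure).

```python
import numpy as np

def modularity(A, labels):
    """Newman-Girvan modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    k = A.sum(axis=1)
    two_m = A.sum()
    same = np.equal.outer(labels, labels)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by one edge: splitting at the bridge scores well.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))   # ~0.36
```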
Joint histogram-based cost aggregation for stereo matching.
Min, Dongbo; Lu, Jiangbo; Do, Minh N
2013-10-01
This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to significantly reduce the complexity of cost aggregation in stereo matching. Unlike previous methods, which have tried to reduce the complexity in terms of the size of the image and the matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.
A Survey of Algorithms to Efficiently Reconcile Sets of Information
2015-08-01
The survey concerns the data elements which constitute a Navy Command and Control (C2) data store; the problem is to determine which information must be sent between hosts. One of the components of MTC2 is the data layer, which abstracts the implementation of the underlying data store from MTC2 applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.
Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of bio-molecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with high-dimensional spaces of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and bio-molecules, such as penta-alanine and triazine polymer.
Heterogeneous Compression of Large Collections of Evolutionary Trees.
Matthews, Suzanne J
2015-01-01
Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists that handles large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections and will enable scientists to develop novel methods for relating heterogeneous collections of trees.
Online boosting for vehicle detection.
Chang, Wen-Chung; Cho, Chih-Wei
2010-06-01
This paper presents a real-time vision-based vehicle detection system employing an online boosting algorithm. It is an online AdaBoost approach for a cascade of strong classifiers instead of a single strong classifier. Most existing cascades of classifiers must be trained offline and cannot effectively be updated when online tuning is required. The idea is to develop a cascade of strong classifiers for vehicle detection that is capable of being online trained in response to changing traffic environments. To make the online algorithm tractable, the proposed system must efficiently tune parameters based on incoming images and up-to-date performance of each weak classifier. The proposed online boosting method can improve system adaptability and accuracy to deal with novel types of vehicles and unfamiliar environments, whereas existing offline methods rely much more on extensive training processes to reach comparable results and cannot further be updated online. Our approach has been successfully validated in real traffic environments by performing experiments with an onboard charge-coupled-device camera in a roadway vehicle.
Historical data learning based dynamic LSP routing for overlay IP/MPLS over WDM networks
NASA Astrophysics Data System (ADS)
Yu, Xiaojun; Xiao, Gaoxi; Cheng, Tee Hiang
2013-08-01
Overlay IP/MPLS over WDM networks are a promising architecture that has recently started to gain wide deployment. A desirable feature of such a network is to achieve efficient routing with limited information exchange between the IP/MPLS and the WDM layers. This paper studies dynamic label switched path (LSP) routing in overlay IP/MPLS over WDM networks. To enhance network performance while maintaining its simplicity, we propose to learn from the historical data of lightpath setup costs maintained by the IP-layer integrated service provider (ISP) when making routing decisions. Using a novel historical data learning scheme for logical link cost estimation, we develop a new dynamic LSP routing method named the Existing Link First (ELF) algorithm. Simulation results show that the proposed algorithm significantly outperforms existing ones under different traffic loads, with either limited or unlimited numbers of optical ports. The effects of the number of candidate routes, the add/drop ratio and the amount of historical data are also evaluated.
Linguistic hesitant fuzzy multi-criteria decision-making method based on evidential reasoning
NASA Astrophysics Data System (ADS)
Zhou, Huan; Wang, Jian-qiang; Zhang, Hong-yu; Chen, Xiao-hong
2016-01-01
Linguistic hesitant fuzzy sets (LHFSs), which can be used to represent decision-makers' qualitative preferences as well as reflect their hesitancy and inconsistency, have attracted a great deal of attention due to their flexibility and efficiency. This paper focuses on a multi-criteria decision-making approach that combines LHFSs with the evidential reasoning (ER) method. After reviewing existing studies of LHFSs, a new order relationship and Hamming distance between LHFSs are introduced and some linguistic scale functions are applied. Then, the ER algorithm is used to aggregate the distributed assessment of each alternative. Subsequently, the aggregated assessments on the criteria are further combined to obtain the overall value of each alternative. Furthermore, a nonlinear programming model is developed and genetic algorithms are used to obtain the optimal weights of the criteria. Finally, two illustrative examples are provided to show the feasibility and usability of the method, and a comparison analysis with the existing method is made.
On Federated and Proof Of Validation Based Consensus Algorithms In Blockchain
NASA Astrophysics Data System (ADS)
Ambili, K. N.; Sindhu, M.; Sethumadhavan, M.
2017-08-01
Almost all real-world activities have been digitized, and there are various client-server architecture based systems in place to handle them. These are all based on trust in third parties. There is an active attempt to successfully implement blockchain-based systems, which ensure that IT systems are immutable, avoid double spending, and provide cryptographic strength. A successful implementation of blockchain as the backbone of existing information technology systems is bound to eliminate various types of fraud and ensure quicker delivery of the item being traded. To adapt IT systems to a blockchain architecture, an efficient consensus algorithm needs to be designed. Blockchain based on proof of work first came up as the backbone of cryptocurrency. After this, several other methods with a variety of interesting features have come up. In this paper, we conduct a survey of existing attempts to achieve consensus in blockchain. A federated consensus method and a proof of validation method are compared.
Binary Interval Search: a scalable algorithm for counting interval intersections.
Layer, Ryan M; Skadron, Kevin; Robins, Gabriel; Hall, Ira M; Quinlan, Aaron R
2013-01-01
The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. https://github.com/arq5x/bits.
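The counting identity that binary-search approaches of this kind rest on can be sketched directly: the number of intervals intersecting a query equals the total count minus those ending before it starts minus those starting after it ends. The closed-coordinate convention and the bisect-based implementation below are illustrative, not the BITS code itself.

```python
from bisect import bisect_left, bisect_right

def count_intersections(intervals, query):
    """Count intervals [s, e] that overlap query [qs, qe], assuming closed coordinates."""
    starts = sorted(s for s, _ in intervals)
    ends = sorted(e for _, e in intervals)
    qs, qe = query
    n = len(intervals)
    ends_before = bisect_left(ends, qs)            # intervals ending strictly before qs
    starts_after = n - bisect_right(starts, qe)    # intervals starting strictly after qe
    return n - ends_before - starts_after

print(count_intersections([(1, 5), (4, 10), (12, 15)], (5, 6)))  # -> 2
```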
Secure Multicast Tree Structure Generation Method for Directed Diffusion Using A* Algorithms
NASA Astrophysics Data System (ADS)
Kim, Jin Myoung; Lee, Hae Young; Cho, Tae Ho
The application of wireless sensor networks to areas such as combat field surveillance, terrorist tracking, and highway traffic monitoring requires secure communication among the sensor nodes within the networks. Logical key hierarchy (LKH) is a tree-based key management model which provides secure group communication. When a sensor node is added to or evicted from the communication group, LKH updates the group key in order to ensure the security of the communications. In order to efficiently update the group key in directed diffusion, we propose a method for secure multicast tree structure generation, an extension to LKH that reduces the number of re-keying messages by considering the addition and eviction ratios of the history data. For the generation of the proposed key tree structure the A* algorithm is applied, in which the branching factor at each level can take on different values. The experimental results demonstrate the efficiency of the proposed key tree structure against existing key tree structures with fixed branching factors.
An efficient 3D R-tree spatial index method for virtual geographic environments
NASA Astrophysics Data System (ADS)
Zhu, Qing; Gong, Jun; Zhang, Yeting
A three-dimensional (3D) spatial index is required for real-time applications involving the integrated organization and management of above-ground, underground, indoor and outdoor objects in virtual geographic environments. As one of the most promising methods, the R-tree spatial index has received increasing attention in 3D geospatial database management. Since existing R-tree methods are usually limited by low efficiency, due to the critical overlap of sibling nodes and the uneven size of nodes, this paper introduces the k-means clustering method and employs the 3D overlap volume, 3D coverage volume and the minimum bounding box shape value of nodes as the integrative grouping criteria. A new spatial cluster grouping algorithm and R-tree insertion algorithm are then proposed. Experimental analysis of the comparative performance of spatial indexing shows that the new method drastically reduces the overlap of R-tree sibling nodes while maintaining a balance in the volumes of the nodes.
Planetary-scale surface water detection from space
NASA Astrophysics Data System (ADS)
Donchyts, G.; Baart, F.; Winsemius, H.; Gorelick, N.
2017-12-01
Accurate, efficient and high-resolution methods of surface water detection are needed for better water management. Datasets on surface water extent and dynamics are crucial for a better understanding of natural and human-made processes, and as input data for hydrological and hydraulic models. In spite of considerable progress in the harmonization of freely available satellite data, producing accurate and efficient higher-level surface water data products remains very challenging. This presentation will provide an overview of existing methods for surface water extent and change detection from multitemporal and multi-sensor satellite imagery. An algorithm to detect surface water changes from multi-temporal satellite imagery will be demonstrated, as well as its open-source implementation (http://aqua-monitor.deltares.nl). This algorithm was used to estimate global surface water changes at high spatial resolution. These changes include those due to climate change, land reclamation, reservoir construction/decommissioning, erosion/accretion, and many others. This presentation will demonstrate how open satellite data and open platforms such as Google Earth Engine have helped with this research.
New knowledge-based genetic algorithm for excavator boom structural optimization
NASA Astrophysics Data System (ADS)
Hua, Haiyan; Lin, Shuwen
2014-03-01
Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to effectively solve the excavator boom structural optimization problem. To improve the optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize the shallow and deep implicit constraint knowledge to guide the optimal search of the genetic algorithm cyclically. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. Then, new knowledge-based selection, crossover and mutation operators are proposed to integrate the optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. A comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and searching ability more remarkably than the other testing algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, which combines multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.
A community detection algorithm based on structural similarity
NASA Astrophysics Data System (ADS)
Guo, Xuchao; Hao, Xia; Liu, Yaqiong; Zhang, Li; Wang, Lu
2017-09-01
In order to further improve the efficiency and accuracy of community detection algorithms, a new algorithm named SSTCA (the community detection algorithm based on structural similarity with a threshold) is proposed. In this algorithm, the structural similarities are taken as the weights of edges, and a threshold k is used to remove edges whose weights are less than the threshold, which improves the computational efficiency. Tests were performed with the proposed algorithm on Zachary's network, the Dolphins social network and the Football dataset, and compared with the GN and SSNCA algorithms. The results show that the new algorithm is superior to the other algorithms in accuracy for dense networks, and the operating efficiency is markedly improved.
Medical image processing on the GPU - past, present and future.
Eklund, Anders; Dufort, Paul; Forsberg, Daniel; LaConte, Stephen M
2013-12-01
Graphics processing units (GPUs) are used today in a wide range of applications, mainly because they can dramatically accelerate parallel computing, are affordable and energy efficient. In the field of medical imaging, GPUs are in some cases crucial for enabling practical use of computationally demanding algorithms. This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing GPU implementations. The review covers GPU acceleration of basic image processing operations (filtering, interpolation, histogram estimation and distance transforms), the most commonly used algorithms in medical imaging (image registration, image segmentation and image denoising) and algorithms that are specific to individual modalities (CT, PET, SPECT, MRI, fMRI, DTI, ultrasound, optical imaging and microscopy). The review ends by highlighting some future possibilities and challenges. Copyright © 2013 Elsevier B.V. All rights reserved.
Available Transfer Capability Determination Using Hybrid Evolutionary Algorithm
NASA Astrophysics Data System (ADS)
Jirapong, Peeraool; Ongsakul, Weerakorn
2008-10-01
This paper proposes a new hybrid evolutionary algorithm (HEA) based on evolutionary programming (EP), tabu search (TS), and simulated annealing (SA) to determine the available transfer capability (ATC) of power transactions between different control areas in deregulated power systems. The optimal power flow (OPF)-based ATC determination is used to evaluate the feasible maximum ATC value within real and reactive power generation limits, line thermal limits, voltage limits, and voltage and angle stability limits. The HEA approach simultaneously searches for real power generations except slack bus in a source area, real power loads in a sink area, and generation bus voltages to solve the OPF-based ATC problem. Test results on the modified IEEE 24-bus reliability test system (RTS) indicate that ATC determination by the HEA could enhance ATC far more than those from EP, TS, hybrid TS/SA, and improved EP (IEP) algorithms, leading to an efficient utilization of the existing transmission system.
Schwarz-Christoffel Conformal Mapping based Grid Generation for Global Oceanic Circulation Models
NASA Astrophysics Data System (ADS)
Xu, Shiming
2015-04-01
We propose new grid generation algorithms for global ocean general circulation models (OGCMs). In contrast to conventional dipolar or tripolar grids based on analytical forms, the new algorithms are based on Schwarz-Christoffel (SC) conformal mapping with prescribed boundary information. While dealing with the conventional grid design problem of pole relocation, they also address more advanced issues of computational efficiency and the new requirements on OGCM grids arising from the recent trend of high-resolution and multi-scale modeling. The proposed grid generation algorithm could potentially achieve the alignment of grid lines to coastlines, enhanced spatial resolution in coastal regions, and easier computational load balance. Since the generated grids are still orthogonal curvilinear, they can be readily utilized in existing Bryan-Cox-Semtner type ocean models. The proposed methodology can also be applied to the grid generation task for regional ocean modeling when a complex land-ocean distribution is present.
NASA Astrophysics Data System (ADS)
Indik, Nathaniel; Fehrmann, Henning; Harke, Franz; Krishnan, Badri; Nielsen, Alex B.
2018-06-01
Efficient multidimensional template placement is crucial in computationally intensive matched-filtering searches for gravitational waves (GWs). Here, we implement the neighboring cell algorithm (NCA) to improve the detection volume of an existing compact binary coalescence (CBC) template bank. This algorithm has already been successfully applied for a binary millisecond pulsar search in data from the Fermi satellite. It repositions templates from overdense regions to underdense regions and reduces the number of templates that would have been required by a stochastic method to achieve the same detection volume. Our method is readily generalizable to other CBC parameter spaces. Here we apply this method to the aligned-single-spin neutron star-black hole binary coalescence inspiral-merger-ringdown gravitational wave parameter space. We show that the template nudging algorithm can attain the equivalent effectualness of the stochastic method with 12% fewer templates.
Reflection symmetry detection using locally affine invariant edge correspondence.
Wang, Zhaozhong; Tang, Zesheng; Zhang, Xiao
2015-04-01
Reflection symmetry detection has received increasing attention in recent years. State-of-the-art algorithms mainly use the matching of intensity-based features (such as SIFT) within a single image to find symmetry axes. This paper proposes a novel approach that establishes correspondences of locally affine invariant edge-based features, which are superior to intensity-based features in that they are insensitive to illumination variations and applicable to textureless objects. The locally affine invariance is achieved by simple linear algebra for efficient and robust computation, making the algorithm suitable for detection under object distortions like perspective projection. Commonly used edge detectors and a voting process are, respectively, used before and after the edge description and matching steps to form a complete reflection detection pipeline. Experiments are performed using synthetic and real-world images with both single and multiple reflection symmetry axes. The test results are compared with existing algorithms to validate the proposed method.
Auction-based Security Game for Multiuser Cooperative Networks
NASA Astrophysics Data System (ADS)
Wang, An; Cai, Yueming; Yang, Wendong; Cheng, Yunpeng
2013-04-01
In this paper, we develop an auction-based algorithm to allocate the relay power efficiently to improve the system secrecy rate in a cooperative network, where several source-destination pairs and one cooperative relay are involved. On the one hand, the cooperative relay assists these pairs to transmit under a peak power constraint. On the other hand, the relay is untrusted and acts as a passive eavesdropper. The whole auction process is completely distributed and no instantaneous channel state information exchange is needed. We also prove the existence and uniqueness of the Nash Equilibrium (NE) for the proposed power auction game. Moreover, the Pareto optimality is also validated. Simulation results show that our proposed auction-based algorithm can effectively improve the system secrecy rate. Besides, the proposed auction-based algorithm converges to the unique NE point within a finite number of iterations. More interestingly, we also find that the proposed power auction mechanism is cheat-proof.
A Simple Method to Simultaneously Detect and Identify Spikes from Raw Extracellular Recordings.
Petrantonakis, Panagiotis C; Poirazi, Panayiota
2015-01-01
The ability to track when and which neurons fire in the vicinity of an electrode, in an efficient and reliable manner can revolutionize the neuroscience field. The current bottleneck lies in spike sorting algorithms; existing methods for detecting and discriminating the activity of multiple neurons rely on inefficient, multi-step processing of extracellular recordings. In this work, we show that a single-step processing of raw (unfiltered) extracellular signals is sufficient for both the detection and identification of active neurons, thus greatly simplifying and optimizing the spike sorting approach. The efficiency and reliability of our method is demonstrated in both real and simulated data.
Scaling Deep Learning on GPU and Knights Landing clusters
You, Yang; Buluc, Aydin; Demmel, James
2017-09-26
The speed of deep neural networks training has become a big bottleneck of deep learning research and development. For example, training GoogleNet by ImageNet dataset on one Nvidia K20 GPU needs 21 days. To speed up the training process, the current deep learning systems heavily rely on the hardware accelerators. However, these accelerators have limited on-chip memory compared with CPUs. To handle large datasets, they need to fetch data from either CPU memory or remote processors. We use both self-hosted Intel Knights Landing (KNL) clusters and multi-GPU clusters as our target platforms. From an algorithm aspect, current distributed machine learning systems are mainly designed for cloud systems. These methods are asynchronous because of the slow network and high fault-tolerance requirement on cloud systems. We focus on Elastic Averaging SGD (EASGD) to design algorithms for HPC clusters. Original EASGD used round-robin method for communication and updating. The communication is ordered by the machine rank ID, which is inefficient on HPC clusters. First, we redesign four efficient algorithms for HPC systems to improve EASGD's poor scaling on clusters. Async EASGD, Async MEASGD, and Hogwild EASGD are faster than their existing counterparts (Async SGD, Async MSGD, and Hogwild SGD, resp.) in all the comparisons. Finally, we design Sync EASGD, which ties for the best performance among all the methods while being deterministic. In addition to the algorithmic improvements, we use some system-algorithm codesign techniques to scale up the algorithms. By reducing the percentage of communication from 87% to 14%, our Sync EASGD achieves 5.3x speedup over original EASGD on the same platform. We get 91.5% weak scaling efficiency on 4253 KNL cores, which is higher than the state-of-the-art implementation.
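The elastic-averaging update at the core of EASGD can be sketched serially: each worker takes a gradient step plus an elastic pull toward a shared center variable, and the center is pulled back toward the workers. The toy quadratic loss, step size and elasticity below are placeholders, and the sketch does not model the communication schemes (round-robin vs. the redesigned variants) discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_workers = 10, 4
target = rng.normal(size=dim)                       # minimum of a toy quadratic loss

def grad(x):
    return x - target                               # gradient of 0.5*||x - target||^2

workers = [rng.normal(size=dim) for _ in range(n_workers)]
center = np.zeros(dim)                              # the "elastic average" (center variable)
eta, rho = 0.05, 0.5                                # step size and elasticity (illustrative)

for step in range(200):
    for i in range(n_workers):
        # Local SGD step plus an elastic pull toward the center variable.
        workers[i] -= eta * (grad(workers[i]) + rho * (workers[i] - center))
        # The center moves toward the worker (symmetric elastic force).
        center += eta * rho * (workers[i] - center)

print(np.linalg.norm(center - target))              # should be small
```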
Gao, Wei; Zhang, Ya; Wang, Jianguo
2014-01-01
The integrated navigation system with strapdown inertial navigation system (SINS), Beidou (BD) receiver and Doppler velocity log (DVL) can be used in marine applications owing to the fact that the redundant and complementary information from different sensors can markedly improve the system accuracy. However, the existence of multisensor asynchrony will introduce errors into the system. In order to deal with the problem, conventionally the sampling interval is subdivided, which increases the computational complexity. In this paper, an innovative integrated navigation algorithm based on a Cubature Kalman filter (CKF) is proposed correspondingly. A nonlinear system model and observation model for the SINS/BD/DVL integrated system are established to more accurately describe the system. By taking multi-sensor asynchronization into account, a new sampling principle is proposed to make the best use of each sensor's information. Further, CKF is introduced in this new algorithm to enable the improvement of the filtering accuracy. The performance of this new algorithm has been examined through numerical simulations. The results have shown that the positional error can be effectively reduced with the new integrated navigation algorithm. Compared with the traditional algorithm based on EKF, the accuracy of the SINS/BD/DVL integrated navigation system is improved, making the proposed nonlinear integrated navigation algorithm feasible and efficient. PMID:24434842
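A hedged sketch of the cubature step that distinguishes a CKF from an EKF is shown below: the state is pushed through the nonlinear model at 2n cubature points and re-averaged. The two-state dynamics are a toy stand-in for the SINS/BD/DVL models.

```python
import numpy as np

def cubature_points(x, P):
    """Generate the 2n cubature points xi_i = x + sqrt(n) * S * (+/- e_i), with P = S S^T."""
    n = x.size
    S = np.linalg.cholesky(P)
    offsets = np.sqrt(n) * np.hstack([S, -S])       # n x 2n
    return x[:, None] + offsets                     # columns are the cubature points

def ckf_predict(x, P, f, Q):
    """Time update: push every cubature point through f, then re-average."""
    pts = cubature_points(x, P)
    prop = np.array([f(pts[:, i]) for i in range(pts.shape[1])]).T
    x_pred = prop.mean(axis=1)
    dev = prop - x_pred[:, None]
    P_pred = dev @ dev.T / prop.shape[1] + Q
    return x_pred, P_pred

# Toy two-state nonlinear model.
f = lambda s: np.array([s[0] + 0.1 * np.sin(s[1]), 0.95 * s[1]])
x, P, Q = np.array([1.0, 0.5]), np.eye(2) * 0.1, np.eye(2) * 0.01
x_pred, P_pred = ckf_predict(x, P, f, Q)
```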
Multi-agent systems design for aerospace applications
NASA Astrophysics Data System (ADS)
Waslander, Steven L.
2007-12-01
Engineering systems with independent decision makers are becoming increasingly prevalent and present many challenges in coordinating actions to achieve systems goals. In particular, this work investigates the applications of air traffic flow control and autonomous vehicles as motivation to define algorithms that allow agents to agree to safe, efficient and equitable solutions in a distributed manner. To ensure system requirements will be satisfied in practice, each method is evaluated for a specific model of agent behavior, be it cooperative or non-cooperative. The air traffic flow control problem is investigated from the point of view of the airlines, whose costs are directly affected by resource allocation decisions made by the Federal Aviation Administration in order to mitigate traffic disruptions caused by weather. Airlines are first modeled as cooperative, and a distributed algorithm is presented with various global cost metrics which balance efficient and equitable use of resources differently. Next, a competitive airline model is assumed and two market mechanisms are developed for allocating contested airspace resources. The resource market mechanism provides a solution for which convergence to an efficient solution can be guaranteed, and each airline will improve on the solution that would occur without its inclusion in the decision process. A lump-sum market is then introduced as an alternative mechanism, for which efficiency loss bounds exist if airlines attempt to manipulate prices. Initial convergence results for lump-sum markets are presented for simplified problems with a single resource. To validate these algorithms, two air traffic flow models are developed which extend previous techniques, the first a convenient convex model made possible by assuming constant velocity flow, and the second a more complex flow model with full inflow, velocity and rerouting control. Autonomous vehicle teams are envisaged for many applications including mobile sensing and search and rescue. To enable these high-level applications, multi-vehicle collision avoidance is solved using a cooperative, decentralized algorithm. For the development of coordination algorithms for autonomous vehicles, the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC) is presented. This testbed provides significant advantages over other aerial testbeds due to its small size and low maintenance requirements.
CoGI: Towards Compressing Genomes as an Image.
Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong
2015-01-01
Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
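The sequence-to-image transformation CoGI describes can be sketched as follows; the 2-bit base encoding and the fixed image width are illustrative choices, and the rectangular-partition coder applied afterwards is not reproduced.

```python
import numpy as np

BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}  # 2 bits per base (illustrative)

def genome_to_bitmap(seq, width=64):
    """Flatten the sequence into a bit stream and fold it into a width-column binary image."""
    bits = [b for base in seq for b in BASE_BITS.get(base, (0, 0))]
    rows = -(-len(bits) // width)                       # ceiling division
    bits += [0] * (rows * width - len(bits))            # pad the last row
    return np.array(bits, dtype=np.uint8).reshape(rows, width)

img = genome_to_bitmap("ACGTACGTTTGCA" * 50)
print(img.shape, img.sum())   # a 2-D 0/1 image, ready for rectangular-partition coding
```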
Kianmehr, Keivan; Alhajj, Reda
2008-09-01
In this study, we aim at building a classification framework, namely the CARSVM model, which integrates association rule mining and support vector machine (SVM). The goal is to benefit from advantages of both, the discriminative knowledge represented by class association rules and the classification power of the SVM algorithm, to construct an efficient and accurate classifier model that improves the interpretability problem of SVM as a traditional machine learning technique and overcomes the efficiency issues of associative classification algorithms. In our proposed framework: instead of using the original training set, a set of rule-based feature vectors, which are generated based on the discriminative ability of class association rules over the training samples, are presented to the learning component of the SVM algorithm. We show that rule-based feature vectors present a high-qualified source of discrimination knowledge that can impact substantially the prediction power of SVM and associative classification techniques. They provide users with more conveniences in terms of understandability and interpretability as well. We have used four datasets from UCI ML repository to evaluate the performance of the developed system in comparison with five well-known existing classification methods. Because of the importance and popularity of gene expression analysis as real world application of the classification model, we present an extension of CARSVM combined with feature selection to be applied to gene expression data. Then, we describe how this combination will provide biologists with an efficient and understandable classifier model. The reported test results and their biological interpretation demonstrate the applicability, efficiency and effectiveness of the proposed model. From the results, it can be concluded that a considerable increase in classification accuracy can be obtained when the rule-based feature vectors are integrated in the learning process of the SVM algorithm. In the context of applicability, according to the results obtained from gene expression analysis, we can conclude that the CARSVM system can be utilized in a variety of real world applications with some adjustments.
Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations
Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad
2013-01-01
Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput and the network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper Transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
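The parametric (Dinkelbach-type) transformation mentioned above replaces the ratio objective max f(x)/g(x) with a sequence of subproblems max f(x) - lambda*g(x), updating lambda until the optimum is numerically zero. The toy single-link throughput/power example below is only a hedged illustration of that iteration, not the paper's water-filling power allocation; the channel gain, circuit power, and bounds are arbitrary assumptions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Toy single-link example: throughput f(p) = log(1 + g*p), consumed power h(p) = p + p_circuit.
    g, p_circuit, p_max = 4.0, 0.5, 10.0
    f = lambda p: np.log1p(g * p)
    h = lambda p: p + p_circuit

    lam = 0.0
    for _ in range(30):                       # Dinkelbach iterations
        # Inner subproblem: maximize f(p) - lam * h(p) over the feasible power range.
        res = minimize_scalar(lambda p: -(f(p) - lam * h(p)), bounds=(0.0, p_max), method="bounded")
        p_star = res.x
        gap = f(p_star) - lam * h(p_star)     # optimality gap; zero at the optimal ratio
        lam = f(p_star) / h(p_star)           # update the ratio parameter
        if abs(gap) < 1e-9:
            break

    print(f"energy-efficient power ~ {p_star:.3f}, efficiency ~ {lam:.3f} nats/Joule")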
NASA Technical Reports Server (NTRS)
Litt, Jonathan S.; Simo, Donald L.
2007-01-01
This paper presents a preliminary demonstration of an automated health assessment tool, capable of real-time on-board operation using existing engine control hardware. The tool allows operators to discern how rapidly individual turboshaft engines are degrading. As the compressor erodes, performance is lost, and with it the ability to generate power. Thus, such a tool would provide an instant assessment of the engine's fitness to perform a mission, and would help to pinpoint any abnormal wear or performance anomalies before they became serious, thereby decreasing uncertainty and enabling improved maintenance scheduling. The research described in the paper utilized test stand data from a T700-GE-401 turboshaft engine that underwent sand-ingestion testing to scale a model-based compressor efficiency degradation estimation algorithm. This algorithm was then applied to real-time Health Usage and Monitoring System (HUMS) data from a T700-GE-701C to track compressor efficiency on-line. The approach uses an optimal estimator called a Kalman filter. The filter is designed to estimate the compressor efficiency using only data from the engine's sensors as input.
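For readers unfamiliar with the estimator, the sketch below shows a scalar Kalman filter tracking a slowly drifting efficiency parameter from a noisy sensor-derived measurement; the random-walk state model, noise levels, and simulated data are illustrative assumptions and not the T700 engine model used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated "truth": compressor efficiency slowly eroding from 0.85.
    n = 200
    truth = 0.85 - 0.0004 * np.arange(n)
    meas = truth + rng.normal(0.0, 0.01, n)      # noisy efficiency value derived from sensors

    # Scalar Kalman filter with a random-walk state model: x_k = x_{k-1} + w_k.
    q, r = 1e-6, 0.01**2                         # process and measurement noise variances
    x, p = 0.85, 1e-2                            # initial state estimate and covariance
    estimates = []
    for z in meas:
        p = p + q                                # predict (state unchanged under the random walk)
        k = p / (p + r)                          # Kalman gain
        x = x + k * (z - x)                      # update with the measurement residual
        p = (1.0 - k) * p
        estimates.append(x)

    print(f"final estimate {estimates[-1]:.4f} vs truth {truth[-1]:.4f}")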
Efficient selection of tagging single-nucleotide polymorphisms in multiple populations.
Howie, Bryan N; Carlson, Christopher S; Rieder, Mark J; Nickerson, Deborah A
2006-08-01
Common genetic polymorphism may explain a portion of the heritable risk for common diseases, so considerable effort has been devoted to finding and typing common single-nucleotide polymorphisms (SNPs) in the human genome. Many SNPs show correlated genotypes, or linkage disequilibrium (LD), suggesting that only a subset of all SNPs (known as tagging SNPs, or tagSNPs) need to be genotyped for disease association studies. Based on the genetic differences that exist among human populations, most tagSNP sets are defined in a single population and applied only in populations that are closely related. To improve the efficiency of multi-population analyses, we have developed an algorithm called MultiPop-TagSelect that finds a near-minimal union of population-specific tagSNP sets across an arbitrary number of populations. We present this approach as an extension of LD-select, a tagSNP selection method that uses a greedy algorithm to group SNPs into bins based on their pairwise association patterns, although the MultiPop-TagSelect algorithm could be used with any SNP tagging approach that allows choices between nearly equivalent SNPs. We evaluate the algorithm by considering tagSNP selection in candidate-gene resequencing data and lower density whole-chromosome data. Our analysis reveals that an exhaustive search is often intractable, while the developed algorithm can quickly and reliably find near-optimal solutions even for difficult tagSNP selection problems. Using populations of African, Asian, and European ancestry, we also show that an optimal multi-population set of tagSNPs can be substantially smaller (up to 44%) than a typical set obtained through independent or sequential selection.
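To make the greedy binning idea concrete, the sketch below groups SNPs whose pairwise r² exceeds a threshold, repeatedly choosing the SNP that tags the largest number of untagged SNPs as the bin's tagSNP. The r² matrix and threshold are illustrative assumptions, and this is a simplification of the LD-select style of binning rather than the MultiPop-TagSelect union step.

    import numpy as np

    def greedy_tag_bins(r2, threshold=0.8):
        """Greedy LD binning: each bin is (tagSNP index, set of indices tagged at r2 >= threshold)."""
        n = r2.shape[0]
        untagged = set(range(n))
        bins = []
        while untagged:
            # Pick the SNP covering the most untagged SNPs (a SNP always covers itself).
            best = max(untagged, key=lambda i: sum(r2[i, j] >= threshold for j in untagged))
            covered = {j for j in untagged if r2[best, j] >= threshold}
            bins.append((best, covered))
            untagged -= covered
        return bins

    # Toy symmetric r^2 matrix for 5 SNPs.
    r2 = np.array([[1.0, 0.9, 0.2, 0.1, 0.1],
                   [0.9, 1.0, 0.3, 0.1, 0.1],
                   [0.2, 0.3, 1.0, 0.85, 0.1],
                   [0.1, 0.1, 0.85, 1.0, 0.1],
                   [0.1, 0.1, 0.1, 0.1, 1.0]])
    print(greedy_tag_bins(r2))   # e.g. three bins: {0,1}, {2,3}, {4}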
Generalization of mixed multiscale finite element methods with applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C S
Many science and engineering problems exhibit scale disparity and high contrast. The small scale features cannot be omitted in the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some types of model reduction techniques are required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of the coarse spaces. Two enrichment approaches are proposed: one is based on the generalized multiscale finite element method (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, which is called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability and convergence analysis, together with exhaustive numerical experiments, are carried out to validate the proposed enrichment approaches.
Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.
Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo
2015-08-01
Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
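The orthogonal matching pursuit mentioned above is available off the shelf; the hedged sketch below fits a sparse linear regressor with scikit-learn on synthetic redundant features. The data and the choice of five nonzero coefficients are assumptions for illustration only, not the experimental setup of the brief.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n_samples, n_features, n_informative = 100, 50, 5

    # Synthetic regression problem with only a few informative (sparse) coefficients.
    X = rng.normal(size=(n_samples, n_features))
    true_coef = np.zeros(n_features)
    true_coef[rng.choice(n_features, n_informative, replace=False)] = rng.normal(size=n_informative)
    y = X @ true_coef + 0.01 * rng.normal(size=n_samples)

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_informative).fit(X, y)
    print("recovered support:", np.flatnonzero(omp.coef_))
    print("true support:     ", np.flatnonzero(true_coef))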
1987-03-31
processors. The symmetry-breaking algorithms give efficient ways to convert probabilistic algorithms to deterministic algorithms. Some of the... techniques have been applied to construct several efficient linear-processor algorithms for graph problems, including an O(lg* n)-time algorithm for (Δ + 1... On n-node graphs, the algorithm works in O(log² n) time using only n processors, in contrast to the previous best algorithm which used about n³
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include the J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables, sets up initial conditions, and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
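As a self-contained illustration of recurrence-based Taylor integration (not the SNAP implementation), the sketch below integrates the scalar ODE y' = y², whose Taylor coefficients follow from the Cauchy product recurrence, with a step size chosen from the magnitude of the last retained term; the order, tolerance, and test equation are arbitrary assumptions.

    import numpy as np

    def taylor_step(y0, order=20, tol=1e-12):
        """One Taylor step for y' = y**2 using the Cauchy product recurrence."""
        c = np.zeros(order + 1)
        c[0] = y0
        for k in range(order):
            # (k+1)*c[k+1] equals the k-th coefficient of y*y (Cauchy product).
            c[k + 1] = np.dot(c[:k + 1], c[k::-1]) / (k + 1)
        # Choose h so the contribution of the last retained term is about tol.
        h = (tol / (abs(c[order]) + 1e-300)) ** (1.0 / order)
        y_new = 0.0
        for ck in c[::-1]:          # Horner evaluation of the truncated series at h
            y_new = y_new * h + ck
        return y_new, h

    t, y = 0.0, 1.0
    while t < 0.5:                  # note: the final step may slightly overshoot t = 0.5
        y, h = taylor_step(y)
        t += h

    print(f"t = {t:.6f}, Taylor y = {y:.10f}, exact y = {1.0/(1.0 - t):.10f}")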
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, by rendering it able to accurately evaluate the derivatives of the time varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed. Two applications of the technique are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time dependent sensitivity derivatives are computed in run times comparable to the ones required by the FD approach.
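The complex Taylor's series expansion underlying the sensitivity analysis is the complex-step derivative, f'(x) approximately equal to Im f(x + ih)/h, which avoids the subtractive cancellation of finite differences. The scalar demo below (the test function and step sizes are arbitrary choices) contrasts the two; it is not the turbomachinery solver's differentiation.

    import numpy as np

    f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)   # classic test function
    x0 = 1.5

    def complex_step(f, x, h=1e-30):
        # No subtraction of nearly equal numbers, so h can be tiny without round-off loss.
        return np.imag(f(x + 1j * h)) / h

    def central_diff(f, x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2.0 * h)

    exact = complex_step(f, x0)            # effectively exact to machine precision
    print("complex step :", complex_step(f, x0))
    print("central diff :", central_diff(f, x0), " (error ~", abs(central_diff(f, x0) - exact), ")")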
Extreme Quantum Memory Advantage for Rare-Event Sampling
NASA Astrophysics Data System (ADS)
Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.
2018-02-01
We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N → ∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N → ∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.
NASA Astrophysics Data System (ADS)
Suess, Daniel; Rudnicki, Łukasz; Maciel, Thiago O.; Gross, David
2017-09-01
The outcomes of quantum mechanical measurements are inherently random. It is therefore necessary to develop stringent methods for quantifying the degree of statistical uncertainty about the results of quantum experiments. For the particularly relevant task of quantum state tomography, it has been shown that a significant reduction in uncertainty can be achieved by taking the positivity of quantum states into account. However—the large number of partial results and heuristics notwithstanding—no efficient general algorithm is known that produces an optimal uncertainty region from experimental data, while making use of the prior constraint of positivity. Here, we provide a precise formulation of this problem and show that the general case is NP-hard. Our result leaves room for the existence of efficient approximate solutions, and therefore does not in itself imply that the practical task of quantum uncertainty quantification is intractable. However, it does show that there exists a non-trivial trade-off between optimality and computational efficiency for error regions. We prove two versions of the result: one for frequentist and one for Bayesian statistics.
Nonparametric Methods in Astronomy: Think, Regress, Observe—Pick Any Three
NASA Astrophysics Data System (ADS)
Steinhardt, Charles L.; Jermyn, Adam S.
2018-02-01
Telescopes are much more expensive than astronomers, so it is essential to minimize required sample sizes by using the most data-efficient statistical methods possible. However, the most commonly used model-independent techniques for finding the relationship between two variables in astronomy are flawed. In the worst case they can lead without warning to subtly yet catastrophically wrong results, and even in the best case they require more data than necessary. Unfortunately, there is no single best technique for nonparametric regression. Instead, we provide a guide for how astronomers can choose the best method for their specific problem and provide a python library with both wrappers for the most useful existing algorithms and implementations of two new algorithms developed here.
Identifying Vulnerabilities and Hardening Attack Graphs for Networked Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saha, Sudip; Vullinati, Anil K.; Halappanavar, Mahantesh
We investigate efficient security control methods for protecting against vulnerabilities in networked systems. A large number of interdependent vulnerabilities typically exist in the computing nodes of a cyber-system; as vulnerabilities get exploited, starting from low level ones, they open up the doors to more critical vulnerabilities. These cannot be understood just by a topological analysis of the network, and we use the attack graph abstraction of Dewri et al. to study these problems. In contrast to earlier approaches based on heuristics and evolutionary algorithms, we study rigorous methods for quantifying the inherent vulnerability and hardening cost for the system. We develop algorithms with provable approximation guarantees, and evaluate them for real and synthetic attack graphs.
An Approach for Peptide Identification by De Novo Sequencing of Mixture Spectra.
Liu, Yi; Ma, Bin; Zhang, Kaizhong; Lajoie, Gilles
2017-01-01
Mixture spectra occur quite frequently in a typical wet-lab mass spectrometry experiment; they result from the concurrent fragmentation of multiple precursors. The ability to efficiently and confidently identify mixture spectra is essential to alleviate the existing bottleneck of low mass spectra identification rates. However, most of the traditional computational methods are not suitable for interpreting mixture spectra, because they still make the assumption that the acquired spectra come from the fragmentation of a single precursor. In this manuscript, we formulate the mixture spectra de novo sequencing problem mathematically and propose a dynamic programming algorithm for the problem. Additionally, we use both simulated and real mixture spectra data sets to verify the merits of the proposed algorithm.
Fast Fourier transform-based Retinex and alpha-rooting color image enhancement
NASA Astrophysics Data System (ADS)
Grigoryan, Artyom M.; Agaian, Sos S.; Gonzales, Analysa M.
2015-05-01
Efficiency in terms of both accuracy and speed is highly important in any system, especially when it comes to image processing. The purpose of this paper is to improve an existing implementation of multi-scale retinex (MSR) by utilizing the fast Fourier transform (FFT) within the illumination estimation step of the algorithm to improve the speed at which Gaussian blurring filters are applied to the original input image. In addition, alpha-rooting can be used as a separate technique to achieve a sharper image, and its results can be fused with those of the retinex algorithm to achieve the best possible image, as shown by the values of the considered color image enhancement measure (EMEC).
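The speed-up comes from performing the large-kernel Gaussian blurs of the illumination estimation step as multiplications in the frequency domain. The sketch below blurs one image channel with numpy's FFT; the random image, scales, and log-ratio combination are placeholder assumptions, not the paper's MSR pipeline.

    import numpy as np

    def fft_gaussian_blur(channel, sigma):
        """Blur a 2D array by multiplying its spectrum with a Gaussian transfer function."""
        h, w = channel.shape
        fy = np.fft.fftfreq(h)[:, None]
        fx = np.fft.fftfreq(w)[None, :]
        # Transfer function of a spatial Gaussian with standard deviation sigma (pixels).
        transfer = np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))
        return np.real(np.fft.ifft2(np.fft.fft2(channel) * transfer))

    rng = np.random.default_rng(0)
    img = rng.random((256, 256))
    illum = [fft_gaussian_blur(img, s) for s in (15, 80, 250)]      # multi-scale surrounds
    retinex = np.mean([np.log1p(img) - np.log1p(i) for i in illum], axis=0)
    print(retinex.shape, float(retinex.mean()))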
Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.
Newberg, Lee A
2008-08-15
A backtrace through a dynamic programming algorithm's intermediate results in search of an optimal path, or to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g. cache) existing approaches store selected stages of the computation, and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++-code for optimal backtrace is available in the Supplementary Materials. Supplementary data is available at Bioinformatics online.
Biological sequence compression algorithms.
Matsumoto, T; Sadakane, K; Imai, H
2000-01-01
Today, more and more DNA sequences are becoming available. The information about DNA sequences is stored in molecular biology databases. The size and importance of these databases will keep growing in the future; therefore this information must be stored and communicated efficiently. Furthermore, sequence compression can be used to define similarities between biological sequences. Standard compression algorithms such as gzip or compress cannot compress DNA sequences, but only expand them in size. On the other hand, CTW (Context Tree Weighting Method) can compress DNA sequences to less than two bits per symbol. These algorithms do not use the special structures of biological sequences. Two characteristic structures of DNA sequences are known. One is called palindromes or reverse complements, and the other structure is approximate repeats. Several algorithms specific to DNA sequences that use these structures can compress them to less than two bits per symbol. In this paper, we improve CTW so that the characteristic structures of DNA sequences can be exploited. Before encoding the next symbol, the algorithm searches for an approximate repeat or palindrome using hashing and dynamic programming. If there is a palindrome or an approximate repeat of sufficient length, then our algorithm represents it by its length and distance. By using this preprocessing, the new program achieves a slightly higher compression ratio than existing DNA-oriented compression algorithms. We also describe a new compression algorithm for protein sequences.
An efficient quantum algorithm for spectral estimation
NASA Astrophysics Data System (ADS)
Steffens, Adrian; Rebentrost, Patrick; Marvian, Iman; Eisert, Jens; Lloyd, Seth
2017-03-01
We develop an efficient quantum implementation of an important signal processing algorithm for line spectral estimation: the matrix pencil method, which determines the frequencies and damping factors of signals consisting of finite sums of exponentially damped sinusoids. Our algorithm provides a quantum speedup in a natural regime where the sampling rate is much higher than the number of sinusoid components. Along the way, we develop techniques that are expected to be useful for other quantum algorithms as well—consecutive phase estimations to efficiently make products of asymmetric low rank matrices classically accessible and an alternative method to efficiently exponentiate non-Hermitian matrices. Our algorithm features an efficient quantum-classical division of labor: the time-critical steps are implemented in quantum superposition, while an interjacent step, requiring much fewer parameters, can operate classically. We show that frequencies and damping factors can be obtained in time logarithmic in the number of sampling points, exponentially faster than known classical algorithms.
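A purely classical reference implementation of the matrix pencil method helps to clarify what the quantum algorithm speeds up: Hankel matrices are built from the samples and the signal poles are the dominant (generalized) eigenvalues. The sketch below recovers frequencies and damping factors with numpy; the sampling rate, model order, pencil parameter, and test signal are arbitrary assumptions, not the paper's quantum routine.

    import numpy as np

    # Synthetic signal: two exponentially damped sinusoids sampled at dt.
    dt, n = 0.01, 200
    t = dt * np.arange(n)
    y = 1.0 * np.exp(-1.5 * t) * np.cos(2 * np.pi * 12.0 * t) \
      + 0.7 * np.exp(-0.5 * t) * np.cos(2 * np.pi * 31.0 * t)

    m = 4                     # model order: 2 real sinusoids -> 4 complex poles
    L = n // 3                # pencil parameter, typically between N/3 and N/2
    # Hankel data matrix and its two shifted submatrices.
    Y = np.array([y[i:i + L + 1] for i in range(n - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]

    # Signal poles are the dominant eigenvalues of pinv(Y1) @ Y2.
    eigvals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    poles = eigvals[np.argsort(-np.abs(eigvals))][:m]

    freqs = np.angle(poles) / (2 * np.pi * dt)      # Hz (conjugate pairs give +/- values)
    damping = np.log(np.abs(poles)) / dt            # 1/s (the negative of the decay rate)
    print("frequencies [Hz]:", np.round(np.sort(freqs), 2))
    print("damping [1/s]  :", np.round(damping, 2))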
Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks.
Zheng, Haifeng; Li, Jiayin; Feng, Xinxin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal
2017-11-08
Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing only considers either multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for the spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for spatial-temporal compressible signals can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared to the other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs.
Switching theory-based steganographic system for JPEG images
NASA Astrophysics Data System (ADS)
Cherukuri, Ravindranath C.; Agaian, Sos S.
2007-04-01
Cellular communications constitute a significant portion of the global telecommunications market. Therefore, the need for secure communication over a mobile platform has increased exponentially. Steganography is the art of hiding critical data in an innocuous signal, which answers the above need. JPEG is one of the most commonly used formats for storing and transmitting images on the web. In addition, pictures captured using mobile cameras are mostly in JPEG format. In this article, we introduce a switching theory based steganographic system for JPEG images which is applicable to mobile and computer platforms. The proposed algorithm uses the fact that the energy distribution among the quantized AC coefficients varies from block to block and coefficient to coefficient. Existing approaches are effective with a subset of these coefficients, but become ineffective when employed over all the coefficients. Therefore, we propose an approach that treats each set of AC coefficients within a different framework, thus enhancing the performance of the approach. The proposed system offers high capacity and embedding efficiency simultaneously while withstanding simple statistical attacks. In addition, the embedded information can be retrieved without prior knowledge of the cover image. Based on simulation results, the proposed method demonstrates an improved embedding capacity over existing algorithms while maintaining a high embedding efficiency and preserving the statistics of the JPEG image after hiding information.
Improving transmission efficiency of large sequence alignment/map (SAM) files.
Sakib, Muhammad Nazmus; Tang, Jijun; Zheng, W Jim; Huang, Chin-Tser
2011-01-01
Research in bioinformatics primarily involves the collection and analysis of a large volume of genomic data. Naturally, it demands efficient storage and transfer of this huge amount of data. In recent years, some research has been done to find efficient compression algorithms to reduce the size of various sequencing data. One way to improve the transmission time of large files is to apply maximum lossless compression to them. In this paper, we present SAMZIP, a specialized encoding scheme for sequence alignment data in SAM (Sequence Alignment/Map) format, which improves the compression ratio of existing compression tools. In order to achieve this, we exploit prior knowledge of the file format and specifications. Our experimental results show that our encoding scheme improves the compression ratio, thereby reducing overall transmission time significantly.
High efficiency processing for reduced amplitude zones detection in the HRECG signal
NASA Astrophysics Data System (ADS)
Dugarte, N.; Álvarez, A.; Balacco, J.; Mercado, G.; Gonzalez, A.; Dugarte, E.; Olivares, A.
2016-04-01
Summary - This article presents part of a broader research effort, planned over the medium to long term, intended to establish a new philosophy of surface electrocardiogram analysis. This research aims to find indicators of cardiovascular disease in its early stage that may go unnoticed with conventional electrocardiography. This paper reports the development of processing software which collects some existing techniques and incorporates novel methods for the detection of reduced amplitude zones (RAZ) in the high resolution electrocardiographic (HRECG) signal. The algorithm consists of three stages: an efficient processing step for QRS detection, an averaging filter using correlation techniques, and a step for RAZ detection. Preliminary results show the efficiency of the system and point to the incorporation of new signal analysis techniques involving all 12 leads.
Mixed Element Type Unstructured Grid Generation for Viscous Flow Applications
NASA Technical Reports Server (NTRS)
Marcum, David L.; Gaither, J. Adam
2000-01-01
A procedure is presented for efficient generation of high-quality unstructured grids suitable for CFD simulation of high Reynolds number viscous flow fields. Layers of anisotropic elements are generated by advancing along prescribed normals from solid boundaries. The points are generated such that either pentahedral or tetrahedral elements with an implied connectivity can be directly recovered. As points are generated they are temporarily attached to a volume triangulation of the boundary points. This triangulation allows efficient local search algorithms to be used when checking merging layers. The existing advancing-front/local-reconnection procedure is used to generate isotropic elements outside of the anisotropic region. Results are presented for a variety of applications. The results demonstrate that high-quality anisotropic unstructured grids can be efficiently and consistently generated for complex configurations.
NASA Astrophysics Data System (ADS)
Mottaeva, Angela
2017-10-01
The article considers existing approaches to energy saving. In the author's opinion, a systems approach alone is not enough to achieve the goals of increased energy efficiency and energy economy, which are highly relevant to the world scientific community today, as mankind has reached a certain power and ecological threshold. The author argues the need for a programme-and-target approach to energy saving and to increasing the energy efficiency of buildings. The problems of energy saving under current conditions are identified, and a provisional algorithm for the programme-and-target approach to energy efficiency is outlined. The expected results from the introduction of the programme-and-target approach are also presented in the article.
Multiple sequence alignment using multi-objective based bacterial foraging optimization algorithm.
Rani, R Ranjani; Ramyachitra, D
2016-12-01
Multiple sequence alignment (MSA) is a widespread approach in computational biology and bioinformatics. MSA deals with how sequences of nucleotides and amino acids are aligned with the best possible alignment and a minimum number of gaps between them, which points to the functional, evolutionary and structural relationships among the sequences. The computation of MSA nevertheless remains a challenging task in terms of providing accurate and statistically significant alignments. In this work, the Bacterial Foraging Optimization Algorithm was employed to align the biological sequences, which resulted in a non-dominated optimal solution. It employs multiple objectives: maximization of similarity, non-gap percentage and conserved blocks, and minimization of gap penalty. The BAliBASE 3.0 benchmark database was utilized to examine the proposed algorithm against other methods. In this paper, two algorithms have been proposed: a Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC) and the Bacterial Foraging Optimization Algorithm. It was found that the Hybrid Genetic Algorithm with Artificial Bee Colony performed better than the existing optimization algorithms, but the conserved blocks were still not obtained using GA-ABC. BFO was then used for the alignment, and the conserved blocks were obtained. The proposed Multi-Objective Bacterial Foraging Optimization Algorithm (MO-BFO) was compared with the widely used MSA methods Clustal Omega, Kalign, MUSCLE, MAFFT, Genetic Algorithm (GA), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO) and Hybrid Genetic Algorithm with Artificial Bee Colony (GA-ABC). The final results show that the proposed MO-BFO algorithm yields better alignments than most widely used methods. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Synthesis of Greedy Algorithms Using Dominance Relations
NASA Technical Reports Server (NTRS)
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
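Activity selection, one of the examples cited above, is the archetypal greedy algorithm: sorting by finishing time and always taking the next compatible activity is optimal, and the dominance-relation view explains why a partial schedule ending earlier dominates one ending later. The snippet below is a standard textbook sketch, not the synthesis framework of the paper.

    def select_activities(intervals):
        """Greedy activity selection: maximize the number of non-overlapping intervals."""
        chosen, last_finish = [], float("-inf")
        for start, finish in sorted(intervals, key=lambda iv: iv[1]):   # earliest finish first
            if start >= last_finish:            # compatible with the schedule chosen so far
                chosen.append((start, finish))
                last_finish = finish            # the earlier the finish, the more room remains
        return chosen

    print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
    # -> [(1, 4), (5, 7), (8, 11)]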
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1991-01-01
Efficient iterative solution methods are being developed for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. Moreover, the extra work required by iterative schemes can be structured to perform efficiently on current and future generation scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arise in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a noniterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies, matrix additions and subtractions, can all be vectorized and parallelized efficiently.
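To make the role of GMRES at each implicit time step concrete, the sketch below solves one linearized step of a 1D diffusion problem with scipy's restarted GMRES, using a small Krylov dimension as the abstract suggests. The model problem, grid, and restart value are illustrative assumptions rather than the Navier-Stokes solver described.

    import numpy as np
    from scipy.sparse import diags, identity
    from scipy.sparse.linalg import gmres

    # Backward-Euler step for u_t = nu * u_xx:  (I - dt*nu*D2) u^{n+1} = u^n
    n, dt, nu, dx = 200, 1e-3, 1.0, 1.0 / 200
    D2 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / dx**2
    A = identity(n) - dt * nu * D2

    x = np.linspace(0.0, 1.0, n)
    u_old = np.exp(-100.0 * (x - 0.5) ** 2)          # previous time level, used as initial guess

    # Restarted GMRES with a small Krylov subspace (on the order of 5-20 vectors per cycle);
    # tolerances are left at their defaults here.
    u_new, info = gmres(A, u_old, x0=u_old, restart=20)
    print("converged" if info == 0 else f"info={info}", ", change:", np.linalg.norm(u_new - u_old))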
NASA Astrophysics Data System (ADS)
Wojdyga, Krzysztof; Malicki, Marcin
2017-11-01
The constant drive to improve energy efficiency forces the carrying out of activities aimed at reducing energy consumption and hence decreasing the emission of pollutants into the atmosphere. Cooling demand, both for air-conditioning and process cooling, plays an increasingly important role in the balance of the Polish electricity generation and distribution system in summer. In recent years, demand for electricity during the summer months has been steadily and significantly increasing, leading to deficits of energy availability during particularly hot periods. This causes growing importance of, and interest in, trigeneration power generation sources and heat recovery systems producing chilled water. The key component of such a system is a thermally driven chiller, most often an absorption chiller based on a lithium bromide and water mixture. Absorption cooling systems also exist in Poland as stand-alone systems, supplied with heat from various sources, generated solely for them or recovered as waste or otherwise unused energy. The publication presents a simple algorithm designed to reduce the amount of heat required to supply absorption chillers producing chilled water for air-conditioning purposes by reducing the temperature of the cooling water, and its impact on decreasing emissions of harmful substances into the atmosphere. The scale of the environmental advantages has been rated for specific sources, which enabled evaluation and estimation of the effect of implementing the simple algorithm at existing national sources.
General subspace learning with corrupted training data via graph embedding.
Bao, Bing-Kun; Liu, Guangcan; Hong, Richang; Yan, Shuicheng; Xu, Changsheng
2013-11-01
We address the following subspace learning problem: supposing we are given a set of labeled, corrupted training data points, how to learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data. Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms.
Woldegebriel, Michael; Derks, Eduard
2017-01-17
In this work, a novel probabilistic untargeted feature detection algorithm for liquid chromatography coupled to high-resolution mass spectrometry (LC-HRMS) using an artificial neural network (ANN) is presented. The feature detection process is approached as a pattern recognition problem, and thus an ANN was utilized as an efficient feature recognition tool. Unlike most existing feature detection algorithms, with this approach any suspected chromatographic profile (i.e., shape of a peak) can easily be incorporated by training the network, avoiding the need to perform computationally expensive regression methods with specific mathematical models. In addition, with this method, we have shown that the high-resolution raw data can be fully utilized without applying any arbitrary thresholds or data reduction, therefore improving the sensitivity of the method for compound identification purposes. Furthermore, as opposed to existing deterministic (binary) approaches, this method estimates the probability of a feature being present or absent at a given point of interest, thus giving all data points a chance to be propagated down the data analysis pipeline, weighted with their probability. The algorithm was tested with data sets generated from spiked samples in forensic and food safety contexts and has shown promising results by detecting features for all compounds in a computationally reasonable time.
Semi-implicit finite difference methods for three-dimensional shallow water flow
Casulli, Vincenzo; Cheng, Ralph T.
1992-01-01
A semi-implicit finite difference method for the numerical solution of three-dimensional shallow water flows is presented and discussed. The governing equations are the primitive three-dimensional turbulent mean flow equations where the pressure distribution in the vertical has been assumed to be hydrostatic. In the method of solution a minimal degree of implicitness has been adopted in such a fashion that the resulting algorithm is stable and gives maximal computational efficiency at a minimal computational cost. At each time step the numerical method requires the solution of one large linear system which can be formally decomposed into a set of small tridiagonal systems coupled with one five-diagonal system. All these linear systems are symmetric and positive definite. Thus the existence and uniqueness of the numerical solution are assured. When only one vertical layer is specified, this method reduces as a special case to a semi-implicit scheme for solving the corresponding two-dimensional shallow water equations. The resulting two- and three-dimensional algorithm has been shown to be fast, accurate and mass-conservative and can also be applied to simulate flooding and drying of tidal mud-flats in conjunction with three-dimensional flows. Furthermore, the resulting algorithm is fully vectorizable for an efficient implementation on modern vector computers.
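The tridiagonal systems that arise at each time step can be solved in O(n) with the Thomas algorithm; the sketch below is a generic textbook implementation, not the paper's code, and assumes a diagonally dominant system such as the symmetric positive definite systems described.

    import numpy as np

    def thomas(lower, diag, upper, rhs):
        """Solve a tridiagonal system in O(n); valid for diagonally dominant / SPD systems."""
        n = len(diag)
        c = np.array(upper, dtype=float)
        b = np.array(diag, dtype=float)
        d = np.array(rhs, dtype=float)
        for i in range(1, n):                       # forward elimination
            w = lower[i - 1] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        x = np.empty(n)
        x[-1] = d[-1] / b[-1]
        for i in range(n - 2, -1, -1):              # back substitution
            x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
        return x

    # Example: -u'' = 1 discretized on 5 interior points (an SPD tridiagonal system).
    n, h = 5, 1.0 / 6
    print(thomas([-1.0] * (n - 1), [2.0] * n, [-1.0] * (n - 1), [h * h] * n))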
NASA Astrophysics Data System (ADS)
Wu, Ping; Liu, Kai; Zhang, Qian; Xue, Zhenwen; Li, Yongbao; Ning, Nannan; Yang, Xin; Li, Xingde; Tian, Jie
2012-12-01
Liver cancer is one of the most common malignant tumors worldwide. In order to enable the noninvasive detection of small liver tumors in mice, we present a parallel iterative shrinkage (PIS) algorithm for dual-modality tomography. It takes advantage of microcomputed tomography and multiview bioluminescence imaging, providing anatomical structure and bioluminescence intensity information to reconstruct the size and location of tumors. By incorporating prior knowledge of signal sparsity, we associate several mathematical strategies, including a specific smooth convex approximation, an iterative shrinkage operator, and an affine subspace, with the PIS method, which guarantees accuracy, efficiency, and reliability for three-dimensional reconstruction. An in vivo experiment on a bead-implanted mouse was then performed to validate the feasibility of this method. The findings indicate that a tiny lesion less than 3 mm in diameter can be localized with a position bias of no more than 1 mm; the computational efficiency is one to three orders of magnitude higher than that of existing algorithms; and the approach is robust to different regularization parameters and lp norms. Finally, we have applied this algorithm to another in vivo experiment on an HCCLM3 orthotopic xenograft mouse model, which suggests the PIS method holds promise for practical applications of whole-body cancer detection.
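Iterative shrinkage methods of this general family alternate a gradient step on the data-fit term with a soft-thresholding step that enforces sparsity. The sketch below is plain ISTA on a random underdetermined system, offered only as a hedged illustration of the shrinkage operator; it is not the parallel PIS reconstruction or the tomographic forward model, and the problem sizes and regularization weight are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    m, n, k = 60, 200, 8
    A = rng.normal(size=(m, n)) / np.sqrt(m)       # underdetermined sensing matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ x_true

    def ista(A, y, lam=0.01, iters=500):
        """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient descent (ISTA)."""
        L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)
            z = x - grad / L                                         # gradient step
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)    # soft threshold (shrinkage)
        return x

    x_hat = ista(A, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))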
Automatic red eye correction and its quality metric
NASA Astrophysics Data System (ADS)
Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho
2008-01-01
Red eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for an observer, are important tasks. A novel, efficient technique for automatic correction of red eyes aimed at photo printers is proposed. This algorithm is independent of face orientation and is capable of detecting paired red eyes as well as single red eyes. The approach is based on the application of 3D tables with typicalness levels for red eyes and human skin tones, and directional edge detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory volume. A numeric quality criterion of automatic red eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.
Rapid equilibrium sampling initiated from nonequilibrium data.
Huang, Xuhui; Bowman, Gregory R; Bacallado, Sergio; Pande, Vijay S
2009-11-24
Simulating the conformational dynamics of biomolecules is extremely difficult due to the rugged nature of their free energy landscapes and multiple long-lived, or metastable, states. Generalized ensemble (GE) algorithms, which have become popular in recent years, attempt to facilitate crossing between states at low temperatures by inducing a random walk in temperature space. Enthalpic barriers may be crossed more easily at high temperatures; however, entropic barriers will become more significant. This poses a problem because the dominant barriers to conformational change are entropic for many biological systems, such as the short RNA hairpin studied here. We present a new efficient algorithm for conformational sampling, called the adaptive seeding method (ASM), which uses nonequilibrium GE simulations to identify the metastable states, and seeds short simulations at constant temperature from each of them to quantitatively determine their equilibrium populations. Thus, the ASM takes advantage of the broad sampling possible with GE algorithms but generally crosses entropic barriers more efficiently during the seeding simulations at low temperature. We show that only local equilibrium is necessary for ASM, so very short seeding simulations may be used. Moreover, the ASM may be used to recover equilibrium properties from existing datasets that failed to converge, and is well suited to running on modern computer clusters.
node2vec: Scalable Feature Learning for Networks
Grover, Aditya; Leskovec, Jure
2016-01-01
Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node’s network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks. PMID:27853626
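The core of node2vec's sampling stage is the second-order biased random walk controlled by the return parameter p and the in-out parameter q; a hedged sketch of that walk on a toy graph (using the networkx library, and without the subsequent skip-gram training) is given below. The parameter values and walk length are arbitrary choices.

    import random
    import networkx as nx

    def biased_walk(G, start, length, p=1.0, q=0.5):
        """node2vec-style second-order random walk: the bias depends on the previous node."""
        walk = [start]
        while len(walk) < length:
            cur = walk[-1]
            nbrs = list(G.neighbors(cur))
            if not nbrs:
                break
            if len(walk) == 1:
                walk.append(random.choice(nbrs))
                continue
            prev = walk[-2]
            weights = []
            for x in nbrs:
                if x == prev:                 # returning to the previous node
                    weights.append(1.0 / p)
                elif G.has_edge(x, prev):     # staying close (BFS-like behaviour)
                    weights.append(1.0)
                else:                         # moving outward (DFS-like behaviour)
                    weights.append(1.0 / q)
            walk.append(random.choices(nbrs, weights=weights, k=1)[0])
        return walk

    G = nx.karate_club_graph()
    random.seed(0)
    print(biased_walk(G, start=0, length=10))
    # Such walks would normally be fed to a skip-gram model to learn the node embeddings.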
NASA Astrophysics Data System (ADS)
Pilz, Tobias; Francke, Till; Bronstert, Axel
2017-08-01
The characteristics of a landscape are essential factors for hydrological processes. Therefore, an adequate representation of the landscape of a catchment in hydrological models is vital. However, many such models exist, differing, amongst other things, in spatial concept and discretisation. The latter constitutes an essential pre-processing step, for which many different algorithms along with numerous software implementations exist. In that context, existing solutions are often model specific, commercial, or dependent on commercial back-end software, and allow only limited workflow automation or none at all. Consequently, a new package for the scientific software and scripting environment R, called lumpR, was developed. lumpR employs an algorithm for hillslope-based landscape discretisation directed at large-scale application via a hierarchical multi-scale approach. The package addresses existing limitations as it is free and open source, easily extendible to other hydrological models, and the workflow can be fully automated. Moreover, it is user-friendly as the direct coupling to a GIS allows for immediate visual inspection and manual adjustment. Sufficient control is furthermore retained via parameter specification and the option to include expert knowledge. Conversely, completely automatic operation also allows for extensive analysis of aspects related to landscape discretisation. In a case study, the application of the package is presented. A sensitivity analysis of the most important discretisation parameters demonstrates its efficient workflow automation. Considering multiple streamflow metrics, the employed model proved reasonably robust to the discretisation parameters. However, parameters determining the sizes of subbasins and hillslopes proved to be more important than the others, including the number of representative hillslopes, the number of attributes employed for the lumping algorithm, and the number of sub-discretisations of the representative hillslopes.
Stream Kriging: Incremental and recursive ordinary Kriging over spatiotemporal data streams
NASA Astrophysics Data System (ADS)
Zhong, Xu; Kealy, Allison; Duckham, Matt
2016-05-01
Ordinary Kriging is widely used for geospatial interpolation and estimation. Due to the O(n³) time complexity of solving the system of linear equations, ordinary Kriging for a large set of source points is computationally intensive. Conducting real-time Kriging interpolation over continuously varying spatiotemporal data streams can therefore be especially challenging. This paper develops and tests two new strategies for improving the performance of an ordinary Kriging interpolator adapted to a stream-processing environment. These strategies rely on the expectation that, over time, source data points will frequently refer to the same spatial locations (for example, where static sensor nodes are generating repeated observations of a dynamic field). First, an incremental strategy improves efficiency in cases where a relatively small proportion of previously processed spatial locations are absent from the source points at any given iteration. Second, a recursive strategy improves efficiency in cases where there is substantial overlap between the sets of spatial locations of source points at the current and previous iterations. These two strategies are evaluated in terms of their computational efficiency in comparison to the standard ordinary Kriging algorithm. The results show that these two strategies can reduce the time taken to perform the interpolation by up to 90%, and approach an average-case time complexity of O(n²) when most but not all source points refer to the same locations over time. By combining the approaches developed in this paper with existing heuristic ordinary Kriging algorithms, the conclusions indicate how further efficiency gains could potentially be accrued. The work ultimately contributes to the development of online ordinary Kriging interpolation algorithms, capable of real-time spatial interpolation with large streaming data sets.
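For reference, a single ordinary Kriging prediction solves one (n+1) x (n+1) linear system built from a covariance (or variogram) model plus the unbiasedness constraint, which is the O(n³) step the incremental and recursive strategies try to avoid repeating. The sketch below uses an assumed exponential covariance model and synthetic sensor data; it is a baseline illustration, not the streaming algorithms of the paper.

    import numpy as np

    def ordinary_kriging(coords, values, target, sill=1.0, rng_len=20.0):
        """One ordinary Kriging estimate with an exponential covariance model."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        cov = sill * np.exp(-d / rng_len)                    # covariances between source points
        n = len(values)
        # Augment with the Lagrange multiplier row/column for the unbiasedness constraint.
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = cov
        A[n, n] = 0.0
        b = np.ones(n + 1)
        b[:n] = sill * np.exp(-np.linalg.norm(coords - target, axis=1) / rng_len)
        w = np.linalg.solve(A, b)                            # O(n^3): the cost the paper targets
        return float(w[:n] @ values)

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, size=(50, 2))               # static sensor locations
    values = np.sin(coords[:, 0] / 20.0) + 0.1 * rng.normal(size=50)
    print(ordinary_kriging(coords, values, target=np.array([50.0, 50.0])))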
Efficient algorithms for single-axis attitude estimation
NASA Technical Reports Server (NTRS)
Shuster, M. D.
1981-01-01
The computationally efficient algorithms determine attitude from the measurement of arc lengths and dihedral angles. The dependence of these algorithms on the solution of trigonometric equations was reduced. Both single-time and batch estimators are presented, along with the covariance analysis of each algorithm.