Science.gov

Sample records for algorithm ga search

  1. Evolutionary pattern search algorithms

    SciTech Connect

    Hart, W.E.

    1995-09-19

    This paper defines a class of evolutionary algorithms called evolutionary pattern search algorithms (EPSAs) and analyzes their convergence properties. This class of algorithms is closely related to evolutionary programming, evolution strategies and real-coded genetic algorithms. EPSAs are self-adapting systems that modify the step size of the mutation operator in response to the success of previous optimization steps. The rule used to adapt the step size can be used to provide a stationary point convergence theory for EPSAs on any continuous function. This convergence theory is based on an extension of the convergence theory for generalized pattern search methods. An experimental analysis of the performance of EPSAs demonstrates that these algorithms can perform a level of global search that is comparable to that of canonical EAs. We also describe a stopping rule for EPSAs, which reliably terminated near stationary points in our experiments. This is the first stopping rule for any class of EAs that can terminate at a given distance from stationary points.
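
    As a minimal, hedged illustration of the step-size adaptation this record describes, the sketch below runs a (1+1)-style evolutionary step whose mutation scale expands on success and contracts on failure, with a stopping rule that halts once the step collapses below a tolerance. The expansion/contraction factors and tolerance are illustrative assumptions, not the paper's.

```python
# Hedged sketch: success/failure step-size adaptation with a step-based stopping rule.
import random

def epsa_1p1(f, x0, step=1.0, expand=2.0, contract=0.5, tol=1e-6, max_iter=100000):
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < tol:                      # stopping rule: step has collapsed
            break
        trial = [xi + step * random.gauss(0.0, 1.0) for xi in x]
        ft = f(trial)
        if ft < fx:                         # success: accept and expand the step
            x, fx, step = trial, ft, step * expand
        else:                               # failure: contract the step
            step *= contract
    return x, fx

if __name__ == "__main__":
    sphere = lambda v: sum(t * t for t in v)
    print(epsa_1p1(sphere, [3.0, -2.0]))
```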

  2. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.

  3. Optimization of a genetic algorithm for searching molecular conformer space

    NASA Astrophysics Data System (ADS)

    Brain, Zoe E.; Addicoat, Matthew A.

    2011-11-01

    We present two sets of tunings that are broadly applicable to conformer searches of isolated molecules using a genetic algorithm (GA). In order to find the most efficient tunings for the GA, a second GA - a meta-genetic algorithm - was used to tune the first genetic algorithm to reliably find the a priori known correct answer with minimum computational resources. It is shown that these tunings are appropriate for a variety of molecules with different characteristics and, most importantly, that the tunings are independent of the underlying model chemistry, but that the tunings for rigid and relaxed surfaces differ slightly. It is shown that for the problem of molecular conformational search, the most efficient GA actually reduces to an evolutionary algorithm.
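
    A compact, hedged sketch of the meta-GA idea: an outer GA tunes an inner GA's population size and mutation rate so that the inner GA reaches an already-known optimum (OneMax here, standing in for the known conformer) with as few function evaluations as possible. The operators, parameter ranges, and toy problem are illustrative assumptions, not the tunings reported in the paper.

```python
# Hedged sketch of a meta-GA tuning an inner GA against a known answer.
import random

def inner_ga(pop_size, mut_rate, n_bits=30, max_evals=5000):
    """Simple GA on OneMax; returns evaluations used to reach the known optimum."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    evals = 0
    while evals < max_evals:
        pop.sort(key=sum, reverse=True)
        evals += pop_size
        if sum(pop[0]) == n_bits:                 # a priori known answer reached
            return evals
        parents = pop[: max(2, pop_size // 2)]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < mut_rate else g for g in child])
        pop = children
    return max_evals                              # failed within the budget

def meta_fitness(params, repeats=2):
    """Cost of a parameter setting = mean evaluations needed by the inner GA."""
    pop_size, mut_rate = params
    return sum(inner_ga(pop_size, mut_rate) for _ in range(repeats)) / repeats

def meta_ga(generations=10, meta_pop=6):
    pop = [(random.randint(4, 60), random.uniform(0.001, 0.2)) for _ in range(meta_pop)]
    for _ in range(generations):
        pop.sort(key=meta_fitness)                # noisy fitness; acceptable for a sketch
        survivors = pop[: meta_pop // 2]
        children = []
        while len(survivors) + len(children) < meta_pop:
            a, b = random.sample(survivors, 2)
            size = max(4, (a[0] + b[0]) // 2 + random.randint(-4, 4))
            rate = min(0.5, max(1e-4, (a[1] + b[1]) / 2 * random.uniform(0.8, 1.2)))
            children.append((size, rate))
        pop = survivors + children
    return min(pop, key=meta_fitness)

if __name__ == "__main__":
    print("tuned (population size, mutation rate):", meta_ga())
```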

  4. Statistical Evaluation of the Performance of Energy Surface Search Algorithms

    NASA Astrophysics Data System (ADS)

    Horoi, Mihai; Jackson, Koblar A.

    2001-03-01

    In the last few years several new energy surface search algorithms have been proposed, including Genetic Algorithms (GA), Basin-Hopping Monte Carlo, and a single parent GA (see I. Rata et al., Phys. Rev. Lett. 85, 546 (2000)). Each time a new algorithm was presented, the authors claimed better performance by finding lower minima for previously studied clusters. However, it was not clear if the better result was a consequence of a better algorithm or due to more patience in searching the configuration space. We have done a statistical evaluation of all these algorithms and find that the distribution of the number of search steps required to locate the global minimum from a random starting point is an exponential for each method, with the average equal to the variance. This behavior is a result of the random steps made by each method in searching the configuration space. Understanding the nature of the distribution allows the performance of the methods to be compared statistically and suggests possible improvements. We also show that in some cases (small clusters with Lennard-Jones interactions) a completely random search starting in a "small" box can be more efficient than any of the more complex algorithms.

  5. Searching Algorithm Using Bayesian Updates

    ERIC Educational Resources Information Center

    Caudle, Kyle

    2010-01-01

    In late October 1967, the USS Scorpion was lost at sea, somewhere between the Azores and Norfolk, Virginia. Dr. Craven of the U.S. Navy's Special Projects Division is credited with using Bayesian Search Theory to locate the submarine. Bayesian Search Theory is a straightforward and interesting application of Bayes' theorem which involves searching…
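
    As a brief illustration of Bayesian search theory, the sketch below updates a prior over search cells after an unsuccessful look in one cell: that cell's probability is multiplied by (1 - q), where q is the detection probability, and the distribution is renormalized. The grid, prior, and q are invented values.

```python
# Hedged sketch: Bayesian update of cell probabilities after a failed search.
def bayes_update_after_miss(p, k, q):
    """Posterior over cells after searching cell k and finding nothing."""
    posterior = p[:]
    posterior[k] *= (1.0 - q)                  # Bayes' theorem numerator for cell k
    total = sum(posterior)
    return [v / total for v in posterior]      # renormalize over all cells

prior = [0.4, 0.3, 0.2, 0.1]                   # e.g. four candidate regions
print(bayes_update_after_miss(prior, k=0, q=0.8))   # probability mass shifts away from cell 0
```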

  6. Genetic algorithms as global random search methods

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
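
    A minimal sketch of fitness-proportional (roulette-wheel) selection, the operator characterized above as a global search operator: candidates are sampled with probability proportional to fitness, which is how selection reshapes the sampling distribution over the solution space. The toy population and fitness values are invented.

```python
# Hedged sketch: fitness-proportional selection as a sampling distribution.
import random

def proportional_select(population, fitnesses, n):
    total = sum(fitnesses)
    weights = [f / total for f in fitnesses]
    return random.choices(population, weights=weights, k=n)  # requires Python 3.6+

pop = ["a", "b", "c", "d"]
fit = [1.0, 2.0, 3.0, 4.0]
print(proportional_select(pop, fit, n=6))  # "d" is drawn most often on average
```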

  7. Genetic algorithms as global random search methods

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.

  8. Firefly Algorithm for Structural Search.

    PubMed

    Avendaño-Franco, Guillermo; Romero, Aldo H

    2016-07-12

    The problem of computational structure prediction of materials is approached using the firefly (FF) algorithm. Starting from the chemical composition and optionally using prior knowledge of similar structures, the FF method is able to predict not only known stable structures but also a variety of novel competitive metastable structures. This article focuses on the strengths and limitations of the algorithm as a multimodal global searcher. The algorithm has been implemented in the software package PyChemia ( https://github.com/MaterialsDiscovery/PyChemia ), an open-source Python library for materials analysis. We present applications of the method to van der Waals clusters and crystal structures. The FF method is shown to be competitive when compared to other population-based global searchers. PMID:27232694

  9. Algorithm to search for genomic rearrangements

    NASA Astrophysics Data System (ADS)

    Nałecz-Charkiewicz, Katarzyna; Nowak, Robert

    2013-10-01

    The aim of this article is to discuss the issue of comparing nucleotide sequences in order to detect chromosomal rearrangements (for example, in the study of the genomes of two cucumber varieties, Polish and Chinese). Two basic algorithms for detecting rearrangements have been described: the Smith-Waterman algorithm, as well as a new method of searching for genetic markers in combination with the Knuth-Morris-Pratt algorithm. A computer program with a client-server architecture was developed. The algorithms' properties were examined on the Escherichia coli and Arabidopsis thaliana genomes, and the software is prepared to compare the two cucumber varieties, Polish and Chinese. The results are promising and further work is planned.

  10. RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time needed, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.

  11. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases. PMID:25298971
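
    A hedged sketch of a cuckoo-search move with an adaptive step size, in the spirit of the modification described above: the Lévy-flight scale shrinks as the run progresses so early moves are global and later moves are local. The linear decay schedule and parameter values are illustrative assumptions, not necessarily the paper's adjustment strategy.

```python
# Hedged sketch: adaptive-step cuckoo search move with a Mantegna-style Levy flight.
import math
import random

def levy(beta=1.5):
    """One Mantegna-style Levy-distributed step length."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def adaptive_step(t, t_max, s_max=1.0, s_min=0.01):
    """Step size decays linearly from s_max to s_min over the run (assumed schedule)."""
    return s_max - (s_max - s_min) * t / t_max

def cuckoo_move(x, best, t, t_max):
    s = adaptive_step(t, t_max)
    return [xi + s * levy() * (xi - bi) for xi, bi in zip(x, best)]

print(cuckoo_move([0.5, -1.2], best=[0.0, 0.0], t=10, t_max=100))
```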

  12. Adaptive cuckoo search algorithm for unconstrained optimization.

    PubMed

    Ong, Pauline

    2014-01-01

    Modification of the intensification and diversification approaches in the recently developed cuckoo search algorithm (CSA) is performed. The alteration involves the implementation of an adaptive step size adjustment strategy, thus enabling faster convergence to the global optimal solutions. The feasibility of the proposed algorithm is validated against benchmark optimization functions, where the obtained results demonstrate a marked improvement over the standard CSA in all cases.

  13. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.

  14. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems while avoiding any serious computational deterioration. PMID:25147850
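
    A hedged sketch of the memory mechanism described in point (1): a candidate is stored as a potential local optimum only if it is far enough from every solution already in memory, and a nearby, worse entry is replaced (a simple stand-in for the depuration step). Thresholds and values are illustrative assumptions.

```python
# Hedged sketch: registering potential local optima by fitness and distance.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def register(memory, candidate, fitness, min_dist=0.5):
    """memory is a list of (solution, fitness) pairs; minimization is assumed."""
    for sol, fit in memory:
        if distance(sol, candidate) < min_dist:
            if fitness < fit:                       # keep the better of the two nearby points
                memory.remove((sol, fit))
                memory.append((candidate, fitness))
            return memory
    memory.append((candidate, fitness))             # far from everything: a new optimum
    return memory

mem = []
mem = register(mem, (0.0, 0.0), 1.0)
mem = register(mem, (0.1, 0.0), 0.5)                # replaces the nearby, worse entry
mem = register(mem, (3.0, 3.0), 2.0)                # registered as a distinct optimum
print(mem)
```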

  15. A cuckoo search algorithm for multimodal optimization.

    PubMed

    Cuevas, Erik; Reyna-Orta, Adolfo

    2014-01-01

    Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. On the other hand, the cuckoo search (CS) algorithm is a simple and effective global optimization algorithm which cannot be directly applied to solve multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called the multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capacities by means of (1) the incorporation of a memory mechanism to efficiently register potential local optima according to their fitness value and the distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection process of new local minima, and (3) the inclusion of a depuration procedure to cyclically eliminate duplicated memory elements. The performance of the proposed approach is compared to several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy is capable of providing better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems while avoiding any serious computational deterioration.

  16. Improving Search Algorithms by Using Intelligent Coordinates

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function gη. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  17. Generalized Jaynes-Cummings model as a quantum search algorithm

    SciTech Connect

    Romanelli, A.

    2009-07-15

    We propose a continuous-time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time. In this framework, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm.

  18. THE QUASIPERIODIC AUTOMATED TRANSIT SEARCH ALGORITHM

    SciTech Connect

    Carter, Joshua A.; Agol, Eric

    2013-03-10

    We present a new algorithm for detecting transiting extrasolar planets in time-series photometry. The Quasiperiodic Automated Transit Search (QATS) algorithm relaxes the usual assumption of strictly periodic transits by permitting a variable, but bounded, interval between successive transits. We show that this method is capable of detecting transiting planets with significant transit timing variations without any loss of significance ("smearing") as would be incurred with traditional algorithms; however, this is at the cost of a slightly increased stochastic background. The approximate times of transit are standard products of the QATS search. Despite the increased flexibility, we show that QATS has a run-time complexity that is comparable to traditional search codes and is comparably easy to implement. QATS is applicable to data having a nearly uninterrupted, uniform cadence and is therefore well suited to the modern class of space-based transit searches (e.g., Kepler, CoRoT). Applications of QATS include transiting planets in dynamically active multi-planet systems and transiting planets in stellar binary systems.

  19. A novel complex valued cuckoo search algorithm.

    PubMed

    Zhou, Yongquan; Zheng, Hongqing

    2013-01-01

    To expand the information carried by nest individuals, the idea of complex-valued encoding is used in cuckoo search (PCS); the gene of each individual is denoted by a complex number, so a diploid swarm is structured as a sequence of complex values. The value of each independent variable of the objective function is determined by the modulus, and its sign is determined by the angle. The position of a nest is divided into two parts, namely, a real-part gene and an imaginary-part gene. The updating relation for the complex-valued swarm is presented. Six typical functions are tested. The results are compared with cuckoo search based on real-valued encoding, and the usefulness of the proposed algorithm is verified.

  20. A new optimized GA-RBF neural network algorithm.

    PubMed

    Jia, Weikuan; Zhao, Dean; Shen, Tian; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and its ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. To address this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses the genetic algorithm to optimize the weights and structure of the RBF neural network and adopts a new hybrid encoding: binary encoding represents the number of hidden-layer neurons and real encoding represents the connection weights. The number of hidden-layer neurons and the connection weights are thus optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete; the least mean square (LMS) algorithm is used for further learning, which finally yields the new algorithm model. Tests of the new algorithm on two UCI standard data sets show that it improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid.
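
    A hedged sketch of the hybrid chromosome the abstract describes: a binary segment marks which hidden RBF neurons exist (hence their number) and a real-valued segment carries the hidden-to-output weights; decoding keeps only the weights of active neurons. The sizes and ranges are illustrative assumptions, and the subsequent LMS refinement is not shown.

```python
# Hedged sketch: hybrid binary/real chromosome for a GA-tuned RBF network.
import random

MAX_HIDDEN = 10

def random_chromosome():
    active = [random.randint(0, 1) for _ in range(MAX_HIDDEN)]        # binary part: neuron on/off
    weights = [random.uniform(-1, 1) for _ in range(MAX_HIDDEN)]      # real part: output weights
    return active, weights

def decode(chromosome):
    active, weights = chromosome
    hidden = [w for a, w in zip(active, weights) if a]                # keep weights of active neurons
    return len(hidden), hidden   # number of hidden neurons and their output weights

n_hidden, w = decode(random_chromosome())
print(n_hidden, w)    # an LMS pass would further refine these weights after the GA
```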

  1. A New Optimized GA-RBF Neural Network Algorithm

    PubMed Central

    Zhao, Dean; Su, Chunyang; Hu, Chanli; Zhao, Yuyan

    2014-01-01

    When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptivity and self-learning ability, but it is difficult to determine the number of hidden-layer neurons, and its ability to learn the weights from the hidden layer to the output layer is low; these deficiencies easily lead to decreased learning ability and recognition precision. To address this problem, we propose a new optimized RBF neural network algorithm based on a genetic algorithm (the GA-RBF algorithm), which uses the genetic algorithm to optimize the weights and structure of the RBF neural network and adopts a new hybrid encoding: binary encoding represents the number of hidden-layer neurons and real encoding represents the connection weights. The number of hidden-layer neurons and the connection weights are thus optimized simultaneously in the new algorithm. However, the connection-weight optimization is not complete; the least mean square (LMS) algorithm is used for further learning, which finally yields the new algorithm model. Tests of the new algorithm on two UCI standard data sets show that it improves operating efficiency in dealing with complex problems and also improves recognition precision, which proves that the new algorithm is valid. PMID:25371666

  2. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter, brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.

  3. Transitionless driving on adiabatic search algorithm

    SciTech Connect

    Oh, Sangchul; Kais, Sabre

    2014-12-14

    We study quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find the change in the non-adiabatic transition probability from exponential decay for the short running time to inverse-square decay in asymptotic running time. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from the inverse square decay to the inverse fourth power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.

  4. Easy and hard testbeds for real-time search algorithms

    SciTech Connect

    Koenig, S.; Simmons, R.G.

    1996-12-31

    Although researchers have studied which factors influence the behavior of traditional search algorithms, currently not much is known about how domain properties influence the performance of real-time search algorithms. In this paper we demonstrate, both theoretically and experimentally, that Eulerian state spaces (a superset of undirected state spaces) are very easy for some existing real-time search algorithms to solve: even real-time search algorithms that can be intractable, in general, are efficient for Eulerian state spaces. Because traditional real-time search testbeds (such as the eight puzzle and gridworlds) are Eulerian, they cannot be used to distinguish between efficient and inefficient real-time search algorithms. It follows that one has to use non-Eulerian domains to demonstrate the general superiority of a given algorithm. To this end, we present two classes of hard-to-search state spaces and demonstrate the performance of various real-time search algorithms on them.

  5. Improving search algorithms by using intelligent coordinates

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function gη. Three factors govern such a distributed algorithm’s performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm’s exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based “player” engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  6. Optimized hyperspectral band selection using hybrid genetic algorithm and gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie

    2015-12-01

    The serious information redundancy in hyperspectral images (HIs) does not contribute to data analysis accuracy; instead, it requires expensive computational resources. Consequently, to identify the most useful and valuable information in HIs and thereby improve the accuracy of data analysis, this paper proposes a novel hyperspectral band selection method using a hybrid genetic algorithm and gravitational search algorithm (GA-GSA). In the proposed method, the GA-GSA is first mapped to the binary space. Then, the accuracy of a support vector machine (SVM) classifier and the number of selected spectral bands are used to measure the discriminative capability of a band subset. Finally, the band subset with the smallest number of spectral bands that covers the most useful and valuable information is obtained. To verify the effectiveness of the proposed method, studies conducted on an AVIRIS image against two recently proposed state-of-the-art GSA variants are presented. The experimental results reveal the superiority of the proposed method and indicate that it can considerably reduce data storage costs and efficiently identify band subsets with stable and high classification precision.
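
    A hedged sketch of a band-subset fitness of the kind the abstract implies: a binary mask over spectral bands is scored by cross-validated SVM accuracy minus a penalty on the number of selected bands. The GA-GSA optimizer itself is not shown, and the penalty weight, CV setup, and toy data are illustrative assumptions (scikit-learn is used for the classifier).

```python
# Hedged sketch: band-subset fitness combining SVM accuracy and subset size.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def band_fitness(mask, X, y, alpha=0.05):
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return -np.inf                                   # empty subsets are invalid
    acc = cross_val_score(SVC(), X[:, idx], y, cv=3).mean()
    return acc - alpha * idx.size / mask.size            # fewer bands is better

# toy data standing in for labeled hyperspectral pixels with 200 "bands"
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 200))
y = rng.integers(0, 3, size=90)
mask = rng.integers(0, 2, size=200)
print(band_fitness(mask, X, y))
```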

  7. A parallel algorithm for random searches

    NASA Astrophysics Data System (ADS)

    Wosniack, M. E.; Raposo, E. P.; Viswanathan, G. M.; da Luz, M. G. E.

    2015-11-01

    We discuss a parallelization procedure for a two-dimensional random search by a single individual, a typical sequential process. To preserve the features of the sequential random search in the parallel version, we analyze the spatial patterns of the targets encountered by the sequential search for different search strategies and densities of homogeneously distributed targets. We identify a lognormal tendency for the distribution of distances between consecutively detected targets. Then, by assigning the distinct mean and standard deviation of this distribution to each corresponding configuration in the parallel simulations (constituted by parallel random walkers), we are able to recover important statistical properties, e.g., the target detection efficiency, of the original problem. The proposed parallel approach presents a speedup of nearly one order of magnitude compared with the sequential implementation. This algorithm can be easily adapted to different instances, such as searches in three dimensions. Its possible range of applicability covers problems in areas as diverse as automated computer searches in high-capacity databases and animal foraging.
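
    A small worked sketch of the reparameterization such a parallel scheme needs: given the measured mean m and standard deviation s of the inter-target distances, recover the (mu, sigma) of the underlying normal and draw distances for each parallel walker from that lognormal. Values are illustrative.

```python
# Hedged sketch: matching a lognormal to a measured mean and standard deviation.
import math
import random

def lognormal_params(m, s):
    sigma2 = math.log(1.0 + (s / m) ** 2)      # sigma^2 of the underlying normal
    mu = math.log(m) - sigma2 / 2.0            # mu of the underlying normal
    return mu, math.sqrt(sigma2)

def draw_distances(m, s, n):
    mu, sigma = lognormal_params(m, s)
    return [random.lognormvariate(mu, sigma) for _ in range(n)]

samples = draw_distances(m=5.0, s=2.0, n=100000)
print(sum(samples) / len(samples))             # should be close to 5.0
```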

  8. Differential Search Algorithm Based Edge Detection

    NASA Astrophysics Data System (ADS)

    Gunen, M. A.; Civicioglu, P.; Beşdok, E.

    2016-06-01

    In this paper, a new method is presented for the extraction of edge information using the Differential Search optimization algorithm. The proposed method is based on a new heuristic image thresholding method for edge detection. The success of the proposed method has been examined on the fusion of two remotely sensed images. The applicability of the proposed method to edge detection and image fusion problems has been analysed in detail, and the empirical results show that the proposed method is useful for solving these problems.

  9. Nonparametric algorithms for the search of signals

    NASA Technical Reports Server (NTRS)

    Myshenkova, T. S.

    1978-01-01

    Two algorithms for the search of signals in noise are constructed. Two independent measurable properties of a discrete time-dependent stochastic process F are described. The functions A_K and B_K fully satisfy the conditions for the application of a test based on Spearman's rank correlation coefficient. A statistic to which the one-sided signal test can be applied is constructed under sufficiently natural assumptions about the noise process. The constructed statistics are simplified. Processing results from calculations of the statistics constructed for concrete processes are presented.

  10. A hybrid search algorithm for swarm robots searching in an unknown environment.

    PubMed

    Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao

    2014-01-01

    This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by the foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which, the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters to reduce the communication burden and overcome hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches an entire region given the limited resources available and time constraints. In this research we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be initially searched and thereby reduced the residence time of the target to improve search efficiency.
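
    A hedged sketch of the mode switching described above: a robot performs a random walk until it senses any target signal, then switches to a PSO-style velocity update pulled toward its personal and global best positions, and falls back to random search when the signal is lost. The parameters and sensing model are illustrative assumptions.

```python
# Hedged sketch: switching between random search and a PSO-style update.
import random

def random_step(pos, scale=1.0):
    return [p + random.uniform(-scale, scale) for p in pos]

def dpso_step(pos, vel, pbest, gbest, w=0.6, c1=1.5, c2=1.5):
    new_vel = [w * v + c1 * random.random() * (pb - p) + c2 * random.random() * (gb - p)
               for p, v, pb, gb in zip(pos, vel, pbest, gbest)]
    return [p + v for p, v in zip(pos, new_vel)], new_vel

def move(robot, target_signal_detected):
    if target_signal_detected:
        robot["pos"], robot["vel"] = dpso_step(robot["pos"], robot["vel"],
                                               robot["pbest"], robot["gbest"])
    else:
        robot["pos"] = random_step(robot["pos"])     # global exploration mode
    return robot

bot = {"pos": [0.0, 0.0], "vel": [0.0, 0.0], "pbest": [2.0, 1.0], "gbest": [3.0, 3.0]}
print(move(bot, target_signal_detected=True)["pos"])
```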

  11. Alien Genetic Algorithm for Exploration of Search Space

    NASA Astrophysics Data System (ADS)

    Patel, Narendra; Padhiyar, Nitin

    2010-10-01

    The genetic algorithm (GA) is a widely accepted population-based stochastic optimization technique used for single- and multi-objective optimization problems. Various modifications of the GA have been proposed in the last three decades, mainly addressing two issues: increasing the convergence rate and increasing the probability of finding the global minimum. While addressing the first issue, the GA tends to converge to a local optimum, and addressing the second issue entails large computational effort. Thus, to reduce the contradictory effects of these two aspects, we propose a modification of the GA that adds an alien member to the population at every generation. Adding an alien member to the current population at every generation increases the probability of obtaining the global minimum while maintaining a higher convergence rate. With two test cases, we demonstrate the efficacy of the proposed GA by comparing it with the conventional GA.
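
    A hedged sketch of the proposed modification: after each generation, a randomly generated "alien" individual is inserted into the population (here by replacing the worst member, an illustrative policy) to reinject diversity while keeping convergence pressure.

```python
# Hedged sketch: injecting a random "alien" member each generation.
import random

def inject_alien(population, fitness_fn, bounds):
    alien = [random.uniform(lo, hi) for lo, hi in bounds]
    worst = max(range(len(population)), key=lambda i: fitness_fn(population[i]))  # minimization
    population[worst] = alien                      # replace the worst member with the alien
    return population

sphere = lambda v: sum(x * x for x in v)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
pop = inject_alien(pop, sphere, bounds=[(-5, 5)] * 3)
print(len(pop), min(sphere(ind) for ind in pop))   # population size is unchanged
```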

  12. A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search.

    PubMed

    Villagra, Andrea; Alba, Enrique; Leguizamón, Guillermo

    2016-01-01

    This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve the cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology.

  13. A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search

    PubMed Central

    Alba, Enrique; Leguizamón, Guillermo

    2016-01-01

    This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve the cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology. PMID:27403153

  14. A Methodology for the Hybridization Based in Active Components: The Case of cGA and Scatter Search.

    PubMed

    Villagra, Andrea; Alba, Enrique; Leguizamón, Guillermo

    2016-01-01

    This work presents the results of a new methodology for hybridizing metaheuristics. By first locating the active components (parts) of one algorithm and then inserting them into a second one, we can build efficient and accurate optimization, search, and learning algorithms. This gives a concrete way of constructing new techniques that contrasts with the widespread ad hoc way of hybridizing. In this paper, the enhanced algorithm is a Cellular Genetic Algorithm (cGA), which has been successfully used in the past to find solutions to hard optimization problems. In order to extend and corroborate the use of active components as an emerging hybridization methodology, we propose here the use of active components taken from Scatter Search (SS) to improve the cGA. The results obtained over a varied set of benchmarks are highly satisfactory in efficacy and efficiency when compared with a standard cGA. Moreover, the proposed hybrid approach (i.e., cGA+SS) has shown encouraging results with regard to earlier applications of our methodology. PMID:27403153

  15. Harmony Search Algorithm for Word Sense Disambiguation

    PubMed Central

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but only one of them is the correct interpretation. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, whereas the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to compute the HSA fitness function. Our proposed method was evaluated on benchmark datasets, yielding results comparable to state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we apply the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used. PMID:26422368

  16. Predictive search algorithm for vector quantization of images

    NASA Astrophysics Data System (ADS)

    Kuo, Chung-Ming; Hsieh, Chaur-Heh; Weng, Shiuh-Ku

    2002-05-01

    We present a fast predictive search algorithm for vector quantization (VQ) based on a wavelet transform and a weighted average Kalman filter (WAKF). With the proposed algorithm, the minimum-distortion code word can be found by searching only a portion of the wavelet-transformed code book. If the minimum-distortion code word found falls within a predicted search area obtained by the WAKF algorithm, a relative address that is shorter than the absolute address for a full search range is sent to the decoder. Simulation results indicate that the proposed algorithm achieves a significant reduction in computations and about a 30% bit-rate reduction compared to conventional full-search VQ. In addition, the reconstructed quality is equivalent to that of the full search algorithm.
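
    A hedged sketch of the windowed codebook search the abstract implies: a predictor supplies an expected codeword index, only a window of codewords around it is searched, and when the winner lies inside the window the short relative offset can be transmitted instead of the absolute index. The window size, distortion measure, and data are illustrative; the wavelet transform and WAKF predictor are not shown.

```python
# Hedged sketch: searching a predicted window of the codebook and emitting a relative address.
import numpy as np

def windowed_search(block, codebook, predicted_idx, window=16):
    lo = max(0, predicted_idx - window)
    hi = min(len(codebook), predicted_idx + window + 1)
    dists = np.sum((codebook[lo:hi] - block) ** 2, axis=1)     # squared-error distortion
    best = lo + int(np.argmin(dists))
    relative = best - predicted_idx                            # short address when |relative| <= window
    return best, relative

codebook = np.random.rand(256, 16)       # 256 codewords of dimension 16
block = np.random.rand(16)
print(windowed_search(block, codebook, predicted_idx=100))
```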

  17. An efficient cuckoo search algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Ong, Pauline; Zainuddin, Zarita

    2013-04-01

    The cuckoo search algorithm, which reproduces the breeding strategy of the best-known brood parasitic bird, the cuckoo, has demonstrated its superiority in obtaining the global solution for numerical optimization problems. However, the fixed-step approach in its exploration and exploitation behavior might slow down the search process considerably. In this regard, an improved cuckoo search algorithm with adaptive step size adjustment is introduced and its feasibility on a variety of benchmarks is validated. The obtained results show that the proposed scheme outperforms the standard cuckoo search algorithm in terms of convergence characteristics while preserving the fascinating features of the original method.

  18. An Exact Quantum Search Algorithm with Arbitrary Database

    NASA Astrophysics Data System (ADS)

    Liu, Yang

    2014-08-01

    In standard Grover's algorithm for quantum searching, the probability of finding a marked state is not exactly 1, and some modified versions of Grover's algorithm that search a marked state from an evenly distributed database with a full success rate have been presented. In this article, we present a generalized quantum search algorithm that searches M marked states from an arbitrarily distributed N-item quantum database with a zero theoretical failure rate, where N is not necessarily a power of 2. We analyze the general properties of our search algorithm and find that it is periodic with a period of 2J + 1, and that it succeeds with certainty after J + (2J + 1)m iterations, where m is an arbitrary nonnegative number.

  19. Tactical Synthesis Of Efficient Global Search Algorithms

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2009-01-01

    Algorithm synthesis transforms a formal specification into an efficient algorithm to solve a problem. Algorithm synthesis in Specware combines the formal specification of a problem with a high-level algorithm strategy. To derive an efficient algorithm, a developer must define operators that refine the algorithm by combining the generic operators in the algorithm with the details of the problem specification. This derivation requires skill and a deep understanding of the problem and the algorithmic strategy. In this paper we introduce two tactics to ease this process. The tactics serve a similar purpose to tactics used for determining indefinite integrals in calculus, that is, suggesting possible ways to attack the problem.

  20. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization

    PubMed Central

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-01-01

    This paper presents numerical simulation results for an airflow inclinometer, with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in the ambient temperature may cause dramatic voltage drifts in the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on the sensitivity of the inclinometer are examined with the ANSYS-FLOTRAN CFD program. The results show that, with changes of the ambient temperature at the sensing element, the sensitivity of the airflow inclinometer is inversely proportional to the ambient temperature and decreases when the ambient temperature increases. The GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts related to sensitivity improvement of gas sensors. PMID:25897500

  1. Genetic-Algorithm Tool For Search And Optimization

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven

    1995-01-01

    SPLICER is a computer program used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." The algorithms apply genetically inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or nearly optimal solution to the problem at hand. Written in Think C.

  2. Interior search algorithm (ISA): a novel approach for global optimization.

    PubMed

    Gandomi, Amir H

    2014-07-01

    This paper presents the interior search algorithm (ISA) as a novel method for solving optimization tasks. The proposed ISA is inspired by interior design and decoration. The algorithm is different from other metaheuristic algorithms and provides new insight for global optimization. The proposed method is verified using some benchmark mathematical and engineering problems commonly used in the area of optimization. ISA results are further compared with well-known optimization algorithms. The results show that the ISA is efficiently capable of solving optimization problems. The proposed algorithm can outperform the other well-known algorithms. Further, the proposed algorithm is very simple and it only has one parameter to tune.

  3. A hybrid search algorithm for swarm robots searching in an unknown environment.

    PubMed

    Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao

    2014-01-01

    This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by the foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which, the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters to reduce the communication burden and overcome hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches an entire region given the limited resources available and time constraints. In this research we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be initially searched and thereby reduced the residence time of the target to improve search efficiency. PMID:25386855

  4. A Hybrid Search Algorithm for Swarm Robots Searching in an Unknown Environment

    PubMed Central

    Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao

    2014-01-01

    This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by the foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which, the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters to reduce the communication burden and overcome hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches an entire region given the limited resources available and time constraints. In this research we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be initially searched and thereby reduced the residence time of the target to improve search efficiency. PMID:25386855

  5. Calibration of visual model for space manipulator with a hybrid LM-GA algorithm

    NASA Astrophysics Data System (ADS)

    Jiang, Wensong; Wang, Zhongyu

    2016-01-01

    A hybrid LM-GA algorithm is proposed to calibrate the camera system of a space manipulator to improve its locational accuracy. This algorithm dynamically fuses the Levenberg-Marquardt (LM) algorithm and a Genetic Algorithm (GA) to minimize the error of the nonlinear camera model. The LM algorithm is called to optimize the initial camera parameters previously generated by the genetic process. Iteration is stopped if the optimized camera parameters meet the accuracy requirements. Otherwise, new populations are generated again by the GA and optimized afresh by the LM algorithm until the optimal solutions meet the accuracy requirements. A novel measuring machine for the space manipulator is designed for on-orbit dynamic simulation and precision testing. The camera system of the space manipulator, calibrated by the hybrid LM-GA algorithm, is used for the locational precision test on this measuring instrument. The experimental results show that the mean composite errors are 0.074 mm for the hybrid LM-GA camera calibration model, 1.098 mm for the LM camera calibration model, and 1.202 mm for the GA camera calibration model. Furthermore, the composite standard deviations are 0.103 mm for the hybrid LM-GA camera calibration model, 1.227 mm for the LM camera calibration model, and 1.351 mm for the GA camera calibration model. The accuracy of the hybrid LM-GA camera calibration model is more than 10 times higher than that of the other two methods. All in all, the hybrid LM-GA camera calibration model is superior to both the LM camera calibration model and the GA camera calibration model.

  6. Optimization of hybrid laminated composites using the multi-objective gravitational search algorithm (MOGSA)

    NASA Astrophysics Data System (ADS)

    Hemmatian, Hossein; Fereidoon, Abdolhossein; Assareh, Ehsanolah

    2014-09-01

    The multi-objective gravitational search algorithm (MOGSA) technique is applied to hybrid laminates to achieve minimum weight and cost. The investigated laminate is made of glass-epoxy and carbon-epoxy plies to combine the economical attributes of the former with the light weight and high stiffness of the latter, with cost and weight traded off as the objective functions. The first natural flexural frequency was considered as a constraint. The results obtained using the MOGSA, including the Pareto set, optimum stacking sequences and number of plies made of either glass or carbon fibres, were compared with those using the genetic algorithm (GA) and ant colony optimization (ACO) reported in the literature. The comparisons confirmed the advantages of hybridization and showed that the MOGSA outperformed the GA and ACO in terms of the functions' values and constraint accuracy.

  7. Decoherence in optimized quantum random-walk search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Chao; Bao, Wan-Su; Wang, Xiang; Fu, Xiang-Qun

    2015-08-01

    This paper investigates the effects of decoherence generated by broken-link-type noise in the hypercube on an optimized quantum random-walk search algorithm. When random broken links occur in the hypercube, the optimized quantum random-walk search algorithm with decoherence is described by defining a shift operator that includes the possibility of broken links. For a given database size, we obtain the maximum success rate of the algorithm and the required number of iterations through numerical simulations and analysis when the algorithm is subject to decoherence. The computational complexity of the algorithm with decoherence is then obtained. The results show that the ultimate effect of broken-link-type decoherence on the optimized quantum random-walk search algorithm is negative. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002).

  8. Pattern Search Algorithms for Bound Constrained Minimization

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1996-01-01

    We present a convergence theory for pattern search methods for solving bound constrained nonlinear programs. The analysis relies on the abstract structure of pattern search methods and an understanding of how the pattern interacts with the bound constraints. This analysis makes it possible to develop pattern search methods for bound constrained problems while only slightly restricting the flexibility present in pattern search methods for unconstrained problems. We prove global convergence despite the fact that pattern search methods do not have explicit information concerning the gradient and its projection onto the feasible region and consequently are unable to enforce explicitly a notion of sufficient feasible decrease.
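
    As a minimal illustration of the class of methods analyzed above, the sketch below runs a coordinate (compass) pattern search under bound constraints: trial points are the current iterate plus or minus the step along each axis, clipped to the bounds, and the step shrinks whenever no trial improves. The specific pattern and parameters are illustrative, not the paper's.

```python
# Hedged sketch: bound-constrained compass pattern search.
def pattern_search(f, x0, lower, upper, step=1.0, shrink=0.5, tol=1e-8):
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = list(x)
                trial[i] = min(upper[i], max(lower[i], trial[i] + d))   # respect the bounds
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink      # refine the mesh when no poll direction improves
    return x, fx

quad = lambda v: (v[0] - 2.0) ** 2 + (v[1] + 1.0) ** 2
print(pattern_search(quad, [0.0, 0.0], lower=[-1.0, -1.0], upper=[1.5, 1.0]))  # ends near (1.5, -1.0)
```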

  9. Genetic algorithms and the search for viable string vacua

    NASA Astrophysics Data System (ADS)

    Abel, Steven; Rizos, John

    2014-08-01

    Genetic Algorithms are introduced as a search method for finding string vacua with viable phenomenological properties. It is shown, by testing them against a class of Free Fermionic models, that they are orders of magnitude more efficient than a randomised search. As an example, three generation, exophobic, Pati-Salam models with a top Yukawa occur once in every 10^10 models, and yet a Genetic Algorithm can find them after constructing only 10^5 examples. Such non-deterministic search methods may be the only means to search for Standard Model string vacua with detailed phenomenological requirements.

  10. Effective Memetic Algorithms for VLSI design = Genetic Algorithms + local search + multi-level clustering.

    PubMed

    Areibi, Shawki; Yang, Zhen

    2004-01-01

    Combining global and local search is a strategy used by many successful hybrid optimization approaches. Memetic Algorithms (MAs) are Evolutionary Algorithms (EAs) that apply some sort of local search to further improve the fitness of individuals in the population. Memetic Algorithms have been shown to be very effective in solving many hard combinatorial optimization problems. This paper provides a forum for identifying and exploring the key issues that affect the design and application of Memetic Algorithms. The approach combines a hierarchical design technique, Genetic Algorithms, constructive techniques and advanced local search to solve VLSI circuit layout in the form of circuit partitioning and placement. Results obtained indicate that Memetic Algorithms based on local search, clustering and good initial solutions improve solution quality on average by 35% for the VLSI circuit partitioning problem and 54% for the VLSI standard cell placement problem. PMID:15355604

  12. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...

  13. Multi-directional search: A direct search algorithm for parallel machines

    SciTech Connect

    Torczon, V.J.

    1989-01-01

    In recent years there has been a great deal of interest in the development of optimization algorithms which exploit the computational power of parallel computer architectures. The author has developed a new direct search algorithm, which he calls multi-directional search, that is ideally suited for parallel computation. His algorithm belongs to the class of direct search methods, a class of optimization algorithms which neither compute nor approximate any derivatives of the objective function. His work, in fact, was inspired by the simplex method of Spendley, Hext, and Himsworth, and the simplex method of Nelder and Mead. The multi-directional search algorithm is inherently parallel. The basic idea of the algorithm is to perform concurrent searches in multiple directions. These searches are free of any interdependencies, so the information required can be computed in parallel. A central result of his work is the convergence analysis for his algorithm. By requiring only that the function be continuously differentiable over a bounded level set, he can prove that a subsequence of the points generated by the multi-directional search algorithm converges to a stationary point of the objective function. This is of great interest since he knows of few convergence results for practical direct search algorithms. He also presents numerical results indicating that the multi-directional search algorithm is robust, even in the presence of noise. His results include comparisons with the Nelder-Mead simplex algorithm, the method of steepest descent, and a quasi-Newton method. One surprising conclusion of his numerical tests is that the Nelder-Mead simplex algorithm is not robust. He closes with some comments about future directions of research.

  14. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to solve the concerned problems for large and flexible molecules is taking the center stage with regard to efficient algorithms, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for a fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and other algorithms such as Gravitational Search, Cuckoo Search, and Back Tracking Search for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems for finding the minimal potential energy value of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of protein models with efficient computational cost. PMID:25779670

  15. LAHS: A novel harmony search algorithm based on learning automata

    NASA Astrophysics Data System (ADS)

    Enayatifar, Rasul; Yousefi, Moslem; Abdullah, Abdul Hanan; Darus, Amer Nordin

    2013-12-01

    This study presents a learning automata-based harmony search (LAHS) for unconstrained optimization of continuous problems. The harmony search (HS) algorithm performance strongly depends on the fine tuning of its parameters, including the harmony consideration rate (HMCR), pitch adjustment rate (PAR) and bandwidth (bw). Inspired by the spur-in-time responses in the musical improvisation process, learning capabilities are employed in the HS to select these parameters based on spontaneous reactions. An extensive numerical investigation is conducted on several well-known test functions, and the results are compared with the HS algorithm and its prominent variants, including the improved harmony search (IHS), global-best harmony search (GHS) and self-adaptive global-best harmony search (SGHS). The numerical results indicate that the LAHS is more efficient in finding optimum solutions and outperforms the existing HS algorithm variants.
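
    For readers unfamiliar with the underlying harmony search mechanics, the sketch below shows one improvisation step with fixed HMCR, PAR and bw (the quantities that LAHS adapts via learning automata); the objective and bounds are assumptions.

        import random

        def improvise(memory, lower, upper, hmcr=0.9, par=0.3, bw=0.05):
            """One harmony-search improvisation step with fixed parameters (sketch)."""
            new = []
            for j in range(len(lower)):
                if random.random() < hmcr:                 # harmony memory consideration
                    value = random.choice(memory)[j]
                    if random.random() < par:              # pitch adjustment within bandwidth bw
                        value += random.uniform(-bw, bw)
                else:                                      # random selection from the domain
                    value = random.uniform(lower[j], upper[j])
                new.append(min(max(value, lower[j]), upper[j]))
            return new

        def harmony_search(f, lower, upper, hms=20, iters=2000):
            memory = [[random.uniform(l, u) for l, u in zip(lower, upper)] for _ in range(hms)]
            for _ in range(iters):
                cand = improvise(memory, lower, upper)
                worst = max(memory, key=f)
                if f(cand) < f(worst):                     # replace the worst stored harmony
                    memory[memory.index(worst)] = cand
            return min(memory, key=f)

        # Example: harmony_search(lambda v: sum(x * x for x in v), [-5.0, -5.0], [5.0, 5.0])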

  16. An improved harmony search algorithm with dynamically varying bandwidth

    NASA Astrophysics Data System (ADS)

    Kalivarapu, J.; Jain, S.; Bag, S.

    2016-07-01

    The present work demonstrates a new variant of the harmony search (HS) algorithm where bandwidth (BW) is one of the deciding factors for the time complexity and the performance of the algorithm. The BW needs to have both explorative and exploitative characteristics. The ideology is to use a large BW to search in the full domain and to adjust the BW dynamically closer to the optimal solution. After trying a series of approaches, a methodology inspired by the functioning of a low-pass filter showed satisfactory results. This approach was implemented in the self-adaptive improved harmony search (SIHS) algorithm and tested on several benchmark functions. Compared to the existing HS algorithm and its variants, SIHS showed better performance on most of the test functions. Thereafter, the algorithm was applied to geometric parameter optimization of a friction stir welding tool.

  17. A quantum search algorithm for future spacecraft attitude determination

    NASA Astrophysics Data System (ADS)

    Tsai, Jack; Hsiao, Fu-Yuen; Li, Yi-Ju; Shen, Jen-Fu

    2011-04-01

    In this paper we study the potential application of a quantum search algorithm to spacecraft navigation with a focus on attitude determination. Traditionally, attitude determination is achieved by recognizing the relative position/attitude with respect to the background stars using sun sensors, earth limb sensors, or star trackers. However, due to the massive celestial database, star pattern recognition is a complicated and power consuming job. We propose a new method of attitude determination by applying the quantum search algorithm to the search for a specific star or star pattern. The quantum search algorithm, proposed by Grover in 1996, can find a specific item in an unstructured database of N entries in only O(√N) steps, compared to an average of N/2 steps on conventional computers. As a result, by taking advantage of matching a particular star in a vast celestial database in very few steps, we derive a new algorithm for attitude determination that combines Grover's search algorithm with star catalogues of apparent magnitude and absorption spectra. Numerical simulations and examples are also provided to demonstrate the feasibility and robustness of our new algorithm.
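
    The O(√N) behaviour the abstract relies on can be checked with a small classical state-vector simulation of the Grover iteration (oracle phase flip followed by inversion about the mean); the database size and marked index below are arbitrary assumptions.

        import numpy as np

        def grover_success_probability(n_items, marked, n_iters):
            """Classically simulate Grover iterations starting from the uniform superposition."""
            amp = np.full(n_items, 1.0 / np.sqrt(n_items))
            for _ in range(n_iters):
                amp[marked] *= -1.0                  # oracle: flip the marked amplitude
                amp = 2.0 * amp.mean() - amp         # diffusion: inversion about the mean
            return amp[marked] ** 2

        N = 1024
        iters = int(round(np.pi / 4 * np.sqrt(N)))   # about 25 iterations for N = 1024
        print(iters, grover_success_probability(N, marked=42, n_iters=iters))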

  18. Fuzzy ruling between core porosity and petrophysical logs: Subtractive clustering vs. genetic algorithm-pattern search

    NASA Astrophysics Data System (ADS)

    Bagheripour, Parisa; Asoodeh, Mojtaba

    2013-12-01

    Porosity, the void portion of reservoir rocks, determines the volume of hydrocarbon accumulation and exerts great control over the assessment and development of hydrocarbon reservoirs. Accurate determination of porosity from core analysis is highly costly, time consuming, and labor intensive. Therefore, finding an accurate, fast and cheap way of determining porosity is essential. On the other hand, conventional well log data, available in almost all wells, contain invaluable implicit information about porosity, and an intelligent system can explicate this information. Fuzzy logic is a powerful tool for handling geoscience problems, which are associated with uncertainty. However, determination of the best fuzzy formulation is still an issue. This study proposes an improved strategy, the hybrid genetic algorithm-pattern search (GA-PS) technique, against the widely used subtractive clustering (SC) method for setting up fuzzy rules between core porosity and petrophysical logs. The hybrid GA-PS technique is capable of extracting optimal parameters for fuzzy clusters (membership functions), which consequently results in the best fuzzy formulation. Results indicate that the GA-PS technique manipulates both the mean and variance of the Gaussian membership functions, in contrast to SC, which only controls the mean of the Gaussian membership functions. A comparison between the hybrid GA-PS technique and the SC method confirmed the superiority of the GA-PS technique in setting up fuzzy rules. The proposed strategy was successfully applied to one of the Iranian carbonate reservoir rocks.

  19. A hybrid monkey search algorithm for clustering analysis.

    PubMed

    Chen, Xin; Zhou, Yongquan; Luo, Qifang

    2014-01-01

    Clustering is a popular data analysis and data mining technique. The k-means clustering algorithm is one of the most commonly used methods. However, it depends strongly on the initial solution and easily falls into a local optimum. In view of these disadvantages of the k-means method, this paper proposes a hybrid monkey algorithm based on the search operator of the artificial bee colony algorithm for clustering analysis; experiments on synthetic and real-life datasets show that the algorithm performs better than the basic monkey algorithm for clustering analysis.

  20. Model Specification Searches Using Ant Colony Optimization Algorithms

    ERIC Educational Resources Information Center

    Marcoulides, George A.; Drezner, Zvi

    2003-01-01

    Ant colony optimization is a recently proposed heuristic procedure inspired by the behavior of real ants. This article applies the procedure to model specification searches in structural equation modeling and reports the results. The results demonstrate the capabilities of ant colony optimization algorithms for conducting automated searches.

  1. Detecting Outliers in Factor Analysis Using the Forward Search Algorithm

    ERIC Educational Resources Information Center

    Mavridis, Dimitris; Moustaki, Irini

    2008-01-01

    In this article we extend and implement the forward search algorithm for identifying atypical subjects/observations in factor analysis models. The forward search has been mainly developed for detecting aberrant observations in regression models (Atkinson, 1994) and in multivariate methods such as cluster and discriminant analysis (Atkinson, Riani,…

  2. A Functional Programming Approach to AI Search Algorithms

    ERIC Educational Resources Information Center

    Panovics, Janos

    2012-01-01

    The theory and practice of search algorithms related to state-space represented problems form the major part of the introductory course of Artificial Intelligence at most of the universities and colleges offering a degree in the area of computer science. Students usually meet these algorithms only in some imperative or object-oriented language…

  3. A Practical Stemming Algorithm for Online Search Assistance.

    ERIC Educational Resources Information Center

    Ulmschneider, John E.; Doszkocs, Tamas

    1983-01-01

    Describes a two-phase stemming algorithm which consists of word root identification and automatic selection of word variants starting with same word root from inverted file. Use of algorithm in book catalog file is discussed. Ten references and example of subject search are appended. (EJS)

  4. A New Approximate Chimera Donor Cell Search Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Nixon, David (Technical Monitor)

    1998-01-01

    The objectives of this study were to develop chimera-based full potential methodology which is compatible with overflow (Euler/Navier-Stokes) chimera flow solver and to develop a fast donor cell search algorithm that is compatible with the chimera full potential approach. Results of this work included presenting a new donor cell search algorithm suitable for use with a chimera-based full potential solver. This algorithm was found to be extremely fast and simple producing donor cells as fast as 60,000 per second.

  5. Combined string searching algorithm based on Knuth-Morris-Pratt and Boyer-Moore algorithms

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Chernigovskiy, A. S.; Tsareva, E. A.; Brezitskaya, V. V.; Nikiforov, A. Yu; Smirnov, N. A.

    2016-04-01

    The string searching task can be classified as a classic information processing task. Users either encounter the solution of this task while working with text processors or browsers, employing standard built-in tools, or this task is solved unseen by the users, while they are working with various computer programmes. Nowadays there are many algorithms for solving the string searching problem. The main criterion of these algorithms’ effectiveness is searching speed. The larger the shift of the pattern relative to the string in case of pattern and string characters’ mismatch is, the higher is the algorithm running speed. This article offers a combined algorithm, which has been developed on the basis of well-known Knuth-Morris-Pratt and Boyer-Moore string searching algorithms. These algorithms are based on two different basic principles of pattern matching. Knuth-Morris-Pratt algorithm is based upon forward pattern matching and Boyer-Moore is based upon backward pattern matching. Having united these two algorithms, the combined algorithm allows acquiring the larger shift in case of pattern and string characters’ mismatch. The article provides an example, which illustrates the results of Boyer-Moore and Knuth-Morris-Pratt algorithms and combined algorithm’s work and shows advantage of the latter in solving string searching problem.
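
    A simplified sketch of the combination idea: scan the pattern forward, and on a mismatch take the larger of a KMP-style shift (from the failure table) and a bad-character shift (from the last occurrence of the mismatched text character in the matched prefix). This is an illustrative reading of the approach, not the authors' exact procedure.

        def failure_function(pat):
            """KMP failure table: length of the longest proper prefix that is also a suffix."""
            fail, k = [0] * len(pat), 0
            for i in range(1, len(pat)):
                while k > 0 and pat[i] != pat[k]:
                    k = fail[k - 1]
                if pat[i] == pat[k]:
                    k += 1
                fail[i] = k
            return fail

        def combined_search(text, pat):
            """Return all match positions, shifting by the larger of the two safe shifts."""
            n, m = len(text), len(pat)
            fail, matches, s = failure_function(pat), [], 0
            while s <= n - m:
                j = 0
                while j < m and text[s + j] == pat[j]:
                    j += 1
                if j == m:
                    matches.append(s)
                    s += 1
                    continue
                kmp_shift = j - fail[j - 1] if j > 0 else 1
                last = pat.rfind(text[s + j], 0, j)        # bad-character rule on the mismatch
                bad_char_shift = j - last if last != -1 else j + 1
                s += max(kmp_shift, bad_char_shift, 1)
            return matches

        # combined_search("abacabadabacaba", "abad")  ->  [4]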

  6. An implementation of differential search algorithm (DSA) for inversion of surface wave data

    NASA Astrophysics Data System (ADS)

    Song, Xianhai; Li, Lei; Zhang, Xueqiang; Shi, Xinchun; Huang, Jianquan; Cai, Jianchao; Jin, Si; Ding, Jianping

    2014-12-01

    Surface wave dispersion analysis is widely used in geophysics to infer near-surface shear (S)-wave velocity profiles for a wide variety of applications. However, inversion of surface wave data is challenging for most local-search methods due to its high nonlinearity and to its multimodality. In this work, we proposed and implemented a new Rayleigh wave dispersion curve inversion scheme based on the differential search algorithm (DSA), one of the recently developed swarm intelligence-based algorithms. DSA is inspired by the seasonal migration behavior of living species throughout the year and is designed for solving highly nonlinear, multivariable, and multimodal optimization problems. The proposed inverse procedure is applied to nonlinear inversion of fundamental-mode Rayleigh wave dispersion curves for near-surface S-wave velocity profiles. To evaluate the calculation efficiency and stability of DSA, four noise-free and four noisy synthetic data sets are first inverted. Then, the performance of DSA is compared with that of genetic algorithms (GA) on two noise-free synthetic data sets. Finally, a real-world example from a waste disposal site in NE Italy is inverted to examine the applicability and robustness of the proposed approach on surface wave data. Furthermore, the performance of DSA is compared against that of GA on real data to further evaluate the inverse procedure described here. Simulation results from both synthetic and actual field data demonstrate that the differential search algorithm (DSA) applied to nonlinear inversion of surface wave data performs well not only in terms of accuracy but also in terms of convergence speed. The great advantages of DSA are that the algorithm is simple, robust and easy to implement. Also there are fewer control parameters to tune.

  7. Parameter identification using a creeping-random-search algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, R. V.

    1971-01-01

    A creeping-random-search algorithm is applied to different types of problems in the field of parameter identification. The studies are intended to demonstrate that a random-search algorithm can be applied successfully to these various problems, which often cannot be handled by conventional deterministic methods, and, also, to introduce methods that speed convergence to an extremal of the problem under investigation. Six two-parameter identification problems with analytic solutions are solved, and two application problems are discussed in some detail. Results of the study show that a modified version of the basic creeping-random-search algorithm chosen does speed convergence in comparison with the unmodified version. The results also show that the algorithm can successfully solve problems that contain limits on state or control variables, inequality constraints (both independent and dependent, and linear and nonlinear), or stochastic models.
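
    A bare-bones sketch of a creeping random search of the general kind described: the current estimate is perturbed by a random step, and the step size grows after successes and shrinks after failures. The step-adaptation constants and the least-squares objective in the usage line are assumptions.

        import random

        def creeping_random_search(f, x0, step=1.0, grow=1.2, shrink=0.9,
                                   min_step=1e-8, max_evals=10000):
            """Adaptive-step random search (illustrative sketch)."""
            x, fx, evals = list(x0), f(x0), 1
            while evals < max_evals and step > min_step:
                trial = [xi + random.gauss(0.0, step) for xi in x]
                ft = f(trial)
                evals += 1
                if ft < fx:
                    x, fx = trial, ft
                    step *= grow       # expand after a successful step
                else:
                    step *= shrink     # creep: contract after a failure
            return x, fx

        # Example two-parameter identification-style objective (assumed):
        # best, err = creeping_random_search(lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2, [0.0, 0.0])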

  8. Optimal fractional delay-IIR filter design using cuckoo search algorithm.

    PubMed

    Kumar, Manjeet; Rawat, Tarun Kumar

    2015-11-01

    This paper applies a novel global meta-heuristic optimization algorithm, the cuckoo search algorithm (CSA), to determine optimal coefficients of a fractional delay-infinite impulse response (FD-IIR) filter that meet the ideal frequency response characteristics. Since fractional delay-IIR filter design is a multi-modal optimization problem, it cannot be computed efficiently using conventional gradient based optimization techniques. A weighted least square (WLS) based fitness function is used to improve the performance to a great extent. FD-IIR filters of different orders have been designed using the CSA. The simulation results of the proposed CSA based approach have been compared to those of well accepted evolutionary algorithms like Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). The performance of the CSA based FD-IIR filter is superior to those obtained by GA and PSO. The simulation and statistical results affirm that the proposed approach using CSA outperforms GA and PSO, not only in the convergence rate but also in optimal performance of the designed FD-IIR filter (i.e., smaller magnitude error, smaller phase error, higher percentage improvement in magnitude and phase error, fast convergence rate). The absolute magnitude and phase error obtained for the designed 5th order FD-IIR filter are as low as 0.0037 and 0.0046, respectively. The percentage improvements in magnitude error for the CSA based 5th order FD-IIR design with respect to GA and PSO are 80.93% and 74.83%, respectively, and the improvements in phase error are 76.04% and 71.25%, respectively. PMID:26391486

  10. A parallelization of the row-searching algorithm

    NASA Astrophysics Data System (ADS)

    Yaici, Malika; Khaled, Hayet; Khaled, Zakia; Bentahar, Athmane

    2012-11-01

    The problem dealt with in this paper concerns the parallelization of the row-searching algorithm, which allows the search for linearly dependent rows in a given matrix, and its implementation in an MPI (Message Passing Interface) environment. This algorithm is largely used in control theory and more specifically in solving the famous diophantine equation. An introduction to the diophantine equation is presented, then two parallelization approaches of the algorithm are detailed. The first distributes a set of rows over processes (processors) and the second makes a per-block distribution. The sequential algorithm and its two parallel forms are implemented using MPI routines, then modelled using UML (Unified Modelling Language) and finally evaluated using algorithmic complexity.

  11. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
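
    The selection rule at the heart of the approach can be stated in a few lines: for each candidate experiment, compute the Shannon entropy of the outcomes predicted by the current set of probable models, and prefer the experiment with the largest entropy. The greedy scan below is a simplified stand-in for nested entropy sampling, and the predict(model, experiment) interface is an assumption.

        import math
        from collections import Counter

        def outcome_entropy(models, experiment, predict):
            """Shannon entropy of the outcomes the model set predicts for one experiment."""
            outcomes = [predict(m, experiment) for m in models]
            counts = Counter(outcomes)
            total = len(outcomes)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        def most_informative(models, candidate_experiments, predict):
            """Greedy stand-in for the nested-entropy search over candidate experiments."""
            return max(candidate_experiments,
                       key=lambda e: outcome_entropy(models, e, predict))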

  12. Modeling human cancer-related regulatory modules by GA-RNN hybrid algorithms

    PubMed Central

    Chiang, Jung-Hsien; Chao, Shih-Yi

    2007-01-01

    Background: Modeling cancer-related regulatory modules from gene expression profiling of cancer tissues is expected to contribute to our understanding of cancer biology as well as the development of new diagnostics and therapies. Several mathematical models have been used to explore transcriptional regulatory mechanisms in Saccharomyces cerevisiae. However, the control of feed-forward and feedback loops in transcriptional regulatory mechanisms has not been adequately resolved in Saccharomyces cerevisiae, nor in human cancer cells. Results: In this study, we introduce a Genetic Algorithm-Recurrent Neural Network (GA-RNN) hybrid method for finding feed-forward regulated genes, given some transcription factors, to construct cancer-related regulatory modules from human cancer microarray data. This hybrid approach focuses on the construction of various kinds of regulatory modules: the Recurrent Neural Network provides the capability of modeling feed-forward and feedback loops in regulatory modules, while the Genetic Algorithm provides the ability to search globally for commonly regulated genes. The approach uncovers new feed-forward connections in regulatory models through modified multi-layer RNN architectures. We also validate the approach by demonstrating that most of the connections in our cancer-related regulatory modules have been identified and verified in previously published biological literature. Conclusion: The major contribution of this approach is its account of the chain of influences acting upon a set of genes in sequence. In addition, this inverse modeling correctly identifies known oncogenes and their interacting genes in a purely data-driven way. PMID:17359522

  13. Fast search algorithms for computational protein design.

    PubMed

    Traoré, Seydou; Roberts, Kyle E; Allouche, David; Donald, Bruce R; André, Isabelle; Schiex, Thomas; Barbe, Sophie

    2016-05-01

    One of the main challenges in computational protein design (CPD) is the huge size of the protein sequence and conformational space that has to be computationally explored. Recently, we showed that state-of-the-art combinatorial optimization technologies based on Cost Function Network (CFN) processing allow speeding up provable rigid backbone protein design methods by several orders of magnitude. Building on this, we improved and injected CFN technology into the well-established CPD package Osprey to allow all Osprey CPD algorithms to benefit from associated speedups. Because Osprey fundamentally relies on the ability of A* to produce conformations in increasing order of energy, we defined new A* strategies combining CFN lower bounds with a new side-chain positioning-based branching scheme. Beyond the speedups obtained in the new A*-CFN combination, this novel branching scheme enables a much faster enumeration of suboptimal sequences, far beyond what is reachable without it. Together with the immediate and important speedups provided by CFN technology, these developments directly benefit all the algorithms that previously relied on the DEE/A* combination inside Osprey and make it possible to solve larger CPD problems with provable algorithms. PMID:26833706

  15. Private algorithms for the protected in social network search.

    PubMed

    Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory

    2016-01-26

    Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets.
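
    The "careful injection of noise in the prioritization" can be pictured with a toy helper that perturbs each candidate's priority score with Laplace noise before ranking. The scores, the noise scale and the use of an uncalibrated Laplace mechanism are assumptions made for illustration; this is not the paper's privacy-calibrated construction.

        import numpy as np

        def noisy_ranking(priorities, scale=1.0, seed=None):
            """Rank candidate nodes by Laplace-perturbed priority scores (illustrative only)."""
            rng = np.random.default_rng(seed)
            noisy = {node: score + rng.laplace(0.0, scale) for node, score in priorities.items()}
            return sorted(noisy, key=noisy.get, reverse=True)

        # Example with assumed scores: noisy_ranking({"a": 3.2, "b": 1.1, "c": 2.7}, scale=0.5)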

  16. Experimental implementation of Grover's search algorithm with neutral atom qubits

    NASA Astrophysics Data System (ADS)

    Sun, Yuan; Lichtman, Martin; Baker, Kevin; Saffman, Mark

    2016-05-01

    Grover's algorithm for searching an unsorted data base provides a provable speedup over the best possible classical search and is therefore a test bed for demonstrating the power of quantum computation. The algorithm has been demonstrated with NMR, trapped ion, photonic, and superconducting hardware, but only with two qubits encoding a four element database. We report on progress towards experimental demonstration of Grover's algorithm using two and three neutral atom qubits encoding a database with up to eight elements. Our approach uses a Rydberg blockade C_k NOT gate for efficient implementation of the Grover iterations. Quantum Monte Carlo simulations of the algorithm performance that account for gate errors and decoherence rates are compared with experimental results. Work supported by the IARPA MQCO program.

  17. Comparative evaluation of tandem MS search algorithms using a target-decoy search strategy.

    PubMed

    Balgley, Brian M; Laudeman, Tom; Yang, Li; Song, Tao; Lee, Cheng S

    2007-09-01

    Peptide identification of tandem mass spectra by a variety of available search algorithms forms the foundation for much of modern day mass spectrometry-based proteomics. Despite the critical importance of proper evaluation and interpretation of the results generated by these algorithms there is still little consistency in their application or understanding of their similarities and differences. A survey was conducted of four tandem mass spectrometry peptide identification search algorithms, including Mascot, Open Mass Spectrometry Search Algorithm, Sequest, and X! Tandem. The same input data, search parameters, and sequence library were used for the searches. Comparisons were based on commonly used scoring methodologies for each algorithm and on the results of a target-decoy approach to sequence library searching. The results indicated that there is little difference in the output of the algorithms so long as consistent scoring procedures are applied. The results showed that some commonly used scoring procedures may lead to excessive false discovery rates. Finally an alternative method for the determination of an optimal cutoff threshold is proposed.
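
    The core of the target-decoy strategy reduces to counting identifications above a score threshold in the target and decoy populations and estimating the false discovery rate as their ratio. The sketch below assumes a simple list of (score, is_decoy) pairs with higher scores being better.

        def target_decoy_fdr(psms, threshold):
            """Estimate FDR at a score threshold from combined target-decoy search results."""
            psms = list(psms)
            targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
            decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
            return (decoys / targets) if targets else 0.0

        def threshold_for_fdr(psms, max_fdr=0.01):
            """Scan thresholds from strict to permissive and keep the lowest one meeting max_fdr."""
            psms = list(psms)
            best = None
            for t in sorted({s for s, _ in psms}, reverse=True):
                if target_decoy_fdr(psms, t) <= max_fdr:
                    best = t
            return best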

  18. Adiabatic quantum algorithm for search engine ranking.

    PubMed

    Garnerone, Silvano; Zanardi, Paolo; Lidar, Daniel A

    2012-06-01

    We propose an adiabatic quantum algorithm for generating a quantum pure state encoding of the PageRank vector, the most widely used tool in ranking the relative importance of internet pages. We present extensive numerical simulations which provide evidence that this algorithm can prepare the quantum PageRank state in a time which, on average, scales polylogarithmically in the number of web pages. We argue that the main topological feature of the underlying web graph allowing for such a scaling is the out-degree distribution. The top-ranked log(n) entries of the quantum PageRank state can then be estimated with a polynomial quantum speed-up. Moreover, the quantum PageRank state can be used in "q-sampling" protocols for testing properties of distributions, which require exponentially fewer measurements than all classical schemes designed for the same task. This can be used to decide whether to run a classical update of the PageRank. PMID:23003933

  19. Subsurface biological activity zone detection using genetic search algorithms

    SciTech Connect

    Mahinthakumar, G.; Gwo, J.P.; Moline, G.R.; Webb, O.F.

    1999-12-01

    Use of genetic search algorithms for detection of subsurface biological activity zones (BAZ) is investigated through a series of hypothetical numerical biostimulation experiments. Continuous injection of dissolved oxygen and methane with periodically varying concentration stimulates the cometabolism of indigenous methanotrophic bacteria. The observed breakthroughs of methane are used to deduce possible BAZ in the subsurface. The numerical experiments are implemented in a parallel computing environment to make possible the large number of simultaneous transport simulations required by the algorithm. The results show that genetic algorithms are very efficient in locating multiple activity zones, provided the observed signals adequately sample the BAZ.

  20. Heterogeneous Ensemble Combination Search Using Genetic Algorithm for Class Imbalanced Data Classification.

    PubMed

    Haque, Mohammad Nazmul; Noman, Nasimul; Berretta, Regina; Moscato, Pablo

    2016-01-01

    Classification of datasets with imbalanced sample distributions has always been a challenge. In general, a popular approach for enhancing classification performance is the construction of an ensemble of classifiers. However, the performance of an ensemble is dependent on the choice of constituent base classifiers. Therefore, we propose a genetic algorithm-based search method for finding the optimum combination from a pool of base classifiers to form a heterogeneous ensemble. The algorithm, called GA-EoC, utilises 10-fold cross validation on training data for evaluating the quality of each candidate ensemble. In order to combine the base classifiers' decisions into the ensemble's output, we used the simple and widely used majority voting approach. The proposed algorithm, along with the random sub-sampling approach to balance the class distribution, has been used for classifying class-imbalanced datasets. Additionally, if a feature set was not available, we used the (α, β)-k Feature Set method to select a better subset of features for classification. We have tested GA-EoC with three benchmarking datasets from the UCI Machine Learning repository, one Alzheimer's disease dataset and a subset of the PubFig database of Columbia University. In general, the performance of the proposed method on the chosen datasets is robust and better than that of the constituent base classifiers and many other well-known ensembles. Based on our empirical study we claim that a genetic algorithm is a superior and reliable approach to heterogeneous ensemble construction and we expect that the proposed GA-EoC would perform consistently in other cases. PMID:26764911
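
    The two ingredients named above, a bitmask over the classifier pool and majority voting, can be sketched as follows; the scikit-learn-style predict interface and accuracy-based fitness are assumptions, not the exact GA-EoC evaluation.

        from collections import Counter

        def majority_vote(classifiers, mask, X):
            """Combine predictions of the selected base classifiers by simple majority voting."""
            selected = [clf for clf, keep in zip(classifiers, mask) if keep]
            predictions = [clf.predict(X) for clf in selected]     # one label sequence per classifier
            return [Counter(column).most_common(1)[0][0] for column in zip(*predictions)]

        def ensemble_fitness(mask, classifiers, X_val, y_val):
            """Fitness of one GA individual (a bitmask over the pool): validation accuracy."""
            if not any(mask):
                return 0.0
            y_pred = majority_vote(classifiers, mask, X_val)
            return sum(p == t for p, t in zip(y_pred, y_val)) / len(y_val)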

  1. Dynamical analysis of Grover's search algorithm in arbitrarily high-dimensional search spaces

    NASA Astrophysics Data System (ADS)

    Jin, Wenliang

    2016-01-01

    We discuss at length the dynamical behavior of Grover's search algorithm for which all the Walsh-Hadamard transformations contained in this algorithm are exposed to their respective random perturbations inducing the augmentation of the dimension of the search space. We give the concise and general mathematical formulations for approximately characterizing the maximum success probabilities of finding a unique desired state in a large unsorted database and their corresponding numbers of Grover iterations, which are applicable to the search spaces of arbitrary dimension and are used to answer a salient open problem posed by Grover (Phys Rev Lett 80:4329-4332, 1998).

  2. An ant colony algorithm on continuous searching space

    NASA Astrophysics Data System (ADS)

    Xie, Jing; Cai, Chao

    2015-12-01

    The ant colony algorithm is heuristic, bionic and parallel. Because of its positive feedback, parallelism and ease of cooperation with other methods, it is widely adopted for planning on discrete spaces, but it is still not well suited to planning on continuous spaces. After a basic introduction to the ant colony algorithm, we propose an ant colony algorithm on a continuous searching space. Our method makes use of the following three tricks. We search for the next nodes of the route using a fixed step to guarantee the continuity of the solution. When storing pheromone, it discretizes the pheromone field, clusters states and sums up the pheromone values of these states. When updating pheromone, it makes good solutions, as measured by relative score functions, leave more pheromone, so that the ant colony algorithm can find a sub-optimal solution in a shorter time. The simulated experiment shows that our ant colony algorithm can find a sub-optimal solution in a relatively short time.

  3. An enhanced dynamic hash TRIE algorithm for lexicon search

    NASA Astrophysics Data System (ADS)

    Yang, Lai; Xu, Lida; Shi, Zhongzhi

    2012-11-01

    Information retrieval (IR) is essential to enterprise systems as orders, customers and materials grow. In this article, an enhanced dynamic hash TRIE (eDH-TRIE) algorithm is proposed that can be used in a lexicon search in Chinese, Japanese and Korean (CJK) segmentation and in URL identification. In particular, the eDH-TRIE algorithm is suitable for Unicode retrieval. The Auto-Array algorithm and Hash-Array algorithm are proposed to handle the auxiliary memory allocation; the former changes its size on demand without redundant restructuring, and the latter replaces linked lists with arrays, saving the overhead of memory. Comparative experiments show that the Auto-Array algorithm and Hash-Array algorithm have better spatial performance; they can be used in a multitude of situations. The eDH-TRIE is evaluated for both speed and storage and compared with the naïve DH-TRIE algorithms. The experiments show that the eDH-TRIE algorithm performs better. These algorithms reduce memory overheads and speed up IR.
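
    A minimal dictionary-based trie conveys the basic lexicon-search structure that the eDH-TRIE refines with hashed and auto-sized arrays; this sketch is not the authors' data structure, and the end-of-word marker is an assumption.

        class Trie:
            """Minimal lexicon trie built from nested dicts (illustrative sketch)."""

            def __init__(self):
                self.root = {}

            def insert(self, word):
                node = self.root
                for ch in word:
                    node = node.setdefault(ch, {})
                node["$"] = True                       # end-of-word marker

            def contains(self, word):
                node = self.root
                for ch in word:
                    node = node.get(ch)
                    if node is None:
                        return False
                return "$" in node

            def longest_prefix(self, text, start=0):
                """Longest lexicon word beginning at text[start] -- the core segmentation query."""
                node, best = self.root, 0
                for i in range(start, len(text)):
                    node = node.get(text[i])
                    if node is None:
                        break
                    if "$" in node:
                        best = i - start + 1
                return text[start:start + best]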

  4. Computing gap free Pareto front approximations with stochastic search algorithms.

    PubMed

    Schütze, Oliver; Laumanns, Marco; Tantar, Emilia; Coello, Carlos A Coello; Talbi, El-Ghazali

    2010-01-01

    Recently, a convergence proof of stochastic search algorithms toward finite size Pareto set approximations of continuous multi-objective optimization problems has been given. The focus was on obtaining a finite approximation that captures the entire solution set in some suitable sense, which was defined by the concept of ε-dominance. Though bounds on the quality of the limit approximation (which are entirely determined by the archiving strategy and the value of ε) have been obtained, the strategies do not guarantee to obtain a gap free approximation of the Pareto front. That is, such approximations A can reveal gaps in the sense that points f in the Pareto front can exist such that the distance of f to any image point F(a), a ∈ A, is "large." Since such gap free approximations are desirable in certain applications, and the related archiving strategies can be advantageous when memetic strategies are included in the search process, we are aiming in this work for such methods. We present two novel strategies that accomplish this task in the probabilistic sense and under mild assumptions on the stochastic search algorithm. In addition to the convergence proofs, we give some numerical results to visualize the behavior of the different archiving strategies. Finally, we demonstrate the potential for a possible hybridization of a given stochastic search algorithm with a particular local search strategy (multi-objective continuation methods) by showing that the concept of ε-dominance can be integrated into this approach in a suitable way.
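
    The ε-dominance relation and the archive update it induces can be written down directly (the minimization convention and the value of ε below are assumptions); the paper's probabilistic gap-free strategies build on tests of this kind.

        def dominates(a, b):
            """Standard Pareto dominance between objective vectors (minimization)."""
            return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

        def eps_dominates(a, b, eps):
            """Additive epsilon-dominance: a is within eps of dominating b in every objective."""
            return all(ai - eps <= bi for ai, bi in zip(a, b))

        def update_archive(archive, candidate, eps=0.05):
            """Accept the candidate only if no archived point eps-dominates it; drop points it dominates."""
            if any(eps_dominates(a, candidate, eps) for a in archive):
                return archive
            return [a for a in archive if not dominates(candidate, a)] + [candidate]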

  5. Stochastic Leader Gravitational Search Algorithm for Enhanced Adaptive Beamforming Technique

    PubMed Central

    Darzi, Soodabeh; Islam, Mohammad Tariqul; Tiong, Sieh Kiong; Kibria, Salehin; Singh, Mandeep

    2015-01-01

    In this paper, stochastic leader gravitational search algorithm (SL-GSA) based on randomized k is proposed. Standard GSA (SGSA) utilizes the best agents without any randomization, thus it is more prone to converge at suboptimal results. Initially, the new approach randomly chooses k agents from the set of all agents to improve the global search ability. Gradually, the set of agents is reduced by eliminating the agents with the poorest performances to allow rapid convergence. The performance of the SL-GSA was analyzed for six well-known benchmark functions, and the results are compared with SGSA and some of its variants. Furthermore, the SL-GSA is applied to the minimum variance distortionless response (MVDR) beamforming technique to ensure compatibility with real world optimization problems. The proposed algorithm demonstrates superior convergence rate and quality of solution for both real world problems and benchmark functions compared to the original algorithm and other recent variants of SGSA. PMID:26552032

  6. Study of genetic direct search algorithms for function optimization

    NASA Technical Reports Server (NTRS)

    Zeigler, B. P.

    1974-01-01

    The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from lack of gradient exploitation facilities when gradient information can be utilized to guide the search. (2) For large populations, or low dimensional function spaces, mutation is a sufficient operator. However for small populations or high dimensional functions, crossover applied in about equal frequency with mutation is an optimum combination. (3) Complexity, in terms of storage space and running time, is significantly increased when population size is increased or the inversion operator, or the second level adaptation routine is added to the basic structure.

  7. Enhancing artificial bee colony algorithm with self-adaptive searching strategy and artificial immune network operators for global optimization.

    PubMed

    Chen, Tinggui; Xiao, Renbin

    2014-01-01

    Artificial bee colony (ABC) algorithm, inspired by the intelligent foraging behavior of honey bees, was proposed by Karaboga. It has been shown to be superior to some conventional intelligent algorithms such as genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). However, the ABC still has some limitations. For example, ABC can easily get trapped in a local optimum when handling functions that have a narrow curving valley or a highly eccentric ellipse, or complex multimodal functions. As a result, we propose an enhanced ABC algorithm called EABC by introducing a self-adaptive searching strategy and artificial immune network operators to improve the exploitation and exploration. The simulation results tested on a suite of unimodal or multimodal benchmark functions illustrate that the EABC algorithm outperforms ACO, PSO, and the basic ABC in most of the experiments. PMID:24772023

  9. PCB drill path optimization by combinatorial cuckoo search algorithm.

    PubMed

    Lim, Wei Chen Esmonde; Kanagaraj, G; Ponnambalam, S G

    2014-01-01

    Optimization of drill path can lead to significant reduction in machining time which directly improves productivity of manufacturing systems. In a batch production of a large number of items to be drilled such as printed circuit boards (PCB), the travel time of the drilling device is a significant portion of the overall manufacturing process. To increase PCB manufacturing productivity and to reduce production costs, a good option is to minimize the drill path route using an optimization algorithm. This paper reports a combinatorial cuckoo search algorithm for solving drill path optimization problem. The performance of the proposed algorithm is tested and verified with three case studies from the literature. The computational experience conducted in this research indicates that the proposed algorithm is capable of efficiently finding the optimal path for PCB holes drilling process. PMID:24707198

  12. Evolving the stimulus to fit the brain: a genetic algorithm reveals the brain's feature priorities in visual search.

    PubMed

    Van der Burg, Erik; Cass, John; Theeuwes, Jan; Alais, David

    2015-02-06

    How does the brain find objects in cluttered visual environments? For decades researchers have employed the classic visual search paradigm to answer this question using factorial designs. Although such approaches have yielded important information, they represent only a tiny fraction of the possible parametric space. Here we use a novel approach, by using a genetic algorithm (GA) to discover the way the brain solves visual search in complex environments, free from experimenter bias. Participants searched a series of complex displays, and those supporting fastest search were selected to reproduce (survival of the fittest). Their display properties (genes) were crossed and combined to create a new generation of "evolved" displays. Displays evolved quickly over generations towards a stable, efficiently searched array. Color properties evolved first, followed by orientation. The evolved displays also contained spatial patterns suggesting a coarse-to-fine search strategy. We argue that this behavioral performance-driven GA reveals the way the brain selects information during visual search in complex environments. We anticipate that our approach can be adapted to a variety of sensory and cognitive questions that have proven too intractable for factorial designs.

  13. Moon Search Algorithms for NASA's Dawn Mission to Asteroid Vesta

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mcfadden, Lucy A.; Skillman, David R.; McLean, Brian; Mutchler, Max; Carsenty, Uri; Palmer, Eric E.

    2012-01-01

    A moon or natural satellite is a celestial body that orbits a planetary body such as a planet, dwarf planet, or an asteroid. Scientists seek to understand the origin and evolution of our solar system by studying the moons of these bodies. Additionally, searches for satellites of planetary bodies can be important to protect the safety of a spacecraft as it approaches or orbits a planetary body. If a satellite of a celestial body is found, the mass of that body can also be calculated once its orbit is determined. Ensuring the Dawn spacecraft's safety on its mission to the asteroid Vesta primarily motivated the work of Dawn's Satellite Working Group (SWG) in the summer of 2011. Dawn mission scientists and engineers utilized various computational tools and techniques for Vesta's satellite search. The objectives of this paper are to 1) introduce the natural satellite search problem, 2) present the computational challenges, approaches, and tools used when addressing this problem, and 3) describe applications of various image processing and computational algorithms for performing satellite searches to the electronic imaging and computer science community. Furthermore, we hope that this communication will enable Dawn mission scientists to improve their satellite search algorithms and tools and be better prepared for performing the same investigation in 2015, when the spacecraft is scheduled to approach and orbit the dwarf planet Ceres.

  14. Generalized pattern search algorithms with adaptive precision function evaluations

    SciTech Connect

    Polak, Elijah; Wetter, Michael

    2003-05-14

    In the literature on generalized pattern search algorithms, convergence to a stationary point of a once continuously differentiable cost function is established under the assumption that the cost function can be evaluated exactly. However, there is a large class of engineering problems where the numerical evaluation of the cost function involves the solution of systems of differential algebraic equations. Since the termination criteria of the numerical solvers often depend on the design parameters, computer code for solving these systems usually defines a numerical approximation to the cost function that is discontinuous with respect to the design parameters. Standard generalized pattern search algorithms have been applied heuristically to such problems, but no convergence properties have been stated. In this paper we extend a class of generalized pattern search algorithms to a form that uses adaptive precision approximations to the cost function. These numerical approximations need not define a continuous function. Our algorithms can be used for solving linearly constrained problems with cost functions that are at least locally Lipschitz continuous. Assuming that the cost function is smooth, we prove that our algorithms converge to a stationary point. Under the weaker assumption that the cost function is only locally Lipschitz continuous, we show that our algorithms converge to points at which the Clarke generalized directional derivatives are nonnegative in predefined directions. An important feature of our adaptive precision scheme is the use of coarse approximations in the early iterations, with the approximation precision controlled by a test. Such an approach leads to substantial time savings in minimizing computationally expensive functions.

  15. Time Optimal Algorithms for Black Hole Search in Rings

    NASA Astrophysics Data System (ADS)

    Balamohan, Balasingham; Flocchini, Paola; Miri, Ali; Santoro, Nicola

    In network environments supporting mobile entities (called robots or agents), a black hole is a harmful site that destroys any incoming entity without leaving any visible trace. The black-hole search problem is the task of a team of k > 1 mobile entities, starting from the same safe location and executing the same algorithm, to determine within finite time the location of the black hole. In this paper we consider the black hole search problem in asynchronous ring networks of n nodes, and focus on the time complexity.

  16. Approximation of HRPITS results for SI GaAs by large scale support vector machine algorithms

    NASA Astrophysics Data System (ADS)

    Jankowski, Stanisław; Wojdan, Konrad; Szymański, Zbigniew; Kozłowski, Roman

    2006-10-01

    For the first time, large-scale support vector machine algorithms are used to extract defect parameters in semi-insulating (SI) GaAs from high resolution photoinduced transient spectroscopy experiments. By smart decomposition of the data set, the SVNTorch algorithm made it possible to obtain a good approximation of the analyzed correlation surface with a parsimonious model (a small number of support vectors). The parameters of deep level defect centers extracted from the SVM approximation are of good quality compared to the reference data.

  17. The royal road for genetic algorithms: Fitness landscapes and GA performance

    SciTech Connect

    Mitchell, M.; Holland, J.H. ); Forrest, S. . Dept. of Computer Science)

    1991-01-01

    Genetic algorithms (GAs) play a major role in many artificial-life systems, but there is often little detailed understanding of why the GA performs as it does, and little theoretical basis on which to characterize the types of fitness landscapes that lead to successful GA performance. In this paper we propose a strategy for addressing these issues. Our strategy consists of defining a set of features of fitness landscapes that are particularly relevant to the GA, and experimentally studying how various configurations of these features affect the GA's performance along a number of dimensions. In this paper we informally describe an initial set of proposed feature classes, describe in detail one such class ("Royal Road" functions), and present some initial experimental results concerning the role of crossover and "building blocks" on landscapes constructed from features of this class. 27 refs., 1 fig., 5 tabs.
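
    A Royal Road function of the kind described can be written in a few lines: the bit string is divided into contiguous blocks, and only blocks that are entirely ones contribute to fitness. The block size and equal weighting follow the usual R1-style definition and are assumptions here; the string length is taken to be a multiple of the block size.

        def royal_road(bits, block_size=8):
            """R1-style Royal Road fitness: each all-ones block contributes block_size."""
            return sum(block_size
                       for i in range(0, len(bits), block_size)
                       if all(bits[i:i + block_size]))

        # royal_road([1] * 8 + [0] * 8 + [1] * 8)  ->  16  (two complete building blocks)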

  18. DeMAID/GA USER'S GUIDE Design Manager's Aid for Intelligent Decomposition with a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1996-01-01

    Many companies are looking for new tools and techniques to aid a design manager in making decisions that can reduce the time and cost of a design cycle. One tool that is available to aid in this decision making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). Since the initial release of DeMAID in 1989, numerous enhancements have been added to aid the design manager in saving both cost and time in a design cycle. The key enhancement is a genetic algorithm (GA) and the enhanced version is called DeMAID/GA. The GA orders the sequence of design processes to minimize the cost and time to converge to a solution. These enhancements as well as the existing features of the original version of DeMAID are described. Two sample problems are used to show how these enhancements can be applied to improve the design cycle. This report serves as a user's guide for DeMAID/GA.

  19. Evolutionary pattern search algorithms for unconstrained and linearly constrained optimization

    SciTech Connect

    HART,WILLIAM E.

    2000-06-01

    The authors describe a convergence theory for evolutionary pattern search algorithms (EPSAs) on a broad class of unconstrained and linearly constrained problems. EPSAs adaptively modify the step size of the mutation operator in response to the success of previous optimization steps. The design of EPSAs is inspired by recent analyses of pattern search methods. The analysis significantly extends the previous convergence theory for EPSAs. The analysis applies to a broader class of EPSAs, and it applies to problems that are nonsmooth, have unbounded objective functions, and which are linearly constrained. Further, they describe a modest change to the algorithmic framework of EPSAs for which a non-probabilistic convergence theory applies. These analyses are also noteworthy because they are considerably simpler than previous analyses of EPSAs.

  20. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    NASA Astrophysics Data System (ADS)

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
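
    As a rough illustration of the kind of hybrid described above, the sketch below combines Lévy-flight cuckoo moves with a simulated-annealing acceptance rule and an adaptive step-size parameter. It is a generic toy implementation on a sphere function, not the authors' parameter-estimation procedure for chaotic systems, and all constants (population size, cooling rate, bounds) are assumptions.

    import numpy as np
    from math import gamma, sin, pi

    def levy_step(dim, rng, beta=1.5):
        # Mantegna's algorithm for Levy-distributed step lengths
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_sa(cost, dim, n_nests=15, n_iter=200, pa=0.25, T0=1.0, seed=0):
        rng = np.random.default_rng(seed)
        nests = rng.uniform(-5.0, 5.0, (n_nests, dim))
        fit = np.array([cost(x) for x in nests])
        best = nests[fit.argmin()].copy()
        T = T0
        for it in range(n_iter):
            alpha = 0.5 * (1.0 - it / n_iter)               # adaptive step-size parameter
            for i in range(n_nests):
                trial = nests[i] + alpha * levy_step(dim, rng) * (nests[i] - best)
                ft = cost(trial)
                # simulated-annealing acceptance: take improvements, sometimes accept worse
                if ft < fit[i] or rng.random() < np.exp((fit[i] - ft) / max(T, 1e-12)):
                    nests[i], fit[i] = trial, ft
            worst = fit.argsort()[-max(1, int(pa * n_nests)):]   # abandon the worst nests
            nests[worst] = rng.uniform(-5.0, 5.0, (len(worst), dim))
            fit[worst] = [cost(x) for x in nests[worst]]
            best = nests[fit.argmin()].copy()
            T *= 0.95                                        # cooling schedule
        return best, float(fit.min())

    print(cuckoo_sa(lambda x: float(np.sum(x ** 2)), dim=3))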

  1. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm.

    PubMed

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  2. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm.

    PubMed

    Sheng, Zheng; Wang, Jun; Zhou, Shudao; Zhou, Bihua

    2014-03-01

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm. PMID:24697395

  3. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    SciTech Connect

    Sheng, Zheng; Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, the proposed adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation into the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately in both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.

  4. Private algorithms for the protected in social network search.

    PubMed

    Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory

    2016-01-26

    Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets. PMID:26755606

  5. Real-time algorithm for robust coincidence search

    SciTech Connect

    Petrovic, T.; Vencelj, M.; Lipoglavsek, M.; Gajevic, J.; Pelicon, P.

    2012-10-20

    In in-beam γ-ray spectroscopy experiments, we often look for coincident detection events. Among every N events detected, coincidence search is naively of complexity O(N^2). When we limit the approximate width of the coincidence search window, the complexity can be reduced to O(N), permitting the implementation of the algorithm in real-time measurements, carried out indefinitely. We have built an algorithm to find simultaneous events between two detection channels. The algorithm was tested in an experiment where coincidences between X and γ rays detected in two HPGe detectors were observed in the decay of ^61Cu. Functioning of the algorithm was validated by comparing the calculated experimental branching ratio for EC decay with the theoretical calculation for 3 selected γ-ray energies of the ^61Cu decay. Our research opened a question on the validity of the adopted value of the total angular momentum of the 656 keV state (J^π = 1/2^-) in ^61Ni.
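
    The key point of the abstract, reducing the coincidence search from O(N^2) to roughly O(N) by bounding the search window, can be sketched with a simple two-pointer sweep over time-sorted event lists. This is a generic illustration, not the authors' real-time implementation; the function name and toy timestamps are made up.

    def find_coincidences(times_a, times_b, window):
        """Return index pairs (i, j) with |times_a[i] - times_b[j]| <= window.
        Both channels are assumed time-sorted, so one forward sweep suffices and the
        search is effectively O(N) when the window is narrow compared to event spacing."""
        pairs = []
        j_start = 0
        for i, ta in enumerate(times_a):
            # advance the lower edge of the window in channel B
            while j_start < len(times_b) and times_b[j_start] < ta - window:
                j_start += 1
            j = j_start
            while j < len(times_b) and times_b[j] <= ta + window:
                pairs.append((i, j))
                j += 1
        return pairs

    # toy example: two detector channels with timestamps in microseconds
    print(find_coincidences([1.0, 5.0, 9.2], [0.9, 5.3, 7.0], window=0.5))   # [(0, 0), (1, 1)]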

  6. Private algorithms for the protected in social network search

    PubMed Central

    Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory

    2016-01-01

    Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets. PMID:26755606

  7. Quantum discord and entanglement in Grover search algorithm

    NASA Astrophysics Data System (ADS)

    Ye, Bin; Zhang, Tingzhong; Qiu, Liang; Wang, Xuesong

    2016-06-01

    Imperfections and noise in realistic quantum computers may seriously affect the accuracy of quantum algorithms. In this article we explore the impact of static imperfections on quantum entanglement as well as non-entangled quantum correlations in Grover's search algorithm. Using the metrics of concurrence and geometric quantum discord, we show that both the evolution of entanglement and quantum discord in Grover algorithm can be restrained with the increasing strength of static imperfections. For very weak imperfections, the quantum entanglement and discord exhibit periodic behavior, while the periodicity will most certainly be destroyed with stronger imperfections. Moreover, entanglement sudden death may occur when the strength of static imperfections is greater than a certain threshold.
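
    For reference, the ideal (imperfection-free) Grover iteration that the study perturbs can be written in a few lines of linear algebra: a phase-flip oracle followed by inversion about the mean. The sketch below, using plain NumPy state vectors, is only the textbook algorithm; the static imperfections, concurrence, and discord calculations discussed in the paper are not modeled.

    import numpy as np

    def grover_search(n_qubits, marked, n_iters=None):
        """Ideal Grover iteration on a 2**n_qubits state vector; `marked` lists the
        marked basis-state indices. Static imperfections are not modeled here."""
        N = 2 ** n_qubits
        if n_iters is None:
            n_iters = int(np.floor(np.pi / 4 * np.sqrt(N / len(marked))))
        psi = np.full(N, 1.0 / np.sqrt(N))     # uniform superposition
        oracle = np.ones(N)
        oracle[marked] = -1.0                  # phase flip on marked states
        for _ in range(n_iters):
            psi = oracle * psi                 # oracle call
            psi = 2.0 * psi.mean() - psi       # inversion about the mean (diffusion)
        return np.abs(psi) ** 2                # measurement probabilities

    probs = grover_search(4, marked=[11])
    print(probs[11], probs.sum())              # success probability close to 1, norm preserved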

  8. A search algorithm for quantum state engineering and metrology

    NASA Astrophysics Data System (ADS)

    Knott, P. A.

    2016-07-01

    In this paper we present a search algorithm that finds useful optical quantum states which can be created with current technology. We apply the algorithm to the field of quantum metrology with the goal of finding states that can measure a phase shift to a high precision. Our algorithm efficiently produces a number of novel solutions: we find experimentally ready schemes to produce states that show significant improvements over the state-of-the-art, and can measure with a precision that beats the shot noise limit by over a factor of 4. Furthermore, these states demonstrate a robustness to moderate/high photon losses, and we present a conceptually simple measurement scheme that saturates the Cramér-Rao bound.

  9. A search algorithm for quantum state engineering and metrology

    NASA Astrophysics Data System (ADS)

    Knott, P. A.

    2016-07-01

    In this paper we present a search algorithm that finds useful optical quantum states which can be created with current technology. We apply the algorithm to the field of quantum metrology with the goal of finding states that can measure a phase shift to a high precision. Our algorithm efficiently produces a number of novel solutions: we find experimentally ready schemes to produce states that show significant improvements over the state-of-the-art, and can measure with a precision that beats the shot noise limit by over a factor of 4. Furthermore, these states demonstrate a robustness to moderate/high photon losses, and we present a conceptually simple measurement scheme that saturates the Cramér–Rao bound.

  10. Cooperative mobile agents search using beehive partitioned structure and Tabu Random search algorithm

    NASA Astrophysics Data System (ADS)

    Ramazani, Saba; Jackson, Delvin L.; Selmic, Rastko R.

    2013-05-01

    In search and surveillance operations, deploying a team of mobile agents provides a robust solution with multiple advantages over a single agent in terms of efficiency and exploration time. This paper addresses the challenge of identifying a target in a given environment when using a team of mobile agents by proposing a novel method of mapping and movement of agent teams in a cooperative manner. The approach consists of two parts. First, the region is partitioned into a hexagonal beehive structure in order to provide equidistant movements in every direction and to allow for more natural and flexible environment mapping. Additionally, in search environments that are partitioned into hexagons, mobile agents have an efficient travel path while performing searches due to this partitioning approach. Second, we use a team of mobile agents that move in a cooperative manner and utilize the Tabu Random algorithm to search for the target. Due to the ever-increasing use of robotics and Unmanned Aerial Vehicle (UAV) platforms, the field of cooperative multi-agent search has recently developed many applications that would benefit from the approach presented in this work, including search and rescue operations, surveillance, data collection, and border patrol. In this paper, the increased efficiency of the Tabu Random Search algorithm in combination with hexagonal partitioning is simulated and analyzed, and the advantages of this approach are presented and discussed.
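
    A single-agent sketch of the tabu random walk idea over a hexagonal partition is given below: each step moves to a randomly chosen neighboring cell that is not on a bounded tabu list, so recently searched cells are avoided. The cooperative multi-agent coordination described in the paper is not modeled, and the grid bounds and tabu tenure are arbitrary assumptions.

    import random

    # the six neighbor offsets of a hexagonal cell in axial coordinates (q, r)
    HEX_NEIGHBORS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

    def tabu_random_search(start, target, grid_radius=10, tabu_size=50,
                           max_steps=2000, seed=0):
        """Single-agent tabu random walk over a hexagonal partition: each step moves to
        a random neighboring cell that is not on the bounded tabu list."""
        rng = random.Random(seed)
        pos, tabu, path = start, [start], [start]
        for _ in range(max_steps):
            if pos == target:
                break
            nbrs = [(pos[0] + dq, pos[1] + dr) for dq, dr in HEX_NEIGHBORS
                    if abs(pos[0] + dq) <= grid_radius and abs(pos[1] + dr) <= grid_radius]
            candidates = [c for c in nbrs if c not in tabu] or nbrs   # fall back if all tabu
            pos = rng.choice(candidates)
            tabu = (tabu + [pos])[-tabu_size:]     # bounded tabu tenure
            path.append(pos)
        return path

    path = tabu_random_search(start=(0, 0), target=(4, -2))
    print(len(path), path[-1])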

  11. Stride search: A general algorithm for storm detection in high resolution climate data

    DOE PAGES

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  12. Stride search: A general algorithm for storm detection in high resolution climate data

    SciTech Connect

    Bosler, Peter Andrew; Roesler, Erika Louise; Taylor, Mark A.; Mundt, Miranda

    2015-09-08

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared. The commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. Stride Search is designed to work at all latitudes, while grid point searches may fail in polar regions. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. The time required for both algorithms to search the same data set is compared. Furthermore, Stride Search's ability to search extreme latitudes is demonstrated for the case of polar low detection.

  13. Application of GA, PSO, and ACO algorithms to path planning of autonomous underwater vehicles

    NASA Astrophysics Data System (ADS)

    Aghababa, Mohammad Pourmahmood; Amrollahi, Mohammad Hossein; Borjkhani, Mehdi

    2012-09-01

    In this paper, an underwater vehicle was modeled with six-dimensional nonlinear equations of motion, controlled by DC motors in all degrees of freedom. Near-optimal trajectories in an energetic environment for underwater vehicles were computed using a numerical solution of a nonlinear optimal control problem (NOCP). An energy performance index as a cost function, which should be minimized, was defined. The resulting problem was a two-point boundary value problem (TPBVP). A genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) algorithms were applied to solve the resulting TPBVP. Applying an Euler-Lagrange equation to the NOCP, a conjugate gradient penalty method was also adopted to solve the TPBVP. The problem of energetic environments, involving some energy sources, was discussed. Some near-optimal paths were found using the GA, PSO, and ACO algorithms. Finally, the problem of collision avoidance in an energetic environment was also taken into account.

  14. SPLICER - A GENETIC ALGORITHM TOOL FOR SEARCH AND OPTIMIZATION, VERSION 1.0 (MACINTOSH VERSION)

    NASA Technical Reports Server (NTRS)

    Wang, L.

    1994-01-01

    SPLICER is a genetic algorithm tool which can be used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e. problem solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." SPLICER provides the underlying framework and structure for building a genetic algorithm application. These algorithms apply genetically-inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or near-optimal solution to the problem at hand. SPLICER 1.0 was created using a modular architecture that includes a Genetic Algorithm Kernel, interchangeable Representation Libraries, Fitness Modules and User Interface Libraries, and well-defined interfaces between these components. The architecture supports portability, flexibility, and extensibility. SPLICER comes with all source code and several examples. For instance, a "traveling salesperson" example searches for the minimum distance through a number of cities visiting each city only once. Stand-alone SPLICER applications can be used without any programming knowledge. However, to fully utilize SPLICER within new problem domains, familiarity with C language programming is essential. SPLICER's genetic algorithm (GA) kernel was developed independent of representation (i.e. problem encoding), fitness function or user interface type. The GA kernel comprises all functions necessary for the manipulation of populations. These functions include the creation of populations and population members, the iterative population model, fitness scaling, parent selection and sampling, and the generation of population statistics. In addition, miscellaneous functions are included in the kernel (e.g., random number generators). Different problem-encoding schemes and functions are defined and stored in interchangeable representation libraries. This allows the GA kernel to be used with any
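
    The kind of representation-independent GA kernel the abstract describes (population creation, fitness evaluation, parent selection, crossover, mutation, iteration) can be sketched compactly. The code below is a generic bit-string GA on a toy OneMax fitness, not SPLICER's C implementation; all operator choices and rates are illustrative assumptions.

    import random

    def run_ga(fitness, genome_len, pop_size=50, generations=100,
               crossover_rate=0.8, mutation_rate=0.01, seed=42):
        """Minimal bit-string GA kernel: create a population, score it, select parents
        by tournament, apply one-point crossover and bit-flip mutation, and iterate."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            scores = [fitness(ind) for ind in pop]

            def select():                               # tournament parent selection
                i, j = rng.randrange(pop_size), rng.randrange(pop_size)
                return pop[i] if scores[i] >= scores[j] else pop[j]

            new_pop = []
            while len(new_pop) < pop_size:
                p1, p2 = select(), select()
                if rng.random() < crossover_rate:       # one-point crossover
                    cut = rng.randrange(1, genome_len)
                    child = p1[:cut] + p2[cut:]
                else:
                    child = p1[:]
                child = [1 - g if rng.random() < mutation_rate else g for g in child]
                new_pop.append(child)
            pop = new_pop
        scores = [fitness(ind) for ind in pop]
        return max(zip(scores, pop))

    # toy fitness: maximize the number of ones ("OneMax")
    best_score, best_genome = run_ga(fitness=sum, genome_len=30)
    print(best_score)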

  15. KMeans greedy search hybrid algorithm for biclustering gene expression data.

    PubMed

    Das, Shyama; Idicula, Sumam Mary

    2010-01-01

    Microarray technology demands the development of algorithms capable of extracting novel and useful patterns like biclusters. A bicluster is a submatrix of the gene expression datamatrix such that the genes show highly correlated activities across all conditions in the submatrix. A measure called Mean Squared Residue (MSR) is used to evaluate the coherence of rows and columns within the submatrix. In this paper, the KMeans greedy search hybrid algorithm is developed for finding biclusters from the gene expression data. This algorithm has two steps. In the first step, high quality bicluster seeds are generated using the KMeans clustering algorithm. In the second step, these seeds are enlarged by adding more genes and conditions using the greedy strategy. Here, the objective is to find the biclusters with maximum size and an MSR value lower than a given threshold. The biclusters obtained from this algorithm on both benchmark datasets are of high quality. The statistical significance and biological relevance of the biclusters are verified using the gene ontology database.
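
    The two quantities at the heart of this approach, the Mean Squared Residue of a submatrix and the greedy enlargement of a seed bicluster under an MSR threshold, can be sketched as follows. This is a simplified illustration on synthetic data, not the authors' KMeans-seeded implementation; the threshold and the planted pattern are arbitrary assumptions.

    import numpy as np

    def msr(data, rows, cols):
        """Mean Squared Residue of the submatrix given by row and column index lists."""
        sub = data[np.ix_(rows, cols)]
        residue = (sub - sub.mean(axis=1, keepdims=True)
                   - sub.mean(axis=0, keepdims=True) + sub.mean())
        return float((residue ** 2).mean())

    def greedy_grow(data, seed_rows, seed_cols, msr_threshold):
        """Greedy enlargement: keep adding the row or column that keeps the bicluster's
        MSR lowest, as long as the MSR stays below the threshold."""
        rows, cols = list(seed_rows), list(seed_cols)
        improved = True
        while improved:
            improved = False
            for axis in (0, 1):
                current = rows if axis == 0 else cols
                candidates = [i for i in range(data.shape[axis]) if i not in current]
                if not candidates:
                    continue
                best = min(candidates, key=lambda i: msr(data,
                           rows + [i] if axis == 0 else rows,
                           cols + [i] if axis == 1 else cols))
                new_rows = rows + [best] if axis == 0 else rows
                new_cols = cols + [best] if axis == 1 else cols
                if msr(data, new_rows, new_cols) <= msr_threshold:
                    rows, cols, improved = new_rows, new_cols, True
        return sorted(rows), sorted(cols)

    rng = np.random.default_rng(1)
    expr = rng.normal(scale=2.0, size=(20, 10))                  # incoherent background
    expr[:6, :4] = (np.arange(4) + np.arange(6)[:, None]
                    + 0.05 * rng.standard_normal((6, 4)))        # planted additive bicluster
    print(greedy_grow(expr, seed_rows=[0, 1], seed_cols=[0, 1], msr_threshold=0.1))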

  16. 3D magnetic sources' framework estimation using Genetic Algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Ponte-Neto, C. F.; Barbosa, V. C.

    2008-05-01

    We present a method for inverting total-field anomaly to determine the framework of simple 3D magnetic sources such as batholiths, dikes, sills, geological contacts, kimberlite and lamproite pipes. We use a GA to obtain the magnetic sources' frameworks and their magnetic features simultaneously. Specifically, we estimate the magnetization direction (inclination and declination) and the total dipole moment intensity, and the horizontal and vertical positions, in Cartesian coordinates, of a finite set of elementary magnetic dipoles. The spatial distribution of these magnetic dipoles composes the skeletal outlines of the geologic sources. We assume that the geologic sources have a homogeneous magnetization distribution, and thus all dipoles have the same magnetization direction and dipole moment intensity. To implement the GA, we use real-valued encoding with crossover, mutation, and elitism. To obtain a unique and stable solution, we set upper and lower bounds on declination and inclination of [0, 360°] and [-90°, 90°], respectively. We also set the criterion of minimum scattering of the dipole-position coordinates, to guarantee that the spatial distribution of the dipoles (defining the source skeleton) is as close as possible to a continuous distribution. To this end, we fix the upper and lower bounds of the dipole moment intensity and we evaluate the dipole-position estimates. If the dipole scattering is greater than a value expected by the interpreter, the upper bound of the dipole moment intensity is reduced by 10%. We repeat this procedure until the dipole scattering and the data fitting are acceptable. We apply our method to noise-corrupted magnetic data from simulated 3D magnetic sources with simple geometries and located at different depths. In tests simulating sources such as a sphere and a cube, all estimates of the dipole coordinates agree with the center of mass of these sources. For elongated prismatic sources in an arbitrary direction, we estimate

  17. Algorithm for Rapid Searching Among Star-Catalog Entries

    NASA Technical Reports Server (NTRS)

    Liebe, Carl Christian

    2006-01-01

    An algorithm searches a star catalog to identify guide stars within the field of view of a telescope or camera. The algorithm is fast: the number of computations needed to perform the search is approximately proportional to the logarithm of the number of stars in the catalog. The algorithm requires the prior organization of the star catalog into a hierarchy utilizing independent spherical coverings (see figure), such that each successively higher level contains fewer elements. In the lowest and most numerous level of the hierarchy, the elements are individual stars in the star catalog. The next higher level contains a spherical covering (a constellation of n points on a sphere that minimizes the maximum distance of any point on the sphere from the closest one of the n points), the next higher level contains a smaller spherical covering, and so forth, ending at the highest level, which contains one element representing the point of entry into the search structure. With necessary exceptions at the lowest and highest levels, each element at each level is labeled in terms of the element to which it is linked in the next higher level and the first element to which it is linked in the next lower level. Each element is also labeled in terms of (1) its coordinates on the celestial sphere and (2) the largest angular distance to any element in any lower level in the hierarchy. The elements at all levels of the hierarchy are numbered on a single list, such that the elements of each constellation at each level are numbered consecutively. The algorithm is recursive. The input required to start the algorithm comprises the coordinates of a point on the celestial sphere. Attention is then focused on individual elements of the hierarchy, starting from the topmost one, as follows: The angle between the input point and the element under consideration is calculated. If the calculated angle is larger than the sum of (1) the predetermined angle to the most distant element plus (2) the

  18. Protein structure prediction with local adjust tabu search algorithm

    PubMed Central

    2014-01-01

    Background Protein folding structure prediction is one of the most challenging problems in the bioinformatics domain. Because of the complexity of realistic protein structures, simplified structure models and computational methods must be adopted in this research. The AB off-lattice model is one such simplified model, which considers only two classes of amino acids: hydrophobic (A) residues and hydrophilic (B) residues. Results The main work of this paper is to discuss how to optimize the lowest-energy configurations in the 2D and 3D off-lattice models using Fibonacci sequences and real protein sequences. In order to avoid falling into local minima and to converge faster to the global minimum, we introduce a novel method (SATS) for the protein structure problem, which combines the simulated annealing algorithm and the tabu search algorithm. Various strategies, such as a new encoding strategy, an adaptive neighborhood generation strategy and a local adjustment strategy, are successfully adopted for high-speed searching of the optimal conformation corresponding to the lowest energy of the protein sequences. Experimental results show that some of the results obtained by the improved SATS are better than those reported in the previous literature, and we can be confident that the lowest-energy folding states for short Fibonacci sequences have been found. Conclusions Although off-lattice models are not very realistic, they can reflect some important characteristics of real proteins. The 3D off-lattice model is closer to the native folding structure of a real protein than the 2D off-lattice model. In addition, compared with some previous research, the proposed hybrid algorithm can search the spatial folding structure of a protein chain more effectively and more quickly. PMID:25474708

  19. A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.

    PubMed

    Ali, Ahmed F; Tawhid, Mohamed A

    2016-01-01

    The cuckoo search algorithm is a promising population-based metaheuristic method. It has been applied to solve many real-life problems. In this paper, we propose a new cuckoo search algorithm by combining the cuckoo search algorithm with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying the standard cuckoo search for a number of iterations; the best obtained solution is then passed to the Nelder-Mead algorithm as an intensification process in order to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. The proposed algorithm balances the global exploration of the cuckoo search algorithm with the deep exploitation of the Nelder-Mead method. We test the HCSNM algorithm on seven integer programming problems and ten minimax problems, and compare it against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time. PMID:27217988

  20. A hybrid cuckoo search algorithm with Nelder Mead method for solving global optimization problems.

    PubMed

    Ali, Ahmed F; Tawhid, Mohamed A

    2016-01-01

    The cuckoo search algorithm is a promising population-based metaheuristic method. It has been applied to solve many real-life problems. In this paper, we propose a new cuckoo search algorithm by combining the cuckoo search algorithm with the Nelder-Mead method in order to solve integer and minimax optimization problems. We call the proposed algorithm the hybrid cuckoo search and Nelder-Mead method (HCSNM). HCSNM starts the search by applying the standard cuckoo search for a number of iterations; the best obtained solution is then passed to the Nelder-Mead algorithm as an intensification process in order to accelerate the search and overcome the slow convergence of the standard cuckoo search algorithm. The proposed algorithm balances the global exploration of the cuckoo search algorithm with the deep exploitation of the Nelder-Mead method. We test the HCSNM algorithm on seven integer programming problems and ten minimax problems, and compare it against eight algorithms for solving integer programming problems and seven algorithms for solving minimax problems. The experimental results show the efficiency of the proposed algorithm and its ability to solve integer and minimax optimization problems in reasonable time.
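
    A compact sketch of the two-phase idea, global cuckoo-style exploration followed by Nelder-Mead intensification, is shown below. It assumes SciPy is available for the simplex step, substitutes a Gaussian random walk for the Lévy flights to keep the code short, and is tuned only for a toy Rosenbrock problem, so it should be read as an outline of HCSNM's structure rather than the authors' implementation.

    import numpy as np
    from scipy.optimize import minimize

    def hcsnm_like(cost, dim, n_nests=15, cs_iters=100, pa=0.25, seed=0):
        """Two-phase sketch: a cuckoo-style global search (a Gaussian walk stands in
        for Levy flights), then Nelder-Mead refinement of the best nest found."""
        rng = np.random.default_rng(seed)
        nests = rng.uniform(-10.0, 10.0, (n_nests, dim))
        fit = np.array([cost(x) for x in nests])
        for _ in range(cs_iters):
            best = nests[fit.argmin()]
            for i in range(n_nests):
                trial = nests[i] + 0.1 * rng.standard_normal(dim) * (nests[i] - best)
                ft = cost(trial)
                if ft < fit[i]:
                    nests[i], fit[i] = trial, ft
            worst = fit.argsort()[-max(1, int(pa * n_nests)):]   # abandon the worst nests
            nests[worst] = rng.uniform(-10.0, 10.0, (len(worst), dim))
            fit[worst] = [cost(x) for x in nests[worst]]
        # intensification phase: local simplex refinement of the best global solution
        res = minimize(cost, nests[fit.argmin()], method="Nelder-Mead")
        return res.x, res.fun

    rosenbrock = lambda x: float((1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2)
    print(hcsnm_like(rosenbrock, dim=2))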

  1. DFT algorithms for bit-serial GaAs array processor architectures

    NASA Technical Reports Server (NTRS)

    Mcmillan, Gary B.

    1988-01-01

    Systems and Processes Engineering Corporation (SPEC) has developed an innovative array processor architecture for computing Fourier transforms and other commonly used signal processing algorithms. This architecture is designed to extract the highest possible array performance from state-of-the-art GaAs technology. SPEC's architectural design includes a high performance RISC processor implemented in GaAs, along with a Floating Point Coprocessor and a unique Array Communications Coprocessor, also implemented in GaAs technology. Together, these data processors represent the latest in technology, both from an architectural and implementation viewpoint. SPEC has examined numerous algorithms and parallel processing architectures to determine the optimum array processor architecture. SPEC has developed an array processor architecture with integral communications ability to provide maximum node connectivity. The Array Communications Coprocessor embeds communications operations directly in the core of the processor architecture. A Floating Point Coprocessor architecture has been defined that utilizes Bit-Serial arithmetic units, operating at very high frequency, to perform floating point operations. These Bit-Serial devices reduce the device integration level and complexity to a level compatible with state-of-the-art GaAs device technology.

  2. Stride Search: a general algorithm for storm detection in high-resolution climate data

    NASA Astrophysics Data System (ADS)

    Bosler, Peter A.; Roesler, Erika L.; Taylor, Mark A.; Mundt, Miranda R.

    2016-04-01

    This article discusses the problem of identifying extreme climate events such as intense storms within large climate data sets. The basic storm detection algorithm is reviewed, which splits the problem into two parts: a spatial search followed by a temporal correlation problem. Two specific implementations of the spatial search algorithm are compared: the commonly used grid point search algorithm is reviewed, and a new algorithm called Stride Search is introduced. The Stride Search algorithm is defined independently of the spatial discretization associated with a particular data set. Results from the two algorithms are compared for the application of tropical cyclone detection, and shown to produce similar results for the same set of storm identification criteria. Differences between the two algorithms arise for some storms due to their different definition of search regions in physical space. The physical space associated with each Stride Search region is constant, regardless of data resolution or latitude, and Stride Search is therefore capable of searching all regions of the globe in the same manner. Stride Search's ability to search high latitudes is demonstrated for the case of polar low detection. Wall clock time required for Stride Search is shown to be smaller than a grid point search of the same data, and the relative speed up associated with Stride Search increases as resolution increases.

  3. Multipath Separation-Direction of Arrival (MS-DOA) with Genetic Search Algorithm for HF channels

    NASA Astrophysics Data System (ADS)

    Arikan, Feza; Koroglu, Ozan; Fidan, Serdar; Arikan, Orhan; Guldogan, Mehmet B.

    2009-09-01

    Direction-of-Arrival (DOA) estimation refers to estimating the arrival angles of an electromagnetic wave impinging on a set of sensors. For dispersive and time-varying HF channels, where the propagating wave also suffers from multipath phenomena, estimation of DOA is a very challenging problem. Multipath Separation-Direction of Arrival (MS-DOA), which is developed to estimate both the arrival angles in elevation and azimuth and the incoming signals at the output of the reference antenna with very high accuracy, proves itself to be a strong alternative for DOA estimation in HF channels. In MS-DOA, a linear system of equations is formed using the coefficients of the basis vector for the array output vector, the incoming signal vector and the array manifold. The angles of arrival in elevation and azimuth are obtained as the maximizers of the sum of the magnitude squares of the projection of the signal coefficients on the column space of the array manifold. In this study, alternative Genetic Search Algorithms (GA) for the maximizers of the projection sum are investigated using simulated and experimental ionospheric channel data. It is observed that GA combined with MS-DOA is a powerful alternative in online DOA estimation and can be further developed according to the channel characteristics of a specific HF link.

  4. An Improved Population Migration Algorithm Introducing the Local Search Mechanism of the Leap-Frog Algorithm and Crossover Operator

    PubMed Central

    Zhang, Yanqing; Liu, Xueying

    2013-01-01

    The population migration algorithm (PMA) is an intelligent algorithm that simulates the migration of a population. Given the premature convergence and low precision of PMA, this paper introduces the local search mechanism of the leap-frog algorithm and a crossover operator to improve the PMA search speed and global convergence properties. Typical test functions are used to verify the performance of the improved algorithm. Compared with the original PMA and other intelligent algorithms, the results show that the convergence rate of the improved PMA is very high, and its convergence is proved. PMID:23460807

  5. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert Manfred

    2013-01-01

    A new regression model search algorithm was developed in 2011 that may be used to analyze both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The new algorithm is a simplified version of a more complex search algorithm that was originally developed at the NASA Ames Balance Calibration Laboratory. The new algorithm has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression models. Therefore, the simplified search algorithm is not intended to replace the original search algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm either fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new regression model search algorithm.

  6. A multi-objective discrete cuckoo search algorithm with local search for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Liu, Yanheng; Li, Bin

    2016-03-01

    Detecting communities is a challenging task in analyzing networks. Solving the community detection problem with evolutionary algorithms has been a popular topic in recent years. In this paper, a multi-objective discrete cuckoo search algorithm with local search (MDCL) for community detection is proposed. To the best of our knowledge, this is the first time the cuckoo search algorithm has been applied to community detection. Two objective functions, termed negative ratio association and ratio cut, are to be minimized. These two functions can break through the modularity limitation. In the proposed algorithm, the nest location updating strategy and the abandon operator of the cuckoo search are redefined in discrete form. A local search strategy and a clone operator are proposed to obtain the optimal initial population. The experimental results on synthetic and real-world networks show that the proposed algorithm has better performance than other algorithms and can discover higher quality community structures without prior information.

  7. GA-fisher: A new LDA-based face recognition algorithm with selection of principal components.

    PubMed

    Zheng, Wei-Shi; Lai, Jian-Huang; Yuen, Pong C

    2005-10-01

    This paper addresses the dimension reduction problem in Fisherface for face recognition. When the number of training samples is less than the image dimension (total number of pixels), the within-class scatter matrix (Sw) in Linear Discriminant Analysis (LDA) is singular, and Principal Component Analysis (PCA) is suggested for use in Fisherface for dimension reduction of Sw so that it becomes nonsingular. The popular method is to select the largest nonzero eigenvalues and the corresponding eigenvectors for LDA. To attenuate the illumination effect, some researchers suggested removing the three eigenvectors with the largest eigenvalues, and the performance is thereby improved. However, as far as we know, there is no systematic way to determine which eigenvalues should be used. Along this line, this paper proposes a theorem to interpret why PCA can be used in LDA and an automatic and systematic method to select the eigenvectors to be used in LDA using a Genetic Algorithm (GA). A GA-PCA is then developed. It is found that some small eigenvectors should also be used as part of the basis for dimension reduction. Using the GA-PCA to reduce the dimension, a GA-Fisher method is designed and developed. Compared with the traditional Fisherface method, the proposed GA-Fisher offers two additional advantages. First, optimal bases for dimensionality reduction are derived from GA-PCA. Second, the computational efficiency of LDA is improved by adding a whitening procedure after dimension reduction. The Face Recognition Technology (FERET) and Carnegie Mellon University Pose, Illumination, and Expression (CMU PIE) databases are used for evaluation. Experimental results show that an improvement of almost 5% over Fisherface can be obtained, and the results are encouraging.

  8. Algorithm for shortest path search in Geographic Information Systems by using reduced graphs.

    PubMed

    Rodríguez-Puente, Rafael; Lazo-Cortés, Manuel S

    2013-01-01

    The use of Geographic Information Systems has increased considerably since the eighties and nineties. As one of their most demanding applications we can mention shortest path search. Several studies about shortest path search show the feasibility of using graphs for this purpose. Dijkstra's algorithm is one of the classic shortest path search algorithms. This algorithm is not well suited for shortest path search in large graphs. This is the reason why various modifications to Dijkstra's algorithm have been proposed by several authors using heuristics to reduce the run time of shortest path search. One of the most used heuristic algorithms is the A* algorithm, whose main goal is to reduce the run time by reducing the search space. This article proposes a modification of Dijkstra's shortest path search algorithm in reduced graphs. It shows that the cost of the path found in this work is equal to the cost of the path found using Dijkstra's algorithm in the original graph. The results of finding the shortest path by applying the proposed algorithm, Dijkstra's algorithm and the A* algorithm are compared. This comparison shows that, by applying the proposed approach, it is possible to obtain the optimal path in a similar or even shorter time than when using heuristic algorithms.
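
    For reference, the classic Dijkstra search that the proposed reduced-graph method builds on can be written with a binary-heap priority queue as below; the early exit once the target is settled is a common optimization. The graph structure and toy network are illustrative assumptions.

    import heapq

    def dijkstra(graph, source, target):
        """Classic Dijkstra shortest-path search with a binary-heap priority queue.
        `graph` maps each node to a list of (neighbor, edge_weight) pairs."""
        dist, prev, visited = {source: 0.0}, {}, set()
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u in visited:
                continue
            visited.add(u)
            if u == target:                       # early exit once the target is settled
                break
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path, node = [], target                   # walk predecessor links backwards
        while node == source or node in prev:
            path.append(node)
            if node == source:
                break
            node = prev[node]
        return dist.get(target, float("inf")), path[::-1]

    road_net = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)], "D": []}
    print(dijkstra(road_net, "A", "D"))           # (4.0, ['A', 'B', 'C', 'D'])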

  9. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm

    PubMed Central

    Yang, Zhang; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is a kind of optimization search algorithm currently applied in many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probabilities of their different values, which are used to drive the iteration to convergence and achieve the optimal result. Accordingly, this study proposes a modified algorithm to improve the efficiency of the HS algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than those of the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428

  10. Segmentation of MRI Brain Images with an Improved Harmony Searching Algorithm.

    PubMed

    Yang, Zhang; Shufan, Ye; Li, Guo; Weifeng, Ding

    2016-01-01

    The harmony searching (HS) algorithm is a kind of optimization search algorithm currently applied in many practical problems. The HS algorithm constantly revises the variables in the harmony database and the probabilities of their different values, which are used to drive the iteration to convergence and achieve the optimal result. Accordingly, this study proposes a modified algorithm to improve the efficiency of the HS algorithm. First, a rough set algorithm was employed to improve the convergence and accuracy of the HS algorithm. Then, the optimal value was obtained using the improved HS algorithm. The optimal value of convergence was employed as the initial value of the fuzzy clustering algorithm for segmenting magnetic resonance imaging (MRI) brain images. Experimental results showed that the improved HS algorithm attained better convergence and more accurate results than those of the original HS algorithm. In our study, the MRI image segmentation effect of the improved algorithm was superior to that of the original fuzzy clustering method. PMID:27403428
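
    The core harmony search loop referred to above (memory consideration with probability HMCR, pitch adjustment with probability PAR, random selection otherwise, and replacement of the worst harmony) can be sketched as follows. The rough-set improvement and the coupling to fuzzy clustering for MRI segmentation are not shown; the sketch simply minimizes a toy sphere function, and all parameter values are assumptions.

    import numpy as np

    def harmony_search(cost, dim, bounds=(-5.0, 5.0), hms=20, hmcr=0.9, par=0.3,
                       bandwidth=0.2, n_iter=2000, seed=0):
        """Basic harmony search: each new harmony draws every variable either from the
        harmony memory (probability HMCR, possibly pitch-adjusted with probability PAR)
        or at random, and replaces the worst memory entry if it is better."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        memory = rng.uniform(lo, hi, (hms, dim))            # harmony memory (HM)
        fitness = np.array([cost(h) for h in memory])
        for _ in range(n_iter):
            new = np.empty(dim)
            for d in range(dim):
                if rng.random() < hmcr:                     # memory consideration
                    new[d] = memory[rng.integers(hms), d]
                    if rng.random() < par:                  # pitch adjustment
                        new[d] += bandwidth * rng.uniform(-1.0, 1.0)
                else:                                       # random selection
                    new[d] = rng.uniform(lo, hi)
            new = np.clip(new, lo, hi)
            f_new = cost(new)
            worst = fitness.argmax()
            if f_new < fitness[worst]:                      # replace the worst harmony
                memory[worst], fitness[worst] = new, f_new
        best = fitness.argmin()
        return memory[best], float(fitness[best])

    # toy objective: a sphere function stands in for the image-segmentation objective
    print(harmony_search(lambda x: float(np.sum(x ** 2)), dim=4))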

  11. Error tolerance in an NMR implementation of Grover's fixed-point quantum search algorithm

    SciTech Connect

    Xiao Li; Jones, Jonathan A.

    2005-09-15

    We describe an implementation of Grover's fixed-point quantum search algorithm on a nuclear magnetic resonance quantum computer, searching for either one or two matching items in an unsorted database of four items. In this algorithm the target state (an equally weighted superposition of the matching states) is a fixed point of the recursive search operator, so that the algorithm always moves towards the desired state. The effects of systematic errors in the implementation are briefly explored.

  12. [Recognition of the potential SF-1 binding sites by SiteGA method, their experimental verification and search for new SF-1 target genes].

    PubMed

    Klimova, N V; Levitskiĭ, V G; Ignat'eva, E V; Vasil'ev, G V; Kobzev, V F; Busygina, T V; Merkulova, T I; Kolchanov, N A

    2006-01-01

    SF-1 (Steroidogenic Factor-1) is a transcription factor known as a key regulator of steroidogenic gene expression. SF-1 is required for the development and functioning of all levels of the hypothalamic-pituitary-gonadal and adrenal axes. It also plays an essential role in sex determination. SF-1 is a member of the nuclear receptor superfamily and it activates gene expression by binding to DNA in a monomeric form. Here, we report the results of identifying potential SF-1 binding sites using the SiteGA recognition method. The SiteGA method was implemented using a genetic algorithm (GA) involving iterative discriminant analyses of local dinucleotide context characteristics. These characteristics were compiled not only over the core binding site region but over its flanks as well. The developed SiteGA method is characterized by considerably better recognition accuracy compared to the weight matrix method. The experimental tests demonstrated that 83% of the sites recognized by the SiteGA method in the regulatory regions of steroidogenic genes indeed interact with the SF-1 factor. We also estimated the density of predicted sites in the regulatory regions of genes belonging to different functional groups and developed a criterion to search for new SF-1 target genes in genome sequences.

  13. The use of a genetic algorithm-based search strategy in geostatistics: application to a set of anisotropic piezometric head data

    NASA Astrophysics Data System (ADS)

    Abedini, M. J.; Nasseri, M.; Burn, D. H.

    2012-04-01

    In any geostatistical study, an important consideration is the choice of an appropriate, repeatable, and objective search strategy that controls the nearby samples to be included in the location-specific estimation procedure. Almost all geostatistical software available in the market puts the onus on the user to supply search strategy parameters in a heuristic manner. These parameters are solely controlled by geographical coordinates that are defined for the entire area under study, and the user has no guidance as to how to choose these parameters. The main thesis of the current study is that the selection of search strategy parameters has to be driven by data—both the spatial coordinates and the sample values—and cannot be chosen beforehand. For this purpose, a genetic-algorithm-based ordinary kriging with moving neighborhood technique is proposed. The search capability of a genetic algorithm is exploited to search the feature space for appropriate, either local or global, search strategy parameters. Radius of circle/sphere and/or radii of standard or rotated ellipse/ellipsoid are considered as the decision variables to be optimized by GA. The superiority of GA-based ordinary kriging is demonstrated through application to the Wolfcamp Aquifer piezometric head data. Assessment of numerical results showed that definition of search strategy parameters based on both geographical coordinates and sample values improves cross-validation statistics when compared with that based on geographical coordinates alone. In the case of a variable search neighborhood for each estimation point, optimization of local search strategy parameters for an elliptical support domain—the orientation of which is dictated by anisotropic axes—via GA was able to capture the dynamics of piezometric head in west Texas/New Mexico in an efficient way.

  14. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm that is based on the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS is presented to solve the visual tracking problem. The relationship between optimization and visual tracking is comparatively studied, and the parameters' sensitivity and adjustment of CS in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker with six state-of-the-art trackers, namely particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.

  15. SNPs Selection using Gravitational Search Algorithm and Exhaustive Search for Association Mapping

    NASA Astrophysics Data System (ADS)

    Kusuma, W. A.; Hasibuan, L. S.; Istiadi, M. A.

    2016-01-01

    Single Nucleotide Polymorphisms (SNPs) are known to be associated with phenotypic variations. The study of linking SNPs to a phenotype of interest is referred to as Association Mapping (AM), which is classified as a combinatorial problem. The Exhaustive Search (ES) approach can be used to select targeted SNPs exactly since it evaluates all possible combinations of SNPs, but it is not efficient in terms of computer resources and computation time. The Heuristic Search (HS) approach is an alternative that improves on ES in those terms, but it still suffers from a high number of false positive SNPs in each combination. The Gravitational Search Algorithm (GSA) is a new HS algorithm that yields better performance than other nature-inspired heuristic searches. This paper proposes a new method that combines GSA and ES to identify the most appropriate combination of SNPs linked to the phenotype of interest. Testing was conducted using datasets with and without epistasis. Using the dataset without epistasis and 7 targeted SNPs, the proposed method identified 7 SNPs (6 True Positive (TP) SNPs and 1 False Positive (FP) SNP) with an association value of 0.83. In addition, the proposed method identified 3 SNPs (2 TP SNPs and 1 FP SNP) with an association value of 0.87 using the dataset with epistasis and 5 targeted SNPs. The results showed that the method is robust in reducing redundant SNPs and identifying the main markers.

  16. Polynomial search and global modeling: Two algorithms for modeling chaos.

    PubMed

    Mangiarotti, S; Coudret, R; Drapeau, L; Jarlan, L

    2012-10-01

    Global modeling aims to build mathematical models of concise description. Polynomial Model Search (PoMoS) and Global Modeling (GloMo) are two complementary algorithms (freely downloadable at the following address: http://www.cesbio.ups-tlse.fr/us/pomos_et_glomo.html) designed for the modeling of observed dynamical systems based on a small set of time series. Models considered in these algorithms are based on ordinary differential equations built on a polynomial formulation. More specifically, PoMoS aims at finding polynomial formulations from a given set of 1 to N time series, whereas GloMo is designed for single time series and aims to identify the parameters for a selected structure. GloMo also provides basic features to visualize integrated trajectories and to characterize their structure when it is simple enough: One allows for drawing the first return map for a chosen Poincaré section in the reconstructed space; another one computes the Lyapunov exponent along the trajectory. In the present paper, global modeling from single time series is considered. A description of the algorithms is given and three examples are provided. The first example is based on the three variables of the Rössler attractor. The second one comes from an experimental analysis of the copper electrodissolution in phosphoric acid for which a less parsimonious global model was obtained in a previous study. The third example is an exploratory case and concerns the cycle of rainfed wheat under semiarid climatic conditions as observed through a vegetation index derived from a spatial sensor.

  17. Optimal clustering of MGs based on droop controller for improving reliability using a hybrid of harmony search and genetic algorithms.

    PubMed

    Abedini, Mohammad; Moradi, Mohammad H; Hosseinian, S M

    2016-03-01

    This paper proposes a novel method to address reliability and technical problems of microgrids (MGs) based on designing a number of self-adequate autonomous sub-MGs via adopting MGs clustering thinking. In doing so, a multi-objective optimization problem is developed where power losses reduction, voltage profile improvement and reliability enhancement are considered as the objective functions. To solve the optimization problem a hybrid algorithm, named HS-GA, is provided, based on genetic and harmony search algorithms, and a load flow method is given to model different types of DGs as droop controller. The performance of the proposed method is evaluated in two case studies. The results provide support for the performance of the proposed method. PMID:26767800

  18. Analysis of Multivariate Experimental Data Using A Simplified Regression Model Search Algorithm

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert M.

    2013-01-01

    A new regression model search algorithm was developed that may be applied to both general multivariate experimental data sets and wind tunnel strain-gage balance calibration data. The algorithm is a simplified version of a more complex algorithm that was originally developed for the NASA Ames Balance Calibration Laboratory. The new algorithm performs regression model term reduction to prevent overfitting of data. It has the advantage that it needs only about one tenth of the original algorithm's CPU time for the completion of a regression model search. In addition, extensive testing showed that the prediction accuracy of math models obtained from the simplified algorithm is similar to the prediction accuracy of math models obtained from the original algorithm. The simplified algorithm, however, cannot guarantee that search constraints related to a set of statistical quality requirements are always satisfied in the optimized regression model. Therefore, the simplified algorithm is not intended to replace the original algorithm. Instead, it may be used to generate an alternate optimized regression model of experimental data whenever the application of the original search algorithm fails or requires too much CPU time. Data from a machine calibration of NASA's MK40 force balance is used to illustrate the application of the new search algorithm.

  19. Enhanced hybrid search algorithm for protein structure prediction using the 3D-HP lattice model.

    PubMed

    Zhou, Changjun; Hou, Caixia; Zhang, Qiang; Wei, Xiaopeng

    2013-09-01

    The problem of protein structure prediction in the hydrophobic-polar (HP) lattice model is the prediction of protein tertiary structure, usually referred to as the protein folding problem. This paper presents an enhanced hybrid search algorithm for protein folding prediction using the three-dimensional (3D) HP lattice model. The enhanced hybrid search algorithm is a combination of the particle swarm optimizer (PSO) and tabu search (TS) algorithms. Since the PSO algorithm is easily trapped in local minima in later stages of the evolution, we combined PSO with the TS algorithm, which has global optimization properties. Because crossover and mutation operations are applied many times within the PSO and TS algorithms, the enhanced hybrid search algorithm is called the MCMPSO-TS (multiple crossover and mutation PSO-TS) algorithm. Experimental results show that the MCMPSO-TS algorithm finds the best solutions reported so far for the listed benchmarks, which will aid comparison with future approaches. Moreover, real protein sequences and Fibonacci sequences are verified in the 3D HP lattice model for the first time. Compared with previous evolutionary algorithms, the new hybrid search algorithm is novel and can be used effectively to predict 3D protein folding structures. With the continuing growth and change of amino acid sequence data, the new algorithm will also contribute to the study of new protein sequences. PMID:23824509

  20. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    PubMed

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz chaotic system and the Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is effective and superior. PMID:26880874

  1. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation

    PubMed Central

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz chaotic system and the Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is effective and superior. PMID:26880874

  2. An Improved Cuckoo Search Optimization Algorithm for the Problem of Chaotic Systems Parameter Estimation.

    PubMed

    Wang, Jun; Zhou, Bihua; Zhou, Shudao

    2016-01-01

    This paper proposes an improved cuckoo search (ICS) algorithm to estimate the parameters of chaotic systems. In order to improve the optimization capability of the basic cuckoo search (CS) algorithm, orthogonal design and a simulated annealing operation are incorporated into the CS algorithm to enhance its exploitation ability. The proposed algorithm is then used to estimate the parameters of the Lorenz chaotic system and the Chen chaotic system under noiseless and noisy conditions, respectively. The numerical results demonstrate that the algorithm can estimate parameters with high accuracy and reliability. Finally, the results are compared with the CS algorithm, the genetic algorithm, and the particle swarm optimization algorithm, and the comparison demonstrates that the method is effective and superior.

  3. Experimental Results in the Comparison of Search Algorithms Used with Room Temperature Detectors

    SciTech Connect

    Guss, P., Yuan, D., Cutler, M., Beller, D.

    2010-11-01

    Analysis of time sequence data was run for several higher resolution scintillation detectors using a variety of search algorithms, and results were obtained in predicting the relative performance for these detectors, which included a slightly superior performance by CeBr{sub 3}. Analysis of several search algorithms shows that inclusion of the RSPRT methodology can improve sensitivity.

  4. Teaching AI Search Algorithms in a Web-Based Educational System

    ERIC Educational Resources Information Center

    Grivokostopoulou, Foteini; Hatzilygeroudis, Ioannis

    2013-01-01

    In this paper, we present a way of teaching AI search algorithms in a web-based adaptive educational system. Teaching is based on interactive examples and exercises. Interactive examples, which use visualized animations to present AI search algorithms in a step-by-step way with explanations, are used to make learning more attractive. Practice…

  5. Search for Active-State Conformation of Drug Target GPCR Using Real-Coded Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Ishino, Yoko; Harada, Takanori; Aida, Misako

    G-Protein coupled receptors (GPCRs) comprise a large superfamily of proteins and are the target of nearly 50% of drugs in clinical use today. GPCRs have a unique structural motif, seven transmembrane helices, and it is known that agonists and antagonists dock with a GPCR in its ``active'' and ``inactive'' state, respectively. Knowing the conformations of both states is eagerly anticipated for elucidating drug action mechanisms. Since GPCRs are difficult to crystallize, the 3D structures of these receptors have not yet been determined by X-ray crystallography, except for the inactive-state conformations of two proteins. Their conformations enabled the inactive form of other GPCRs to be modeled by computer-aided homology modeling. However, to date, the active form of GPCRs has not been solved. This paper describes a novel method to predict the 3D structure of an active-state GPCR, aimed at molecular docking-based virtual screening, using a real-coded genetic algorithm (real-coded GA), receptor-ligand docking simulations, and molecular dynamics (MD) simulations. The basic idea of the method is that MD is first used to calculate the average 3D coordinates of all atoms of a GPCR protein against heat fluctuation on the pico- or nanosecond time scale, and then the real-coded GA, involving receptor-ligand docking simulations, determines the rotation angle of each helix as a movement on a wider time scale. The method was validated using the human leukotriene B4 receptor BLT1 as a sample GPCR. Our study demonstrated that the established evolutionary search for the active state of the leukotriene receptor provided an appropriate 3D structure of the receptor to dock with its agonists.
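
    A generic real-coded GA of the kind mentioned above might be sketched as follows; blend (BLX-alpha) crossover, Gaussian mutation and truncation selection are common textbook operators, and docking_score in the usage comment is a hypothetical stand-in for the receptor-ligand docking evaluation, so all settings here are assumptions rather than the authors' configuration:

      import numpy as np

      def real_coded_ga(fitness, dim, lower, upper, pop_size=40, gens=100,
                        alpha=0.5, sigma=0.05, seed=0):
          rng = np.random.default_rng(seed)
          pop = rng.uniform(lower, upper, (pop_size, dim))
          for _ in range(gens):
              fit = np.array([fitness(x) for x in pop])
              parents = pop[np.argsort(-fit)[:pop_size // 2]]   # truncation selection (maximise)
              children = []
              while len(parents) + len(children) < pop_size:
                  a, b = parents[rng.integers(len(parents), size=2)]
                  low, high = np.minimum(a, b), np.maximum(a, b)
                  span = high - low
                  child = rng.uniform(low - alpha * span, high + alpha * span)  # BLX-alpha crossover
                  child += rng.normal(0.0, sigma * (upper - lower), dim)        # Gaussian mutation
                  children.append(np.clip(child, lower, upper))
              pop = np.vstack([parents, np.array(children)])
          fit = np.array([fitness(x) for x in pop])
          return pop[fit.argmax()], fit.max()

      # Example use (hypothetical): seven helix rotation angles in [-pi, pi]
      # best_angles, best_score = real_coded_ga(docking_score, dim=7, lower=-np.pi, upper=np.pi)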

  6. A matter of timing: identifying significant multi-dose radiotherapy improvements by numerical simulation and genetic algorithm search.

    PubMed

    Angus, Simon D; Piotrowska, Monika Joanna

    2014-01-01

    Multi-dose radiotherapy protocols (fraction dose and timing) currently used in the clinic are the product of human selection based on habit, received wisdom, physician experience and intra-day patient timetabling. However, due to combinatorial considerations, the potential treatment protocol space for a given total dose or treatment length is enormous, even for a relatively coarse search; well beyond the capacity of traditional in-vitro methods. In contrast, high fidelity numerical simulation of tumor development is well suited to the challenge. Building on our previous single-dose numerical simulation model of EMT6/Ro spheroids, a multi-dose irradiation response module is added and calibrated to the effective dose arising from 18 independent multi-dose treatment programs available in the experimental literature. With the developed model, a constrained, non-linear search for better performing candidate protocols is conducted within the vicinity of two benchmarks by genetic algorithm (GA) techniques. After evaluating less than 0.01% of the potential benchmark protocol space, candidate protocols were identified by the GA which conferred an average of 9.4% (max benefit 16.5%) and 7.1% (13.3%) improvement (reduction) in tumour cell count compared to the two benchmarks, respectively. Noticing that a convergent feature of the top performing protocols was their temporal synchronicity, a further series of numerical experiments was conducted with periodic time-gap protocols (10 h to 23 h), leading to the discovery that the performance of the GA search candidates could be replicated by 17-18 h periodic candidates. Further dynamic irradiation-response cell-phase analysis revealed that such periodicity cohered with latent EMT6/Ro cell-phase temporal patterning. Taken together, this study provides powerful evidence towards the hypothesis that even simple inter-fraction timing variations for a given fractional dose program may present a facile, and highly cost-effective means

  7. WS-BP: An efficient wolf search based back-propagation algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd; Rehman, M. Z.; Khan, Abdullah

    2015-05-01

    Wolf Search (WS) is a heuristic-based optimization algorithm. Inspired by the preying and survival capabilities of wolves, this algorithm is highly capable of searching large candidate-solution spaces. This paper investigates the use of the WS algorithm in combination with the back-propagation neural network (BPNN) algorithm to overcome the local minima problem and to improve convergence in gradient descent. The performance of the proposed Wolf Search based Back-Propagation (WS-BP) algorithm is compared with the Artificial Bee Colony Back-Propagation (ABC-BP), Bat Based Back-Propagation (Bat-BP), and conventional BPNN algorithms. Specifically, OR and XOR datasets are used for training the network. The simulation results show that the WS-BP algorithm effectively avoids local minima and converges to the global minimum.

  8. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, the harmony memory considering rate (HMCR) and the pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted in the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710
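
    The core update of a harmony search with dynamically varied HMCR/PAR and a polynomial-mutation pitch adjustment can be sketched roughly as below; the linear parameter schedules and the mutation distribution index are illustrative assumptions, not the exact DHSPM settings:

      import numpy as np

      def poly_mutation(x, lower, upper, eta=20.0, rng=None):
          # Polynomial mutation (simplified form) of a single scalar value.
          rng = np.random.default_rng() if rng is None else rng
          u = rng.random()
          if u < 0.5:
              delta = (2 * u) ** (1 / (eta + 1)) - 1
          else:
              delta = 1 - (2 * (1 - u)) ** (1 / (eta + 1))
          return float(np.clip(x + delta * (upper - lower), lower, upper))

      def dynamic_hs(f, dim, lower, upper, hms=30, iters=2000, seed=0):
          rng = np.random.default_rng(seed)
          memory = rng.uniform(lower, upper, (hms, dim))
          cost = np.array([f(x) for x in memory])
          for t in range(iters):
              hmcr = 0.70 + 0.25 * t / iters    # harmony-memory considering rate, raised over time
              par = 0.10 + 0.40 * t / iters     # pitch-adjusting rate, raised over time
              new = np.empty(dim)
              for j in range(dim):
                  if rng.random() < hmcr:
                      new[j] = memory[rng.integers(hms), j]
                      if rng.random() < par:    # pitch adjustment via polynomial mutation
                          new[j] = poly_mutation(new[j], lower, upper, rng=rng)
                  else:
                      new[j] = rng.uniform(lower, upper)
              fc = f(new)
              worst = cost.argmax()
              if fc < cost[worst]:              # replace the worst stored harmony
                  memory[worst], cost[worst] = new, fc
          return memory[cost.argmin()], cost.min()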

  9. Hybrid feature selection algorithm using symmetrical uncertainty and a harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Salameh Shreem, Salam; Abdullah, Salwani; Nazri, Mohd Zakree Ahmad

    2016-04-01

    Microarray technology can be used as an efficient diagnostic system to recognise diseases such as tumours or to discriminate between different types of cancers in normal tissues. This technology has received increasing attention from the bioinformatics community because of its potential in designing powerful decision-making tools for cancer diagnosis. However, the presence of thousands or tens of thousands of genes affects the predictive accuracy of this technology from the perspective of classification. Thus, a key issue in microarray data is identifying or selecting the smallest possible set of genes from the input data that can achieve good predictive accuracy for classification. In this work, we propose a two-stage selection algorithm for gene selection problems in microarray data-sets called the symmetrical uncertainty filter and harmony search algorithm wrapper (SU-HSA). Experimental results show that the SU-HSA is better than HSA in isolation for all data-sets in terms of the accuracy and achieves a lower number of genes on 6 out of 10 instances. Furthermore, the comparison with state-of-the-art methods shows that our proposed approach is able to obtain 5 (out of 10) new best results in terms of the number of selected genes and competitive results in terms of the classification accuracy.

  10. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm obtained by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point to adaptively determine the power parameter; the desired prediction value at the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on the space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of a kNN search stage and a weighted interpolation stage. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the use of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm. PMID:27610308
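
    Stripped of the GPU specifics, the two-stage structure - a kNN search followed by adaptive distance weighting - can be sketched as below; the rule mapping local point density to the power parameter is an illustrative assumption, not the paper's exact formula:

      import numpy as np

      def aidw_interpolate(points, values, query, k=8, p_min=1.0, p_max=3.5):
          # points: (n, 2) known sample locations; values: (n,) data; query: (2,) location.
          d = np.linalg.norm(points - query, axis=1)
          idx = np.argsort(d)[:k]          # kNN stage (a uniform grid accelerates this on GPU)
          dk, vk = d[idx], values[idx]
          if dk[0] == 0.0:                 # query coincides with a known sample
              return float(vk[0])
          # Adapt the power parameter from the local point density: sparse neighbourhoods
          # get a larger power, dense neighbourhoods a smaller one (illustrative rule).
          density = 1.0 / dk.mean()
          t = np.clip(density / (density + 1.0), 0.0, 1.0)
          p = p_max - t * (p_max - p_min)
          w = 1.0 / dk ** p                # inverse-distance weights with adaptive power
          return float(np.sum(w * vk) / np.sum(w))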

  11. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    PubMed

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm obtained by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point to adaptively determine the power parameter; the desired prediction value at the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on the space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of a kNN search stage and a weighted interpolation stage. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the use of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.

  12. Adaptively resizing populations: Algorithm, analysis, and first results

    NASA Technical Reports Server (NTRS)

    Smith, Robert E.; Smuda, Ellen

    1993-01-01

    Deciding on an appropriate population size for a given Genetic Algorithm (GA) application can often be critical to the algorithm's success. Too small, and the GA can fall victim to sampling error, affecting the efficacy of its search. Too large, and the GA wastes computational resources. Although advice exists for sizing GA populations, much of this advice involves theoretical aspects that are not accessible to the novice user. An algorithm for adaptively resizing GA populations is suggested. This algorithm is based on recent theoretical developments that relate population size to schema fitness variance. The suggested algorithm is developed theoretically, and simulated with expected value equations. The algorithm is then tested on a problem where population sizing can mislead the GA. The work presented suggests that the population sizing algorithm may be a viable way to eliminate the population sizing decision from the application of GAs.
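
    One crude way to let measured fitness statistics drive the population size is sketched below; the resizing rule and its constants are assumptions for illustration only and are not the schema-variance formula the abstract refers to:

      import copy
      import numpy as np

      def resize_population(pop, fitnesses, gain=10.0, min_size=20, max_size=500, rng=None):
          # Grow the population when fitness variance (sampling noise) is large relative
          # to the selection signal, and shrink it when the signal dominates the noise.
          rng = np.random.default_rng() if rng is None else rng
          fit = np.asarray(fitnesses, float)
          noise = fit.var()
          signal = abs(fit.max() - fit.mean()) + 1e-12
          new_size = int(np.clip(gain * noise / signal ** 2 * len(pop), min_size, max_size))
          if new_size <= len(pop):                      # shrink: keep the fittest individuals
              keep = np.argsort(-fit)[:new_size]
              return [pop[i] for i in keep]
          clones = [copy.deepcopy(pop[rng.integers(len(pop))])   # grow: clone random members
                    for _ in range(new_size - len(pop))]
          return list(pop) + clones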

  13. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Chao; Bao, Wan-Su; Wang, Xiang; Fu, Xiang-Qun

    2015-06-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002).

  14. Parallel computation of GA search for the artery shape determinants with CFD

    NASA Astrophysics Data System (ADS)

    Himeno, M.; Noda, S.; Fukasaku, K.; Himeno, R.

    2010-06-01

    We studied which factors play an important role in determining the shape of arteries at the carotid artery bifurcation by performing multi-objective optimization with computational fluid dynamics (CFD) and the genetic algorithm (GA). The most difficult problem here is how to reduce the turn-around time of the GA optimization with 3D unsteady computation of blood flow. We devised a two-level parallel computation method with the following features: level 1: parallel CFD computation with an appropriate number of cores; level 2: parallel jobs generated by a "master", which quickly finds an available job queue and dispatches jobs, to reduce turn-around time. As a result, the turn-around time of one GA trial, which would have taken 462 days with one core, was reduced to less than two days on the RIKEN supercomputer system, RICC, with 8192 cores. We performed a multi-objective optimization to minimize the maximum mean WSS and to minimize the sum of circumferences for four different shapes and obtained a set of trade-off solutions for each shape. In addition, we found that the carotid bulb has the feature of minimum local mean WSS and minimum local radius. We confirmed that our method is effective for examining the determinants of artery shapes.

  15. An almost-parameter-free harmony search algorithm for groundwater pollution source identification.

    PubMed

    Jiang, Simin; Zhang, Yali; Wang, Pei; Zheng, Maohui

    2013-01-01

    The spatiotemporal characterization of unknown sources of groundwater pollution is frequently encountered in environmental problems. This study adopts a simulation-optimization approach that combines a contaminant transport simulation model with a heuristic harmony search algorithm to identify unknown pollution sources. In the proposed methodology, an almost-parameter-free harmony search algorithm is developed. The performance of this methodology is evaluated on an illustrative groundwater pollution source identification problem, and the identified results indicate that the proposed almost-parameter-free harmony search algorithm-based optimization model can give satisfactory estimations, even when the irregular geometry, erroneous monitoring data, and prior information shortage of potential locations are considered.

  16. Review of tandem repeat search tools: a systematic approach to evaluating algorithmic performance.

    PubMed

    Lim, Kian Guan; Kwoh, Chee Keong; Hsu, Li Yang; Wirawan, Adrianto

    2013-01-01

    The prevalence of tandem repeats in eukaryotic genomes and their association with a number of genetic diseases has raised considerable interest in locating these repeats. Over the last 10-15 years, numerous tools have been developed for searching tandem repeats, but differences in the search algorithms adopted and difficulties with parameter settings have confounded many users, resulting in widely varying results. In this review, we have systematically separated the algorithmic aspect of the search tools from the influence of the parameter settings. We hope that this will give a better understanding of how the tools differ in algorithmic performance, their inherent constraints, and how one should approach evaluating and selecting them.

  17. Experimental implementation of a quantum random-walk search algorithm using strongly dipolar coupled spins

    SciTech Connect

    Lu Dawei; Peng Xinhua; Du Jiangfeng; Zhu Jing; Zou Ping; Yu Yihua; Zhang Shanmin; Chen Qun

    2010-02-15

    An important quantum search algorithm based on the quantum random walk performs an oracle search on a database of N items with O(√N) calls, yielding a speedup similar to the Grover quantum search algorithm. The algorithm was implemented on a quantum information processor of three-qubit liquid-crystal nuclear magnetic resonance (NMR) for the case of finding 1 out of 4, and the tomography of the diagonal elements of all the final density matrices was completed with comprehensible one-dimensional NMR spectra. The experimental results agree well with the theoretical predictions.

  18. A Hashing-Based Search Algorithm for Coding Digital Images by Vector Quantization

    NASA Astrophysics Data System (ADS)

    Chu, Chen-Chau

    1989-11-01

    This paper describes a fast algorithm to compress digital images by vector quantization. Vector quantization relies heavily on searching to build codebooks and to classify blocks of pixels into code indices. The proposed algorithm uses hashing, localized search, and multi-stage search to accelerate the searching process. The average of pixel values in a block is used as the feature for hashing and intermediate screening. Experimental results using monochrome images are presented. This algorithm compares favorably with other methods with regard to processing time, and has comparable or better mean square error measurements than some of them. The major advantages of the proposed algorithm are its speed, good quality of the reconstructed images, and flexibility.
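
    The hashing idea - bucket codewords by their mean intensity and run the full distance computation only in buckets near the input block's mean - can be sketched as follows; the bucket width and search radius are illustrative assumptions:

      from collections import defaultdict
      import numpy as np

      def build_hash(codebook, bucket_width=4.0):
          # Index each codeword by the quantized value of its mean pixel intensity.
          buckets = defaultdict(list)
          for i, cw in enumerate(codebook):
              buckets[int(cw.mean() // bucket_width)].append(i)
          return buckets

      def encode_block(block, codebook, buckets, bucket_width=4.0, radius=2):
          # Localized search: only codewords whose mean lies near the block's mean
          # are compared with the full squared-error distance.
          key = int(block.mean() // bucket_width)
          candidates = [i for k in range(key - radius, key + radius + 1)
                        for i in buckets.get(k, [])]
          if not candidates:                  # fall back to a full search if the buckets are empty
              candidates = list(range(len(codebook)))
          dists = [np.sum((codebook[i] - block) ** 2) for i in candidates]
          return candidates[int(np.argmin(dists))]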

  19. Parallel algorithms for unconstrained optimization by multisplitting with inexact subspace search - the abstract

    SciTech Connect

    Renaut, R.; He, Q.

    1994-12-31

    A new parallel iterative algorithm for unconstrained optimization by multisplitting is proposed. In this algorithm the original problem is split into a set of small optimization subproblems which are solved using well known sequential algorithms. These algorithms are iterative in nature, e.g. the DFP variable metric method. Here the authors use sequential algorithms based on an inexact subspace search, which is an extension of the usual idea of an inexact line search. Essentially, the idea of the inexact line search for nonlinear minimization is that at each iteration one only finds an approximate minimum in the line search direction. Hence by inexact subspace search, they mean that, instead of finding the minimum of the subproblem at each iteration, they do an incomplete downhill search to give an approximate minimum. Some convergence and numerical results for this algorithm will be presented. Further, the original theory will be generalized to the situation with a singular Hessian. Applications to nonlinear least squares problems will be presented. Experimental results will be presented for implementations on an Intel iPSC/860 Hypercube with 64 nodes as well as on the Intel Paragon.

  20. Algorithms for database-dependent search of MS/MS data.

    PubMed

    Matthiesen, Rune

    2013-01-01

    The frequently used bottom-up strategy for the identification of proteins and their associated modifications nowadays typically generates thousands of MS/MS spectra that are normally matched automatically against a protein sequence database. Search engines that take MS/MS spectra and a protein sequence database as input are referred to as database-dependent search engines. Many programs, both commercial and freely available, exist for database-dependent searching of MS/MS spectra, and most of them have excellent user documentation. The aim here is therefore to outline the algorithmic strategy behind different search engines rather than to provide software user manuals. The process of database-dependent searching can be divided into search strategy, peptide scoring, protein scoring, and finally protein inference. Most efforts in the literature have gone into comparing results from different software rather than discussing the underlying algorithms. Such practical comparisons can be cluttered by suboptimal implementations, and the observed differences are frequently caused by software parameter settings that have not been set properly to allow an even comparison. In other words, an algorithmic idea can still be worth considering even if its software implementation has been shown to be suboptimal. The aim in this chapter is therefore to split the algorithms for database-dependent searching of MS/MS data into the above steps so that the different algorithmic ideas become more transparent and comparable. Most search engines provide good implementations of the first three data analysis steps mentioned above, whereas the final step of protein inference is much less developed in most search engines and is in many cases performed by external software. The final part of this chapter illustrates how protein inference is built into the VEMS search engine and discusses a stand-alone program, SIR, for protein inference that can import a Mascot search result.

  1. Quantifying the Influence of Search Algorithm Uncertainty on the Estimated Parameters of Environmental Models

    NASA Astrophysics Data System (ADS)

    Aziz, S.; Matott, L.

    2012-12-01

    The uncertain parameters of a given environmental model are often inferred from an optimization procedure that seeks to minimize discrepancies between simulated output and observed data. However, optimization search procedures can potentially yield different results across multiple calibration trials. For example, global search procedures like the genetic algorithm and simulated annealing are driven by inherent randomness that can result in variable inter-trial behavior. Despite this potential for variability in search algorithm performance, practitioners are reluctant to run multiple trials of an algorithm because of the added computational burden. As a result, estimated parameters are subject to an unrecognized source of uncertainty that could potentially bias or contaminate subsequent predictive analyses. In this study, a series of numerical experiments were performed to explore the influence of search algorithm uncertainty on parameter estimates. The experiments applied multiple trials of the simulated annealing algorithm to a suite of calibration problems involving watershed rainfall-runoff, groundwater flow, and subsurface contaminant transport. Results suggest that linking the simulated annealing algorithm with an adaptive range-reduction technique can significantly improve algorithm effectiveness while simultaneously reducing inter-trial variability. Therefore these range-reduction procedures appear to be a suitable mechanism for minimizing algorithm variance and improving the consistency of parameter estimates.
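
    The pairing of simulated annealing with an adaptive range-reduction step can be sketched roughly as below; the cooling schedule, reduction period and shrink factor are assumptions, not the settings used in the study:

      import numpy as np

      def sa_with_range_reduction(f, lower, upper, iters=5000, t0=1.0, shrink=0.7,
                                  reduce_every=1000, seed=0):
          rng = np.random.default_rng(seed)
          lower, upper = np.array(lower, float), np.array(upper, float)
          x = rng.uniform(lower, upper)
          fx = f(x)
          best, fbest = x.copy(), fx
          for k in range(1, iters + 1):
              temp = t0 * (1 - k / iters)                 # linear cooling schedule
              step = 0.1 * (upper - lower) * rng.normal(size=x.size)
              cand = np.clip(x + step, lower, upper)
              fc = f(cand)
              if fc < fx or rng.random() < np.exp(-(fc - fx) / max(temp, 1e-12)):
                  x, fx = cand, fc
                  if fx < fbest:
                      best, fbest = x.copy(), fx
              if k % reduce_every == 0:                   # range reduction around the incumbent best
                  half = shrink * (upper - lower) / 2
                  lower = np.maximum(lower, best - half)
                  upper = np.minimum(upper, best + half)
          return best, fbest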

  2. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm.

    PubMed

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic parameter adaptation operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the dynamic parameter adaptation process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938

  3. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm.

    PubMed

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic parameter adaptation operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion combined with a learning factor was used to control the dynamic parameter adaptation process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter.

  4. Data classification using metaheuristic Cuckoo Search technique for Levenberg Marquardt back propagation (CSLM) algorithm

    NASA Astrophysics Data System (ADS)

    Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.

    2015-05-01

    Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since the goal is to find an optimal weight set for the neural network during training. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence. This study proposes a new technique, CSLM, combining the best features of two known algorithms, back-propagation (BP) and the Levenberg Marquardt (LM) algorithm, to improve the convergence speed of ANN training and to avoid the local minima problem. Some selected benchmark classification datasets are used for simulation. The experimental results show that the proposed cuckoo search with Levenberg Marquardt algorithm performs better than the other algorithms used in this study.

  5. Improved Exact Enumerative Algorithms for the Planted (l, d)-Motif Search Problem.

    PubMed

    Tanaka, Shunji

    2014-01-01

    In this paper efficient exact algorithms are proposed for the planted (l, d)-motif search problem. This problem is to find all motifs of length l that are planted in each input string with at most d mismatches. The "quorum" version of this problem is also treated in this paper, to find motifs planted not in all input strings but in at least q input strings. The proposed algorithms are based on the previous algorithms called qPMSPruneI and qPMS7 that traverse a search tree starting from an l-length substring of an input string. To improve these previous algorithms, several techniques are introduced, which contribute to reducing the computation time for the traversal. In computational experiments, it will be shown that the proposed algorithms outperform the previous algorithms.
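
    A deliberately simplified brute-force version of the quorum (l, d)-motif check is sketched below; it only scores l-mers that occur verbatim in the input, whereas the algorithms above traverse the full d-mismatch neighbourhood of each substring, so this is a baseline illustration rather than the proposed method:

      def hamming(a, b):
          # Number of mismatching positions between two equal-length strings.
          return sum(c1 != c2 for c1, c2 in zip(a, b))

      def occurs_with_mismatches(motif, s, d):
          # True if the motif appears somewhere in s with at most d mismatches.
          l = len(motif)
          return any(hamming(motif, s[i:i + l]) <= d for i in range(len(s) - l + 1))

      def quorum_motifs(strings, l, d, q):
          # Candidate motifs are restricted to l-mers present verbatim in the input.
          candidates = {s[i:i + l] for s in strings for i in range(len(s) - l + 1)}
          return sorted(m for m in candidates
                        if sum(occurs_with_mismatches(m, s, d) for s in strings) >= q)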

  6. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm

    PubMed Central

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a parameter dynamic adaptive operation and crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criteria combined with learning factor were used to control the parameter dynamic adaptive adjusting process. The crossover operation of genetic algorithm was utilized to guarantee the population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, the simulation data and real radar clutter data are both implemented. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938

  7. An effective hybrid cuckoo search and genetic algorithm for constrained engineering design optimization

    NASA Astrophysics Data System (ADS)

    Kanagaraj, G.; Ponnambalam, S. G.; Jawahar, N.; Mukund Nilakantan, J.

    2014-10-01

    This article presents an effective hybrid cuckoo search and genetic algorithm (HCSGA) for solving engineering design optimization problems involving problem-specific constraints and mixed variables such as integer, discrete and continuous variables. The proposed algorithm, HCSGA, is first applied to 13 standard benchmark constrained optimization functions and subsequently used to solve three well-known design problems reported in the literature. The numerical results obtained by HCSGA show competitive performance with respect to recent algorithms for constrained design optimization problems.

  8. Genetic algorithms in conceptual design of a light-weight, low-noise, tilt-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Wells, Valana L.

    1996-01-01

    This report outlines research accomplishments in the area of using genetic algorithms (GA) for the design and optimization of rotorcraft. It discusses the genetic algorithm as a search and optimization tool, outlines a procedure for using the GA in the conceptual design of helicopters, and applies the GA method to the acoustic design of rotors.

  9. An Upperbound to the Performance of Ranked-Output Searching: Optimal Weighting of Query Terms Using A Genetic Algorithm.

    ERIC Educational Resources Information Center

    Robertson, Alexander M.; Willett, Peter

    1996-01-01

    Describes a genetic algorithm (GA) that assigns weights to query terms in a ranked-output document retrieval system. Experiments showed the GA often found weights slightly superior to those produced by deterministic weighting (F4). Many times, however, the two methods gave the same results and sometimes the F4 results were superior, indicating…

  10. Evaluation of mass spectral library search algorithms implemented in commercial software.

    PubMed

    Samokhin, Andrey; Sotnezova, Ksenia; Lashin, Vitaly; Revelsky, Igor

    2015-06-01

    The performance of several library search algorithms (against EI mass spectral databases) implemented in commercial software products (ACD/SpecDB, ChemStation, GC/MS Solution and MS Search) was estimated. The test set contained 1000 mass spectra randomly selected from the NIST'08 (RepLib) mass spectral database. It was shown that the composite (also known as identity) algorithm implemented in the MS Search (NIST) software gives statistically the best results: the correct compound occupied the first position in the list of possible candidates in 81% of cases, and the correct compound was within the list of top ten candidates in 98% of cases. It was found that use of the presearch option can lead to rejection of the correct answer from the list of possible candidates (therefore the presearch option should not be used, if possible). The overall performance of the library search algorithms was estimated using receiver operating characteristic curves.

  11. GA-optimization for rapid prototype system demonstration

    NASA Technical Reports Server (NTRS)

    Kim, Jinwoo; Zeigler, Bernard P.

    1994-01-01

    An application of the Genetic Algorithm (GA) is discussed. A novel Hierarchical GA scheme was developed to solve complicated engineering problems which require optimization of a large number of parameters with high precision. High-level GAs search for the few parameters which are much more sensitive to system performance. Low-level GAs search in more detail and employ a greater number of parameters for further optimization. Therefore, the complexity of the search is decreased and the computing resources are used more efficiently.

  12. A Globally Convergent Augmented Lagrangian Pattern Search Algorithm for Optimization with General Constraints and Simple Bounds

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Torczon, Virginia

    1998-01-01

    We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
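
    A minimal coordinate pattern-search loop illustrating the pattern-size-based stopping idea is sketched below; the simple plus/minus-delta coordinate pattern and the contraction factor are generic choices, not the augmented-Lagrangian machinery of the paper:

      import numpy as np

      def pattern_search(f, x0, delta=1.0, tol=1e-6, shrink=0.5, max_iter=10000):
          x = np.array(x0, float)
          fx = f(x)
          it = 0
          while delta > tol and it < max_iter:
              improved = False
              # Poll +/- delta along each coordinate direction.
              for i in range(x.size):
                  for sign in (+1.0, -1.0):
                      cand = x.copy()
                      cand[i] += sign * delta
                      fc = f(cand)
                      if fc < fx:
                          x, fx, improved = cand, fc, True
                          break
                  if improved:
                      break
              if not improved:
                  delta *= shrink    # unsuccessful poll: contract the pattern
              it += 1
          # The pattern size delta acts as the derivative-free stopping measure.
          return x, fx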

  13. Parameter estimation for chaotic systems using the cuckoo search algorithm with an orthogonal learning method

    NASA Astrophysics Data System (ADS)

    Li, Xiang-Tao; Yin, Ming-Hao

    2012-05-01

    We study the parameter estimation of a nonlinear chaotic system, which can be essentially formulated as a multidimensional optimization problem. In this paper, an orthogonal learning cuckoo search algorithm is used to estimate the parameters of chaotic systems. This algorithm can combine the stochastic exploration of the cuckoo search and the exploitation capability of the orthogonal learning strategy. Experiments are conducted on the Lorenz system and the Chen system. The proposed algorithm is used to estimate the parameters for these two systems. Simulation results and comparisons demonstrate that the proposed algorithm is better or at least comparable to the particle swarm optimization and the genetic algorithm when considering the quality of the solutions obtained.
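
    The estimation task can be phrased as minimising the mismatch between an observed trajectory and one simulated from candidate parameters; the sketch below uses plain Euler integration of the Lorenz system with an assumed step size and horizon, and any of the optimizers discussed in these records could then be applied to estimation_objective:

      import numpy as np

      def lorenz_trajectory(params, x0, n_steps=500, dt=0.01):
          # Forward-Euler integration of the Lorenz equations with candidate parameters.
          sigma, rho, beta = params
          x = np.array(x0, float)
          traj = np.empty((n_steps, 3))
          for k in range(n_steps):
              dx = np.array([sigma * (x[1] - x[0]),
                             x[0] * (rho - x[2]) - x[1],
                             x[0] * x[1] - beta * x[2]])
              x = x + dt * dx
              traj[k] = x
          return traj

      def estimation_objective(params, observed, x0):
          # Mean squared error between the simulated and observed trajectories.
          sim = lorenz_trajectory(params, x0, n_steps=len(observed))
          return float(np.mean((sim - observed) ** 2))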

  14. The Successive Projection Algorithm (SPA), an Algorithm with a Spatial Constraint for the Automatic Search of Endmembers in Hyperspectral Data

    PubMed Central

    Zhang, Jinkai; Rivard, Benoit; Rogge, D.M.

    2008-01-01

    Spectral mixing is a problem inherent to remote sensing data and results in few image pixel spectra representing "pure" targets. Linear spectral mixture analysis is designed to address this problem and assumes that the pixel-to-pixel variability in a scene results from varying proportions of spectral endmembers. In this paper we present a different endmember-search algorithm called the Successive Projection Algorithm (SPA). SPA builds on the convex geometry and orthogonal projection common to other endmember search algorithms by including a constraint on the spatial adjacency of endmember candidate pixels. Consequently it can reduce the susceptibility to outlier pixels and generates realistic endmembers. This is demonstrated using two case studies (the AVIRIS Cuprite cube and Probe-1 imagery for Baffin Island) where image endmembers can be validated with ground truth data. The SPA algorithm extracts endmembers from hyperspectral data without having to reduce the data dimensionality. It uses the spectral angle (as in IEA) and the spatial adjacency of pixels in the image to constrain the selection of candidate pixels representing an endmember. We designed SPA based on the observation that many targets have spatial continuity in imagery (e.g. bedrock lithologies) and thus a spatial constraint would be beneficial in the endmember search. An additional product of SPA is data describing the change of the simplex volume ratio between successive iterations during endmember extraction. It illustrates the influence of a new endmember on the data structure, and provides information on the convergence of the algorithm. It can provide a general guideline for constraining the total number of endmembers in a search.

  15. Local Search Methods for Tree Chromosome Structure in a GA to Identify Functions

    NASA Astrophysics Data System (ADS)

    Matayoshi, Mitsukuni; Nakamura, Morikazu; Miyagi, Hayao

    In this paper, local search methods are proposed for “Tree Chromosome Structure in a Genetic Algorithm to Identify Functions”, which has succeeded in function identification. The proposed methods aim at improving the identification success rate and shortening the identification time. The target functions for identification are composed of algebraic functions, primary transcendental functions, time series functions including a chaos function, and user-defined one-variable functions. In testing, Kepler's third law is added to Matayoshi's test functions(7)-(9). For some identified functions, improvements in the identification rate and reductions in the identification time are shown. However, we also report some ineffectual results and give considerations.

  16. Computational identification of human long intergenic non-coding RNAs using a GA-SVM algorithm.

    PubMed

    Wang, Yanqiu; Li, Yang; Wang, Qi; Lv, Yingli; Wang, Shiyuan; Chen, Xi; Yu, Xuexin; Jiang, Wei; Li, Xia

    2014-01-01

    Long intergenic non-coding RNAs (lincRNAs) are a new type of non-coding RNA and are closely related to the occurrence and development of diseases. In previous studies, most lincRNAs have been identified through next-generation sequencing. Because lincRNAs exhibit tissue-specific expression, the reproducibility of lincRNA discovery across different studies is very poor. In this study, excluding lincRNA expression, we used sequence, structural and protein-coding potential features to construct a classifier that can distinguish lincRNAs from non-lincRNAs. The GA-SVM algorithm was used to extract the optimized feature subset. Compared with several other feature subsets, the five-fold cross validation results showed that this optimized feature subset exhibited the best performance for the identification of human lincRNAs. Moreover, the LincRNA Classifier based on Selected Features (linc-SF) was constructed with a support vector machine (SVM) using the optimized feature subset. The performance of this classifier was further evaluated by predicting lincRNAs from two independent lincRNA sets. Because the recognition rates for the two lincRNA sets were 100% and 99.8%, linc-SF was found to be effective for the prediction of human lincRNAs.

  17. Heuristic optimization methods for run-time intensive models (Dynamically Dimensioned Search, Particle Swarm Optimization, GA) - a comparison of performance and parallel implementation using R

    NASA Astrophysics Data System (ADS)

    Francke, Till; Bronster, Axel; Shoemaker, Christine A.

    2010-05-01

    Calibrating complex hydrological models faces two major challenges: firstly, extended models, especially when spatially distributed, encompass a large number of parameters with different (and possibly a-priori unknown) sensitivity. Due to the usually rough surface of the objective function, this aggravates the risk of an algorithm converging in a local optimum. Thus, gradient-based optimization methods are often bound to fail without a very good prior estimate. Secondly, despite growing computational power, it is not uncommon that models of large extent in space or time take several minutes to run, which severely restricts the total number of model evaluations under given computational and time resources. While various heuristic methods successfully address the first challenge, they tend to conflict with the second due to the increased number of evaluations necessary. In that context we analyzed three methods (Dynamically Dimensioned Search / DDS, Particle Swarm Optimization / PSO, Genetic Algorithms / GA). We performed tests with common "synthetic" objective functions and a calibration of the hydrological model WASA-SED with different numbers of parameters. When looking at the reduction of the objective function within few (i.e. < 1000) evaluations, the methods generally perform in the order (best to worst) DDS-PSO-GA. Only with a larger number of evaluations can GA excel. To speed up optimization, we executed DDS and PSO as parallel applications, i.e. using multiple CPUs and/or computers. The parallelisation has been implemented in the ppso package for the free computation environment R. Special focus has been placed on the options to resume interrupted optimization runs and to visualize progress.
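
    The characteristic DDS step - perturbing a progressively smaller random subset of dimensions and accepting candidates greedily - can be sketched as follows; r = 0.2 is the commonly cited neighbourhood parameter and reflection at the bounds is one standard choice, so treat this as an outline rather than the exact implementation used in the study:

      import numpy as np

      def dds(f, lower, upper, max_evals=1000, r=0.2, seed=0):
          rng = np.random.default_rng(seed)
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          best = rng.uniform(lower, upper)
          fbest = f(best)
          for i in range(1, max_evals):
              # Probability of perturbing each dimension decreases as the budget is consumed.
              p = 1.0 - np.log(i) / np.log(max_evals)
              mask = rng.random(best.size) < p
              if not mask.any():
                  mask[rng.integers(best.size)] = True   # always perturb at least one dimension
              cand = best.copy()
              cand[mask] += r * (upper[mask] - lower[mask]) * rng.normal(size=mask.sum())
              # Reflect candidates that leave the box back inside, then clip for safety.
              cand = np.where(cand < lower, 2 * lower - cand, cand)
              cand = np.where(cand > upper, 2 * upper - cand, cand)
              cand = np.clip(cand, lower, upper)
              fc = f(cand)
              if fc <= fbest:                             # greedy acceptance
                  best, fbest = cand, fc
          return best, fbest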

  18. Structural evolution and electronic properties of medium-sized gallium clusters from ab initio genetic algorithm search.

    PubMed

    Sai, Linwei; Zhao, Jijun; Huang, Xiaoming; Wang, Jun

    2012-01-01

    Using a genetic algorithm incorporated with density functional theory, we have explored the size evolution of the structural and electronic properties of neutral gallium clusters of 20-40 atoms in terms of their ground state structures, binding energies, second differences of energy, HOMO-LUMO gaps, distributions of bond length and bond angle, and electron density of states. In the size range studied, the Ga(n) clusters exhibit several growth patterns, and core-shell structures become dominant from Ga31 onward. With high point group symmetries, Ga23 and Ga36 show particularly high stability, and Ga36 has a large HOMO-LUMO gap. The atomic structures and electronic states of Ga(n) clusters differ significantly from those of the α solid but resemble the β solid and the liquid to a certain extent.

  19. SAGA: a hybrid search algorithm for Bayesian Network structure learning of transcriptional regulatory networks.

    PubMed

    Adabor, Emmanuel S; Acquaah-Mensah, George K; Oduro, Francis T

    2015-02-01

    Bayesian Networks have been used for the inference of transcriptional regulatory relationships among genes, and are valuable for obtaining biological insights. However, finding optimal Bayesian Network (BN) is NP-hard. Thus, heuristic approaches have sought to effectively solve this problem. In this work, we develop a hybrid search method combining Simulated Annealing with a Greedy Algorithm (SAGA). SAGA explores most of the search space by undergoing a two-phase search: first with a Simulated Annealing search and then with a Greedy search. Three sets of background-corrected and normalized microarray datasets were used to test the algorithm. BN structure learning was also conducted using the datasets, and other established search methods as implemented in BANJO (Bayesian Network Inference with Java Objects). The Bayesian Dirichlet Equivalence (BDe) metric was used to score the networks produced with SAGA. SAGA predicted transcriptional regulatory relationships among genes in networks that evaluated to higher BDe scores with high sensitivities and specificities. Thus, the proposed method competes well with existing search algorithms for Bayesian Network structure learning of transcriptional regulatory networks.

  20. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum

    PubMed Central

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents’ positions in the searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of implementation of the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904

  1. An Experience Oriented-Convergence Improved Gravitational Search Algorithm for Minimum Variance Distortionless Response Beamforming Optimum.

    PubMed

    Darzi, Soodabeh; Tiong, Sieh Kiong; Tariqul Islam, Mohammad; Rezai Soleymanpour, Hassan; Kibria, Salehin

    2016-01-01

    An experience oriented-convergence improved gravitational search algorithm (ECGSA), based on two new modifications, searching through the best experiences and the use of a dynamic gravitational damping coefficient (α), is introduced in this paper. ECGSA saves its best fitness function evaluations and uses those as the agents' positions in the searching process. In this way, the optimal found trajectories are retained and the search starts from these trajectories, which allows the algorithm to avoid local optima. Also, the agents can move faster in the search space to obtain better exploration during the first stage of the searching process, and they can converge rapidly to the optimal solution at the final stage of the search process by means of the proposed dynamic gravitational damping coefficient. The performance of ECGSA has been evaluated by applying it to eight standard benchmark functions along with six complicated composite test functions. It is also applied to the adaptive beamforming problem as a practical issue, to improve the weight vectors computed by the minimum variance distortionless response (MVDR) beamforming technique. The results of implementation of the proposed algorithm are compared with some well-known heuristic methods and verify the proposed method in terms of both reaching optimal solutions and robustness. PMID:27399904

  2. First experimental demonstration of an exact quantum search algorithm in nuclear magnetic resonance system

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Zhang, FeiHao

    2015-07-01

    The success probability of searching for an objective item in an unsorted database using the standard Grover algorithm is usually not exactly 1; it is exactly 1 only when the algorithm is used to find the target state in a database with four items. Exact search is always important in theoretical and practical applications. The failure rate of Grover's algorithm becomes large when the database is small, and this hinders the use of the commonly used divide-and-verify strategy. Even for a large database, the failure rate becomes considerably large when there are many marked items. This has put a serious limitation on the usability of Grover's algorithm. An important improved version of Grover's algorithm, also known as the improved Grover algorithm, solves this problem. The improved Grover algorithm searches for an arbitrary number of target states in an unsorted database with full success rate. Here, we give the first experimental realization of the improved Grover algorithm, which finds a marked state with certainty, in a nuclear magnetic resonance system. Optimal control theory is used to obtain an optimized control sequence. The experimental results agree well with the theoretical predictions.
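
    For reference, the textbook success probability of standard Grover search - the quantity that equals 1 only in special cases such as a single target in a four-item database - can be written (in LaTeX notation) as:

      P_k = \sin^2\bigl((2k+1)\,\theta\bigr), \qquad \sin\theta = \sqrt{M/N}

    where N is the database size, M the number of marked items and k the number of Grover iterations; P_k = 1 requires (2k+1)\theta = \pi/2 for an integer k, which holds, for example, for M/N = 1/4.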

  3. SKA pulsar search: technological challenges and best algorithms development

    NASA Astrophysics Data System (ADS)

    Baffa, C.

    2014-08-01

    One of the key scientific projects of the SKA radio telescope is a large survey for pulsars in both isolated and binary systems. The data rate of the pulsar search engine is expected to reach 0.6 TeraSamples/sec. To extract hidden pulses from these streams, we need a complex search strategy which allows us to explore a three-dimensional parameter space and requires approximately 10 PetaFlops. This problem is well suited to a parallel computing engine, but the dimensions of SKA bring this problem to a new level of complexity. An up-to-date study shows that this operation would require more than 2000 GPUs. In this report we will present possible mitigation strategies.

  4. Using Geostationary Communications Satellites as a Sensor: Telemetry Search Algorithms

    NASA Astrophysics Data System (ADS)

    Cahoy, K.; Carlton, A.; Lohmeyer, W. Q.

    2014-12-01

    For decades, operators and manufacturers have collected large amounts of telemetry from geostationary (GEO) communications satellites to monitor system health and performance, yet this data is rarely mined for scientific purposes. The goal of this work is to mine data archives acquired from commercial operators using new algorithms that can detect when a space weather (or non-space weather) event of interest has occurred or is in progress. We have developed algorithms to statistically analyze power amplifier current and temperature telemetry and identify deviations from nominal operations or other trends of interest. We then examine space weather data to see what role, if any, it might have played. We also closely examine both long and short periods of time before an anomaly to determine whether or not the anomaly could have been predicted.
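
    The record does not spell out the deviation-detection step, so the sketch below is only a generic stand-in: a rolling z-score flag on a single telemetry channel, with the window length and threshold chosen arbitrarily for illustration rather than taken from the authors' algorithms.

        import numpy as np

        def rolling_zscore_anomalies(values, window=48, threshold=4.0):
            """Flag samples that deviate strongly from the trailing-window statistics.
            `window` and `threshold` are illustrative choices, not the authors'."""
            values = np.asarray(values, dtype=float)
            flags = np.zeros(len(values), dtype=bool)
            for i in range(window, len(values)):
                ref = values[i - window:i]
                mu, sigma = ref.mean(), ref.std()
                if sigma > 0 and abs(values[i] - mu) > threshold * sigma:
                    flags[i] = True
            return flags

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            current = 10 + 0.05 * rng.standard_normal(500)   # nominal amplifier current
            current[300] += 2.0                              # injected step anomaly
            print(np.flatnonzero(rolling_zscore_anomalies(current)))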

  5. Feature selection method based on multi-fractal dimension and harmony search algorithm and its application

    NASA Astrophysics Data System (ADS)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na

    2016-10-01

    Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and harmony search algorithm is proposed. Multi-fractal dimension is adopted as the evaluation criterion of feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI data-sets. Besides, the proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.
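
    As a rough illustration of the search strategy described above, the sketch below runs a binary harmony search over feature subsets. The multi-fractal-dimension criterion is not reproduced; a simple correlation-based score is substituted so the example stays self-contained, and all parameter values are assumptions.

        import numpy as np

        def subset_score(mask, X, y):
            """Placeholder evaluation criterion (the paper uses multi-fractal dimension):
            mean |correlation| of selected features with y, minus a small size penalty."""
            if mask.sum() == 0:
                return -np.inf
            cors = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in np.flatnonzero(mask)]
            return float(np.mean(cors)) - 0.01 * mask.sum()

        def harmony_search_fs(X, y, hms=10, hmcr=0.9, par=0.3, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            n_feat = X.shape[1]
            memory = rng.integers(0, 2, (hms, n_feat))
            scores = np.array([subset_score(m, X, y) for m in memory])
            for _ in range(iters):
                new = np.empty(n_feat, dtype=int)
                for j in range(n_feat):
                    if rng.random() < hmcr:                 # memory consideration
                        new[j] = memory[rng.integers(hms), j]
                        if rng.random() < par:              # pitch adjustment = bit flip
                            new[j] = 1 - new[j]
                    else:                                   # random consideration
                        new[j] = rng.integers(0, 2)
                s = subset_score(new, X, y)
                worst = int(np.argmin(scores))
                if s > scores[worst]:                       # replace the worst harmony
                    memory[worst], scores[worst] = new, s
            best = int(np.argmax(scores))
            return memory[best], scores[best]

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            X = rng.standard_normal((200, 15))
            y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.standard_normal(200)
            mask, score = harmony_search_fs(X, y)
            print("selected features:", np.flatnonzero(mask), "score:", round(score, 3))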

  6. Review of tandem repeat search tools: a systematic approach to evaluating algorithmic performance.

    PubMed

    Lim, Kian Guan; Kwoh, Chee Keong; Hsu, Li Yang; Wirawan, Adrianto

    2013-01-01

    The prevalence of tandem repeats in eukaryotic genomes and their association with a number of genetic diseases has raised considerable interest in locating these repeats. Over the last 10-15 years, numerous tools have been developed for searching tandem repeats, but differences in the search algorithms adopted and difficulties with parameter settings have confounded many users, resulting in widely varying results. In this review, we have systematically separated the algorithmic aspect of the search tools from the influence of the parameter settings. We hope that this will give a better understanding of how the tools differ in algorithmic performance, of their inherent constraints, and of how one should approach evaluating and selecting them. PMID:22648964

  7. Parallel algorithms and architectures for very fast AI search

    SciTech Connect

    Gu, J.

    1989-01-01

    A wide range of problems in natural and artificial intelligence, computer vision, computer graphics, database engineering, operations research, symbolic logic, robot manipulation and hardware design automation are special cases of the Consistent Labeling Problem (CLP). CLP has long been viewed as an efficient computational model based on a unit constraint relation containing 2N-tuples of units and labels, which specifies which N-tuples of labels are compatible with which N-tuples of units. Due to high computation cost and design complexity, most of the best-known algorithms and computer architectures have proven infeasible for solving consistent labeling problems, and the efficiency of CLP computation has improved only a few times over the last decade. This research presents several parallel algorithms and computer architectures for solving CLP within a parallel processing framework. For problems of practical interest, 4 to 10 orders of magnitude of efficiency improvement can easily be reached. Several simple wafer-scale computer architectures are given which implement these parallel algorithms at a surprisingly low cost.

  8. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor, so that a more careful and thorough search is made near the nest locations. In order to select a reasonable repeat-cycled disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms, the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy.
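
    The following sketch shows a plain cuckoo search with Lévy flights plus a periodic local disturbance around the best nest, which is only a stand-in for the paper's repeat-cycle self-learning and self-evolving disturbance operator; step sizes, the disturbance schedule and the abandonment fraction are illustrative assumptions.

        import math
        import numpy as np

        def levy_step(rng, dim, beta=1.5):
            # Mantegna's algorithm for Levy-distributed steps
            num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
            den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
            sigma = (num / den) ** (1 / beta)
            u = rng.normal(0, sigma, dim)
            v = rng.normal(0, 1, dim)
            return u / np.abs(v) ** (1 / beta)

        def cuckoo_search_disturbed(obj, dim=10, n_nests=20, iters=300,
                                    pa=0.25, disturb_every=20, disturb_scale=0.05, seed=0):
            rng = np.random.default_rng(seed)
            nests = rng.uniform(-5, 5, (n_nests, dim))
            fit = np.array([obj(x) for x in nests])
            for t in range(iters):
                best = nests[np.argmin(fit)]
                for i in range(n_nests):
                    cand = nests[i] + 0.01 * levy_step(rng, dim) * (nests[i] - best)
                    f = obj(cand)
                    if f < fit[i]:
                        nests[i], fit[i] = cand, f
                # abandon a fraction pa of the worst nests
                n_bad = max(1, int(pa * n_nests))
                worst = np.argsort(fit)[-n_bad:]
                nests[worst] = rng.uniform(-5, 5, (n_bad, dim))
                fit[worst] = [obj(x) for x in nests[worst]]
                if (t + 1) % disturb_every == 0:            # careful search near the best nest
                    probe = best + disturb_scale * rng.standard_normal(dim)
                    fp = obj(probe)
                    if fp < fit.min():
                        idx = int(np.argmax(fit))           # replace the current worst nest
                        nests[idx], fit[idx] = probe, fp
            k = int(np.argmin(fit))
            return nests[k], fit[k]

        if __name__ == "__main__":
            sphere = lambda x: float(np.sum(x ** 2))
            x, f = cuckoo_search_disturbed(sphere)
            print("best fitness:", f)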

  9. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization

    PubMed Central

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor, so that a more careful and thorough search is made near the nest locations. In order to select a reasonable repeat-cycled disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms, the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164

  10. Cuckoo Search Algorithm Based on Repeat-Cycle Asymptotic Self-Learning and Self-Evolving Disturbance for Function Optimization.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Song, Jiang-di

    2015-01-01

    In order to improve the convergence velocity and optimization accuracy of the cuckoo search (CS) algorithm for solving function optimization problems, a new improved cuckoo search algorithm based on repeat-cycle asymptotic self-learning and self-evolving disturbance (RC-SSCS) is proposed. A disturbance operation is added to the algorithm by constructing a disturbance factor, so that a more careful and thorough search is made near the nest locations. In order to select a reasonable repeat-cycled disturbance number, a further study on the choice of disturbance times is made. Finally, six typical test functions are adopted to carry out simulation experiments, and the proposed algorithm is compared with two typical swarm intelligence algorithms, the particle swarm optimization (PSO) algorithm and the artificial bee colony (ABC) algorithm. The results show that the improved cuckoo search algorithm has better convergence velocity and optimization accuracy. PMID:26366164

  11. An effective hybrid cuckoo search algorithm with improved shuffled frog leaping algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of global optimal information on frog leaping and information exchange between frog individuals with low-probability genetic mutation. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, a greedy transform method is used to repair infeasible solutions and to optimize feasible solutions. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940
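
    The greedy transform step can be illustrated independently of the CS/ISFLA machinery. The sketch below repairs an infeasible 0-1 knapsack bit string and then greedily improves a feasible one using value-to-weight density, which is a common reading of such repair operators; the paper's exact rule may differ.

        def greedy_transform(bits, values, weights, capacity):
            """Repair an infeasible 0-1 knapsack solution, then greedily improve it,
            using value/weight density ordering (an illustrative reading of the
            'greedy transform method')."""
            order = sorted(range(len(values)),
                           key=lambda i: values[i] / weights[i], reverse=True)
            bits = list(bits)
            load = sum(w for b, w in zip(bits, weights) if b)
            # repair: drop lowest-density packed items until the load fits
            for i in reversed(order):
                if load <= capacity:
                    break
                if bits[i]:
                    bits[i] = 0
                    load -= weights[i]
            # optimise: add highest-density unpacked items that still fit
            for i in order:
                if not bits[i] and load + weights[i] <= capacity:
                    bits[i] = 1
                    load += weights[i]
            return bits

        if __name__ == "__main__":
            values  = [10, 40, 30, 50]
            weights = [ 5,  4,  6,  3]
            print(greedy_transform([1, 1, 1, 1], values, weights, capacity=10))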

  12. An effective hybrid cuckoo search algorithm with improved shuffled frog leaping algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of global optimal information on frog leaping and information exchange between frog individuals with low-probability genetic mutation. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, a greedy transform method is used to repair infeasible solutions and to optimize feasible solutions. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm.

  13. An Effective Hybrid Cuckoo Search Algorithm with Improved Shuffled Frog Leaping Algorithm for 0-1 Knapsack Problems

    PubMed Central

    Wang, Gai-Ge; Feng, Qingjiang; Zhao, Xiang-Jun

    2014-01-01

    An effective hybrid cuckoo search (CS) algorithm with an improved shuffled frog-leaping algorithm (ISFLA) is put forward for solving the 0-1 knapsack problem. First, within the framework of SFLA, an improved frog-leap operator is designed that combines the effect of global optimal information on frog leaping and information exchange between frog individuals with low-probability genetic mutation. Subsequently, in order to improve the convergence speed and enhance the exploitation ability, a novel CS model is proposed that considers the specific advantages of Lévy flights and the frog-leap operator. Furthermore, a greedy transform method is used to repair infeasible solutions and to optimize feasible solutions. Finally, numerical simulations are carried out on six different types of 0-1 knapsack instances, and the comparative results show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions, outperforming the binary cuckoo search, the binary differential evolution, and the genetic algorithm. PMID:25404940

  14. Performance analysis for the expanding search PN acquisition algorithm. [pseudonoise in spread spectrum transmission

    NASA Technical Reports Server (NTRS)

    Braun, W. R.

    1982-01-01

    An approach is described for approximating the cumulative probability distribution of the acquisition time of the serial pseudonoise (PN) search algorithm. The results are applicable to both variable and fixed dwell time systems. The theory is developed for the case where some a priori information is available on the PN code epoch (reacquisition problem or acquisition of very long codes). Also considered is the special case of a search over the whole code. The accuracy of the approximation is demonstrated by comparisons with published exact results for the fixed dwell time algorithm.
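
    The distribution being approximated in this record is easy to explore empirically. The sketch below is a toy Monte Carlo model of a single-dwell serial search (a miss on the true cell forces another sweep, a false alarm on a wrong cell costs a fixed verification penalty); it is meant only to show what the acquisition-time statistics look like, not to reproduce the report's analytical approximation, and all parameter names and values are illustrative.

        import random

        def simulate_serial_search(q, true_cell, p_d, p_fa, dwell, penalty, start=0, rng=None):
            """One acquisition run of a single-dwell serial PN search over q code cells."""
            rng = rng or random.Random()
            t, cell = 0.0, start
            while True:
                t += dwell
                if cell == true_cell:
                    if rng.random() < p_d:
                        return t                      # target cell detected
                elif rng.random() < p_fa:
                    t += penalty                      # false alarm: pay verification time
                cell = (cell + 1) % q

        if __name__ == "__main__":
            rng = random.Random(0)
            runs = [simulate_serial_search(q=1024, true_cell=700, p_d=0.9, p_fa=0.01,
                                           dwell=1e-3, penalty=0.1, rng=rng)
                    for _ in range(2000)]
            runs.sort()
            print("mean acquisition time:", sum(runs) / len(runs))
            print("90th percentile:", runs[int(0.9 * len(runs))])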

  15. Genetic algorithm and the application for job shop group scheduling

    NASA Astrophysics Data System (ADS)

    Mao, Jianzhong; Wu, Zhiming

    1995-08-01

    The genetic algorithm (GA) is a heuristic, randomized search technique that mimics natural evolution. This paper first presents the basic principle of GA, the definition and function of the genetic operators, and the principal characteristics of GA. On this basis, the paper proposes using GA as a new solution method for the job-shop group scheduling problem, and discusses the coded representation of feasible solutions and the particular limitations on the genetic operators.
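
    As a concrete illustration of the GA ingredients listed above (coded representation and genetic operators), the sketch below evolves a job permutation with order crossover and swap mutation against a toy single-machine total-completion-time objective; the actual group-scheduling encoding and objective used in the paper are not reproduced.

        import random

        def order_crossover(p1, p2, rng):
            """OX crossover: copy a slice from one parent, fill the rest in the order of the other."""
            n = len(p1)
            a, b = sorted(rng.sample(range(n), 2))
            child = [None] * n
            child[a:b] = p1[a:b]
            fill = [g for g in p2 if g not in child[a:b]]
            k = 0
            for i in range(n):
                if child[i] is None:
                    child[i] = fill[k]
                    k += 1
            return child

        def ga_schedule(proc_times, pop=30, gens=200, pm=0.2, seed=0):
            """Minimise total completion time of jobs on one machine (a toy objective)."""
            rng = random.Random(seed)
            n = len(proc_times)
            cost = lambda perm: sum((n - i) * proc_times[j] for i, j in enumerate(perm))
            population = [rng.sample(range(n), n) for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=cost)
                parents = population[:pop // 2]                   # truncation selection
                children = []
                while len(children) < pop - len(parents):
                    p1, p2 = rng.sample(parents, 2)
                    c = order_crossover(p1, p2, rng)
                    if rng.random() < pm:                         # swap mutation
                        i, j = rng.sample(range(n), 2)
                        c[i], c[j] = c[j], c[i]
                    children.append(c)
                population = parents + children
            best = min(population, key=cost)
            return best, cost(best)

        if __name__ == "__main__":
            times = [7, 3, 9, 2, 5, 8, 1, 6]
            perm, c = ga_schedule(times)
            print("best order:", perm, "total completion time:", c)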

  16. The Effect of Peptide Identification Search Algorithms on MS2-Based Label-Free Protein Quantification

    PubMed Central

    Degroeve, Sven; Staes, An; De Bock, Pieter-Jan

    2012-01-01

    Several approaches exist for the quantification of proteins in complex samples processed by liquid chromatography-mass spectrometry followed by fragmentation analysis (MS2). One of these approaches is label-free MS2-based quantification, which takes advantage of the information computed from MS2 spectrum observations to estimate the abundance of a protein in a sample. As a first step in this approach, fragmentation spectra are typically matched to the peptides that generated them by a search algorithm. Because different search algorithms identify overlapping but non-identical sets of peptides, here we investigate whether these differences in peptide identification have an impact on the quantification of the proteins in the sample. We therefore evaluated the effect of using different search algorithms by examining the reproducibility of protein quantification in technical repeat measurements of the same sample. From our results, it is clear that a search engine effect does exist for MS2-based label-free protein quantification methods. As a general conclusion, it is recommended to address the overall possibility of search engine-induced bias in the protein quantification results of label-free MS2-based methods by performing the analysis with two or more distinct search engines. PMID:22804230

  17. Algorithms for Lunar Flash Video Search, Measurement, and Archiving

    NASA Technical Reports Server (NTRS)

    Swift, Wesley; Suggs, Robert; Cooke, Bill

    2007-01-01

    Lunar meteoroid impact flashes provide a method to estimate the flux of large meteoroids and thus their hazard to spacecraft. Although meteoroid impacts on the Moon have been detected using video methods for over a decade, the difficulty of manually searching hours of video for the rare, extremely brief impact flashes has discouraged the technique's systematic implementation. A prototype has been developed for the purpose of automatically searching lunar video records for impact flashes, eliminating false detections, editing the returned possible flashes, and archiving and documenting the results. The theory and organization of the program are discussed, with emphasis on filtering out several classes of false detections and retaining the brief portions of the raw video necessary for in-depth analysis of the detected flashes. Several utilities included in the program for measurement, analysis, and location of the flashes on the Moon are demonstrated. Application of the program to a year's worth of lunar observations is discussed, along with examples of impact flashes as well as several classes of false impact flashes.
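
    The core detection step can be pictured with a simple frame-differencing filter, sketched below on synthetic frames; the real program's false-detection filters (cosmic-ray hits, satellites, noise) are not modelled, and the threshold values are arbitrary assumptions.

        import numpy as np

        def detect_flashes(frames, k_sigma=6.0, min_separation=3):
            """Flag frames whose brightest pixel jumps well above the preceding frame."""
            peaks = []
            last = -min_separation
            for idx in range(1, len(frames)):
                diff = frames[idx].astype(float) - frames[idx - 1].astype(float)
                score = diff.max()
                noise = diff.std() + 1e-9
                if score > k_sigma * noise and idx - last >= min_separation:
                    y, x = np.unravel_index(np.argmax(diff), diff.shape)
                    peaks.append((idx, int(x), int(y), float(score)))
                    last = idx
            return peaks

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            frames = rng.poisson(20, size=(100, 64, 64)).astype(np.uint16)
            frames[57, 30, 31] += 200                      # inject a one-frame flash
            for idx, x, y, score in detect_flashes(frames):
                print(f"candidate flash at frame {idx}, pixel ({x}, {y}), amplitude {score:.0f}")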

  18. Algorithms for Lunar Flash Video Search, Measurement, and Archiving

    NASA Technical Reports Server (NTRS)

    Swift, Wesley; Suggs, Robert; Cooke, William

    2007-01-01

    Lunar meteoroid impact flashes provide a method to estimate the flux of large meteoroids and thus their hazard to spacecraft. Although meteoroid impacts on the Moon have been detected using video methods for over a decade, the difficulty of manually searching hours of video for the rare, extremely brief impact flashes has discouraged the technique's systematic implementation. A prototype has been developed for the purpose of automatically searching lunar video records for impact flashes, eliminating false detections, editing the returned possible flashes, and archiving and documenting the results. The theory and organization of the program are discussed, with emphasis on filtering out several classes of false detections and retaining the brief portions of the raw video necessary for in-depth analysis of the detected flashes. Several utilities included in the program for measurement, analysis, and location of the flashes on the Moon are demonstrated. Application of the program to a year's worth of lunar observations is discussed, along with examples of impact flashes as well as several classes of false impact flashes.

  19. LC-Grid: a linear global contact search algorithm for finite element analysis

    NASA Astrophysics Data System (ADS)

    Chen, Hu; Lei, Zhou; Zang, Mengyan

    2014-11-01

    Contact searching is computationally intensive and its memory requirement is highly demanding; it is therefore important to develop an efficient contact search algorithm that requires less memory. In this paper, we propose an efficient global contact search algorithm with linear complexity, in terms of both computational cost and memory requirement, for the finite element analysis of contact problems. This algorithm is named LC-Grid (Lei devised the algorithm and Chen implemented it). The contact space is decomposed; all contact nodes and segments are mapped first onto layers, then onto rows and lastly onto cells. At each mapping level, the linked-list technique is used for efficient storage and retrieval of contact nodes and segments. Contact detection is performed in each non-empty cell along non-empty rows in each non-empty layer, and moves to the next non-empty layer once a layer is completed. The use of a migration strategy makes the algorithm insensitive to mesh size. The cost of this algorithm is investigated and numerically verified to be linearly proportional to the number of contact segments. In addition, the ideal ranges of two significant scale factors, the cell size and the buffer zone, which strongly affect computational efficiency, are determined via an illustrative example.
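
    The layer/row/cell decomposition can be approximated in a few lines with a uniform grid of buckets: candidate node-segment pairs are generated only within neighbouring cells, which is what keeps the cost roughly linear in the number of segments. The sketch below uses a flat dictionary of cells rather than LC-Grid's linked lists, and the one-cell buffer zone is an illustrative choice.

        from collections import defaultdict
        from itertools import product

        def grid_contact_pairs(nodes, segments, cell_size):
            """Bucket segment midpoints into uniform cells, then test each node only
            against segments in its own and neighbouring cells."""
            buckets = defaultdict(list)
            for s_id, (sx, sy, sz) in segments.items():
                key = (int(sx // cell_size), int(sy // cell_size), int(sz // cell_size))
                buckets[key].append(s_id)

            candidates = []
            for n_id, (nx, ny, nz) in nodes.items():
                cx, cy, cz = int(nx // cell_size), int(ny // cell_size), int(nz // cell_size)
                for dx, dy, dz in product((-1, 0, 1), repeat=3):   # buffer zone of one cell
                    for s_id in buckets.get((cx + dx, cy + dy, cz + dz), ()):
                        candidates.append((n_id, s_id))
            return candidates

        if __name__ == "__main__":
            nodes = {0: (0.1, 0.2, 0.0), 1: (5.0, 5.0, 5.0)}
            segments = {"a": (0.3, 0.1, 0.1), "b": (9.0, 9.0, 9.0)}
            print(grid_contact_pairs(nodes, segments, cell_size=1.0))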

  20. Improvement of Service Searching Algorithm in the JVO Portal Site

    NASA Astrophysics Data System (ADS)

    Eguchi, S.; Shirasak, Y.; Komiya, Y.; Ohishi, M.; Mizumoto, Y.; Ishihara, Y.; Tsutsumi, J.; Hiyama, T.; Nakamoto, H.; Sakamoto, M.

    2012-09-01

    The Virtual Observatory (VO) consists of a huge number of astronomical databases which contain both theoretical and observational data obtained with various methods, telescopes, and instruments. Since the VO provides raw and processed observational data, astronomers can concentrate on their scientific interests without needing to be aware of the instruments; all they have to know is which service provides the data they are interested in. On the other hand, services on the VO system will be better used if queries can be made in terms of telescopes, wavelengths, and object types; currently it is difficult for newcomers to find the desired ones. We have recently started a project aimed at improving the data service functionality and usability of the Japanese VO (JVO) portal site. We are now working on the implementation of a function to automatically classify all services on the VO in terms of telescopes and instruments without referring to the facility and instrument keywords, which are often left blank. In this paper, we report a new algorithm for constructing the facility and instrument keywords from other information in a service description, and discuss its effectiveness. We also propose a new user interface for the portal site based on this algorithm.

  1. Correspondence: Searching sequence space

    SciTech Connect

    Youvan, D.C.

    1995-08-01

    This correspondence debates the efficiency and application of genetic algorithms (GAs) to search protein sequence space. The important experimental point is that such sparse searches utilize physically realistic syntheses. In this regard, all GA-based technologies are very similar; they "learn" from their initial sparse search and then generate interesting new proteins within a few iterations. Which GA-based technology is best? That probably depends on the protein and the specific engineering goal. Given the fact that the field of combinatorial chemistry is still in its infancy, it is probably wise to consider all of the proven mutagenesis methods. 19 refs.

  2. Novel back propagation optimization by Cuckoo Search algorithm.

    PubMed

    Yi, Jiao-hong; Xu, Wei-hong; Chen, Yuan-tao

    2014-01-01

    The traditional Back Propagation (BP) algorithm has some significant disadvantages, such as slow training, a tendency to fall into local minima, and sensitivity to the initial weights and bias. In order to overcome these shortcomings, an improved BP network optimized by Cuckoo Search (CS), called CSBP, is proposed in this paper. In CSBP, CS is used to simultaneously optimize the initial weights and bias of the BP network. The wine data set is adopted to study the prediction performance of CSBP, and the proposed method is compared with the basic BP and the General Regression Neural Network (GRNN). Moreover, a parameter study of CSBP is conducted in order to configure CSBP in the best way. PMID:25028682

  3. Novel back propagation optimization by Cuckoo Search algorithm.

    PubMed

    Yi, Jiao-hong; Xu, Wei-hong; Chen, Yuan-tao

    2014-01-01

    The traditional Back Propagation (BP) algorithm has some significant disadvantages, such as slow training, a tendency to fall into local minima, and sensitivity to the initial weights and bias. In order to overcome these shortcomings, an improved BP network optimized by Cuckoo Search (CS), called CSBP, is proposed in this paper. In CSBP, CS is used to simultaneously optimize the initial weights and bias of the BP network. The wine data set is adopted to study the prediction performance of CSBP, and the proposed method is compared with the basic BP and the General Regression Neural Network (GRNN). Moreover, a parameter study of CSBP is conducted in order to configure CSBP in the best way.

  4. In search of preferential flow paths in structured porous media using a simple genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gwo, Jin-Ping

    2001-06-01

    Fracture network and preferential flow path images from exposed outcrops of geological formations, exposed soil pedon faces, and extracted soil columns and rock cores are often used to conceptualize and construct models to predict the fate and transport of subsurface contaminants. Both the scale resolutions inherent in these observations and the upscaling methods used to obtain macroscopic flow and transport parameters may result in uncertainties in the prediction. We present a mechanistic-based approach that utilizes a discrete fracture flow and transport model, a distributed and high performance computational architecture, and a genetic-based search algorithm to invert scale-representative, equivalent fracture networks or the preferential flow paths. Synthetic breakthrough curves (BTCs) and exposed structural information from known fracture networks in hypothetical soil columns are presented to the search algorithm. Using three genetic operators, a simple genetic algorithm (SGA) is able to invert the correct pictures of simple but not complex fracture networks. Solute transport experiments using soil columns often assume that the structure of soil columns is laterally uniform with respect to the macroscopic transport direction and the transport process is longitudinally one-dimensional. This assumption and the one BTC thus collected for each injection of solutes, even with flow interruptions, are not sufficient to guide the search algorithm toward the global optimum. Additional information (e.g., multiple solute BTCs along a cross section of the soil column) is necessary for the SGA to invert the correct fracture network. Three SGA population statistics, fracture network uncertainty, informatic entropy, and matrix-fracture contact area, are proposed to measure the uncertainty of SGA near optima. A positive correlation between the reduction of these statistics and the level of relevant information to better confine the SGA search space was found. The SGA search

  5. Investigation of candidate data structures and search algorithms to support a knowledge based fault diagnosis system

    NASA Technical Reports Server (NTRS)

    Bosworth, Edward L., Jr.

    1987-01-01

    The focus of this research is the investigation of data structures and associated search algorithms for automated fault diagnosis of complex systems such as the Hubble Space Telescope. Such data structures and algorithms will form the basis of a more sophisticated Knowledge Based Fault Diagnosis System. As a part of the research, several prototypes were written in VAXLISP and implemented on one of the VAX-11/780's at the Marshall Space Flight Center. This report describes and gives the rationale for both the data structures and algorithms selected. A brief discussion of a user interface is also included.

  6. Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Kasami, Tadao; Moorthy, H. T.

    1997-01-01

    This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.

  7. Search algorithm complexity modeling with application to image alignment and matching

    NASA Astrophysics Data System (ADS)

    DelMarco, Stephen

    2014-05-01

    Search algorithm complexity modeling, in the form of penetration rate estimation, provides a useful way to estimate search efficiency in application domains which involve searching over a hypothesis space of reference templates or models, as in model-based object recognition, automatic target recognition, and biometric recognition. The penetration rate quantifies the expected portion of the database that must be searched, and is useful for estimating search algorithm computational requirements. In this paper we perform mathematical modeling to derive general equations for penetration rate estimates that are applicable to a wide range of recognition problems. We extend previous penetration rate analyses to use more general probabilistic modeling assumptions. In particular we provide penetration rate equations within the framework of a model-based image alignment application domain in which a prioritized hierarchical grid search is used to rank subspace bins based on matching probability. We derive general equations, and provide special cases based on simplifying assumptions. We show how previously-derived penetration rate equations are special cases of the general formulation. We apply the analysis to model-based logo image alignment in which a hierarchical grid search is used over a geometric misalignment transform hypothesis space. We present numerical results validating the modeling assumptions and derived formulation.

  8. A Study on the Optimization Performance of Fireworks and Cuckoo Search Algorithms in Laser Machining Processes

    NASA Astrophysics Data System (ADS)

    Goswami, D.; Chakraborty, S.

    2014-11-01

    Laser machining is a promising non-contact process for effective machining of difficult-to-process advanced engineering materials. Increasing interest in the use of lasers for various machining operations can be attributed to its several unique advantages, like high productivity, non-contact processing, elimination of finishing operations, adaptability to automation, reduced processing cost, improved product quality, greater material utilization, minimum heat-affected zone and green manufacturing. To achieve the best desired machining performance and high quality characteristics of the machined components, it is extremely important to determine the optimal values of the laser machining process parameters. In this paper, fireworks algorithm and cuckoo search (CS) algorithm are applied for single as well as multi-response optimization of two laser machining processes. It is observed that although almost similar solutions are obtained for both these algorithms, CS algorithm outperforms fireworks algorithm with respect to average computation time, convergence rate and performance consistency.

  9. An Improved Greedy Search Algorithm for the Development of a Phonetically Rich Speech Corpus

    NASA Astrophysics Data System (ADS)

    Zhang, Jin-Song; Nakamura, Satoshi

    An efficient way to develop large scale speech corpora is to collect phonetically rich ones that have high coverage of phonetic contextual units. The sentence set, usually called the minimum set, should have a small text size in order to reduce the collection cost. It can be selected by a greedy search algorithm from a large mother text corpus. With the inclusion of more and more phonetic contextual effects, the number of different phonetic contextual units increases dramatically, making the search a non-trivial issue. In order to improve the search efficiency, we previously proposed a so-called least-to-most-ordered greedy search based on the conventional algorithms. This paper evaluates these algorithms in order to show their different characteristics. The experimental results showed that the least-to-most-ordered methods successfully achieved smaller objective sets at significantly less computation time when compared with the conventional ones. This algorithm has already been applied to the development of a number of speech corpora, including a large scale phonetically rich Chinese speech corpus, ATRPTH, which played an important role in developing our multi-language translation system.
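
    The basic greedy selection underlying such minimum-set construction is a set-cover-style loop: repeatedly pick the sentence that covers the most not-yet-covered units. The sketch below shows that loop on toy data; the least-to-most-ordered refinement evaluated in the paper is not reproduced.

        def greedy_select(sentences, target_units):
            """Pick sentences that cover the most not-yet-covered phonetic units first."""
            remaining = set(target_units)
            chosen = []
            pool = dict(sentences)                     # sentence id -> set of units
            while remaining and pool:
                best_id = max(pool, key=lambda sid: len(pool[sid] & remaining))
                gain = pool[best_id] & remaining
                if not gain:                           # nothing left to gain
                    break
                chosen.append(best_id)
                remaining -= gain
                del pool[best_id]
            return chosen, remaining

        if __name__ == "__main__":
            sentences = {
                "s1": {"a-b", "b-c", "c-d"},
                "s2": {"a-b", "d-e"},
                "s3": {"d-e", "e-f", "f-g"},
            }
            chosen, uncovered = greedy_select(sentences, ["a-b", "b-c", "c-d", "d-e", "e-f", "f-g"])
            print("minimum-set sentences:", chosen, "uncovered units:", sorted(uncovered))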

  10. Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection

    ERIC Educational Resources Information Center

    Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas

    2011-01-01

    Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…

  11. An algorithm for constructing and searching spaces of alternative hypotheses.

    PubMed

    Griffin, Christopher; Testa, Kelly; Racunas, Stephen

    2011-06-01

    In this paper, we develop techniques for automated hypothesis-space exploration over data sets that may contain contradictions. To do so, we make use of the equivalence between two formulations: those of first-order predicate logic with prefix modal quantifiers under the finite-model hypothesis and those of mixed-integer linear programming (MILP) problems. Unlike other approaches, we do not assume that all logical assertions are true without doubt. Instead, we look for alternative hypotheses about the validity of the claims by identifying alternative optimal solutions to a corresponding MILP. We use a collection of slack variables in the derived linear constraints to indicate the presence of contradictory data or assumptions. The objective is to minimize contradictions between data and assertions represented by the presence of nonzero slack in the set of linear constraints. In this paper, we present the following: 1) a correspondence between first-order predicate logic with modal quantifier prefixes under the finite-model hypothesis and MILP problems and 2) an implicit enumeration algorithm for exploring the contradiction hypothesis space. PMID:21147596
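
    The slack-variable idea can be shown on a toy knowledge base: each relaxable assertion gets a non-negative slack, and the objective minimises total slack, so alternative optima correspond to alternative hypotheses about which claim to reject. The sketch below uses the PuLP modelling package and an invented three-statement example; it is not the authors' formulation.

        # Requires the PuLP package (pip install pulp); the toy knowledge base below is
        # invented for illustration and is not from the paper.
        from pulp import LpProblem, LpMinimize, LpVariable, lpSum

        prob = LpProblem("hypothesis_space", LpMinimize)

        bird  = LpVariable("bird_tweety", cat="Binary")    # bird(tweety)
        flies = LpVariable("flies_tweety", cat="Binary")   # flies(tweety)
        s = [LpVariable(f"slack_{i}", lowBound=0) for i in range(3)]

        # objective: minimise total contradiction (non-zero slack)
        prob += lpSum(s)

        # rule  : bird(tweety) -> flies(tweety), relaxable via s[0]
        prob += flies >= bird - s[0]
        # datum : observed not flies(tweety), relaxable via s[1]
        prob += flies <= 0 + s[1]
        # fact  : bird(tweety) asserted, relaxable via s[2]
        prob += bird >= 1 - s[2]

        prob.solve()
        print({v.name: v.value() for v in [bird, flies] + s})
        # Any optimum has total slack 1: each choice of which constraint absorbs the
        # slack is an alternative hypothesis about which claim to reject.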

  12. Optimal vaccination schedule search using genetic algorithm over MPI technology

    PubMed Central

    2012-01-01

    Background Immunological strategies that achieve the prevention of tumor growth are based on the presumption that the immune system, if triggered before tumor onset, could be able to defend from specific cancers. In supporting this assertion, in the last decade active immunization approaches prevented some virus-related cancers in humans. An immunopreventive cell vaccine for the non-virus-related human breast cancer has been recently developed. This vaccine, called Triplex, targets the HER-2-neu oncogene in HER-2/neu transgenic mice and has shown to almost completely prevent HER-2/neu-driven mammary carcinogenesis when administered with an intensive and life-long schedule. Methods To better understand the preventive efficacy of the Triplex vaccine in reduced schedules we employed a computational approach. The computer model developed allowed us to test in silico specific vaccination schedules in the quest for optimality. Specifically here we present a parallel genetic algorithm able to suggest optimal vaccination schedule. Results & Conclusions The enormous complexity of combinatorial space to be explored makes this approach the only possible one. The suggested schedule was then tested in vivo, giving good results. Finally, biologically relevant outcomes of optimization are presented. PMID:23148787

  13. An Algorithm for Constructing and Searching Spaces of Alternative Hypotheses

    SciTech Connect

    Testa, Kelly M; Griffin, Christopher H

    2011-01-01

    In this paper, we develop techniques for automated hypothesis-space exploration over data sets that may contain contradictions. To do so, we make use of the equivalence between two formulations: those of first-order predicate logic with prefix modal quantifiers under the finite-model hypothesis and those of mixed-integer linear programming (MILP) problems. Unlike other approaches, we do not assume that all logical assertions are true without doubt. Instead, we look for alternative hypotheses about the validity of the claims by identifying alternative optimal solutions to a corresponding MILP. We use a collection of slack variables in the derived linear constraints to indicate the presence of contradictory data or assumptions. The objective is to minimize contradictions between data and assertions represented by the presence of nonzero slack in the set of linear constraints. In this paper, we present the following: 1) a correspondence between first-order predicate logic with modal quantifier prefixes under the finite-model hypothesis and MILP problems and 2) an implicit enumeration algorithm for exploring the contradiction hypothesis space.

  14. An adaptive immune optimization algorithm with dynamic lattice searching operation for fast optimization of atomic clusters

    NASA Astrophysics Data System (ADS)

    Wu, Xia; Wu, Genhua

    2014-08-01

    Geometrical optimization of atomic clusters is performed by a development of the adaptive immune optimization algorithm (AIOA) with a dynamic lattice searching (DLS) operation (the AIOA-DLS method). Through a cycle of construction and searching of the dynamic lattice (DL), the DLS algorithm rapidly makes the clusters more regular and greatly reduces the potential energy. DLS can thus be used as an operation acting on the new individuals after the mutation operation in AIOA to improve the performance of the AIOA. The AIOA-DLS method combines the merit of an evolutionary algorithm with the idea of the dynamic lattice. The performance of the proposed method is investigated in the optimization of Lennard-Jones clusters of up to 250 atoms and silver clusters described by the many-body Gupta potential of up to 150 atoms. Results reported in the literature are reproduced, and the motif of the Ag61 cluster is found to be stacking-fault face-centered cubic, whose energy is lower than that of the previously obtained icosahedron.

  15. Hybridisations of Variable Neighbourhood Search and Modified Simplex Elements to Harmony Search and Shuffled Frog Leaping Algorithms for Process Optimisations

    NASA Astrophysics Data System (ADS)

    Aungkulanon, P.; Luangpaiboon, P.

    2010-10-01

    Nowadays, engineering problem systems are large and complicated. Effective finite sequences of instructions for solving these problems can be categorised into optimisation and meta-heuristic algorithms. Although the best decision-variable levels cannot always be determined from the available sets of alternatives, meta-heuristics offer experience-based techniques that rapidly help in problem solving, learning and discovery, in the hope of obtaining a more efficient or more robust procedure. All meta-heuristics provide auxiliary procedures in terms of their own toolbox functions, and it has been shown that the effectiveness of all meta-heuristics depends almost exclusively on these auxiliary functions. In fact, the auxiliary procedure from one can be implemented in other meta-heuristics. The well-known meta-heuristics of the harmony search algorithm (HSA) and the shuffled frog-leaping algorithm (SFLA) are compared with their hybridisations. HSA produces a near-optimal solution by modelling the perfect state of harmony in the improvisation process of musicians. SFLA, a population-based meta-heuristic, is a cooperative search metaphor inspired by natural memetics; it includes elements of local search and global information exchange. This study presents solution procedures for constrained and unconstrained problems with different natures of single- and multi-peak surfaces, including a curved ridge surface. Both meta-heuristics are modified via the variable neighbourhood search method (VNSM) philosophy, including a modified simplex method (MSM). The basic idea is the change of neighbourhoods while searching for a better solution. The hybridisations proceed by a descent method to a local minimum and then explore, systematically or at random, increasingly distant neighbourhoods of this local solution. The results show that the variant of HSA with VNSM and MSM appears to be better in terms of the mean and variance of the design points and yields.

  16. Identification of alternative splice variants in Aspergillus flavus through comparison of multiple tandem MS search algorithms

    PubMed Central

    2011-01-01

    Background Database searching is the most frequently used approach for automated peptide assignment and protein inference of tandem mass spectra. The results, however, depend on the sequences in target databases and on search algorithms. Recently by using an alternative splicing database, we identified more proteins than with the annotated proteins in Aspergillus flavus. In this study, we aimed at finding a greater number of eligible splice variants based on newly available transcript sequences and the latest genome annotation. The improved database was then used to compare four search algorithms: Mascot, OMSSA, X! Tandem, and InsPecT. Results The updated alternative splicing database predicted 15833 putative protein variants, 61% more than the previous results. There was transcript evidence for 50% of the updated genes compared to the previous 35% coverage. Database searches were conducted using the same set of spectral data, search parameters, and protein database but with different algorithms. The false discovery rates of the peptide-spectrum matches were estimated < 2%. The numbers of the total identified proteins varied from 765 to 867 between algorithms. Whereas 42% (1651/3891) of peptide assignments were unanimous, the comparison showed that 51% (568/1114) of the RefSeq proteins and 15% (11/72) of the putative splice variants were inferred by all algorithms. 12 plausible isoforms were discovered by focusing on the consensus peptides which were detected by at least three different algorithms. The analysis found different conserved domains in two putative isoforms of UDP-galactose 4-epimerase. Conclusions We were able to detect dozens of new peptides using the improved alternative splicing database with the recently updated annotation of the A. flavus genome. Unlike the identifications of the peptides and the RefSeq proteins, large variations existed between the putative splice variants identified by different algorithms. 12 candidates of putative isoforms

  17. Fast String Search on Multicore Processors: Mapping fundamental algorithms onto parallel hardware

    SciTech Connect

    Scarpazza, Daniele P.; Villa, Oreste; Petrini, Fabrizio

    2008-04-01

    String searching is one of the most fundamental algorithms. It has a host of applications, including search engines, network intrusion detection, virus scanners, spam filters, and DNA analysis, among others. The Cell processor, with its multiple cores, promises to speed up string searching considerably. In this article, we show how we mapped string searching efficiently onto the Cell. We present two implementations: • The fast implementation supports a small dictionary size (approximately 100 patterns) and provides a throughput of 40 Gbps, which is 100 times faster than reference implementations on x86 architectures. • The heavy-duty implementation is slower (3.3-4.3 Gbps), but supports dictionaries with tens of thousands of strings.
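
    The kind of dictionary matching described here is classically handled by the Aho-Corasick automaton, which scans the text once regardless of how many patterns are loaded. The sketch below is a plain single-threaded Python version for illustration only; it says nothing about the Cell-specific multicore mapping that the article is actually about.

        from collections import deque

        class AhoCorasick:
            """Dictionary-matching automaton: one pass over the text reports all
            occurrences of all patterns (the classic approach for multi-pattern
            workloads such as intrusion detection and virus scanning)."""
            def __init__(self, patterns):
                self.goto = [{}]
                self.fail = [0]
                self.out = [[]]
                for pat in patterns:                      # build the trie
                    state = 0
                    for ch in pat:
                        if ch not in self.goto[state]:
                            self.goto.append({})
                            self.fail.append(0)
                            self.out.append([])
                            self.goto[state][ch] = len(self.goto) - 1
                        state = self.goto[state][ch]
                    self.out[state].append(pat)
                queue = deque(self.goto[0].values())
                while queue:                              # BFS to build failure links
                    s = queue.popleft()
                    for ch, nxt in self.goto[s].items():
                        queue.append(nxt)
                        f = self.fail[s]
                        while f and ch not in self.goto[f]:
                            f = self.fail[f]
                        self.fail[nxt] = self.goto[f].get(ch, 0)
                        self.out[nxt] += self.out[self.fail[nxt]]

            def search(self, text):
                state, hits = 0, []
                for i, ch in enumerate(text):
                    while state and ch not in self.goto[state]:
                        state = self.fail[state]
                    state = self.goto[state].get(ch, 0)
                    for pat in self.out[state]:
                        hits.append((i - len(pat) + 1, pat))
                return hits

        if __name__ == "__main__":
            ac = AhoCorasick(["he", "she", "his", "hers"])
            print(ac.search("ushers"))    # [(1, 'she'), (2, 'he'), (2, 'hers')]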

  18. A Novel Algorithm for Validating Peptide Identification from a Shotgun Proteomics Search Engine

    PubMed Central

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Zheng, Mu; Jennings, Jennifer L.; Hoek, Kristen L.; Allos, Tara; Howard, Leigh M.; Edwards, Kathryn M.; Weil, P. Anthony; Link, Andrew J.

    2013-01-01

    Liquid chromatography coupled with tandem mass spectrometry has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from a LC/MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide spectra matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptides assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three step process: data cleaning, data refining through a SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm based on the resolution and mass accuracy of the mass spectrometer employed in the LC/MS/MS experiment. Our results demonstrate De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines. PMID:23402659

  19. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    PubMed

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from a LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide spectra matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptides assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through a SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines.

  20. A novel algorithm for validating peptide identification from a shotgun proteomics search engine.

    PubMed

    Jian, Ling; Niu, Xinnan; Xia, Zhonghang; Samir, Parimal; Sumanasekera, Chiranthani; Mu, Zheng; Jennings, Jennifer L; Hoek, Kristen L; Allos, Tara; Howard, Leigh M; Edwards, Kathryn M; Weil, P Anthony; Link, Andrew J

    2013-03-01

    Liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS) has revolutionized the proteomics analysis of complexes, cells, and tissues. In a typical proteomic analysis, the tandem mass spectra from a LC-MS/MS experiment are assigned to a peptide by a search engine that compares the experimental MS/MS peptide data to theoretical peptide sequences in a protein database. The peptide spectra matches are then used to infer a list of identified proteins in the original sample. However, the search engines often fail to distinguish between correct and incorrect peptides assignments. In this study, we designed and implemented a novel algorithm called De-Noise to reduce the number of incorrect peptide matches and maximize the number of correct peptides at a fixed false discovery rate using a minimal number of scoring outputs from the SEQUEST search engine. The novel algorithm uses a three-step process: data cleaning, data refining through a SVM-based decision function, and a final data refining step based on proteolytic peptide patterns. Using proteomics data generated on different types of mass spectrometers, we optimized the De-Noise algorithm on the basis of the resolution and mass accuracy of the mass spectrometer employed in the LC-MS/MS experiment. Our results demonstrate De-Noise improves peptide identification compared to other methods used to process the peptide sequence matches assigned by SEQUEST. Because De-Noise uses a limited number of scoring attributes, it can be easily implemented with other search engines. PMID:23402659

  1. A fuzzy discrete harmony search algorithm applied to annual cost reduction in radial distribution systems

    NASA Astrophysics Data System (ADS)

    Ameli, Kazem; Alfi, Alireza; Aghaebrahimi, Mohammadreza

    2016-09-01

    Similarly to other optimization algorithms, harmony search (HS) is quite sensitive to the tuning parameters. Several variants of the HS algorithm have been developed to decrease the parameter-dependency character of HS. This article proposes a novel version of the discrete harmony search (DHS) algorithm, namely fuzzy discrete harmony search (FDHS), for optimizing capacitor placement in distribution systems. In the FDHS, a fuzzy system is employed to dynamically adjust two parameter values, i.e. harmony memory considering rate and pitch adjusting rate, with respect to normalized mean fitness of the harmony memory. The key aspect of FDHS is that it needs substantially fewer iterations to reach convergence in comparison with classical discrete harmony search (CDHS). To the authors' knowledge, this is the first application of DHS to specify appropriate capacitor locations and their best amounts in the distribution systems. Simulations are provided for 10-, 34-, 85- and 141-bus distribution systems using CDHS and FDHS. The results show the effectiveness of FDHS over previous related studies.

  2. Search-matching algorithm for acoustics-based automatic sniper localization

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan R.; Salinas, Renato A.; Abidi, Mongi A.

    2007-04-01

    Most modern automatic sniper localization systems are based on the acoustical emissions produced by gunfire events. In order to estimate the spatial coordinates of the sniper location, these systems measure the time delay of arrival of the acoustic shock wave fronts at a microphone array. More advanced systems use model-based estimation of the nonlinear distortion parameters of the N-waves to estimate the projectile trajectory and calibre. In this work we address the sniper localization problem using a model-based search-matching approach. The automatic sniper localization algorithm works by searching for the acoustic model of ballistic shock waves that best matches the measured data. For this purpose, we implement a previously released acoustic model of ballistic shock waves. The sniper location, the projectile trajectory and calibre, and the muzzle velocity are regarded as the input variables of this model. A search algorithm is implemented in order to find the combination of input variables that minimizes a fitness function defined as the distance between measured and simulated data. In this way, the sniper location, the projectile trajectory and calibre, and the muzzle velocity can be found. In order to evaluate the performance of the algorithm, we conduct computer-based experiments using simulated gunfire event data calculated at the nodes of a virtual distributed sensor network. Preliminary simulation results are quite promising, showing fast convergence of the algorithm and good localization accuracy.

  3. Quality of Service Routing in Manet Using a Hybrid Intelligent Algorithm Inspired by Cuckoo Search.

    PubMed

    Rajalakshmi, S; Maguteeswaran, R

    2015-01-01

    A hybrid computational intelligence algorithm, constructed by integrating the salient features of two different heuristic techniques, is proposed to solve the multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs). QoSR is always a tricky problem: it requires determining an optimum route that satisfies a variety of necessary constraints in a MANET, and it is declared NP-hard due to the constantly varying topology of MANETs. Thus a solution technique that addresses the challenges of the QoSR problem needs to be developed. This paper proposes a hybrid algorithm that modifies the Cuckoo Search Algorithm (CSA) with a new position-updating mechanism. This updating mechanism is derived from the differential evolution (DE) algorithm, where the candidates learn from diversified search regions. Thus the CSA acts as the main search procedure guided by the updating mechanism derived from DE, called tuned CSA (TCSA). Numerical simulations on MANETs are performed to demonstrate the effectiveness of the proposed TCSA method by determining an optimum route that satisfies various Quality of Service (QoS) constraints. The results are compared with some existing techniques in the literature, thereby establishing the superiority of the proposed method. PMID:26495429

  4. Quality of Service Routing in Manet Using a Hybrid Intelligent Algorithm Inspired by Cuckoo Search.

    PubMed

    Rajalakshmi, S; Maguteeswaran, R

    2015-01-01

    A hybrid computational intelligence algorithm, constructed by integrating the salient features of two different heuristic techniques, is proposed to solve the multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs). QoSR is always a tricky problem: it requires determining an optimum route that satisfies a variety of necessary constraints in a MANET, and it is declared NP-hard due to the constantly varying topology of MANETs. Thus a solution technique that addresses the challenges of the QoSR problem needs to be developed. This paper proposes a hybrid algorithm that modifies the Cuckoo Search Algorithm (CSA) with a new position-updating mechanism. This updating mechanism is derived from the differential evolution (DE) algorithm, where the candidates learn from diversified search regions. Thus the CSA acts as the main search procedure guided by the updating mechanism derived from DE, called tuned CSA (TCSA). Numerical simulations on MANETs are performed to demonstrate the effectiveness of the proposed TCSA method by determining an optimum route that satisfies various Quality of Service (QoS) constraints. The results are compared with some existing techniques in the literature, thereby establishing the superiority of the proposed method.

  5. Hybrid Binary Imperialist Competition Algorithm and Tabu Search Approach for Feature Selection Using Gene Expression Data.

    PubMed

    Wang, Shuaiqun; Aorigele; Kong, Wei; Zeng, Weiming; Hong, Xiaomin

    2016-01-01

    Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features from the large number of gene expression measurements. Lately, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification due to the huge number of genes (high dimension) compared to the small number of samples, noisy genes, and irrelevant genes. In this paper, we propose a new hybrid algorithm, HICATS, incorporating the imperialist competition algorithm (ICA), which performs global search, and tabu search (TS), which conducts a fine-tuned search. In order to verify the performance of the proposed algorithm HICATS, we have tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved to be superior to other related works, including the conventional version of the binary optimization algorithm, in terms of classification accuracy and the number of selected genes. PMID:27579323

  6. Quality of Service Routing in Manet Using a Hybrid Intelligent Algorithm Inspired by Cuckoo Search

    PubMed Central

    Rajalakshmi, S.; Maguteeswaran, R.

    2015-01-01

    A hybrid computational intelligence algorithm, built by integrating the salient features of two different heuristic techniques, is proposed to solve the multiconstrained Quality of Service Routing (QoSR) problem in Mobile Ad Hoc Networks (MANETs). QoSR is a difficult problem: an optimum route must simultaneously satisfy a variety of constraints, and the constantly changing topology of a MANET makes the problem NP-hard. A solution technique that addresses these challenges is therefore needed. This paper proposes a hybrid algorithm that modifies the Cuckoo Search Algorithm (CSA) with a new position-updating mechanism derived from the differential evolution (DE) algorithm, in which candidates learn from diversified search regions. The CSA thus acts as the main search procedure, guided by the DE-derived updating mechanism; the result is called the tuned CSA (TCSA). Numerical simulations on MANETs demonstrate the effectiveness of the proposed TCSA by determining an optimum route that satisfies various Quality of Service (QoS) constraints. The results are compared with existing techniques from the literature, establishing the superiority of the proposed method. PMID:26495429

  7. Hybrid Binary Imperialist Competition Algorithm and Tabu Search Approach for Feature Selection Using Gene Expression Data

    PubMed Central

    Aorigele; Zeng, Weiming; Hong, Xiaomin

    2016-01-01

    Gene expression data composed of thousands of genes play an important role in classification platforms and disease diagnosis. Hence, it is vital to select a small subset of salient features from the large number of gene expression measurements. Recently, many researchers have devoted themselves to feature selection using diverse computational intelligence methods. However, in the process of selecting informative genes, many computational methods face difficulties in selecting small subsets for cancer classification because of the huge number of genes (high dimensionality) relative to the small number of samples, as well as noisy and irrelevant genes. In this paper, we propose a new hybrid algorithm, HICATS, incorporating the imperialist competition algorithm (ICA), which performs global search, and tabu search (TS), which conducts a fine-tuned local search. To verify the performance of the proposed algorithm, we tested it on 10 well-known benchmark gene expression classification datasets with dimensions varying from 2308 to 12600. The performance of our proposed method proved superior to related works, including the conventional version of the binary optimization algorithm, in terms of classification accuracy and the number of selected genes. PMID:27579323

  8. Free Energy-Based Conformational Search Algorithm Using the Movable Type Sampling Method.

    PubMed

    Pan, Li-Li; Zheng, Zheng; Wang, Ting; Merz, Kenneth M

    2015-12-01

    In this article, we extend the movable type (MT) sampling method to molecular conformational searches (MT-CS) on the free energy surface of the molecule in question. Differing from traditional systematic and stochastic searching algorithms, this method uses Boltzmann energy information to facilitate the selection of the best conformations. The generated ensembles provided good coverage of the available conformational space including available crystal structures. Furthermore, our approach directly provides the solvation free energies and the relative gas and aqueous phase free energies for all generated conformers. The method is validated by a thorough analysis of thrombin ligands as well as against structures extracted from both the Protein Data Bank (PDB) and the Cambridge Structural Database (CSD). An in-depth comparison between OMEGA and MT-CS is presented to illustrate the differences between the two conformational searching strategies, i.e., energy-based versus free energy-based searching. These studies demonstrate that our MT-based ligand conformational search algorithm is a powerful approach to delineate the conformational ensembles of molecular species on free energy surfaces.

  9. Optimal design of groundwater remediation systems using a multi-objective fast harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Qiankun; Wu, Jianfeng; Sun, Xiaomin; Yang, Yun; Wu, Jichun

    2012-12-01

    A new multi-objective optimization methodology is developed in which a multi-objective fast harmony search (MOFHS) is coupled with a groundwater flow and transport model to search for optimal designs of groundwater remediation systems under general hydrogeological conditions. The MOFHS incorporates the niche technique into the previously improved fast harmony search and is enhanced by adding a Pareto solution set filter and an elite individual preservation strategy to guarantee the uniformity and integrity of the Pareto front of multi-objective optimization problems. An operation library of individual fitness is also introduced to improve calculation speed. The MOFHS is then coupled with the commonly used flow and transport codes MODFLOW and MT3DMS to search for optimal designs of pump-and-treat systems, aiming to minimize both the remediation cost and the mass remaining in aquifers. Compared with three existing multi-objective optimization methods, namely the improved niched Pareto genetic algorithm (INPGA), the non-dominated sorting genetic algorithm II (NSGAII), and the multi-objective harmony search (MOHS), the proposed methodology demonstrates its applicability and efficiency through a two-dimensional hypothetical test problem and a three-dimensional field problem in Indiana (USA).

  10. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.

    PubMed

    Cheng, Jing; Xia, Linyuan

    2016-08-31

    Localization is an essential requirement in the increasingly prevalent applications of wireless sensor networks (WSNs). Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, the approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is used to build a mutation probability that avoids local convergence. Further, the approach restricts the population to a certain range so that energy consumption caused by unproductive searching is prevented. Extensive experiments were conducted to study the effects of parameters such as anchor density, node density, and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted on the same localization task and network deployment. Experimental results show that the proposed CS algorithm not only increases the convergence rate but also reduces the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm.

  11. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network

    PubMed Central

    Cheng, Jing; Xia, Linyuan

    2016-01-01

    Localization is an essential requirement in the increasingly prevalent applications of wireless sensor networks (WSNs). Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, the approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is used to build a mutation probability that avoids local convergence. Further, the approach restricts the population to a certain range so that energy consumption caused by unproductive searching is prevented. Extensive experiments were conducted to study the effects of parameters such as anchor density, node density, and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted on the same localization task and network deployment. Experimental results show that the proposed CS algorithm not only increases the convergence rate but also reduces the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm. PMID:27589756

  12. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network.

    PubMed

    Cheng, Jing; Xia, Linyuan

    2016-01-01

    Localization is an essential requirement in the increasingly prevalent applications of wireless sensor networks (WSNs). Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, the approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is used to build a mutation probability that avoids local convergence. Further, the approach restricts the population to a certain range so that energy consumption caused by unproductive searching is prevented. Extensive experiments were conducted to study the effects of parameters such as anchor density, node density, and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted on the same localization task and network deployment. Experimental results show that the proposed CS algorithm not only increases the convergence rate but also reduces the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm. PMID:27589756

  13. A graph isomorphism algorithm using signatures computed via quantum walk search model

    NASA Astrophysics Data System (ADS)

    Wang, Huiquan; Wu, Junjie; Yang, Xuejun; Yi, Xun

    2015-03-01

    In this paper, we propose a new algorithm based on a quantum walk search model to distinguish strongly similar graphs. Our algorithm computes a signature for each graph via the quantum walk search model and uses signatures to distinguish non-isomorphic graphs. Our method is less complex than those of previous works. In addition, our algorithm can be extended by raising the signature levels. The higher the level adopted, the stronger the distinguishing ability and the higher the complexity of the algorithm. Our algorithm was tested with standard benchmarks from four databases. We note that the weakest signature, at level 1, can distinguish all similar graphs with a time complexity of O(N^3.5), which outperforms the previous best work except for strongly regular graphs (SRGs). Once the signature is raised to level 3, all SRGs tested can be distinguished successfully. In this case, the time complexity is O(N^5.5), also better than the previous best work.

  14. Parallel simulations of Grover's algorithm for closest match search in neutron monitor data

    NASA Astrophysics Data System (ADS)

    Kussainov, Arman; White, Yelena

    We study parallel implementations of Grover's closest-match search algorithm for neutron monitor data analysis. This includes data formatting and matching quantum parameters to the conventional structures of a chosen programming language and the selected experimental data type. We have employed several workload distribution models based on the acquired data and search parameters. As a result of these simulations, we have an understanding of potential problems that may arise during the configuration of real quantum computational devices and the way they could run tasks in parallel. The work was supported by the Science Committee of the Ministry of Science and Education of the Republic of Kazakhstan, Grant #2532/GF3.

  15. A Comparison of Simple Algorithms for Gamma-ray Spectrometers in Radioactive Source Search Applications

    SciTech Connect

    Jarman, Kenneth D.; Runkle, Robert C.; Anderson, Kevin K.; Pfund, David M.

    2008-03-01

    Large variation in time-dependent ambient gamma-ray radiation challenges the search for radiation sources. A common strategy to reduce the effects of background variation is to raise detection thresholds, but at the price of reduced detection sensitivity. We present simple algorithms that both reduce background variation and maintain trip-wire detection sensitivity with gamma-ray spectrometry. The best-performing algorithms focus on the spectral shape over several energy bins using Spectral Comparison Ratios and dynamically predict background with the Kalman Filter.
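
    The spectral-shape idea can be illustrated with a toy calculation: collapse the spectrum into a few coarse energy windows, form ratios of window counts, and flag spectra whose ratios deviate from the background's. The window edges, the deviation statistic, and the threshold below are assumptions for illustration only and are not the algorithm evaluated in the report.

      import numpy as np

      def window_ratios(counts, edges):
          """Coarse window counts (edges are channel indices, e.g. [0, 64, 128, ...]),
          expressed as ratios to the first window."""
          w = np.array([counts[lo:hi].sum() for lo, hi in zip(edges[:-1], edges[1:])], float)
          return w[1:] / max(w[0], 1.0)

      def scr_alarm(sample, background, edges, threshold=0.25):
          """Flag a spectrum whose window ratios deviate strongly from the background ratios.
          A real system would propagate counting statistics instead of this crude deviation."""
          s, b = window_ratios(sample, edges), window_ratios(background, edges)
          dev = np.abs(s - b) / np.maximum(b, 1e-9)
          return float(dev.max()), bool(dev.max() > threshold)

      rng = np.random.default_rng(0)
      background = rng.poisson(50, size=1024)
      sample = background + np.where(np.arange(1024) // 64 == 3, rng.poisson(40, 1024), 0)
      print(scr_alarm(sample, background, edges=[0, 64, 128, 256, 512, 1024]))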

  16. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: sensitivity and specificity analysis.

    PubMed

    Kapp, Eugene A; Schütz, Frédéric; Connolly, Lisa M; Chakel, John A; Meza, Jose E; Miller, Christine A; Fenyo, David; Eng, Jimmy K; Adkins, Joshua N; Omenn, Gilbert S; Simpson, Richard J

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we evaluated five search algorithms with respect to their sensitivity and specificity, and benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm PeptideProphet enhanced the overall performance of the SEQUEST algorithm and provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
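
    The consensus-scoring idea recommended above can be sketched very simply: keep only peptide-spectrum matches on which at least two engines agree. The dictionary input format below is a stand-in for illustration; it is not the output format of MASCOT, X!Tandem, or SEQUEST.

      from collections import Counter, defaultdict

      def consensus_psms(engine_results, min_votes=2):
          """engine_results: list of dicts {spectrum_id: peptide}, one dict per search engine.
          Returns {spectrum_id: peptide} for spectra where >= min_votes engines agree."""
          votes = defaultdict(Counter)
          for result in engine_results:
              for spectrum_id, peptide in result.items():
                  votes[spectrum_id][peptide] += 1
          consensus = {}
          for spectrum_id, counter in votes.items():
              peptide, n = counter.most_common(1)[0]
              if n >= min_votes:
                  consensus[spectrum_id] = peptide
          return consensus

      # Example: two of three engines agree on spectrum "s1", so only it is kept.
      mascot  = {"s1": "PEPTIDE", "s2": "SEQVENCE"}
      xtandem = {"s1": "PEPTIDE", "s2": "SEQWENCE"}
      sequest = {"s1": "PEPTIDEK"}
      print(consensus_psms([mascot, xtandem, sequest]))  # {'s1': 'PEPTIDE'}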

  17. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we evaluated five search algorithms with respect to their sensitivity and specificity, and benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm Peptide Prophet enhanced the overall performance of the SEQUEST algorithm and provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  18. Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm

    NASA Astrophysics Data System (ADS)

    Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi

    2014-01-01

    This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of the group hunting of animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle, so the optimal structure and parameters are found simultaneously. The proposed method is demonstrated on modeling and control problems, and the results are compared with other optimization techniques. The comparisons indicate that the proposed method is a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.

  19. Spectrum parameter estimation in Brillouin scattering distributed temperature sensor based on cuckoo search algorithm combined with the improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yanjun; Yu, Chunjuan; Fu, Xinghu; Liu, Wenzhe; Bi, Weihong

    2015-12-01

    In distributed optical fiber sensing systems based on Brillouin scattering, strain and temperature are the main measured parameters, obtained by analyzing the Brillouin center frequency shift. A novel algorithm that combines the cuckoo search (CS) algorithm with an improved differential evolution (IDE) algorithm is proposed for Brillouin scattering parameter estimation. The CS-IDE algorithm is compared with the CS algorithm and analyzed in different situations. The results show that both the CS and CS-IDE algorithms have very good convergence. The analysis reveals that the CS-IDE algorithm can extract the scattering spectrum features under different linear weight ratios, linewidth combinations, and SNRs. Moreover, a BOTDR temperature measuring system based on electron optical frequency shift was set up to verify the effectiveness of the CS-IDE algorithm. Experimental results show that there is a good linear relationship between the Brillouin center frequency shift and temperature changes.

  20. Heuristic-based tabu search algorithm for folding two-dimensional AB off-lattice model proteins.

    PubMed

    Liu, Jingfa; Sun, Yuanyuan; Li, Gang; Song, Beibei; Huang, Weibo

    2013-12-01

    The protein structure prediction problem is a classical NP-hard problem in bioinformatics. The lack of an effective global optimization method is the key obstacle to solving this problem. As one of the global optimization algorithms, the tabu search (TS) algorithm has been successfully applied to many optimization problems. We define a new neighborhood conformation, tabu object, and acceptance criterion for the current conformation based on the original TS algorithm and put forward an improved TS algorithm. By integrating a heuristic initialization mechanism, a heuristic conformation-updating mechanism, and the gradient method into the improved TS algorithm, a heuristic-based tabu search (HTS) algorithm is presented for predicting two-dimensional (2D) protein folding structures in the AB off-lattice model, which consists of hydrophobic (A) and hydrophilic (B) monomers. The tabu search minimization leads to the basins of local minima, near which a local search mechanism is then applied to search for lower-energy conformations. To test the performance of the proposed algorithm, experiments are performed on four Fibonacci sequences and two real protein sequences. The experimental results show that the proposed algorithm has found the lowest-energy conformations reported so far for three shorter Fibonacci sequences and has improved on the previous best results for the longest one, as well as for the two real protein sequences, demonstrating that the HTS algorithm is quite promising for finding the ground states of AB off-lattice model proteins. PMID:24077543
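
    The energy function being minimized can be written down compactly. The sketch below uses the standard 2D AB off-lattice (Stillinger) form: a backbone bending term (1 - cos theta)/4 plus a Lennard-Jones-like term whose coefficient is +1 for AA, +0.5 for BB, and -0.5 for mixed pairs. These conventional coefficients are assumed here and should be checked against the paper; bond angles are treated as turning angles of a unit-bond-length chain.

      import numpy as np

      def ab_energy_2d(angles, sequence):
          """Energy of a 2D AB off-lattice chain with unit bond lengths.
          angles: turning angles theta_2..theta_{n-1} (radians); sequence: string of 'A'/'B'."""
          n = len(sequence)
          assert len(angles) == n - 2
          # Build coordinates: first bond along +x, then turn by each bond angle.
          coords = np.zeros((n, 2))
          coords[1] = [1.0, 0.0]
          heading = 0.0
          for i, theta in enumerate(angles, start=2):
              heading += theta
              coords[i] = coords[i - 1] + [np.cos(heading), np.sin(heading)]
          # Backbone bending term: sum of (1 - cos theta_i) / 4.
          e_bend = np.sum((1.0 - np.cos(np.asarray(angles))) / 4.0)
          # Lennard-Jones-like term between non-adjacent monomers.
          def c(a, b):
              if a == b:
                  return 1.0 if a == 'A' else 0.5
              return -0.5
          e_lj = 0.0
          for i in range(n - 2):
              for j in range(i + 2, n):
                  r = np.linalg.norm(coords[i] - coords[j])
                  e_lj += 4.0 * (r ** -12 - c(sequence[i], sequence[j]) * r ** -6)
          return float(e_bend + e_lj)

      # Example: energy of the short sequence 'ABBAB' with small random bends.
      rng = np.random.default_rng(0)
      print(ab_energy_2d(rng.uniform(-0.5, 0.5, size=3), "ABBAB"))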

  1. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important step in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is used to maximize the objective fitness criterion in order to enhance the contrast and detail of an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
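
    The incomplete-Beta intensity transform that the CS-PSO search tunes can be illustrated briefly. In the sketch below, scipy.special.betainc supplies the regularized incomplete Beta function; the quality criterion shown (histogram entropy only) is a simplification of the paper's three-factor measure, and the grid scan over (a, b) is a placeholder for the CS-PSO search.

      import numpy as np
      from scipy.special import betainc

      def beta_transform(image, a, b):
          """Global contrast transform: normalize to [0, 1], map through the regularized
          incomplete Beta function I_x(a, b), and rescale back to the 8-bit range."""
          lo, hi = image.min(), image.max()
          u = (image.astype(float) - lo) / max(hi - lo, 1e-9)
          return (255.0 * betainc(a, b, u)).astype(np.uint8)

      def entropy(image):
          # Simplified fitness: Shannon entropy of the gray-level histogram.
          hist, _ = np.histogram(image, bins=256, range=(0, 255), density=True)
          p = hist[hist > 0]
          return float(-(p * np.log2(p)).sum())

      image = np.random.randint(60, 120, size=(64, 64), dtype=np.uint8)  # low-contrast toy image
      best = max(((entropy(beta_transform(image, a, b)), a, b)
                  for a in (0.5, 1, 2, 4) for b in (0.5, 1, 2, 4)))
      print("best entropy %.3f at a=%s, b=%s" % best)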

  2. A prefiltered cuckoo search algorithm with geometric operators for solving Sudoku problems.

    PubMed

    Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Monfroy, Eric; Paredes, Fernando

    2014-01-01

    Sudoku is a famous logic-placement game, originally popularized in Japan and today widely employed as a pastime and as a testbed for search algorithms. The classic Sudoku consists in filling a 9 × 9 grid, divided into nine 3 × 3 regions, so that each column, row, and region contains the digits 1 to 9 exactly once. The game is known to be NP-complete, and various complete and incomplete search algorithms exist that can solve different instances of it. In this paper, we present a new cuckoo search algorithm for solving Sudoku puzzles that combines prefiltering phases and geometric operations. The geometric operators allow the search to move toward promising regions of the combinatorial space, while the prefiltering phases delete in advance from the domains those values that cannot lead to any feasible solution. This integration leads to more efficient domain filtering and, as a consequence, to a faster solving process. We report encouraging experimental results in which our approach competes noticeably with the best approximate methods reported in the literature. PMID:24707205

  3. An adaptive image enhancement technique by combining cuckoo search and particle swarm optimization algorithm.

    PubMed

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important step in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is used to maximize the objective fitness criterion in order to enhance the contrast and detail of an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper.

  4. A prefiltered cuckoo search algorithm with geometric operators for solving Sudoku problems.

    PubMed

    Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Monfroy, Eric; Paredes, Fernando

    2014-01-01

    Sudoku is a famous logic-placement game, originally popularized in Japan and today widely employed as a pastime and as a testbed for search algorithms. The classic Sudoku consists in filling a 9 × 9 grid, divided into nine 3 × 3 regions, so that each column, row, and region contains the digits 1 to 9 exactly once. The game is known to be NP-complete, and various complete and incomplete search algorithms exist that can solve different instances of it. In this paper, we present a new cuckoo search algorithm for solving Sudoku puzzles that combines prefiltering phases and geometric operations. The geometric operators allow the search to move toward promising regions of the combinatorial space, while the prefiltering phases delete in advance from the domains those values that cannot lead to any feasible solution. This integration leads to more efficient domain filtering and, as a consequence, to a faster solving process. We report encouraging experimental results in which our approach competes noticeably with the best approximate methods reported in the literature.

  5. An Adaptive Image Enhancement Technique by Combining Cuckoo Search and Particle Swarm Optimization Algorithm

    PubMed Central

    Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei

    2015-01-01

    Image enhancement is an important step in image processing and analysis. This paper presents a new technique that uses a modified quality measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; the method employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is used to maximize the objective fitness criterion in order to enhance the contrast and detail of an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods such as the backtracking search algorithm, the differential search algorithm, the genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928

  6. A simple algorithm to compute the peak power output of GaAs/Ge solar cells on the Martian surface

    SciTech Connect

    Glueck, P.R.; Bahrami, K.A.

    1995-12-31

    The Jet Propulsion Laboratory's (JPL's) Mars Pathfinder Project will deploy a robotic "microrover" on the surface of Mars in the summer of 1997. This vehicle will derive primary power from a GaAs/Ge solar array during the day and will "sleep" at night. This strategy requires that the rover be able to (1) determine when it is necessary to save the contents of volatile memory late in the afternoon and (2) determine when sufficient power is available to resume operations in the morning. An algorithm was developed that estimates the peak power point of the solar array from the solar array short-circuit current and temperature telemetry, and provides functional redundancy for both measurements using the open-circuit voltage telemetry. The algorithm minimizes vehicle processing and memory utilization by using linear equations instead of look-up tables to estimate peak power, with very little loss in accuracy. This paper describes the method used to obtain the algorithm and presents the detailed algorithm design.

  7. An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Jia, Ke; He, Yichao

    2014-01-01

    Cuckoo search (CS) is a new, robust swarm intelligence method based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid-encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed into a synchronous evolutionary search over a discrete space, based on the idea of individual hybrid encoding. Subsequently, the concept of a confidence interval (CI) is introduced; a new position update is designed, and genetic mutation with a small probability is added. The former enables the population to move towards the global best solution rapidly in every generation, while the latter effectively prevents the ICS from becoming trapped in a local optimum. Furthermore, a greedy transform method is used to repair infeasible solutions and to optimize feasible ones. Experiments on a large number of knapsack-problem instances show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions. PMID:24527026
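
    The greedy repair-and-improve step for 0-1 knapsack solutions can be sketched as follows. This is a minimal sketch using the standard value/weight-density ordering; the exact repair operator in ICS may differ, and the item data below are illustrative.

      import numpy as np

      def greedy_repair(x, values, weights, capacity):
          """Repair a 0-1 knapsack solution: drop low-density items until feasible,
          then greedily add high-density items that still fit."""
          x = np.asarray(x, dtype=bool).copy()
          density_order = np.argsort(values / weights)       # ascending value/weight
          # Phase 1: remove least profitable items while overweight.
          for j in density_order:
              if weights[x].sum() <= capacity:
                  break
              if x[j]:
                  x[j] = False
          # Phase 2: add most profitable items that still fit.
          for j in density_order[::-1]:
              if not x[j] and weights[x].sum() + weights[j] <= capacity:
                  x[j] = True
          return x

      values = np.array([10.0, 7.0, 4.0, 3.0])
      weights = np.array([6.0, 5.0, 3.0, 1.0])
      print(greedy_repair([1, 1, 1, 1], values, weights, capacity=10))  # -> [ True False  True  True]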

  8. An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems.

    PubMed

    Feng, Yanhong; Jia, Ke; He, Yichao

    2014-01-01

    Cuckoo search (CS) is a new, robust swarm intelligence method based on the brood parasitism of some cuckoo species. In this paper, an improved hybrid-encoding cuckoo search algorithm (ICS) with a greedy strategy is put forward for solving 0-1 knapsack problems. First, to solve binary optimization problems with ICS, the cuckoo search over a continuous space is transformed into a synchronous evolutionary search over a discrete space, based on the idea of individual hybrid encoding. Subsequently, the concept of a confidence interval (CI) is introduced; a new position update is designed, and genetic mutation with a small probability is added. The former enables the population to move towards the global best solution rapidly in every generation, while the latter effectively prevents the ICS from becoming trapped in a local optimum. Furthermore, a greedy transform method is used to repair infeasible solutions and to optimize feasible ones. Experiments on a large number of knapsack-problem instances show the effectiveness of the proposed algorithm and its ability to achieve good-quality solutions.

  9. A cuckoo search algorithm by Lévy flights for solving reliability redundancy allocation problems

    NASA Astrophysics Data System (ADS)

    Valian, Ehsan; Valian, Elham

    2013-11-01

    A new metaheuristic optimization algorithm, called cuckoo search (CS), was recently developed by Yang and Deb (2009, 2010). This article uses CS with Lévy flights to solve the reliability redundancy allocation problem. The redundancy allocation problem involves setting reliability objectives for components or subsystems in order to meet a resource consumption constraint, e.g. the total cost. The main difficulty in the redundancy allocation problem is maintaining feasibility with respect to three nonlinear constraints, namely cost-, weight-, and volume-related constraints. Redundancy allocation problems have been studied in the literature for decades, usually using mathematical programming or metaheuristic optimization algorithms. The performance of the algorithm is tested on five well-known reliability redundancy allocation problems and compared with several well-known methods. Simulation results demonstrate that the optimal solutions obtained by CS are better than the best solutions obtained by the other methods.
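
    The Lévy-flight step that drives cuckoo search is commonly generated with Mantegna's algorithm, sketched below. The exponent beta = 1.5 and the step scale alpha are common defaults, not values taken from this article.

      import numpy as np
      from math import gamma, sin, pi

      def levy_step(dim, beta=1.5, rng=None):
          """Draw a Levy-distributed step via Mantegna's algorithm: step = u / |v|^(1/beta)."""
          rng = rng or np.random.default_rng()
          sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                     (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
          u = rng.normal(0.0, sigma_u, size=dim)
          v = rng.normal(0.0, 1.0, size=dim)
          return u / np.abs(v) ** (1 / beta)

      def cuckoo_move(x, best, alpha=0.01, rng=None):
          # New candidate: drift relative to the current best nest with a heavy-tailed step.
          rng = rng or np.random.default_rng()
          return x + alpha * levy_step(len(x), rng=rng) * (x - best)

      rng = np.random.default_rng(0)
      print(cuckoo_move(np.zeros(3), np.ones(3), rng=rng))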

  10. A fast approximate nearest neighbor search algorithm in the Hamming space.

    PubMed

    Esmaeili, Mani Malek; Ward, Rabab Kreidieh; Fatourechi, Mehrdad

    2012-12-01

    A fast approximate nearest neighbor search algorithm for the (binary) Hamming space is proposed. The proposed Error Weighted Hashing (EWH) algorithm is up to 20 times faster than the popular locality sensitive hashing (LSH) algorithm and works well even for large nearest neighbor distances where LSH fails. EWH significantly reduces the number of candidate nearest neighbors by weighing them based on the difference between their hash vectors. EWH can be used for multimedia retrieval and copy detection systems that are based on binary fingerprinting. On a fingerprint database with more than 1,000 videos, for a specific detection accuracy, we demonstrate that EWH is more than 10 times faster than LSH. For the same retrieval time, we show that EWH has a significantly better detection accuracy with a 15 times lower error rate.

  11. MIDAS: a database-searching algorithm for metabolite identification in metabolomics.

    PubMed

    Wang, Yingfeng; Kora, Guruprasad; Bowen, Benjamin P; Pan, Chongle

    2014-10-01

    A database searching approach can be used for metabolite identification in metabolomics by matching measured tandem mass spectra (MS/MS) against the predicted fragments of metabolites in a database. Here, we present the open-source MIDAS algorithm (Metabolite Identification via Database Searching). To evaluate a metabolite-spectrum match (MSM), MIDAS first enumerates possible fragments from a metabolite by systematic bond dissociation, then calculates the plausibility of the fragments based on their fragmentation pathways, and finally scores the MSM to assess how well the experimental MS/MS spectrum from collision-induced dissociation (CID) is explained by the metabolite's predicted CID MS/MS spectrum. MIDAS was designed to search high-resolution tandem mass spectra acquired on time-of-flight or Orbitrap mass spectrometer against a metabolite database in an automated and high-throughput manner. The accuracy of metabolite identification by MIDAS was benchmarked using four sets of standard tandem mass spectra from MassBank. On average, for 77% of original spectra and 84% of composite spectra, MIDAS correctly ranked the true compounds as the first MSMs out of all MetaCyc metabolites as decoys. MIDAS correctly identified 46% more original spectra and 59% more composite spectra at the first MSMs than an existing database-searching algorithm, MetFrag. MIDAS was showcased by searching a published real-world measurement of a metabolome from Synechococcus sp. PCC 7002 against the MetaCyc metabolite database. MIDAS identified many metabolites missed in the previous study. MIDAS identifications should be considered only as candidate metabolites, which need to be confirmed using standard compounds. To facilitate manual validation, MIDAS provides annotated spectra for MSMs and labels observed mass spectral peaks with predicted fragments. The database searching and manual validation can be performed online at http://midas.omicsbio.org.

  12. Truss Optimization for a Manned Nuclear Electric Space Vehicle using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Benford, Andrew; Tinker, Michael L.

    2004-01-01

    The purpose of this paper is to utilize the genetic algorithm (GA) optimization method for structural design of a nuclear propulsion vehicle. Genetic algorithms provide a guided, random search technique that mirrors biological adaptation. To verify the GA capabilities, other traditional optimization methods were used to generate results for comparison to the GA results, first for simple two-dimensional structures, and then for full-scale three-dimensional truss designs.

  13. Optimal Refueling Pattern Search for a CANDU Reactor Using a Genetic Algorithm

    SciTech Connect

    Quang Binh, DO; Gyuhong, ROH; Hangbok, CHOI

    2006-07-01

    This paper presents the results from the application of genetic algorithms to a refueling optimization of a Canada deuterium uranium (CANDU) reactor. This work aims at making a mathematical model of the refueling optimization problem including the objective function and constraints and developing a method based on genetic algorithms to solve the problem. The model of the optimization problem and the proposed method comply with the key features of the refueling strategy of the CANDU reactor which adopts an on-power refueling operation. In this study, a genetic algorithm combined with an elitism strategy was used to automatically search for the refueling patterns. The objective of the optimization was to maximize the discharge burn-up of the refueling bundles, minimize the maximum channel power, or minimize the maximum change in the zone controller unit (ZCU) water levels. A combination of these objectives was also investigated. The constraints include the discharge burn-up, maximum channel power, maximum bundle power, channel power peaking factor and the ZCU water level. A refueling pattern that represents the refueling rate and channels was coded by a one-dimensional binary chromosome, which is a string of binary numbers 0 and 1. A computer program was developed in FORTRAN 90 running on an HP 9000 workstation to conduct the search for the optimal refueling patterns for a CANDU reactor at the equilibrium state. The results showed that it was possible to apply genetic algorithms to automatically search for the refueling channels of the CANDU reactor. The optimal refueling patterns were compared with the solutions obtained from the AUTOREFUEL program and the results were consistent with each other. (authors)

  14. Transforming geocentric cartesian coordinates to geodetic coordinates by using differential search algorithm

    NASA Astrophysics Data System (ADS)

    Civicioglu, Pinar

    2012-09-01

    In order to solve numerous practical navigational, geodetic and astro-geodetic problems, it is necessary to transform geocentric cartesian coordinates into geodetic coordinates or vice versa. It is very easy to solve the problem of transforming geodetic coordinates into geocentric cartesian coordinates. On the other hand, it is rather difficult to solve the problem of transforming geocentric cartesian coordinates into geodetic coordinates as it is very hard to define a mathematical relationship between the geodetic latitude (φ) and the geocentric cartesian coordinates (X, Y, Z). In this paper, a new algorithm, the Differential Search Algorithm (DS), is presented to solve the problem of transforming the geocentric cartesian coordinates into geodetic coordinates and its performance is compared with the performances of the classical methods (i.e., Borkowski, 1989; Bowring, 1976; Fukushima, 2006; Heikkinen, 1982; Jones, 2002; Zhang, 2005; Borkowski, 1987; Shu, 2010 and Lin, 1995) and Computational-Intelligence algorithms (i.e., ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES). The statistical tests realized for the comparison of performances indicate that the problem-solving success of DS algorithm in transforming the geocentric cartesian coordinates into geodetic coordinates is higher than those of all classical methods and Computational-Intelligence algorithms used in this paper.
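
    For reference, the "easy" forward direction mentioned above, geodetic (latitude, longitude, height) to geocentric cartesian (X, Y, Z), has a simple closed form on a reference ellipsoid; the inverse, which the DS algorithm solves, does not. The sketch below uses WGS-84 parameters as an illustrative choice of ellipsoid.

      from math import radians, sin, cos, sqrt

      def geodetic_to_cartesian(lat_deg, lon_deg, h, a=6378137.0, f=1/298.257223563):
          """WGS-84 geodetic coordinates (degrees, metres) to geocentric cartesian (metres)."""
          phi, lam = radians(lat_deg), radians(lon_deg)
          e2 = f * (2 - f)                         # first eccentricity squared
          N = a / sqrt(1 - e2 * sin(phi) ** 2)     # prime vertical radius of curvature
          X = (N + h) * cos(phi) * cos(lam)
          Y = (N + h) * cos(phi) * sin(lam)
          Z = (N * (1 - e2) + h) * sin(phi)
          return X, Y, Z

      print(geodetic_to_cartesian(45.0, 9.0, 200.0))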

  15. Maximum-Likelihood Estimation With a Contracting-Grid Search Algorithm

    PubMed Central

    Hesterman, Jacob Y.; Caucci, Luca; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2010-01-01

    A fast search algorithm capable of operating in multi-dimensional spaces is introduced. As a sample application, we demonstrate its utility in the 2D and 3D maximum-likelihood position-estimation problem that arises in the processing of PMT signals to derive interaction locations in compact gamma cameras. We demonstrate that the algorithm can be parallelized in pipelines, and thereby efficiently implemented in specialized hardware, such as field-programmable gate arrays (FPGAs). A 2D implementation of the algorithm is achieved in Cell/BE processors, resulting in processing speeds above one million events per second, which is a 20× increase in speed over a conventional desktop machine. Graphics processing units (GPUs) are used for a 3D application of the algorithm, resulting in processing speeds of nearly 250,000 events per second which is a 250× increase in speed over a conventional desktop machine. These implementations indicate the viability of the algorithm for use in real-time imaging applications. PMID:20824155
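
    The contracting-grid idea can be illustrated in two dimensions: evaluate the objective on a small grid, re-center the grid on the best node, halve the grid span, and repeat. The Gaussian-like "likelihood" below is a stand-in for the PMT-signal likelihood used in the paper, and the grid size and contraction factor are illustrative defaults.

      import numpy as np

      def contracting_grid_search(score, center, span, n=5, iterations=8):
          """Maximize score(x, y) by repeatedly evaluating an n x n grid around the current
          best node and shrinking the grid span by half each iteration."""
          cx, cy = center
          sx, sy = span
          for _ in range(iterations):
              xs = np.linspace(cx - sx / 2, cx + sx / 2, n)
              ys = np.linspace(cy - sy / 2, cy + sy / 2, n)
              vals = np.array([[score(x, y) for y in ys] for x in xs])
              i, j = np.unravel_index(np.argmax(vals), vals.shape)
              cx, cy = xs[i], ys[j]
              sx, sy = sx / 2, sy / 2   # contract the grid around the new best estimate
          return cx, cy

      # Stand-in likelihood: a smooth bump centered at the true interaction position.
      true_pos = (1.7, -0.4)
      score = lambda x, y: -((x - true_pos[0]) ** 2 + (y - true_pos[1]) ** 2)
      print(contracting_grid_search(score, center=(0.0, 0.0), span=(10.0, 10.0)))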

  16. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  17. Hybrid water flow-like algorithm with Tabu search for traveling salesman problem

    NASA Astrophysics Data System (ADS)

    Bostamam, Jasmin M.; Othman, Zulaiha

    2016-08-01

    This paper presents a hybrid Water Flow-like Algorithm with Tabu Search for solving the travelling salesman problem (WFA-TS-TSP). WFA has proven its outstanding performance in solving the TSP, while TS is a conventional algorithm that has been used for decades to solve various combinatorial optimization problems, including the TSP. Hybridizing WFA with TS provides a better balance between exploration and exploitation, which are the key elements determining the performance of a metaheuristic. TS uses two different local searches, 2-opt and 3-opt, separately. The proposed WFA-TS-TSP is tested on 23 well-known benchmark symmetric TSP instances. The results show that the proposed WFA-TS-TSP produces significantly better-quality solutions than WFA, and that the variant with 3-opt obtains the best-quality solutions. These results suggest that WFA can be further improved through hybridization or through better local search techniques.
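
    The 2-opt local search used inside the hybrid can be sketched as follows; the tour and distance-matrix representation is generic rather than the paper's implementation, and 3-opt follows the same pattern with three cut points.

      import numpy as np

      def tour_length(tour, dist):
          return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

      def two_opt(tour, dist):
          """Repeatedly reverse the tour segment between i and j whenever doing so shortens
          the tour; stop when no improving reversal exists (a 2-opt local optimum)."""
          tour = list(tour)
          improved = True
          while improved:
              improved = False
              for i in range(1, len(tour) - 1):
                  for j in range(i + 1, len(tour)):
                      candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
                      if tour_length(candidate, dist) < tour_length(tour, dist):
                          tour, improved = candidate, True
          return tour

      rng = np.random.default_rng(1)
      pts = rng.random((8, 2))
      dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
      print(two_opt(list(range(8)), dist))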

  18. An Efficient Exact Algorithm for the Motif Stem Search Problem over Large Alphabets.

    PubMed

    Yu, Qiang; Huo, Hongwei; Vitter, Jeffrey Scott; Huan, Jun; Nekrich, Yakov

    2015-01-01

    In recent years, there has been an increasing interest in planted (l, d) motif search (PMS) with applications to discovering significant segments in biological sequences. However, there has been little discussion about PMS over large alphabets. This paper focuses on motif stem search (MSS), which was recently introduced to search for motifs on large-alphabet inputs. A motif stem is an l-length string with some wildcards. The goal of the MSS problem is to find a set of stems that represents a superset of all (l, d) motifs present in the input sequences, and the superset is expected to be as small as possible. The three main contributions of this paper are as follows: (1) We build the motif stem representation more precisely by using regular expressions. (2) We give a method for generating all possible motif stems without redundant wildcards. (3) We propose an efficient exact algorithm, called StemFinder, for solving the MSS problem. Compared with the previous MSS algorithms, StemFinder runs much faster and reports fewer stems, which represent a smaller superset of all (l, d) motifs. StemFinder is freely available at http://sites.google.com/site/feqond/stemfinder. PMID:26357225

  19. Optimizations of the energy grid search algorithm in continuous-energy Monte Carlo particle transport codes

    NASA Astrophysics Data System (ADS)

    Walsh, Jonathan A.; Romano, Paul K.; Forget, Benoit; Smith, Kord S.

    2015-11-01

    In this work we propose, implement, and test various optimizations of the typical energy grid-cross section pair lookup algorithm in Monte Carlo particle transport codes. The key feature common to all of the optimizations is a reduction in the length of the vector of energies that must be searched when locating the index of a particle's current energy. Other factors held constant, a reduction in energy vector length yields a reduction in CPU time. The computational methods we present here are physics-informed. That is, they are designed to utilize the physical information embedded in a simulation in order to reduce the length of the vector to be searched. More specifically, the optimizations take advantage of information about scattering kinematics, neutron cross section structure and data representation, and also the expected characteristics of a system's spatial flux distribution and energy spectrum. The methods that we present are implemented in the OpenMC Monte Carlo neutron transport code as part of this work. The gains in computational efficiency, as measured by overall code speedup, associated with each of the optimizations are demonstrated in both serial and multithreaded simulations of realistic systems. Depending on the system, simulation parameters, and optimization method employed, overall code speedup factors of 1.2-1.5, relative to the typical single-nuclide binary search algorithm, are routinely observed.
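
    The common thread in these optimizations can be shown with a toy lookup: restrict the binary search to a window of the union energy grid that physics guarantees must contain the particle's energy, so far fewer elements are searched. Here np.searchsorted plays the role of the binary search; the grid, the test energy, and the window bounds are illustrative assumptions rather than values from the paper.

      import numpy as np

      def windowed_energy_lookup(energy_grid, energy, lo_idx, hi_idx):
          """Locate the grid interval containing `energy`, searching only grid[lo_idx:hi_idx].
          When scattering kinematics bound the post-collision energy, this window can be far
          shorter than the full union grid, which is where the speedup comes from."""
          window = energy_grid[lo_idx:hi_idx]
          k = int(np.searchsorted(window, energy, side='right')) - 1
          return lo_idx + k   # index of the lower grid point bracketing `energy`

      grid = np.logspace(-5, 1, 1_000_000)      # toy union energy grid (MeV)
      e = 2.3e-3
      full = int(np.searchsorted(grid, e, side='right')) - 1
      # Suppose kinematics restricts the search to +-500 grid points of the previous index.
      prev = full + 137
      windowed = windowed_energy_lookup(grid, e, max(prev - 500, 0), min(prev + 500, len(grid)))
      assert windowed == full
      print(full, windowed)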

  20. Model-based Layer Estimation using a Hybrid Genetic/Gradient Search Optimization Algorithm

    SciTech Connect

    Chambers, D; Lehman, S; Dowla, F

    2007-05-17

    A particle swarm optimization (PSO) algorithm is combined with a gradient search method in a model-based approach for extracting interface positions in a one-dimensional multilayer structure from acoustic or radar reflections. The basic approach is to predict the reflection measurement using a simulation of one-dimensional wave propagation in a multi-layer, evaluate the error between prediction and measurement, and then update the simulation parameters to minimize the error. Gradient search methods alone fail due to the number of local minima in the error surface close to the desired global minimum. The PSO approach avoids this problem by randomly sampling the region of the error surface around the global minimum, but at the cost of a large number of evaluations of the simulator. The hybrid approach uses the PSO at the beginning to locate the general area around the global minimum then switches to the gradient search method to zero in on it. Examples of the algorithm applied to the detection of interior walls of a building from reflected ultra-wideband radar signals are shown. Other possible applications are optical inspection of coatings and ultrasonic measurement of multilayer structures.
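
    The two-stage strategy can be sketched compactly: a small particle swarm roughly locates the basin of the global minimum, then a local optimizer polishes the result. The objective below is a toy multimodal function, not the layered-medium reflection simulator, and scipy.optimize.minimize with BFGS (numerical gradients) stands in for the gradient search stage.

      import numpy as np
      from scipy.optimize import minimize

      def objective(x):
          # Toy multimodal error surface with many local minima around a global one near [1, 2].
          return float(np.sum((x - [1.0, 2.0]) ** 2) + 0.3 * np.sum(np.sin(8 * x) ** 2))

      def pso_stage(f, bounds, n_particles=30, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds).T
          x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
          v = np.zeros_like(x)
          pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              fx = np.array([f(p) for p in x])
              better = fx < pbest_f
              pbest[better], pbest_f[better] = x[better], fx[better]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest

      coarse = pso_stage(objective, bounds=[(-5, 5), (-5, 5)])   # global stage
      refined = minimize(objective, coarse, method='BFGS')       # local gradient-based polish
      print(coarse, refined.x)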

  1. Full-Featured Search Algorithm for Negative Electron-Transfer Dissociation.

    PubMed

    Riley, Nicholas M; Bern, Marshall; Westphall, Michael S; Coon, Joshua J

    2016-08-01

    Negative electron-transfer dissociation (NETD) has emerged as a premier tool for peptide anion analysis, offering access to acidic post-translational modifications and regions of the proteome that are intractable with traditional positive-mode approaches. Whole-proteome scale characterization is now possible with NETD, but proper informatic tools are needed to capitalize on advances in instrumentation. Currently only one database search algorithm (OMSSA) can process NETD data. Here we implement NETD search capabilities into the Byonic platform to improve the sensitivity of negative-mode data analyses, and we benchmark these improvements using 90 min LC-MS/MS analyses of tryptic peptides from human embryonic stem cells. With this new algorithm for searching NETD data, we improved the number of successfully identified spectra by as much as 80% and identified 8665 unique peptides, 24 639 peptide spectral matches, and 1338 proteins in activated-ion NETD analyses, more than doubling identifications from previous negative-mode characterizations of the human proteome. Furthermore, we reanalyzed our recently published large-scale, multienzyme negative-mode yeast proteome data, improving peptide and peptide spectral match identifications and considerably increasing protein sequence coverage. In all, we show that new informatics tools, in combination with recent advances in data acquisition, can significantly improve proteome characterization in negative-mode approaches. PMID:27402189

  2. Property-based cascade genetic algorithms for tailored searches of metal-oxide nano-structures

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Saswata; Ghiringhelli, Luca M.; Marom, Noa

    2015-03-01

    There is considerable interest in the computational determination of the structures of atomic clusters that are detected in spectroscopy experiments. It has been suggested that in photo-emission experiments performed on anions, isomers of small (TiO2)n clusters with high electron affinity (EA) are selectively observed rather than those with the lowest energy. For the theoretical modelling of these situations, searching for the global energy minimum of the potential energy surface (PES) is inefficient: such an approach is unlikely to find meta-stable isomers that have a high EA or a low ionization potential (IP) but an energy significantly above the ground state. We present an extension of our recently developed ab initio cascade genetic algorithm, here tailored to conduct property-based (e.g., high-EA, low-IP) searches over the PES. The term cascade refers to a multi-stepped algorithm in which successive steps employ a higher level of theory, and each step takes information obtained at the immediately lower level. The new algorithms are benchmarked and validated for (TiO2)n clusters (n = 3-10, 15, 20).

  3. QSAR study of HCV NS5B polymerase inhibitors using the genetic algorithm-multiple linear regression (GA-MLR)

    PubMed Central

    Rafiei, Hamid; Khanzadeh, Marziyeh; Mozaffari, Shahla; Bostanifar, Mohammad Hassan; Avval, Zhila Mohajeri; Aalizadeh, Reza; Pourbasheer, Eslam

    2016-01-01

    Quantitative structure-activity relationship (QSAR) study has been employed for predicting the inhibitory activities of Hepatitis C virus (HCV) NS5B polymerase inhibitors. A data set consisting of 72 compounds was selected, and different types of molecular descriptors were calculated. The whole data set was split into a training set (80% of the dataset) and a test set (20% of the dataset) using principal component analysis. The stepwise (SW) and genetic algorithm (GA) techniques were used as variable selection tools. Multiple linear regression was then used to linearly correlate the selected descriptors with the inhibitory activities. Several validation techniques, including leave-one-out and leave-group-out cross-validation and the Y-randomization method, were used to evaluate the internal capability of the derived models. The external prediction ability of the derived models was further analyzed using modified r2 values, concordance correlation coefficient values, and the Golbraikh and Tropsha acceptable-model criteria. Based on the derived results (GA-MLR), new insights toward the molecular structural requirements for better inhibitory activity were obtained. PMID:27065774
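
    GA-driven descriptor selection followed by ordinary least squares can be sketched on synthetic data as below. Population size, rates, and the penalized R-squared fitness are illustrative defaults, not the settings of the QSAR study, and the random data merely mimic its dimensions (72 compounds).

      import numpy as np

      def ols_r2(X, y):
          Xb = np.column_stack([np.ones(len(X)), X])
          beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
          resid = y - Xb @ beta
          return 1.0 - resid.var() / y.var()

      def ga_select(X, y, n_keep=5, pop=40, gens=60, p_mut=0.02, seed=0):
          """Evolve binary descriptor masks; fitness is the R^2 of an MLR fit,
          penalized for using more than n_keep descriptors."""
          rng = np.random.default_rng(seed)
          n_feat = X.shape[1]
          masks = rng.random((pop, n_feat)) < (n_keep / n_feat)
          def fitness(m):
              if m.sum() == 0:
                  return -1.0
              return ols_r2(X[:, m], y) - 0.05 * max(0, m.sum() - n_keep)
          for _ in range(gens):
              scores = np.array([fitness(m) for m in masks])
              parents = masks[np.argsort(scores)[::-1][:pop // 2]]   # truncation selection
              children = []
              while len(children) < pop - len(parents):
                  a, b = parents[rng.integers(len(parents), size=2)]
                  cut = rng.integers(1, n_feat)                      # one-point crossover
                  child = np.concatenate([a[:cut], b[cut:]])
                  child ^= rng.random(n_feat) < p_mut                # bit-flip mutation
                  children.append(child)
              masks = np.vstack([parents, children])
          scores = np.array([fitness(m) for m in masks])
          return masks[np.argmax(scores)]

      rng = np.random.default_rng(1)
      X = rng.normal(size=(72, 30))                                  # 72 compounds, 30 descriptors
      y = X[:, 3] - 2 * X[:, 7] + 0.5 * X[:, 11] + 0.1 * rng.normal(size=72)
      print(np.flatnonzero(ga_select(X, y)))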

  4. QSAR study of HCV NS5B polymerase inhibitors using the genetic algorithm-multiple linear regression (GA-MLR).

    PubMed

    Rafiei, Hamid; Khanzadeh, Marziyeh; Mozaffari, Shahla; Bostanifar, Mohammad Hassan; Avval, Zhila Mohajeri; Aalizadeh, Reza; Pourbasheer, Eslam

    2016-01-01

    Quantitative structure-activity relationship (QSAR) study has been employed for predicting the inhibitory activities of Hepatitis C virus (HCV) NS5B polymerase inhibitors. A data set consisting of 72 compounds was selected, and different types of molecular descriptors were calculated. The whole data set was split into a training set (80% of the dataset) and a test set (20% of the dataset) using principal component analysis. The stepwise (SW) and genetic algorithm (GA) techniques were used as variable selection tools. Multiple linear regression was then used to linearly correlate the selected descriptors with the inhibitory activities. Several validation techniques, including leave-one-out and leave-group-out cross-validation and the Y-randomization method, were used to evaluate the internal capability of the derived models. The external prediction ability of the derived models was further analyzed using modified r2 values, concordance correlation coefficient values, and the Golbraikh and Tropsha acceptable-model criteria. Based on the derived results (GA-MLR), new insights toward the molecular structural requirements for better inhibitory activity were obtained. PMID:27065774

  5. Accurate Descriptions of Hot Flow Behaviors Across β Transus of Ti-6Al-4V Alloy by Intelligence Algorithm GA-SVR

    NASA Astrophysics Data System (ADS)

    Wang, Li-yong; Li, Le; Zhang, Zhi-hua

    2016-09-01

    Hot compression tests of Ti-6Al-4V alloy over a wide temperature range of 1023-1323 K and a strain rate range of 0.01-10 s-1 were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to characterize the highly nonlinear flow behaviors accurately and effectively, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA), yielding the GA-SVR. A prominent feature of the GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across different runs on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of a mathematical regression model, an ANN, and the GA-SVR for Ti-6Al-4V alloy were compared in detail. The comparison shows that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies rank, in ascending order: mathematical regression model < ANN < GA-SVR. Stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research areas where stress-strain data play important roles, such as estimating work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.

  6. Accurate Descriptions of Hot Flow Behaviors Across β Transus of Ti-6Al-4V Alloy by Intelligence Algorithm GA-SVR

    NASA Astrophysics Data System (ADS)

    Wang, Li-yong; Li, Le; Zhang, Zhi-hua

    2016-07-01

    Hot compression tests of Ti-6Al-4V alloy in a wide temperature range of 1023-1323 K and strain rate range of 0.01-10 s⁻¹ were conducted on a servo-hydraulic, computer-controlled Gleeble-3500 machine. In order to accurately and effectively characterize the highly nonlinear flow behaviors, support vector regression (SVR), a machine learning method, was combined with a genetic algorithm (GA), yielding the GA-SVR. The prominent feature of GA-SVR is that, with identical training parameters, it keeps training accuracy and prediction accuracy at a stable level across different runs on a given dataset. The learning abilities, generalization abilities, and modeling efficiencies of the mathematical regression model, ANN, and GA-SVR for Ti-6Al-4V alloy were compared in detail. The comparison shows that the learning ability of the GA-SVR is stronger than that of the mathematical regression model. The generalization abilities and modeling efficiencies of these models rank, in ascending order: the mathematical regression model < ANN < GA-SVR. Stress-strain data outside the experimental conditions were predicted by the well-trained GA-SVR, which improved the simulation accuracy of the load-stroke curve and can further benefit related research areas where stress-strain data play an important role, such as estimating work hardening and dynamic recovery, characterizing dynamic recrystallization evolution, and improving processing maps.

  7. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    PubMed

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach to contrast enhancement. The generalized masking formulation uses a static scale value, which limits the contrast gain. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding using genetic operators such as adaptive crossover and mutation. The conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN), and the proposed contrast enhancement approach is validated quantitatively using Brain Web and MIAS database images. Experimental results are analyzed using various performance metrics, and our OWBM shows improved results compared with other methods reported in the literature.
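
    For orientation, a bare-bones cuckoo search loop with Lévy flights and a nest-rebuilding fraction is sketched below. The objective function is only a placeholder for the wavelet-masking contrast criterion, and the periodic re-seeding of the worst nests merely hints at where the paper's Adaptive Rebuilding of Worst Nests and genetic operators would plug in; none of the constants are the authors' values.

    ```python
    import math
    import numpy as np

    rng = np.random.default_rng(2)

    def levy_step(dim, beta=1.5):
        """Mantegna's method for Lévy-flight step lengths."""
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
                 (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0, sigma, dim)
        v = rng.normal(0, 1, dim)
        return u / np.abs(v) ** (1 / beta)

    def objective(x):
        # Placeholder for the image-contrast objective (e.g. entropy of the enhanced image).
        return np.sum((x - 0.3) ** 2)

    def cuckoo_search(n_nests=15, dim=4, n_iter=200, pa=0.25):
        nests = rng.random((n_nests, dim))
        scores = np.array([objective(n) for n in nests])
        for _ in range(n_iter):
            best = nests[scores.argmin()]
            i = rng.integers(n_nests)
            # New solution via a Lévy flight around the current best.
            cand = np.clip(nests[i] + 0.01 * levy_step(dim) * (nests[i] - best), 0, 1)
            if objective(cand) < scores[i]:
                nests[i], scores[i] = cand, objective(cand)
            # Abandon a fraction pa of the worst nests and rebuild them randomly;
            # the ECSA described above would adapt this step instead (assumption).
            worst = scores.argsort()[-max(1, int(pa * n_nests)):]
            nests[worst] = rng.random((len(worst), dim))
            scores[worst] = [objective(n) for n in nests[worst]]
        return nests[scores.argmin()], scores.min()

    sol, val = cuckoo_search()
    print("best mask parameters:", np.round(sol, 3), "objective:", round(val, 4))
    ```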

  8. Optimum wavelet based masking for the contrast enhancement of medical images using enhanced cuckoo search algorithm.

    PubMed

    Daniel, Ebenezer; Anitha, J

    2016-04-01

    Unsharp masking techniques are a prominent approach to contrast enhancement. The generalized masking formulation uses a static scale value, which limits the contrast gain. In this paper, we propose an Optimum Wavelet Based Masking (OWBM) using an Enhanced Cuckoo Search Algorithm (ECSA) for the contrast improvement of medical images. The ECSA can automatically adjust the ratio of nest rebuilding using genetic operators such as adaptive crossover and mutation. The conventional nest rebuilding of cuckoo search optimization is modified using Adaptive Rebuilding of Worst Nests (ARWN), and the proposed contrast enhancement approach is validated quantitatively using Brain Web and MIAS database images. Experimental results are analyzed using various performance metrics, and our OWBM shows improved results compared with other methods reported in the literature. PMID:26945462

  9. Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT-SVD.

    PubMed

    Bhandari, A K; Soni, V; Kumar, A; Singh, G K

    2014-07-01

    This paper presents a new contrast enhancement approach based on the Cuckoo Search (CS) algorithm and DWT-SVD for quality improvement of low contrast satellite images. The input image is decomposed into four frequency subbands through the Discrete Wavelet Transform (DWT), and the CS algorithm is used to optimize each DWT subband; the singular value matrix of the low-low thresholded subband image is then obtained, and finally the enhanced image is reconstructed by applying the inverse DWT (IDWT). The singular value matrix captures the intensity information of the image, and any modification of the singular values changes the intensity of the given image. The experimental results show that the proposed method outperforms conventional and state-of-the-art techniques in terms of PSNR, MSE, mean, and standard deviation. PMID:24893835

  10. Power-law scaling for the adiabatic algorithm for search-engine ranking

    NASA Astrophysics Data System (ADS)

    Frees, Adam; Gamble, John King; Rudinger, Kenneth; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.

    2013-09-01

    An important method for search engine result ranking works by finding the principal eigenvector of the “Google matrix.” Recently, a quantum algorithm for generating this eigenvector as a quantum state was presented, with evidence of an exponential speedup of this process for some scale-free networks. Here we show that the run time depends on features of the graphs other than the degree distribution, and can be altered sufficiently to rule out a general exponential speedup. According to our simulations, for a sample of graphs with degree distributions that are scale-free, with parameters thought to closely resemble the Web, the proposed algorithm for eigenvector preparation does not appear to run exponentially faster than the classical case.
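
    The classical baseline against which the adiabatic algorithm is compared is ordinary power iteration on the Google matrix; a small sketch is given below. The damping factor, the toy graph, and the convergence tolerance are illustrative choices only.

    ```python
    import numpy as np

    def google_matrix(adj, alpha=0.85):
        """Column-stochastic Google matrix built from an adjacency matrix (illustrative)."""
        n = adj.shape[0]
        out_deg = adj.sum(axis=0)
        S = np.where(out_deg > 0, adj / np.where(out_deg == 0, 1, out_deg), 1.0 / n)
        return alpha * S + (1 - alpha) / n

    def pagerank(adj, tol=1e-10, max_iter=1000):
        """Classical power iteration for the principal eigenvector of the Google matrix."""
        G = google_matrix(adj)
        p = np.full(G.shape[0], 1.0 / G.shape[0])
        for _ in range(max_iter):
            p_new = G @ p
            if np.abs(p_new - p).sum() < tol:
                return p_new
            p = p_new
        return p

    # Tiny 4-page web graph; adj[i, j] = 1 means page j links to page i.
    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 1, 0, 0],
                    [0, 0, 1, 0]], dtype=float)
    print("PageRank vector:", np.round(pagerank(adj), 4))
    ```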

  11. Broad-area search for targets in SAR imagery with context-adaptive algorithms

    NASA Astrophysics Data System (ADS)

    Patterson, Tim J.; Fairchild, Scott R.

    1996-06-01

    This paper describes an ATR system based on gray scale morphology which has proven very effective in performing broad area search for targets of interest. Gray scale morphology is used to extract several distinctive sets of features which combine intensity and spatial information. Results of direct comparisons with other algorithms are presented. In a series of independently scored tests, the morphological approach showed superior results. An automated training system based on a combination of genetic algorithms and classification and regression trees is described. Further performance gains are expected by allowing context-sensitive selection of parameter sets for the morphological processing. Context is acquired from the image using texture measures to identify the local clutter environment. The system is designed to be able to build new classifiers on the fly to match specific image-to-image variations.

  12. Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT-SVD.

    PubMed

    Bhandari, A K; Soni, V; Kumar, A; Singh, G K

    2014-07-01

    This paper presents a new contrast enhancement approach based on the Cuckoo Search (CS) algorithm and DWT-SVD for quality improvement of low contrast satellite images. The input image is decomposed into four frequency subbands through the Discrete Wavelet Transform (DWT), and the CS algorithm is used to optimize each DWT subband; the singular value matrix of the low-low thresholded subband image is then obtained, and finally the enhanced image is reconstructed by applying the inverse DWT (IDWT). The singular value matrix captures the intensity information of the image, and any modification of the singular values changes the intensity of the given image. The experimental results show that the proposed method outperforms conventional and state-of-the-art techniques in terms of PSNR, MSE, mean, and standard deviation.

  13. Power law scaling for the adiabatic algorithm for search engine ranking

    NASA Astrophysics Data System (ADS)

    Frees, Adam; King Gamble, John; Rudinger, Kenneth; Bach, Eric; Friesen, Mark; Joynt, Robert; Coppersmith, S. N.

    2013-03-01

    An important method for search engine result ranking works by finding the principal eigenvector of the "Google matrix." Recently, a quantum algorithm for this problem and evidence of an exponential speedup for some scale-free networks were presented. Here, we show that the run-time depends on features of the graphs other than the degree distribution, and can be altered sufficiently to rule out a general exponential speedup. For a sample of graphs with degree distributions that more closely resemble the Web than in the previous work, the proposed algorithm does not appear to run exponentially faster than the classical one. This work was supported in part by ARO, DOD (W911NF-09-1-0439) and NSF (CCR-0635355, DMR 0906951). A.F. acknowledges support from the NSF REU program (PHY-PIF-1104660).

  14. Designing LED Array for Uniform Illumination Based on Local Search Algorithm

    NASA Astrophysics Data System (ADS)

    Lei, P.; Wang, Q.; Zou, H.

    2014-03-01

    We propose a numerical optimization method based on a local search algorithm to design an LED array with a highly uniform illumination distribution. First, an initial LED array is randomly generated and the corresponding value of the objective function is calculated. The value of the objective function is then iteratively improved by applying local changes to the LED array until it cannot be improved further. This method can automatically design an array of LEDs with different luminous intensity values and distributions. Computer simulations show that a near-optimal LED array with a highly uniform illumination distribution on the target plane is obtained by this method.
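
    A minimal version of the described local search loop is sketched below: random initial LED positions, small random moves of one LED at a time, and acceptance only when the illumination uniformity objective improves. The Lambertian illuminance model, LED count, and step size are assumptions for illustration, not the paper's configuration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Target plane sampled on a grid; LEDs sit at height h above it (illustrative numbers).
    target = np.stack(np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21)), -1).reshape(-1, 2)
    h = 0.5
    m = 30.0   # Lambertian exponent of each LED

    def uniformity(leds):
        """Objective: relative standard deviation of illuminance over the target plane."""
        d2 = ((target[:, None, :] - leds[None, :, :]) ** 2).sum(-1) + h ** 2
        cos_t = h / np.sqrt(d2)
        E = (cos_t ** (m + 3)).sum(axis=1)        # summed LED contributions per target point
        return E.std() / E.mean()

    def local_search(n_led=9, n_iter=5000, step=0.05):
        leds = rng.uniform(-1, 1, size=(n_led, 2))     # random initial array
        best = uniformity(leds)
        for _ in range(n_iter):
            cand = leds.copy()
            k = rng.integers(n_led)
            cand[k] = np.clip(cand[k] + rng.normal(0, step, 2), -1, 1)   # local move
            val = uniformity(cand)
            if val < best:                             # keep only improving moves
                leds, best = cand, val
        return leds, best

    leds, best = local_search()
    print("relative illuminance non-uniformity:", round(best, 4))
    ```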

  15. Understanding Air Transportation Market Dynamics Using a Search Algorithm for Calibrating Travel Demand and Price

    NASA Technical Reports Server (NTRS)

    Kumar, Vivek; Horio, Brant M.; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.

    2015-01-01

    This paper presents a search-algorithm-based framework to calibrate origin-destination (O-D) market-specific airline ticket demands and prices for the Air Transportation System (ATS). This framework is used for calibrating an agent-based model of the air ticket buy-sell process, the Airline Evolutionary Simulation (Airline EVOS), which has sufficient fidelity of detail to account for airline and consumer behaviors and the interdependencies they share between themselves and the NAS. More specifically, the algorithm simultaneously calibrates demand and airfares for each O-D market to within a specified threshold of a pre-specified target value. The proposed algorithm is illustrated with market data targets provided by the Transportation System Analysis Model (TSAM) and the Airline Origin and Destination Survey (DB1B). Although we specify these models and data sources for this calibration exercise, the methods described in this paper are applicable to calibrating any low-level model of the ATS to other demand forecast model-based data. We argue that using a calibration algorithm such as the one presented here to synchronize ATS models with specialized demand forecast models is a powerful tool for establishing credible baseline conditions in experiments analyzing the effects of proposed policy changes to the ATS.

  16. VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo

    2015-01-01

    Electrical (DC) and Transient Electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished with individual 1D models, often resulting in ambiguous models. This can be explained by the way the two methodologies sample the medium beneath the surface. Vertical Electrical Sounding (VES) is good at marking resistive structures, while Transient Electromagnetic sounding (TEM) is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed to exploit the best of both methods. The program uses the CRS (Controlled Random Search) algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. The algorithm was first tested with synthetic data, and then used to invert experimental data from two places in the Paraná sedimentary basin (Bebedouro and Pirassununga cities), both located in São Paulo State, Brazil. The geoelectric model obtained from the 1D joint inversion of VES and TEM data is similar to the real geological setting, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers the simulated models and shows great potential in geological studies, especially hydrogeological studies.
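
    A compact sketch of Price's Controlled Random Search, the optimizer named above, follows. The two-parameter "misfit" function merely stands in for the real VES/TEM forward-modeling misfit, and the population-sizing rule is a common textbook choice rather than the authors' setting.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def crs(objective, bounds, pop_mult=10, max_iter=20000, tol=1e-8):
        """Price's Controlled Random Search: reflect a random simplex and replace the worst point."""
        bounds = np.asarray(bounds, dtype=float)
        dim = len(bounds)
        n_pop = pop_mult * (dim + 1)
        pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_pop, dim))
        vals = np.array([objective(p) for p in pop])
        for _ in range(max_iter):
            if vals.max() - vals.min() < tol:
                break
            worst = vals.argmax()
            idx = rng.choice(n_pop, size=dim + 1, replace=False)    # random simplex
            centroid = pop[idx[:-1]].mean(axis=0)
            trial = 2.0 * centroid - pop[idx[-1]]                   # reflection step
            if np.all(trial >= bounds[:, 0]) and np.all(trial <= bounds[:, 1]):
                f = objective(trial)
                if f < vals[worst]:
                    pop[worst], vals[worst] = trial, f
        best = vals.argmin()
        return pop[best], vals[best]

    # Toy 2-layer "misfit" standing in for the joint VES/TEM objective:
    # recover a resistivity of 50 ohm-m and a thickness of 20 m (illustrative).
    def misfit(model):
        rho, thk = model
        return np.log(rho / 50.0) ** 2 + ((thk - 20.0) / 20.0) ** 2

    model, err = crs(misfit, bounds=[(1.0, 1000.0), (1.0, 100.0)])
    print("recovered model (resistivity, thickness):", np.round(model, 2), "misfit:", round(err, 6))
    ```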

  17. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    NASA Astrophysics Data System (ADS)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or

  18. A Breeder Algorithm for Stellarator Optimization

    NASA Astrophysics Data System (ADS)

    Wang, S.; Ware, A. S.; Hirshman, S. P.; Spong, D. A.

    2003-10-01

    An optimization algorithm that combines the global parameter space search properties of a genetic algorithm (GA) with the local parameter search properties of a Levenberg-Marquardt (LM) algorithm is described. Optimization algorithms used in the design of stellarator configurations are often classified as either global (such as GA and the differential evolution algorithm) or local (such as LM). While nonlinear least-squares methods such as LM are effective at minimizing a cost function based on desirable plasma properties such as quasi-symmetry and ballooning stability, whether the resulting minimum is local or global is unknown. The advantage of evolutionary algorithms such as GA is that they search a wider range of parameter space and are not susceptible to getting stuck in a local minimum of the cost function. Their disadvantage is that in some cases they are ineffective at finding a minimum state. Here, we describe the initial development of the Breeder Algorithm (BA). BA consists of a genetic algorithm outer loop with an inner loop in which each generation is refined using an LM step. Initial results for a quasi-poloidal stellarator optimization will be presented, along with a comparison to existing optimization algorithms.
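
    The outer-GA/inner-LM structure described above can be sketched as below, with SciPy's Levenberg-Marquardt least-squares routine standing in for the cost-function refinement. The residual vector, population size, and mutation scale are illustrative assumptions, not the stellarator cost terms.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(5)

    # Residual vector standing in for the stellarator cost terms (quasi-symmetry,
    # ballooning stability, ...); purely illustrative.
    def residuals(x):
        return np.array([10 * (x[1] - x[0] ** 2), 1 - x[0], np.sin(3 * x[2])])

    def cost(x):
        return 0.5 * np.sum(residuals(x) ** 2)

    def breeder(pop_size=12, n_gen=15, dim=3, sigma=0.3):
        pop = rng.uniform(-2, 2, size=(pop_size, dim))
        for _ in range(n_gen):
            # Inner loop: refine every individual with a few Levenberg-Marquardt steps.
            pop = np.array([least_squares(residuals, p, method="lm", max_nfev=20).x for p in pop])
            scores = np.array([cost(p) for p in pop])
            elite = pop[np.argsort(scores)[: pop_size // 2]]
            # Outer loop: GA-style recombination and mutation of the refined elite.
            parents = elite[rng.integers(len(elite), size=(pop_size - len(elite), 2))]
            children = parents.mean(axis=1) + rng.normal(0, sigma, (pop_size - len(elite), dim))
            pop = np.vstack([elite, children])
        best = min(pop, key=cost)
        return best, cost(best)

    x, f = breeder()
    print("best configuration:", np.round(x, 4), "cost:", round(f, 6))
    ```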

  19. A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles.

    PubMed

    Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Paredes, Fernando; Norero, Enrique

    2015-01-01

    The Sudoku problem is a well-known logic-based puzzle of combinatorial number placement. It consists of filling an n² × n² grid, composed of n columns, n rows, and n subgrids, each containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, for which diverse exact and approximate solution methods exist. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus impose. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We present experimental results in which our proposed algorithm outperforms the best results previously reported by hybrid and approximate methods.

  20. Optimization process planning using hybrid genetic algorithm and intelligent search for job shop machining

    PubMed Central

    Salehi, Mojtaba

    2010-01-01

    Optimization of process planning is considered the key technology for computer-aided process planning, which is a rather complex and difficult procedure. A good process plan of a part is built on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, process planning is divided into preliminary planning and secondary/detailed planning. In the preliminary stage, the feasible sequences are generated based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing, using an intelligent searching strategy. Then, in the detailed planning stage, a genetic algorithm that prunes the initial feasible sequences is used to obtain the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation, based on optimization constraints as an additive constraint aggregation. The main contribution of this work is the simultaneous optimization of the operation sequence of the part and of the machine, cutting tool and TAD selection for each operation, using the intelligent search and the genetic algorithm. PMID:21845020

  1. Application of Harmony Search algorithm to the solution of groundwater management models

    NASA Astrophysics Data System (ADS)

    Tamer Ayvaz, M.

    2009-06-01

    This study proposes a groundwater resources management model in which the solution is obtained through a combined simulation-optimization model. A modular three-dimensional finite difference groundwater flow model, MODFLOW, is used as the simulation model. This model is then combined with a Harmony Search (HS) optimization algorithm, which is based on the musical process of searching for a perfect state of harmony. The performance of the proposed HS-based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady-state); (ii) minimization of the total pumping cost to satisfy the given demand (steady-state); and (iii) minimization of the pumping cost to satisfy the given demand over multiple management periods (transient). The sensitivity of the HS algorithm is evaluated by a sensitivity analysis that aims to determine the impact of the related solution parameters on convergence behavior. The results show that HS yields nearly the same or better solutions than previous solution methods and may be used to solve management problems in groundwater modeling.
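
    A minimal Harmony Search loop, with the standard memory-consideration, pitch-adjustment, and random-selection steps, is sketched below. The pumping-cost function is a crude penalized surrogate standing in for the MODFLOW simulation, and the HS parameters (HMS, HMCR, PAR, bandwidth) are typical defaults, not the values used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, n_iter=5000):
        """Basic Harmony Search: improvise new harmonies from memory, pitch-adjust, keep the best."""
        bounds = np.asarray(bounds, dtype=float)
        dim = len(bounds)
        hm = rng.uniform(bounds[:, 0], bounds[:, 1], size=(hms, dim))   # harmony memory
        vals = np.array([objective(h) for h in hm])
        for _ in range(n_iter):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                      # memory consideration
                    new[j] = hm[rng.integers(hms), j]
                    if rng.random() < par:                   # pitch adjustment
                        new[j] += bw * (bounds[j, 1] - bounds[j, 0]) * rng.uniform(-1, 1)
                else:                                        # random selection
                    new[j] = rng.uniform(bounds[j, 0], bounds[j, 1])
            new = np.clip(new, bounds[:, 0], bounds[:, 1])
            f = objective(new)
            worst = vals.argmax()
            if f < vals[worst]:                              # replace the worst harmony
                hm[worst], vals[worst] = new, f
        best = vals.argmin()
        return hm[best], vals[best]

    # Toy pumping-cost surrogate: two well rates (m^3/d) must supply a demand of
    # 1000 m^3/d at minimum cost, with the shortfall penalized (illustrative only).
    def pumping_cost(q):
        shortfall = max(0.0, 1000.0 - q.sum())
        return 0.02 * q[0] + 0.03 * q[1] + 100.0 * shortfall

    q, c = harmony_search(pumping_cost, bounds=[(0, 800), (0, 800)])
    print("pumping rates:", np.round(q, 1), "cost:", round(c, 2))
    ```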

  2. A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles

    PubMed Central

    Crawford, Broderick; Paredes, Fernando; Norero, Enrique

    2015-01-01

    The Sudoku problem is a well-known logic-based puzzle of combinatorial number placement. It consists of filling an n² × n² grid, composed of n columns, n rows, and n subgrids, each containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, for which diverse exact and approximate solution methods exist. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus impose. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We present experimental results in which our proposed algorithm outperforms the best results previously reported by hybrid and approximate methods. PMID:26078751

  3. A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles.

    PubMed

    Soto, Ricardo; Crawford, Broderick; Galleguillos, Cristian; Paredes, Fernando; Norero, Enrique

    2015-01-01

    The Sudoku problem is a well-known logic-based puzzle of combinatorial number placement. It consists of filling an n² × n² grid, composed of n columns, n rows, and n subgrids, each containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, for which diverse exact and approximate solution methods exist. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus impose. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We present experimental results in which our proposed algorithm outperforms the best results previously reported by hybrid and approximate methods. PMID:26078751

  4. Taboo search algorithm for item assignment in synchronized zone automated order picking system

    NASA Astrophysics Data System (ADS)

    Wu, Yingying; Wu, Yaohua

    2014-07-01

    The idle time, which is part of the order fulfillment time, is determined by the number of items in each zone; therefore, the item assignment method affects the picking efficiency. Previous studies focus only on balancing the number of kinds of items between different zones, not on the number of items and the idle time in each zone. In this paper, an idle factor is proposed to measure the idle time exactly. The idle factor is proven to follow the same trend as the idle time, so the objective of the problem can be simplified from minimizing the idle time to minimizing the idle factor. Based on this, the model of the item assignment problem in a synchronized zone automated order picking system is built. The model is a relaxation of the parallel machine scheduling problem, which has been proven to be NP-complete. To solve the model, a taboo search algorithm is proposed. The main idea of the algorithm is minimizing the greatest idle factor among zones with the 2-exchange algorithm. Finally, a simulation using data collected from a tobacco distribution center is conducted to evaluate the performance of the algorithm. The results verify the model and show that the algorithm reliably reduces idle time, by 45.63% on average. This research proposes an approach to measuring the idle time in synchronized zone automated order picking systems. The approach can improve picking efficiency significantly and can serve as a theoretical basis when optimizing synchronized automated order picking systems.

  5. High Resolution Direction of Arrival (DOA) Estimation Based on Improved Orthogonal Matching Pursuit (OMP) Algorithm by Iterative Local Searching

    PubMed Central

    Wang, Wenyi; Wu, Renbiao

    2013-01-01

    DOA (Direction of Arrival) estimation is a major problem in array signal processing applications. Recently, compressive sensing algorithms, including convex relaxation algorithms and greedy algorithms, have been recognized as a kind of novel DOA estimation algorithm. However, the success of these algorithms is limited by the RIP (Restricted Isometry Property) condition or the mutual coherence of measurement matrix. In the DOA estimation problem, the columns of measurement matrix are steering vectors corresponding to different DOAs. Thus, it violates the mutual coherence condition. The situation gets worse when there are two sources from two adjacent DOAs. In this paper, an algorithm based on OMP (Orthogonal Matching Pursuit), called ILS-OMP (Iterative Local Searching-Orthogonal Matching Pursuit), is proposed to improve DOA resolution by Iterative Local Searching. Firstly, the conventional OMP algorithm is used to obtain initial estimated DOAs. Then, in each iteration, a local searching process for every estimated DOA is utilized to find a new DOA in a given DOA set to further decrease the residual. Additionally, the estimated DOAs are updated by substituting the initial DOA with the new one. The simulation results demonstrate the advantages of the proposed algorithm. PMID:23974150
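
    The two-stage idea (standard OMP followed by an iterative local search over a DOA grid) can be sketched as follows for a uniform linear array. The array geometry, grid spacing, search width, and noise level are illustrative assumptions, not the paper's simulation setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Uniform linear array, half-wavelength spacing: steering vector for a DOA theta (radians).
    n_sensors = 16
    def steering(theta):
        return np.exp(1j * np.pi * np.arange(n_sensors) * np.sin(theta))

    grid = np.deg2rad(np.arange(-90, 90, 1.0))           # coarse DOA grid
    A = np.stack([steering(t) for t in grid], axis=1)    # measurement matrix (sensors x DOAs)

    def omp(y, k):
        """Standard OMP: pick the k atoms most correlated with the residual."""
        support, residual = [], y.copy()
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        return support

    def ils_refine(y, support, width=3):
        """Iterative local search: nudge each estimated DOA within +/- width grid cells
        whenever that further decreases the residual (a sketch of the ILS step above)."""
        def res_norm(sup):
            coef, *_ = np.linalg.lstsq(A[:, sup], y, rcond=None)
            return np.linalg.norm(y - A[:, sup] @ coef)
        improved = True
        while improved:
            improved = False
            for i, s in enumerate(support):
                for cand in range(max(0, s - width), min(len(grid), s + width + 1)):
                    trial = support[:i] + [cand] + support[i + 1:]
                    if res_norm(trial) < res_norm(support) - 1e-9:
                        support, improved = trial, True
        return support

    # Two closely spaced sources plus noise (illustrative scenario).
    true = np.deg2rad([-10.4, -7.7])
    y = steering(true[0]) + 0.8 * steering(true[1]) + 0.05 * (rng.normal(size=n_sensors)
                                                              + 1j * rng.normal(size=n_sensors))
    est = ils_refine(y, omp(y, k=2))
    print("estimated DOAs (deg):", np.rad2deg(grid[est]).round(1))
    ```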

  6. High resolution direction of arrival (DOA) estimation based on improved orthogonal matching pursuit (OMP) algorithm by iterative local searching.

    PubMed

    Wang, Wenyi; Wu, Renbiao

    2013-01-01

    DOA (Direction of Arrival) estimation is a major problem in array signal processing applications. Recently, compressive sensing algorithms, including convex relaxation algorithms and greedy algorithms, have been recognized as a kind of novel DOA estimation algorithm. However, the success of these algorithms is limited by the RIP (Restricted Isometry Property) condition or the mutual coherence of measurement matrix. In the DOA estimation problem, the columns of measurement matrix are steering vectors corresponding to different DOAs. Thus, it violates the mutual coherence condition. The situation gets worse when there are two sources from two adjacent DOAs. In this paper, an algorithm based on OMP (Orthogonal Matching Pursuit), called ILS-OMP (Iterative Local Searching-Orthogonal Matching Pursuit), is proposed to improve DOA resolution by Iterative Local Searching. Firstly, the conventional OMP algorithm is used to obtain initial estimated DOAs. Then, in each iteration, a local searching process for every estimated DOA is utilized to find a new DOA in a given DOA set to further decrease the residual. Additionally, the estimated DOAs are updated by substituting the initial DOA with the new one. The simulation results demonstrate the advantages of the proposed algorithm. PMID:23974150

  7. Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Nishikawa, T.; Martin, P.

    2008-12-01

    Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, the methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but convergence is very slow when the optimal solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g. minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was demonstrated by enumerating the entire feasible solution space and comparing the optimum solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond

  8. Virtual parallel computing and a search algorithm using matrix product states.

    PubMed

    Chamon, Claudio; Mucciolo, Eduardo R

    2012-07-20

    We propose a form of parallel computing on classical computers that is based on matrix product states. The virtual parallelization is accomplished by representing bits with matrices and by evolving these matrices from an initial product state that encodes multiple inputs. Matrix evolution follows from the sequential application of gates, as in a logical circuit. The actions of classical probabilistic one-bit and deterministic two-bit gates such as NAND are implemented in terms of matrix operations and, as opposed to quantum computing, it is possible to copy bits. We present a way to exploit this method of computation to solve search problems and count the number of solutions. We argue that if the classical computational cost of testing solutions (witnesses) requires less than O(n²) local two-bit gates acting on n bits, the search problem can be fully solved in subexponential time. Therefore, for this restricted type of search problem, the virtual parallelization scheme is faster than Grover's quantum algorithm.

  9. Hashing algorithms and data structures for rapid searches of fingerprint vectors.

    PubMed

    Nasr, Ramzi; Hirschberg, Daniel S; Baldi, Pierre

    2010-08-23

    In many large chemoinformatics database systems, molecules are represented by long binary fingerprint vectors whose components record the presence or absence of particular functional groups or combinatorial features. To speed up database searches, we propose to add to each fingerprint a short signature integer vector of length M. For a given fingerprint, the i-th component of the signature vector counts the number of 1-bits in the fingerprint that fall on components congruent to i modulo M. Given two signatures, we show how one can rapidly compute a bound on the Jaccard-Tanimoto similarity measure of the two corresponding fingerprints, using the intersection bound. These signatures thus allow one to significantly prune the search space by discarding molecules associated with unfavorable bounds. Analytical methods are developed to predict the resulting amount of pruning as a function of M. Data structures combining different values of M are also developed, together with methods for predicting the optimal values of M for a given implementation. Simulations using a particular implementation show that the proposed approach leads to an order-of-magnitude speedup over a linear search and a 3-fold speedup over a previous implementation. All theoretical results and predictions are corroborated by large-scale simulations using molecules from the ChemDB. Several possible algorithmic extensions are discussed.
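
    A small sketch of the modulo-M signature and the resulting Tanimoto upper bound is given below. Since the bits shared by two fingerprints in the same residue class can number at most min(a_i, b_i), the intersection is at most sum_i min(a_i, b_i), and because T = I / (|A| + |B| - I) increases with I, that sum yields an upper bound on T used for pruning. The fingerprint length, bit density, M, and threshold are illustrative assumptions.

    ```python
    import numpy as np

    def signature(fp, M=32):
        """Length-M signature: component i counts the 1-bits at positions congruent to i mod M."""
        return np.bincount(np.flatnonzero(fp) % M, minlength=M)

    def tanimoto_bound(sig_a, sig_b):
        """Upper bound on the Jaccard-Tanimoto similarity from the intersection bound."""
        inter_max = np.minimum(sig_a, sig_b).sum()
        union_min = sig_a.sum() + sig_b.sum() - inter_max
        return inter_max / union_min if union_min else 1.0

    def tanimoto(fp_a, fp_b):
        inter = np.logical_and(fp_a, fp_b).sum()
        return inter / (fp_a.sum() + fp_b.sum() - inter)

    # Toy database of 1024-bit fingerprints (illustrative); prune with the bound before
    # computing the exact similarity, as in the search scheme described above.
    rng = np.random.default_rng(7)
    db = rng.random((10000, 1024)) < 0.05
    query = db[0] ^ (rng.random(1024) < 0.01)          # slightly perturbed copy of entry 0

    sigs = np.array([signature(fp) for fp in db])
    q_sig = signature(query)
    threshold = 0.7

    candidates = [i for i, s in enumerate(sigs) if tanimoto_bound(q_sig, s) >= threshold]
    hits = [i for i in candidates if tanimoto(query, db[i]) >= threshold]
    print(f"pruned {len(db) - len(candidates)} of {len(db)} molecules; exact hits: {hits}")
    ```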

  10. Parallel machine scheduling with step-deteriorating jobs and setup times by a hybrid discrete cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Peng; Cheng, Wenming; Wang, Yi

    2015-11-01

    This article considers the parallel machine scheduling problem with step-deteriorating jobs and sequence-dependent setup times. The objective is to minimize the total tardiness by determining the allocation and sequence of jobs on identical parallel machines. In this problem, the processing time of each job is a step function dependent upon its starting time. An individual extended time is penalized when the starting time of a job is later than a specific deterioration date. The possibility of deterioration of a job makes the parallel machine scheduling problem more challenging than ordinary ones. A mixed integer programming model for the optimal solution is derived. Due to its NP-hard nature, a hybrid discrete cuckoo search algorithm is proposed to solve this problem. In order to generate a good initial swarm, a modified Biskup-Hermann-Gupta (BHG) heuristic called MBHG is incorporated into the population initialization. Several discrete operators are proposed in the random walk of Lévy flights and the crossover search. Moreover, a local search procedure based on variable neighbourhood descent is integrated into the algorithm as a hybrid strategy in order to improve the quality of elite solutions. Computational experiments are executed on two sets of randomly generated test instances. The results show that the proposed hybrid algorithm can yield better solutions in comparison with the commercial solver CPLEX® with a one hour time limit, the discrete cuckoo search algorithm and the existing variable neighbourhood search algorithm.

  11. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    PubMed Central

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive an algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm on QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585

  12. Study on Multi-stage Logistics System Design Problem with Inventory Considering Demand Change by Hybrid Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Inoue, Hisaki; Gen, Mitsuo

    The logistics model used in this study is a 3-stage model employed by an automobile company, which aims to solve traffic problems at a minimum total cost. Research on metaheuristic methods has recently advanced as an approximate means of solving optimization problems such as this model. These problems can be solved using various methods such as the genetic algorithm (GA), simulated annealing, and tabu search. GA is superior in robustness and adjustability to changes in the structure of these problems. However, GA has the disadvantage of a somewhat inefficient search performance because it carries out a multi-point search. A hybrid GA that incorporates another method is attracting considerable attention, since it can compensate for the adverse effect that early convergence to a partial solution has on the result. In this study, we propose a novel hybrid random key-based GA (h-rkGA) that combines local search with parameter tuning of the crossover rate and mutation rate; h-rkGA is an improved version of the random key-based GA (rk-GA). We conducted comparative experiments with the spanning tree-based GA, the priority-based GA and the random key-based GA, as well as with "h-GA with only local search" and "h-GA with only parameter tuning". We report the effectiveness of the proposed method on the basis of the results of these experiments.

  13. Messy genetic algorithms: Recent developments

    SciTech Connect

    Kargupta, H.

    1996-09-01

    Messy genetic algorithms define a rare class of algorithms that realize the need for detecting appropriate relations among members of the search domain in optimization. This paper reviews earlier works in messy genetic algorithms and describes some recent developments. It also describes the gene expression messy GA (GEMGA), an O(Λ^κ(ℓ² + κ)) sample complexity algorithm for the class of order-κ delineable problems (problems that can be solved by considering no higher than order-κ relations) of size ℓ and alphabet size Λ. Experimental results are presented to demonstrate the scalability of the GEMGA.

  14. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-01-01

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search for the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to suit such situations. The experimental results demonstrate that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on methods for searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms. PMID:26251910
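
    A generic PSO over a single rotation-angle variable is sketched below. The objective is only a placeholder with a sharp global optimum and local ripples, mimicking the behaviour described above, and the occasional re-seeding of poor particles is a stand-in for an improvement step, not the authors' actual modification.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    true_angle = 1.234     # the secret rotation angle (illustrative)

    def attack_objective(angle):
        """Placeholder for the real criterion (e.g. correlation between the trial-decrypted
        image and known plaintext statistics); sharply peaked at the true angle with ripples."""
        return -np.exp(-200 * (angle - true_angle) ** 2) + 0.05 * np.cos(40 * angle)

    def pso(n_particles=30, n_iter=300, w=0.72, c1=1.5, c2=1.5, bounds=(0.0, np.pi)):
        lo, hi = bounds
        x = rng.uniform(lo, hi, n_particles)
        v = np.zeros(n_particles)
        pbest, pbest_f = x.copy(), np.array([attack_objective(a) for a in x])
        gbest = pbest[pbest_f.argmin()]
        for t in range(n_iter):
            r1, r2 = rng.random(n_particles), rng.random(n_particles)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = np.array([attack_objective(a) for a in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()]
            if t % 50 == 49:                       # re-seed a few poor particles (assumption)
                worst = pbest_f.argsort()[-5:]
                x[worst] = rng.uniform(lo, hi, len(worst))
        return gbest

    print("recovered rotation angle:", round(float(pso()), 4), "true:", true_angle)
    ```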

  15. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm

    PubMed Central

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-01-01

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search for the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to suit such situations. The experimental results demonstrate that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on methods for searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms. PMID:26251910

  16. Security Analysis of Image Encryption Based on Gyrator Transform by Searching the Rotation Angle with Improved PSO Algorithm.

    PubMed

    Sang, Jun; Zhao, Jun; Xiang, Zhili; Cai, Bin; Xiang, Hong

    2015-08-05

    Gyrator transform has been widely used for image encryption recently. For gyrator transform-based image encryption, the rotation angle used in the gyrator transform is one of the secret keys. In this paper, by analyzing the properties of the gyrator transform, an improved particle swarm optimization (PSO) algorithm was proposed to search for the rotation angle in a single gyrator transform. Since the gyrator transform is continuous, it is time-consuming to exhaustively search for the rotation angle, even considering the data precision in a computer. Therefore, a computational intelligence-based search may be an alternative choice. Considering the severe local convergence and obvious global fluctuations of the gyrator transform, an improved PSO algorithm was proposed to suit such situations. The experimental results demonstrate that the proposed improved PSO algorithm can significantly improve the efficiency of searching for the rotation angle in a single gyrator transform. Since the gyrator transform is the foundation of image encryption in gyrator transform domains, research on methods for searching for the rotation angle in a single gyrator transform is useful for further study of the security of such image encryption algorithms.

  17. Comparing Evolutionary Programs and Evolutionary Pattern Search Algorithms: A Drug Docking Application

    SciTech Connect

    Hart, W.E.

    1999-02-10

    Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and these results suggest that EPSAs converge more robustly to near-optimal solutions than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the Autodock docking software. We compare the performance of these methods on a suite of docking test problems. Our results confirm that EPSAs and EPs have comparable performance, and they suggest that EPSAs may be more robust on larger, more complex problems.

  18. Adaptive polarimetric sensing for optimum radar signature classification using a genetic search algorithm.

    PubMed

    Sadjadi, Firooz A

    2006-08-01

    An automated technique for adaptive radar polarimetric pattern classification is described. The approach is based on a genetic algorithm that uses a probabilistic pattern separation distance function and searches for the transmit and receive states of polarization sensing angles that optimize this function. Seven pattern separation distance functions (the Rayleigh quotient; the Bhattacharyya, divergence, Kolmogorov, Matusita, and Kullback-Leibler distances; and the Bayesian probability of error) are used on real, fully polarimetric synthetic aperture radar target signatures. Each of these signatures is represented as a function of the transmit and receive polarization ellipticity angles and the angle of the polarization ellipse. The results indicate that, based on the majority of the distance functions used, there is a unique set of polarization state angles whose use will lead to improved classification performance.

  19. Intelligent energy allocation strategy for PHEV charging station using gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Imran; Vasant, Pandian M.; Singh, Balbir Singh Mahinder; Abdullah-Al-Wadud, M.

    2014-10-01

    Recent research toward the use of green technologies to reduce pollution and increase the penetration of renewable energy sources in the transportation sector is gaining popularity. The development of a smart grid environment focused on PHEVs may also heal some of the prevailing grid problems by enabling the implementation of the Vehicle-to-Grid (V2G) concept. Intelligent energy management is an important issue which has already drawn much attention from researchers. Most of these works require the formulation of mathematical models which extensively use computational intelligence-based optimization techniques to solve many technical problems. Higher penetration of PHEVs requires adequate charging infrastructure as well as smart charging strategies. We used the Gravitational Search Algorithm (GSA) to intelligently allocate energy to the PHEVs, considering constraints such as energy price, remaining battery capacity, and remaining charging time.

  20. Fuzzy rule base design using tabu search algorithm for nonlinear system modeling.

    PubMed

    Bagis, Aytekin

    2008-01-01

    This paper presents an approach to fuzzy rule base design using the tabu search algorithm (TSA) for nonlinear system modeling. The TSA is used to evolve the structure and the parameters of the fuzzy rule base. The use of the TSA, in conjunction with a systematic neighbourhood structure for determining the fuzzy rule base parameters, leads to a significant improvement in the performance of the model. To demonstrate the effectiveness of the presented method, several numerical examples from the literature are examined. The results obtained with the identified fuzzy rule bases are compared with those of other modeling approaches in the literature. The simulation results indicate that the TSA-based method provides an effective modeling procedure for fuzzy rule base design in the modeling of nonlinear or complex systems. PMID:17945233

  1. A Multilevel Probabilistic Beam Search Algorithm for the Shortest Common Supersequence Problem

    PubMed Central

    Gallardo, José E.

    2012-01-01

    The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, Artificial Intelligence and especially Bioinformatics. Due to its NP-hardness, we cannot expect to solve this problem efficiently using conventional exact techniques. This paper presents a heuristic to tackle this problem based on the use, at different levels, of a probabilistic variant of a classical heuristic known as Beam Search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but the required execution times may increase considerably. PMID:23300667
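
    A single-level, largely greedy beam search for the shortest common supersequence is sketched below to make the search space concrete; the multilevel probabilistic variant described above would additionally sample beam entries in proportion to their quality rather than keep only the top-ranked ones. The beam width and tie-breaking rule are illustrative choices.

    ```python
    import random

    def beam_search_scs(strings, beam_width=10, rng=random.Random(0)):
        """Greedy beam search for a short common supersequence (simplified sketch)."""
        beam = [(tuple(0 for _ in strings), "")]          # (per-string positions, built prefix)
        while True:
            finished = [s for s in beam if all(p == len(x) for p, x in zip(s[0], strings))]
            if finished:
                return min(finished, key=lambda s: len(s[1]))[1]
            candidates = {}
            for positions, prefix in beam:
                next_chars = {x[p] for p, x in zip(positions, strings) if p < len(x)}
                for c in next_chars:                      # append c, advance matching strings
                    new_pos = tuple(p + 1 if p < len(x) and x[p] == c else p
                                    for p, x in zip(positions, strings))
                    candidates.setdefault(new_pos, prefix + c)
            # Prefer states that leave the least remaining work; break ties randomly
            # (the probabilistic variant would sample states instead of ranking them).
            scored = sorted(candidates.items(),
                            key=lambda kv: (sum(len(x) - p for p, x in zip(kv[0], strings)),
                                            rng.random()))
            beam = scored[:beam_width]

    strings = ["cbaab", "bcaab", "abcba"]
    sup = beam_search_scs(strings)
    print("supersequence:", sup, "length:", len(sup))
    ```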

  2. A multilevel probabilistic beam search algorithm for the shortest common supersequence problem.

    PubMed

    Gallardo, José E

    2012-01-01

    The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, Artificial Intelligence and especially Bioinformatics. Due to its NP-hardness, we cannot expect to solve this problem efficiently using conventional exact techniques. This paper presents a heuristic to tackle this problem based on the use, at different levels, of a probabilistic variant of a classical heuristic known as Beam Search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but the required execution times may increase considerably.

  3. Ocean feature recognition using genetic algorithms with fuzzy fitness functions (GA/F3)

    NASA Technical Reports Server (NTRS)

    Ankenbrandt, C. A.; Buckles, B. P.; Petry, F. E.; Lybanon, M.

    1990-01-01

    A model for genetic algorithms with semantic nets is derived, in which the relationships between concepts are depicted as a semantic net. An organism represents the manner in which objects in a scene are attached to concepts in the net. Predicates between object pairs are continuous-valued truth functions in the form of an inverse exponential function, e^(-β|x|). 1:n relationships are combined via the fuzzy OR, max(...). Finally, predicates between pairs of concepts are resolved by taking the average of the combined predicate values of the objects attached to the concept at the tail of the arc representing the predicate in the semantic net. The method is illustrated by applying it to the identification of oceanic features in the North Atlantic.

  4. Accelerating Smith-Waterman Algorithm for Biological Database Search on CUDA-Compatible GPUs

    NASA Astrophysics Data System (ADS)

    Munekawa, Yuma; Ino, Fumihiko; Hagihara, Kenichi

    This paper presents a fast method for accelerating the Smith-Waterman algorithm for biological database search on a cluster of graphics processing units (GPUs). Our method is implemented using the compute unified device architecture (CUDA), which is available on nVIDIA GPUs. Compared with previous methods, our method makes four major contributions. (1) The method efficiently uses on-chip shared memory to reduce the amount of data transferred between off-chip video memory and processing elements in the GPU. (2) It also reduces the number of data fetches by applying a data reuse technique to the query and database sequences. (3) A pipelined method is implemented to overlap GPU execution with database access. (4) Finally, a master/worker paradigm is employed to accelerate hundreds of database searches on a cluster system. In experiments, the peak performance on a GeForce GTX 280 card reaches 8.32 giga cell updates per second (GCUPS). We also find that our method reduces the amount of data fetched to 1/140, achieving approximately three times higher performance than a previous CUDA-based method. Our 32-node cluster version is approximately 28 times faster than a single-GPU version. Furthermore, the effective performance reaches 75.6 giga instructions per second (GIPS) using 32 GeForce 8800 GTX cards.
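
    For reference, the Smith-Waterman recurrence that the GPU kernels evaluate cell by cell is sketched below as a plain CPU implementation with linear gap penalties. The scoring parameters and test sequences are illustrative, and the CUDA-specific optimizations (shared-memory tiling, anti-diagonal parallelism, pipelining) are of course not reproduced here.

    ```python
    import numpy as np

    def smith_waterman(query, subject, match=2, mismatch=-1, gap=-1):
        """CPU reference of the Smith-Waterman local alignment score (linear gap penalty)."""
        m, n = len(query), len(subject)
        H = np.zeros((m + 1, n + 1), dtype=int)
        best = 0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                s = match if query[i - 1] == subject[j - 1] else mismatch
                H[i, j] = max(0,
                              H[i - 1, j - 1] + s,   # match / mismatch
                              H[i - 1, j] + gap,     # gap in subject
                              H[i, j - 1] + gap)     # gap in query
                best = max(best, H[i, j])
        return best

    # Classic toy example: the optimal local alignment score here is 12.
    print(smith_waterman("ACACACTA", "AGCACACA"))
    ```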

  5. qPMS9: An Efficient Algorithm for Quorum Planted Motif Search

    NASA Astrophysics Data System (ADS)

    Nicolae, Marius; Rajasekaran, Sanguthevar

    2015-01-01

    Discovering patterns in biological sequences is a crucial problem. For example, the identification of patterns in DNA sequences has resulted in the determination of open reading frames, identification of gene promoter elements, intron/exon splicing sites, and SH RNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have led to domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, discovery of short functional motifs, etc. In this paper we focus on the identification of an important class of patterns, namely, motifs. We study the (l, d) motif search problem or Planted Motif Search (PMS). PMS receives as input n strings and two integers l and d. It returns all sequences M of length l that occur in each input string, where each occurrence differs from M in at most d positions. Another formulation is quorum PMS (qPMS), where the motif appears in at least q% of the strings. We introduce qPMS9, a parallel exact qPMS algorithm that offers significant runtime improvements on DNA and protein datasets. qPMS9 solves the challenging DNA (l, d)-instances (28, 12) and (30, 13). The source code is available at https://code.google.com/p/qpms9/.
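
    A brute-force reference for the (l, d) PMS formulation above is sketched below: candidate motifs are the Hamming-distance-d neighborhoods of the first string's l-mers, and a candidate is reported when it occurs in every string with at most d mismatches. The toy DNA strings are fabricated for illustration (they share the planted motif ACGTACG); qPMS9 itself uses far more sophisticated pruning.

    ```python
    from itertools import combinations, product

    ALPHABET = "ACGT"

    def neighbors(kmer, d):
        """All strings within Hamming distance d of kmer."""
        result = {kmer}
        for positions in combinations(range(len(kmer)), d):
            for repl in product(ALPHABET, repeat=d):
                cand = list(kmer)
                for p, c in zip(positions, repl):
                    cand[p] = c
                result.add("".join(cand))
        return result

    def occurs(motif, s, d):
        """True if motif appears in s with at most d mismatches."""
        l = len(motif)
        return any(sum(a != b for a, b in zip(motif, s[i:i + l])) <= d
                   for i in range(len(s) - l + 1))

    def planted_motif_search(strings, l, d):
        """Exhaustive (l, d) PMS over the d-neighborhoods of the first string's l-mers."""
        first = strings[0]
        candidates = set()
        for i in range(len(first) - l + 1):
            candidates |= neighbors(first[i:i + l], d)
        return sorted(m for m in candidates if all(occurs(m, s, d) for s in strings[1:]))

    dna = ["TTACGTACGTT", "GGACGTTCGGG", "CCACGAACGCC"]
    print(planted_motif_search(dna, l=7, d=1))   # output includes the planted motif ACGTACG
    ```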

  6. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm.

    PubMed

    Cuevas, Erik; Díaz, Margarita

    2015-01-01

    In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm with the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy from RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case for RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness. PMID:26339228

  7. A Method for Estimating View Transformations from Image Correspondences Based on the Harmony Search Algorithm

    PubMed Central

    Cuevas, Erik; Díaz, Margarita

    2015-01-01

    In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sampling consensus (RANSAC) algorithm with the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy from RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case for RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness. PMID:26339228

  8. A Scalable Distributed Parallel Breadth-First Search Algorithm on BlueGene/L

    SciTech Connect

    Yoo, A; Chow, E; Henderson, K; McLendon, W; Hendrickson, B; Catalyurek, U

    2005-07-19

    Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadth-first search (BFS) scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene/L with 32,768 nodes at the Lawrence Livermore National Laboratory. Scalability was obtained through a series of optimizations, in particular, those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex) partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed efficient collective communication functions for the 3D torus architecture of BlueGene/L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm are measured and reported.
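
    The distributed 2D-partitioned scheme is beyond a short sketch, but the level-synchronous BFS it parallelises is simple to show. The adjacency list and labels below are made up.

        from collections import deque

        def bfs_levels(adj, source):
            # Level-synchronous breadth-first search on an adjacency-list graph.
            # Returns the BFS level (hop distance) of every reachable vertex.
            level = {source: 0}
            frontier = deque([source])
            while frontier:
                next_frontier = deque()
                for u in frontier:
                    for v in adj.get(u, ()):
                        if v not in level:          # first time v is reached
                            level[v] = level[u] + 1
                            next_frontier.append(v)
                frontier = next_frontier
            return level

        if __name__ == "__main__":
            g = {0: [1, 2], 1: [3], 2: [3], 3: [4]}   # toy directed graph
            print(bfs_levels(g, 0))                   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}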

  9. A trust-based sensor allocation algorithm in cooperative space search problems

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik

    2011-06-01

    Sensor allocation is an important and challenging problem within the field of multi-agent systems. The sensor allocation problem involves deciding how to assign a number of targets or cells to a set of agents according to some allocation protocol. Generally, in order to make efficient allocations, we need to design mechanisms that consider both the task performers' costs for the service and the associated probability of success (POS). In our problem, the costs are the used sensor resource, and the POS is the target tracking performance. Usually, POS may be perceived differently by different agents because they typically have different standards or means of evaluating the performance of their counterparts (other sensors in the search and tracking problem). Given this, we turn to the notion of trust to capture such subjective perceptions. In our approach, we develop a trust model to construct a novel mechanism that motivates sensor agents to limit their greediness or selfishness. Then we model the sensor allocation optimization problem with trust-in-loop negotiation game and solve it using a sub-game perfect equilibrium. Numerical simulations are performed to demonstrate the trust-based sensor allocation algorithm in cooperative space situation awareness (SSA) search problems.

  10. XLSearch: a Probabilistic Database Search Algorithm for Identifying Cross-Linked Peptides.

    PubMed

    Ji, Chao; Li, Sujun; Reilly, James P; Radivojac, Predrag; Tang, Haixu

    2016-06-01

    Chemical cross-linking combined with mass spectrometric analysis has become an important technique for probing protein three-dimensional structure and protein-protein interactions. A key step in this process is the accurate identification and validation of cross-linked peptides from tandem mass spectra. The identification of cross-linked peptides, however, presents challenges related to the expanded nature of the search space (all pairs of peptides in a sequence database) and the fact that some peptide-spectrum matches (PSMs) contain one correct and one incorrect peptide but often receive scores that are comparable to those in which both peptides are correctly identified. To address these problems and improve detection of cross-linked peptides, we propose a new database search algorithm, XLSearch, for identifying cross-linked peptides. Our approach is based on a data-driven scoring scheme that independently estimates the probability of correctly identifying each individual peptide in the cross-link given knowledge of the correct or incorrect identification of the other peptide. These conditional probabilities are subsequently used to estimate the joint posterior probability that both peptides are correctly identified. Using the data from two previous cross-link studies, we show the effectiveness of this scoring scheme, particularly in distinguishing between true identifications and those containing one incorrect peptide. We also provide evidence that XLSearch achieves more identifications than two alternative methods at the same false discovery rate (availability: https://github.com/COL-IU/XLSearch ). PMID:27068484

  11. Searching for closely related ligands with different mechanisms of action using machine learning and mapping algorithms.

    PubMed

    Balfer, Jenny; Vogt, Martin; Bajorath, Jürgen

    2013-09-23

    Supervised machine learning approaches, including support vector machines, random forests, Bayesian classifiers, nearest-neighbor similarity searching, and a conceptually distinct mapping algorithm termed DynaMAD, have been investigated for their ability to detect structurally related ligands of a given receptor with different mechanisms of action. For this purpose, a large number of simulated virtual screening trials were carried out with models trained on mechanistic subsets of different classes of receptor ligands. The results revealed that ligands with the desired mechanism of action were frequently contained in database selection sets of limited size. All machine learning approaches successfully detected mechanistic subsets of ligands in a large background database of druglike compounds. However, the early enrichment characteristics considerably differed. Overall, random forests of relatively simple design and support vector machines with Gaussian kernels (Gaussian SVMs) displayed the highest search performance. In addition, DynaMAD was found to yield very small selection sets comprising only ~10 compounds that also contained ligands with the desired mechanism of action. Random forest, Gaussian SVM, and DynaMAD calculations revealed an enrichment of compounds with the desired mechanism over other mechanistic subsets. PMID:23952618

  12. Index Fund Optimization Using a Genetic Algorithm and a Heuristic Local Search

    NASA Astrophysics Data System (ADS)

    Orito, Yukiko; Inoguchi, Manabu; Yamamoto, Hisashi

    It is well known that index funds are popular passively managed portfolios and have been used very extensively for hedge trading. Index funds consist of a certain number of stocks of listed companies on a stock market such that the fund's return rates follow a similar path to the changing rates of the market indices. However, it is hard to make a perfect index fund consisting of all companies included in the given market index. Thus, the index fund optimization can be viewed as a combinatorial optimization for portfolio management. In this paper, we propose an optimization method that consists of a genetic algorithm and a heuristic local search algorithm to produce a strong linear association between the fund's return rates and the changing rates of the market index. We apply the method to the Tokyo Stock Exchange and make index funds whose return rates follow a similar path to the changing rates of the Tokyo Stock Price Index (TOPIX). The results show that our proposed method builds index funds with a strong linear association to the market index in a small computing time.

  13. Development of optimization model for sputtering process parameter based on gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Norlina, M. S.; Diyana, M. S. Nor; Mazidah, P.; Rusop, M.

    2016-07-01

    In the RF magnetron sputtering process, the desirable layer properties are largely influenced by the process parameters and conditions. If the quality of the thin film has not reached its intended level, the experiments have to be repeated until the desirable quality has been met. This research proposes the Gravitational Search Algorithm (GSA) as the optimization model to reduce the time and cost spent on thin film fabrication. The optimization model's engine has been developed using Java. The model is developed based on the GSA concept, which is inspired by the Newtonian laws of gravity and motion. In this research, the model is expected to optimize four deposition parameters which are RF power, deposition time, oxygen flow rate and substrate temperature. The results have turned out to be promising and it could be concluded that the performance of the model is satisfactory in this parameter optimization problem. Future work could compare GSA with other nature-based algorithms and test them with various sets of data.
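
    A minimal, generic sketch of the gravitational search update (masses from fitness, gravity-like attraction, velocity/position updates) is shown below. It optimises a toy quadratic rather than the sputtering-parameter objective, and all constants are assumptions.

        import numpy as np

        def gsa(f, bounds, agents=20, iters=200, g0=100.0, alpha=20.0, seed=0):
            # Gravitational Search Algorithm sketch (minimisation).
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            x = rng.uniform(lo, hi, size=(agents, len(bounds)))
            v = np.zeros_like(x)
            for t in range(iters):
                fit = np.array([f(p) for p in x])
                best, worst = fit.min(), fit.max()
                m = (fit - worst) / (best - worst + 1e-12)      # better fitness -> larger mass
                mass = m / (m.sum() + 1e-12)
                g = g0 * np.exp(-alpha * t / iters)             # decaying gravitational constant
                acc = np.zeros_like(x)
                for i in range(agents):
                    for j in range(agents):
                        if i == j:
                            continue
                        diff = x[j] - x[i]
                        dist = np.linalg.norm(diff) + 1e-12
                        # random weighting of pairwise attractions
                        acc[i] += rng.random() * g * mass[j] * diff / dist
                v = rng.random(size=v.shape) * v + acc
                x = np.clip(x + v, lo, hi)
            return x[np.argmin([f(p) for p in x])]

        if __name__ == "__main__":
            print(gsa(lambda p: float(np.sum(p ** 2)), bounds=[(-5, 5)] * 4))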

  14. Breadth-First Search-Based Single-Phase Algorithms for Bridge Detection in Wireless Sensor Networks

    PubMed Central

    Akram, Vahid Khalilpour; Dagdeviren, Orhan

    2013-01-01

    Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive measures. Since sensor nodes are battery-powered, services running on nodes should consume low energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and they are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, which is designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms, prove their correctness, and analyze their message, time, space and computational complexities. To evaluate practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms provide less resource consumption, and the energy savings of our algorithms are up to 5.5 times. PMID:23845930
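
    The paper's algorithms are distributed and single-phase; for reference only, the textbook centralised method for finding bridges (DFS low-link) is sketched below on a made-up graph.

        def find_bridges(adj):
            # Classic DFS low-link bridge detection on an undirected graph
            # given as {vertex: [neighbours]}. Returns the set of bridge edges.
            disc, low, bridges = {}, {}, set()
            timer = [0]

            def dfs(u, parent):
                disc[u] = low[u] = timer[0]
                timer[0] += 1
                for v in adj.get(u, ()):
                    if v == parent:
                        continue
                    if v in disc:
                        low[u] = min(low[u], disc[v])     # back edge
                    else:
                        dfs(v, u)
                        low[u] = min(low[u], low[v])
                        if low[v] > disc[u]:              # no back path around (u, v)
                            bridges.add((u, v))

            for u in adj:
                if u not in disc:
                    dfs(u, None)
            return bridges

        if __name__ == "__main__":
            g = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}   # edge (3, 4) is a bridge
            print(find_bridges(g))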

  15. The GSAM software: A global search algorithm of minima exploration for the investigation of low lying isomers of clusters

    SciTech Connect

    Marchal, Rémi; Carbonnière, Philippe; Pouchan, Claude

    2015-01-22

    The study of atomic clusters has become an increasingly active area of research in recent years because of the fundamental interest in studying a completely new area that can bridge the gap between atomic and solid state physics. Due to their specific properties, such compounds are of great interest in the field of nanotechnology [1,2]. Here, we present our GSAM algorithm, based on a DFT exploration of the PES, to find the low lying isomers of such compounds. This algorithm includes the generation of an initial set of structures from which the most relevant are selected. Moreover, an optimization process, called raking optimization, able to discard step by step all the non-physically-reasonable configurations, has been implemented to reduce the computational cost of this algorithm. Structural properties of Ga{sub n}As{sub m} clusters will be presented as an illustration of the method.

  16. Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) Applied in Optimization of Radiation Pattern Control of Phased-Array Radars for Rocket Tracking Systems

    PubMed Central

    Silva, Leonardo W. T.; Barros, Vitor F.; Silva, Sandro G.

    2014-01-01

    In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing antennas with parabolic reflectors (PRs) with phased arrays (PAs). These arrays enable the electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator has a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced the premature convergence. PMID:25196013
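
    A hedged sketch of the pairing idea described above (crossing the fittest individuals with the least fit to preserve diversity) is given below. The representation, crossover operator and fitness function are hypothetical, not the paper's actual coding.

        import random

        def max_min_pairs(population, fitness):
            # Pair the fittest individual with the least fit, the second fittest with the
            # second least fit, and so on, as a diversity-preserving mating scheme.
            ranked = sorted(population, key=fitness, reverse=True)
            return [(ranked[i], ranked[-(i + 1)]) for i in range(len(ranked) // 2)]

        def one_point_crossover(a, b):
            # Generic one-point crossover on equal-length real-valued vectors.
            cut = random.randint(1, len(a) - 1)
            return a[:cut] + b[cut:], b[:cut] + a[cut:]

        if __name__ == "__main__":
            pop = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(8)]
            fit = lambda ind: -sum(v * v for v in ind)     # toy fitness: closer to zero is better
            offspring = []
            for p, q in max_min_pairs(pop, fit):
                offspring.extend(one_point_crossover(p, q))
            print(len(offspring), "children produced")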

  17. Genetic algorithm with maximum-minimum crossover (GA-MMC) applied in optimization of radiation pattern control of phased-array radars for rocket tracking systems.

    PubMed

    Silva, Leonardo W T; Barros, Vitor F; Silva, Sandro G

    2014-08-18

    In launching operations, Rocket Tracking Systems (RTS) process the trajectory data obtained by radar sensors. In order to improve functionality and maintenance, radars can be upgraded by replacing antennas with parabolic reflectors (PRs) with phased arrays (PAs). These arrays enable the electronic control of the radiation pattern by adjusting the signal supplied to each radiating element. However, in projects of phased array radars (PARs), the modeling of the problem is subject to various combinations of excitation signals producing a complex optimization problem. In this case, it is possible to calculate the problem solutions with optimization methods such as genetic algorithms (GAs). For this, the Genetic Algorithm with Maximum-Minimum Crossover (GA-MMC) method was developed to control the radiation pattern of PAs. The GA-MMC uses a reconfigurable algorithm with multiple objectives, differentiated coding and a new crossover genetic operator. This operator has a different approach from the conventional one, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance the genetic diversity. Thus, GA-MMC was successful in more than 90% of the tests for each application, increased the fitness of the final population by more than 20% and reduced the premature convergence.

  18. Genetic Algorithm for Optimization: Preprocessor and Algorithm

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam A.

    2006-01-01

    A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems - unconstrained or constrained - uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known/determined a priori for all problems. Depending on the problem at hand, these parameters need to be decided such that the resulting GA performs the best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is the best for the problem. We also stress the need for such a preprocessor both for quality (error) and for cost (complexity) to produce the solution. The preprocessor includes, as its first step, making use of all the information such as that of nature/character of the function/system, search space, physical/laboratory experimentation (if already done/available), and the physical environment. It also includes the information that can be generated through any means - deterministic/nondeterministic/graphics. Instead of attempting a solution of the problem straightway through a GA without having/using the information/knowledge of the character of the system, we would do consciously a much better job of producing a solution by using the information generated/created in the very first step of the preprocessor. We, therefore, unstintingly advocate the use of a preprocessor to solve a real-world optimization problem including NP-complete ones before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.

  19. PSimScan: Algorithm and Utility for Fast Protein Similarity Search

    PubMed Central

    Kaznadzey, Anna; Alexandrova, Natalia; Novichkov, Vladimir; Kaznadzey, Denis

    2013-01-01

    In the era of metagenomics and diagnostics sequencing, the importance of protein comparison methods of boosted performance cannot be overstated. Here we present PSimScan (Protein Similarity Scanner), a flexible open source protein similarity search tool which provides a significant gain in speed compared to BLASTP at the price of controlled sensitivity loss. The PSimScan algorithm introduces a number of novel performance optimization methods that can be further used by the community to improve the speed and lower hardware requirements of bioinformatics software. The optimization starts at the lookup table construction, then the initial lookup table–based hits are passed through a pipeline of filtering and aggregation routines of increasing computational complexity. The first step in this pipeline is a novel algorithm that builds and selects ‘similarity zones’ aggregated from neighboring matches on small arrays of adjacent diagonals. PSimScan performs 5 to 100 times faster than the standard NCBI BLASTP, depending on chosen parameters, and runs on commodity hardware. Its sensitivity and selectivity at the slowest settings are comparable to the NCBI BLASTP’s and decrease with the increase of speed, yet stay at the levels reasonable for many tasks. PSimScan is most advantageous when used on large collections of query sequences. Comparing the entire proteome of Streptococcus pneumoniae (2,042 proteins) to the NCBI’s non-redundant protein database of 16,971,855 records takes 6.5 hours on a moderately powerful PC, while the same task with the NCBI BLASTP takes over 66 hours. We describe innovations in the PSimScan algorithm in considerable detail to encourage bioinformaticians to improve on the tool and to use the innovations in their own software development. PMID:23505522

  20. Application of GA in optimization of pore network models generated by multi-cellular growth algorithms

    NASA Astrophysics Data System (ADS)

    Jamshidi, Saeid; Boozarjomehry, Ramin Bozorgmehry; Pishvaie, Mahmoud Reza

    2009-10-01

    In pore network modeling, the void space of a rock sample is represented at the microscopic scale by a network of pores connected by throats. Construction of a reasonable representation of the geometry and topology of the pore space will lead to a reliable prediction of the properties of porous media. Recently, the theory of multi-cellular growth (or L-systems) has been used as a flexible tool for generation of pore network models which do not require any special information such as 2D SEM or 3D pore space images. In general, the networks generated by this method are irregular pore network models which are inherently closer to the complicated nature of the porous media rather than regular lattice networks. In this approach, the construction process is controlled only by the production rules that govern the development process of the network. In this study, genetic algorithm has been used to obtain the optimum values of the uncertain parameters of these production rules to build an appropriate irregular lattice network capable of the prediction of both static and hydraulic information of the target porous medium.

  1. Earliest Life on Earth Preserved in Hotspring Deposits: Evidence from the 3.5 Ga Dresser Formation, Pilbara Craton, Australia, and Implications for the Search for Life on Mars

    NASA Astrophysics Data System (ADS)

    Van Kranendonk, M. J.; Djokic, T.; Campbell, K. A.; Walter, M. R.; Oto, T.; Nakamura, E.

    2016-05-01

    A variety of biosignatures preserved in hotspring facies from the c. 3.5 Ga Dresser Formation, Australia, lends support to an origin of life in terrestrial hotsprings, and have profound implications for the search for life on Mars.

  2. A generating set direct search augmented Lagrangian algorithm for optimization with a combination of general and linear constraints.

    SciTech Connect

    Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson

    2006-08-01

    We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
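
    One common form of the augmented Lagrangian minimised in such subproblems, written here for equality constraints c(x) = 0 with multiplier estimates λ and penalty parameter μ (a generic sketch; the exact formulation of the cited report may differ), is

        L_A(x; \lambda, \mu) = f(x) - \lambda^{\top} c(x) + \frac{1}{2\mu}\,\lVert c(x) \rVert_2^2 ,

    with the linear constraints on x handled explicitly by the generating set search applied to each subproblem.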

  3. Genetic algorithms for modelling and optimisation

    NASA Astrophysics Data System (ADS)

    McCall, John

    2005-12-01

    Genetic algorithms (GAs) are a heuristic search and optimisation technique inspired by natural evolution. They have been successfully applied to a wide range of real-world problems of significant complexity. This paper is intended as an introduction to GAs aimed at immunologists and mathematicians interested in immunology. We describe how to construct a GA and the main strands of GA theory before speculatively identifying possible applications of GAs to the study of immunology. An illustrative example of using a GA for a medical optimal control problem is provided. The paper also includes a brief account of the related area of artificial immune systems.
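
    A minimal GA sketch matching the ingredients described (selection, crossover, mutation) is shown below; the parameters and the toy objective are illustrative, not drawn from the paper.

        import random

        def genetic_algorithm(f, dim, pop_size=30, gens=100, cx_rate=0.9, mut_rate=0.1,
                              lo=-5.0, hi=5.0):
            # Minimise f over [lo, hi]^dim with a simple real-coded GA.
            pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]

            def tournament():
                return min(random.sample(pop, 3), key=f)          # 3-way tournament selection

            for _ in range(gens):
                children = []
                while len(children) < pop_size:
                    a, b = tournament(), tournament()
                    if random.random() < cx_rate:                 # uniform crossover
                        a = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
                    a = [min(max(x + random.gauss(0, 0.1), lo), hi) if random.random() < mut_rate
                         else x for x in a]                       # Gaussian mutation
                    children.append(a)
                pop = children
            return min(pop, key=f)

        if __name__ == "__main__":
            print(genetic_algorithm(lambda x: sum(v * v for v in x), dim=5))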

  4. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    PubMed

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic feature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions on local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of simulation showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127
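
    A hedged sketch of the simulated-annealing acceptance rule that SASOS layers on top of SOS is shown below, applied to a toy task-to-VM assignment with makespan as the cost. The neighbourhood move, cooling schedule, and data are assumptions, not the paper's setup.

        import math
        import random

        def makespan(assignment, task_len, vm_speed):
            # Completion time of the busiest VM for a task -> VM assignment.
            load = [0.0] * len(vm_speed)
            for task, vm in enumerate(assignment):
                load[vm] += task_len[task] / vm_speed[vm]
            return max(load)

        def sa_refine(assignment, task_len, vm_speed, t0=10.0, cooling=0.95, steps=2000):
            # Simulated-annealing local refinement: move one task to a random VM and
            # accept worse schedules with probability exp(-delta / T).
            current, best = list(assignment), list(assignment)
            t = t0
            for _ in range(steps):
                cand = list(current)
                cand[random.randrange(len(cand))] = random.randrange(len(vm_speed))
                delta = makespan(cand, task_len, vm_speed) - makespan(current, task_len, vm_speed)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current = cand
                    if makespan(current, task_len, vm_speed) < makespan(best, task_len, vm_speed):
                        best = current
                t *= cooling
            return best

        if __name__ == "__main__":
            lengths = [random.uniform(10, 100) for _ in range(20)]   # toy task lengths
            speeds = [1.0, 1.5, 2.0]                                 # toy VM speeds
            start = [random.randrange(len(speeds)) for _ in lengths]
            refined = sa_refine(start, lengths, speeds)
            print(makespan(start, lengths, speeds), "->", makespan(refined, lengths, speeds))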

  5. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment.

    PubMed

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic feature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions on local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of simulation showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan.

  6. Hybrid Symbiotic Organisms Search Optimization Algorithm for Scheduling of Tasks on Cloud Computing Environment

    PubMed Central

    Abdullahi, Mohammed; Ngadi, Md Asri

    2016-01-01

    Cloud computing has attracted significant attention from the research community because of the rapid migration rate of Information Technology services to its domain. Advances in virtualization technology have made cloud computing very popular as a result of easier deployment of application services. Tasks are submitted to cloud datacenters to be processed in a pay-as-you-go fashion. Task scheduling is one of the significant research challenges in the cloud computing environment. The current formulation of task scheduling problems has been shown to be NP-complete, hence finding the exact solution especially for large problem sizes is intractable. The heterogeneous and dynamic feature of cloud resources makes optimum task scheduling non-trivial. Therefore, efficient task scheduling algorithms are required for optimum resource utilization. Symbiotic Organisms Search (SOS) has been shown to perform competitively with Particle Swarm Optimization (PSO). The aim of this study is to optimize task scheduling in the cloud computing environment based on a proposed Simulated Annealing (SA) based SOS (SASOS) in order to improve the convergence rate and quality of solution of SOS. The SOS algorithm has a strong global exploration capability and uses fewer parameters. The systematic reasoning ability of SA is employed to find better solutions on local solution regions, hence adding exploration ability to SOS. Also, a fitness function is proposed which takes into account the utilization level of virtual machines (VMs), which reduces makespan and the degree of imbalance among VMs. The CloudSim toolkit was used to evaluate the efficiency of the proposed method using both synthetic and standard workloads. Results of simulation showed that the hybrid SOS performs better than SOS in terms of convergence speed, response time, degree of imbalance, and makespan. PMID:27348127

  7. [Hyperspectral estimation method for soil alkali hydrolysable nitrogen content based on discrete wavelet transform and genetic algorithm combined with partial least squares (DWT-GA-PLS)].

    PubMed

    Chen, Hong-Yan; Zhao, Geng-Xing; Li, Xi-Can; Wang, Xiang-Feng; Li, Yu-Ling

    2013-11-01

    Taking the Qihe County in Shandong Province of East China as the study area, soil samples were collected from the field, and based on the hyperspectral reflectance measurement of the soil samples and the first derivative transformation, the spectra were denoised and compressed by discrete wavelet transform (DWT), the variables for the soil alkali hydrolysable nitrogen quantitative estimation models were selected by genetic algorithms (GA), and the estimation models for the soil alkali hydrolysable nitrogen content were built by using partial least squares (PLS) regression. The discrete wavelet transform and genetic algorithm combined with partial least squares (DWT-GA-PLS) could not only compress the spectrum variables and reduce the model variables, but also improve the quantitative estimation accuracy of soil alkali hydrolysable nitrogen content. Based on the level 1-2 low frequency coefficients of the discrete wavelet transform, and under a large-scale reduction of spectrum variables, the calibration models could achieve higher or the same prediction accuracy as the full soil spectra. The model based on the second level low frequency coefficients had the highest precision, with a prediction R2 of 0.85, an RMSE of 8.11 mg x kg(-1), and an RPD of 2.53, indicating the effectiveness of the DWT-GA-PLS method in estimating soil alkali hydrolysable nitrogen content.

  8. A constraint-based search algorithm for parameter identification of environmental models

    NASA Astrophysics Data System (ADS)

    Gharari, S.; Shafiei, M.; Hrachowitz, M.; Kumar, R.; Fenicia, F.; Gupta, H. V.; Savenije, H. H. G.

    2014-12-01

    Many environmental systems models, such as conceptual rainfall-runoff models, rely on model calibration for parameter identification. For this, an observed output time series (such as runoff) is needed, but frequently not available (e.g., when making predictions in ungauged basins). In this study, we provide an alternative approach for parameter identification using constraints based on two types of restrictions derived from prior (or expert) knowledge. The first, called parameter constraints, restricts the solution space based on realistic relationships that must hold between the different model parameters while the second, called process constraints requires that additional realism relationships between the fluxes and state variables must be satisfied. Specifically, we propose a search algorithm for finding parameter sets that simultaneously satisfy such constraints, based on stepwise sampling of the parameter space. Such parameter sets have the desirable property of being consistent with the modeler's intuition of how the catchment functions, and can (if necessary) serve as prior information for further investigations by reducing the prior uncertainties associated with both calibration and prediction.
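
    A minimal sketch of the constrained-sampling idea: draw parameter sets at random and keep only those satisfying prior-knowledge parameter constraints. The parameter names, bounds, and the constraint below are invented for illustration, not the paper's actual model.

        import random

        def sample_constrained(n_keep, bounds, constraints, max_tries=100000):
            # Rejection sampling of parameter sets satisfying all expert-knowledge constraints.
            # bounds: {name: (low, high)}, constraints: list of predicates over a parameter dict.
            kept = []
            for _ in range(max_tries):
                params = {name: random.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
                if all(c(params) for c in constraints):
                    kept.append(params)
                    if len(kept) == n_keep:
                        break
            return kept

        if __name__ == "__main__":
            bounds = {"k_fast": (0.0, 1.0), "k_slow": (0.0, 1.0), "s_max": (10.0, 500.0)}
            constraints = [
                lambda p: p["k_fast"] > p["k_slow"],   # fast reservoir drains quicker than slow one
            ]
            sets = sample_constrained(5, bounds, constraints)
            print(len(sets), "feasible parameter sets, e.g.", sets[0])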

  9. Gravitational search algorithm based tuning of a PI speed controller for an induction motor drive

    NASA Astrophysics Data System (ADS)

    Abd Ali, Jamal; Hannan, M. A.; Mohamed, Azah

    2016-03-01

    The proportional-integral (PI) controller is very useful for controlling speed and mechanical load variables in three-phase induction motor (TIM) operation. However, the conventional PI-controller has a very exhaustive trial and error procedure for obtaining its parameters. In this paper, the PI speed controller has been improved in its design technique to suit the TIM by utilizing a gravitational search algorithm (GSA) optimization technique. The mean absolute error (MAE) of the speed response has been used as an objective function. An optimal GSA based PI speed controller (GSA-PI) objective function is also employed to tune and minimize the MAE for developing the performance of the TIM in terms of changes in speed and mechanical load. This experiment uses the space vector pulse width modulation (SVPWM) technique to create pulse width modulation for the switching devices of a three-phase bridge inverter. Results obtained from the GSA-PI speed controller are compared with those obtained through particle swarm optimization (PSO) to validate the developed controller. Then it has been proved that the robustness of the GSA-PI speed controller is far better than that of the PSO controller in all tested cases in terms of damping capability and transient response under different mechanical loads and speeds.
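
    A hedged sketch of the MAE objective used for tuning: simulate a simple speed loop under a PI controller and return the mean absolute speed error. The first-order plant, gains, and step reference below are toy assumptions, not the TIM drive of the paper; a GSA (or PSO) tuner would minimise this value over (kp, ki).

        def mae_of_pi(kp, ki, ref=100.0, sim_time=2.0, dt=1e-3, gain=5.0, tau=0.2):
            # Mean absolute error of a PI speed loop around a first-order plant
            # d(speed)/dt = (gain * u - speed) / tau, integrated with forward Euler.
            speed, integral, err_sum, steps = 0.0, 0.0, 0.0, int(sim_time / dt)
            for _ in range(steps):
                error = ref - speed
                integral += error * dt
                u = kp * error + ki * integral          # PI control law
                speed += dt * (gain * u - speed) / tau
                err_sum += abs(error)
            return err_sum / steps

        if __name__ == "__main__":
            # Crude comparison of two gain pairs.
            for kp, ki in [(0.5, 1.0), (2.0, 5.0)]:
                print(kp, ki, round(mae_of_pi(kp, ki), 3))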

  10. The Scatter Search Based Algorithm to Revenue Management Problem in Broadcasting Companies

    NASA Astrophysics Data System (ADS)

    Pishdad, Arezoo; Sharifyazdi, Mehdi; Karimpour, Reza

    2009-09-01

    The problem in question in this paper, which is faced by broadcasting companies, is how to benefit from a limited advertising space. This problem is due to the stochastic behavior of customers (advertisers) in different fare classes. To address this issue we propose a mathematical constrained nonlinear multi-period model which incorporates cancellation and overbooking. The objective function is to maximize the total expected revenue, and our numerical method does so by determining the sales limits for each class of customer, which constitute the revenue management control policy. Scheduling the advertising spots in breaks is another area of concern and we consider it as a constraint in our model. In this paper an algorithm based on Scatter search is developed to acquire a good feasible solution. This method uses simulation of customer arrivals over a continuous finite time horizon [0, T]. Several sensitivity analyses are conducted in the computational results to depict the effectiveness of the proposed method. They also provide insight into the better results of considering revenue management (control policy) compared to a "no sales limit" policy in which earlier demand is served first.

  11. FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm

    PubMed Central

    Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; Zhang, Yuanyuan; Liu, Zhaowen

    2016-01-01

    Motivation: The two-locus model is a typical significant disease model to be identified in genome-wide association study (GWAS). Due to intensive computational burden and diversity of disease models, existing methods have drawbacks on low detection power, high computation cost, and preference for some types of disease models. Method: In this study, two scoring functions (Bayesian network based K2-score and Gini-score) are used for characterizing two SNP loci as a candidate model; the two criteria are adopted simultaneously for improving identification power and tackling the preference problem to disease models. The harmony search algorithm (HSA) is improved to quickly find the most likely candidate models among all two-locus models, in which a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating some disease models that have a strong marginal effect. Finally, the G-test statistic is used to further test the candidate models. Results: We investigate our method named FHSA-SED on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) which have been developed recently based on swarm intelligent search algorithms. The results of simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV) and accuracy (ACC). Our method has identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset. PMID:27014873

  12. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this

  13. In search of the noble gas 3.52 Ga atmospheric signatures

    NASA Astrophysics Data System (ADS)

    Pujol, M.; Marty, B.; Philippot, P.

    2008-12-01

    The isotopic signatures of noble gases in the Present-day mantle and in the atmosphere permit exceptional insight into the evolution of these reservoirs through time ([1]). However, related exchange models are under-constrained and would require direct measurements of the atmospheric composition long ago, e.g., in the Archaean. Drilling in the 3.52 Ga chert-barite ([2]) of the Dresser Formation (Pilbara Drilling Project), North Pole, Pilbara craton (Western Australia), led to the recovery of exceptionally fresh samples preserving primary fluid inclusions unaffected by surface weathering. The whole formation is considered to be an already established basin when hydrothermal processes started. The chemical composition of primary fluid inclusions trapped in hydrothermal quartz from vacuolar komatiitic basalt from 110 m depth was determined by synchrotron X-ray microfluorescence (ESRF, Grenoble, France). Data show that fluids are relatively homogenous, consisting of a Ba-rich fluid and a Fe (+Ba)-rich fluid of hydrothermal origin as concluded by Foriel et al. ([3]). The isotopic compositions of xenon and argon trapped in these fluids were measured by mass spectrometry following vacuum crushing. The three argon isotopes show a homogeneous signature quite different from the present-day Earth atmosphere but we cannot exclude the possibility that secondary nuclear reactions produced these anomalies. Despite this, the Xe isotopic trends indicate a less radiogenic signature than the Present-day atmosphere, and probably represent a remnant of the Archaean atmosphere. If this xenon composition is primitive then it implies that there is no cosmogenic production through time. However, argon ratios could be explained by cosmogenic production which implies significant surface exposure times. Cosmogenic production will produce correlated argon and xenon isotope signatures. Therefore it is necessary to differentiate primary from secondary composition. To investigate the effects of these

  14. Genetic algorithms in adaptive fuzzy control

    NASA Technical Reports Server (NTRS)

    Karr, C. Lucas; Harper, Tony R.

    1992-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust fuzzy membership functions in response to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific computer-simulated chemical system is used to demonstrate the ideas presented.

  15. MT's algorithm: A new algorithm to search for the optimum set of modulation indices for simultaneous range, command, and telemetry

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien Manh

    1989-01-01

    MT's algorithm was developed as an aid in the design of space telecommunications systems when utilized with simultaneous range/command/telemetry operations. This algorithm provides selection of modulation indices for: (1) suppression of undesired signals to achieve desired link performance margins and/or to allow for a specified performance degradation in the data channel (command/telemetry) due to the presence of undesired signals (interferers); and (2) optimum power division between the carrier, the range, and the data channel. A software program using this algorithm was developed for use with MathCAD software. This software program, called the MT program, provides the computation of optimum modulation indices for all possible cases that are recommended by the Consultative Committee on Space Data System (CCSDS) (with emphasis on the squarewave, NASA/JPL ranging system).

  16. EEG/ERP adaptive noise canceller design with controlled search space (CSS) approach in cuckoo and other optimization algorithms.

    PubMed

    Ahirwal, M K; Kumar, Anil; Singh, G K

    2013-01-01

    This paper explores the migration of adaptive filtering with swarm intelligence/evolutionary techniques employed in the field of electroencephalogram/event-related potential noise cancellation or extraction. A new approach is proposed in the form of controlled search space to stabilize the randomness of swarm intelligence techniques especially for the EEG signal. Swarm-based algorithms such as Particle Swarm Optimization, Artificial Bee Colony, and Cuckoo Optimization Algorithm with their variants are implemented to design an optimized adaptive noise canceler. The proposed controlled search space technique is tested on each of the swarm intelligence techniques and is found to be more accurate and powerful. Adaptive noise cancelers with traditional algorithms such as the least-mean-square, normalized least-mean-square, and recursive least-mean-square algorithms are also implemented to compare the results. ERP signals such as simulated visual evoked potential, real visual evoked potential, and real sensorimotor evoked potential are used, due to their physiological importance in various EEG studies. Average computational time and shape measures of evolutionary techniques are observed to be 8.21E-01 sec and 1.73E-01, respectively. Though traditional algorithms have negligible time consumption, they are unable to offer good shape preservation of ERP, with an average computational time and shape measure of 1.41E-02 sec and 2.60E+00, respectively.
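
    For reference, the classic LMS adaptive noise canceller that the paper compares against (not the controlled-search-space swarm variant) is sketched below on synthetic signals.

        import math
        import random

        def lms_cancel(primary, reference, taps=8, mu=0.01):
            # LMS adaptive noise canceller: primary = signal + noise, reference = correlated noise.
            # Returns the error signal, which approximates the clean signal as the filter adapts.
            w = [0.0] * taps
            buf = [0.0] * taps
            out = []
            for d, x in zip(primary, reference):
                buf = [x] + buf[:-1]                          # shift reference into the tap line
                y = sum(wi * xi for wi, xi in zip(w, buf))    # filter output (noise estimate)
                e = d - y                                     # error = cleaned sample
                w = [wi + mu * e * xi for wi, xi in zip(w, buf)]
                out.append(e)
            return out

        if __name__ == "__main__":
            n = 2000
            noise = [random.gauss(0, 1) for _ in range(n)]
            signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
            primary = [s + 0.8 * v for s, v in zip(signal, noise)]   # noisy observation
            cleaned = lms_cancel(primary, noise)
            print("residual power:", sum((c - s) ** 2 for c, s in zip(cleaned, signal)) / n)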

  17. Parallel and Preemptable Dynamically Dimensioned Search Algorithms for Single and Multi-objective Optimization in Water Resources

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.

    2015-12-01

    We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms differ from most existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early (for example, prior to completely simulating the model calibration period) when intermediate results indicate the candidate solution is so poor that it will definitely have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems from multi-core desktops to a supercomputer system) and package these for future modellers within the model-independent calibration software package Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance
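
    The asynchronous parallel versions and model pre-emption are the paper's contributions and are not reproduced here; a hedged sketch of a serial DDS step (each iteration perturbs a decreasing random subset of decision variables around the current best, accepted greedily) is shown below with assumed settings.

        import math
        import random

        def dds(f, lo, hi, max_iter=1000, r=0.2, seed=1):
            # Dynamically Dimensioned Search (serial, greedy) for minimisation on box bounds.
            random.seed(seed)
            dim = len(lo)
            best = [random.uniform(l, h) for l, h in zip(lo, hi)]
            best_val = f(best)
            for i in range(1, max_iter + 1):
                p = 1.0 - math.log(i) / math.log(max_iter)       # probability of perturbing each dim
                dims = [j for j in range(dim) if random.random() < p] or [random.randrange(dim)]
                cand = list(best)
                for j in dims:
                    x = cand[j] + random.gauss(0, 1) * r * (hi[j] - lo[j])
                    if x < lo[j]:                                 # reflect at the bounds
                        x = lo[j] + (lo[j] - x)
                    if x > hi[j]:
                        x = hi[j] - (x - hi[j])
                    cand[j] = min(max(x, lo[j]), hi[j])
                val = f(cand)
                if val <= best_val:                               # greedy acceptance
                    best, best_val = cand, val
            return best, best_val

        if __name__ == "__main__":
            print(dds(lambda x: sum(v * v for v in x), lo=[-10] * 6, hi=[10] * 6))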

  18. Cooperative search and rescue with artificial fishes based on fish-swarm algorithm for underwater wireless sensor networks.

    PubMed

    Zhao, Wei; Tang, Zhenmin; Yang, Yuwang; Wang, Lei; Lan, Shaohua

    2014-01-01

    This paper presents a searching control approach for cooperating mobile sensor networks. We use a density function to represent the frequency of distress signals issued by victims. The mobile nodes' movement in the mission space is similar to the behavior of a fish swarm in water. So, we take the mobile node as an artificial fish node and define its operations by a probabilistic model over a limited range. A fish-swarm based algorithm is designed requiring local information at each fish node and maximizing the joint detection probabilities of distress signals. Optimization of formation is also considered for the searching control approach and is optimized by the fish-swarm algorithm. Simulation results include two schemes: preset route and random walks, and it is shown that the control scheme has adaptive and effective properties. PMID:24741341

  19. Cooperative search and rescue with artificial fishes based on fish-swarm algorithm for underwater wireless sensor networks.

    PubMed

    Zhao, Wei; Tang, Zhenmin; Yang, Yuwang; Wang, Lei; Lan, Shaohua

    2014-01-01

    This paper presents a searching control approach for cooperating mobile sensor networks. We use a density function to represent the frequency of distress signals issued by victims. The mobile nodes' movement in the mission space is similar to the behavior of a fish swarm in water. So, we take the mobile node as an artificial fish node and define its operations by a probabilistic model over a limited range. A fish-swarm based algorithm is designed requiring local information at each fish node and maximizing the joint detection probabilities of distress signals. Optimization of formation is also considered for the searching control approach and is optimized by the fish-swarm algorithm. Simulation results include two schemes: preset route and random walks, and it is shown that the control scheme has adaptive and effective properties.

  20. Cooperative Search and Rescue with Artificial Fishes Based on Fish-Swarm Algorithm for Underwater Wireless Sensor Networks

    PubMed Central

    Zhao, Wei; Tang, Zhenmin; Yang, Yuwang; Wang, Lei; Lan, Shaohua

    2014-01-01

    This paper presents a searching control approach for cooperating mobile sensor networks. We use a density function to represent the frequency of distress signals issued by victims. The mobile nodes' movement in the mission space is similar to the behavior of a fish swarm in water. So, we take the mobile node as an artificial fish node and define its operations by a probabilistic model over a limited range. A fish-swarm based algorithm is designed requiring local information at each fish node and maximizing the joint detection probabilities of distress signals. Optimization of formation is also considered for the searching control approach and is optimized by the fish-swarm algorithm. Simulation results include two schemes: preset route and random walks, and it is shown that the control scheme has adaptive and effective properties. PMID:24741341

  1. Traveling front solutions to directed diffusion-limited aggregation, digital search trees, and the Lempel-Ziv data compression algorithm

    NASA Astrophysics Data System (ADS)

    Majumdar, Satya N.

    2003-08-01

    We use the traveling front approach to derive exact asymptotic results for the statistics of the number of particles in a class of directed diffusion-limited aggregation models on a Cayley tree. We point out that some aspects of these models are closely connected to two different problems in computer science, namely, the digital search tree problem in data structures and the Lempel-Ziv algorithm for data compression. The statistics of the number of particles studied here is related to the statistics of height in digital search trees which, in turn, is related to the statistics of the length of the longest word formed by the Lempel-Ziv algorithm. Implications of our results to these computer science problems are pointed out.

  2. Optimization of Spherical Roller Bearing Design Using Artificial Bee Colony Algorithm and Grid Search Method

    NASA Astrophysics Data System (ADS)

    Tiwari, Rajiv; Waghole, Vikas

    2015-07-01

    Bearing standards impose restrictions on the internal geometry of spherical roller bearings. Geometrical and strength constraint conditions have been formulated for the optimization of bearing design. Long fatigue life is one of the most important criteria in the optimum design of a bearing. The life is directly proportional to the dynamic capacity; hence, the objective function has been chosen as the maximization of dynamic capacity. The effects of speed and static loads acting on the bearing are also taken into account. Design variables for the bearing include five geometrical parameters: the roller diameter, the roller length, the bearing pitch diameter, the number of rollers, and the contact angle. There are a few design constraint parameters which are also included in the optimization, the bounds of which are obtained by initial runs of the optimization. The optimization program is made to run for different values of these design constraint parameters and a range of the parameters is obtained for which the objective function has a higher value. The artificial bee colony algorithm (ABCA) has been used to solve the constrained optimization problem and the optimum design is compared with the one obtained from the grid search method (GSM), both operating independently. Both the ABCA and the GSM have finally been combined to reach the global optimum point. A constraint violation study has also been carried out to give priority to the constraint with the greater possibility of violation. Optimized bearing designs show better performance parameters than those specified in bearing catalogs. A sensitivity analysis of bearing parameters has also been carried out to see the effect of manufacturing tolerance on the objective function.

  3. Design of Content Based Image Retrieval Scheme for Diabetic Retinopathy Images using Harmony Search Algorithm.

    PubMed

    Sivakamasundari, J; Natarajan, V

    2015-01-01

    Diabetic Retinopathy (DR) is a disorder that affects the structure of retinal blood vessels due to long-standing diabetes mellitus. Automated segmentation of blood vessels is vital for periodic screening and timely diagnosis. An attempt has been made to generate continuous retinal vasculature for the design of a Content Based Image Retrieval (CBIR) application. The typical normal and abnormal retinal images are preprocessed to improve the vessel contrast. The blood vessels are segmented using the evolutionary Harmony Search Algorithm (HSA) combined with the Otsu Multilevel Thresholding (MLT) method using the best objective functions. The segmentation results are validated with corresponding ground truth images using binary similarity measures. The statistical, textural and structural features are obtained from the segmented images of normal and DR affected retina and are analyzed. CBIR systems in medical image retrieval applications are used to assist physicians in clinical decision-support techniques and research fields. A CBIR system is developed using the HSA based Otsu MLT segmentation technique and the features obtained from the segmented images. Similarity matching is carried out between the features of query and database images using the Euclidean Distance measure. Similar images are ranked and retrieved. The retrieval performance of the CBIR system is evaluated in terms of precision and recall. The CBIR systems developed using HSA based Otsu MLT and conventional Otsu MLT methods are compared. The precision and recall are found to be 96% and 58% for the CBIR system using HSA based Otsu MLT segmentation. This automated CBIR system could be recommended for use in computer-assisted diagnosis for diabetic retinopathy screening.
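
    A hedged sketch of the similarity-matching and ranking step described above: Euclidean distance between a query feature vector and database feature vectors, with the closest matches returned. The feature vectors are random placeholders, not retinal-image features.

        import math
        import random

        def euclidean(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        def retrieve(query, database, top_k=5):
            # Rank database images by Euclidean distance to the query feature vector.
            # database: {image_id: feature_vector}. Returns the top_k closest image ids.
            ranked = sorted(database.items(), key=lambda item: euclidean(query, item[1]))
            return [image_id for image_id, _ in ranked[:top_k]]

        if __name__ == "__main__":
            random.seed(0)
            db = {f"img_{i:03d}": [random.random() for _ in range(16)] for i in range(100)}
            q = [random.random() for _ in range(16)]
            print(retrieve(q, db))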

  4. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    NASA Astrophysics Data System (ADS)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to an approximate 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time

  5. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, the PGATS algorithm, based on the toy off-lattice model, is presented for dealing with three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be recast as a global optimization problem with many extrema and many parameters; this is the theoretical basis of the hybrid optimization algorithm proposed in this paper. The algorithm combines local and global search, which overcomes the shortcomings of any single algorithm and exploits the advantages of each. The method is validated on the widely used standard Fibonacci sequences and on real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein sequence energy values, which shows it to be an effective way to predict the structure of proteins. PMID:25069136

  6. The Hybrid Feature Selection Algorithm Based on Maximum Minimum Backward Selection Search Strategy for Liver Tissue Pathological Image Classification

    PubMed Central

    2016-01-01

    We propose a novel feature selection algorithm for liver tissue pathological image classification. To improve the efficiency of feature selection, features whose values are identical for positive and negative samples are removed in a rough selection stage. To obtain the optimal feature subset, a new heuristic search algorithm, called Maximum Minimum Backward Selection (MMBS), is proposed for the precise selection stage. The MMBS search strategy has the following advantages. (1) To address the deficiency of the Discernibility of Feature Subsets (DFS) evaluation criterion, which renders the small-sample class ineffective for unbalanced samples, the Weighted Discernibility of Feature Subsets (WDFS) criterion is proposed as the evaluation strategy of MMBS; it remains applicable to unbalanced samples. (2) To address the deficiency of Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS), which can only add or only delete features, MMBS first decides whether to add each feature to the feature subset according to the WDFS criterion, and then decides whether to remove features from the subset according to the SBS algorithm. In this way, a better feature subset can be obtained. The experimental results show that the proposed hybrid feature selection algorithm gives good classification performance for liver tissue pathological images. PMID:27563344

  7. Searching for Repeats, as an Example of Using the Generalized Ruzzo-Tompa Algorithm to Find Optimal Subsequences with Gaps

    PubMed Central

    Mariño-Ramírez, Leonardo; Sheetlin, Sergey L.

    2014-01-01

    Background Some biological sequences contain subsequences of unusual composition, e.g., some proteins contain DNA binding domains, transmembrane regions, and charged regions; and some DNA sequences contain repeats. Requiring time linear in the length of an input sequence, the Ruzzo-Tompa (RT) Algorithm finds subsequences of unusual composition, using a sequence of scores as input and the corresponding “maximal segments” as output. (Loosely, maximal segments are the contiguous subsequences having greatest total score.) Just as gaps improved the sensitivity of BLAST, in principle gaps could help tune other tools, to improve sensitivity when searching for subsequences of unusual composition. Results Call a graph whose vertices are totally ordered a “totally ordered graph”. In a totally ordered graph, call a path whose vertices are in increasing order an “increasing path”. The input of the RT Algorithm can be generalized to a finite, totally ordered, weighted graph, so the algorithm then locates maximal segments, corresponding to increasing paths of maximal weight. The generalization permits penalized deletion of unfavorable letters from contiguous subsequences, so the generalized Ruzzo-Tompa algorithm can find subsequences with greatest total gapped scores. The search for inexact simple repeats in DNA exemplifies some of the concepts. For some limited types of repeats, RepWords, a repeat-finding tool based on the principled use of the Ruzzo-Tompa algorithm, performed better than a similar extant tool. Conclusions With minimal programming effort, the generalization of the Ruzzo-Tompa algorithm given in this article could improve the performance of many programs for finding biological subsequences of unusual composition. PMID:24989859
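
    For readers unfamiliar with the ungapped version, the sketch below is a minimal implementation of the classic linear-time Ruzzo-Tompa procedure, which returns all maximal-scoring disjoint subsequences of a score sequence; the gapped, graph-based generalization described in the article is not reproduced here.

```python
def ruzzo_tompa(scores):
    """Return all maximal-scoring subsequences of `scores` as (start, end) pairs
    (end exclusive), following the classic linear-time Ruzzo-Tompa procedure."""
    segments = []   # each entry: [start, end, L, R] with L/R = cumulative sums at the ends
    total = 0.0
    for i, s in enumerate(scores):
        if s <= 0:
            total += s
            continue
        # Every positive score starts a new candidate segment.
        cand = [i, i + 1, total, total + s]
        total += s
        while True:
            # Find the rightmost stored segment whose left cumulative score is
            # smaller than the candidate's.
            j = len(segments) - 1
            while j >= 0 and segments[j][2] >= cand[2]:
                j -= 1
            if j < 0 or segments[j][3] >= cand[3]:
                # No such segment, or it already dominates: keep the candidate as is.
                segments.append(cand)
                break
            # Otherwise merge: the candidate absorbs segment j and everything after it.
            cand = [segments[j][0], cand[1], segments[j][2], cand[3]]
            del segments[j:]
    return [(seg[0], seg[1]) for seg in segments]

# Example score sequence: the maximal segments are [4], [3] and the run from index 4 to 10.
print(ruzzo_tompa([4, -5, 3, -3, 1, 2, -2, 2, -2, 1, 5]))
```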

  8. A search for spin-polarized photoemission from GaAs using light with orbital angular momentum

    SciTech Connect

    Nathan Clayburn, James McCarter, Joan Dreiling, Bernard Poelker, Dominic Ryan, Timothy Gay

    2013-01-01

    Laser light with photon energy near the bandgap of GaAs and with different amounts of orbital angular momentum was used to produce photoemission from unstrained GaAs. The degree of electron spin polarization was measured using a micro-Mott polarimeter and found to be consistent with zero with an upper limit of ~3% for light with up to ±5ħ of orbital angular momentum. In contrast, the degree of spin polarization was 32.32 ± 1.35% using circularly-polarized laser light at the same wavelength, which is typical of bulk GaAs.

  9. ISPTM: an iterative search algorithm for systematic identification of post-translational modifications from complex proteome mixtures.

    PubMed

    Huang, Xin; Huang, Lin; Peng, Hong; Guru, Ashu; Xue, Weihua; Hong, Sang Yong; Liu, Miao; Sharma, Seema; Fu, Kai; Caprez, Adam P; Swanson, David R; Zhang, Zhixin; Ding, Shi-Jian

    2013-09-01

    Identifying protein post-translational modifications (PTMs) from tandem mass spectrometry data of complex proteome mixtures is a highly challenging task. Here we present a new strategy, named iterative search for identifying PTMs (ISPTM), for tackling this challenge. The ISPTM approach consists of a basic search with no variable modification, followed by iterative searches of many PTMs using a small number of them (usually two) in each search. The performance of the ISPTM approach was evaluated on mixtures of 70 synthetic peptides with known modifications, on an 18-protein standard mixture with unknown modifications and on real, complex biological samples of mouse nuclear matrix proteins with unknown modifications. ISPTM revealed that many chemical PTMs were introduced by urea and iodoacetamide during sample preparation and many biological PTMs, including dimethylation of arginine and lysine, were significantly activated by Adriamycin treatment in nuclear matrix associated proteins. ISPTM increased the MS/MS spectral identification rate substantially, displayed significantly better sensitivity for systematic PTM identification compared with that of the conventional all-in-one search approach, and offered PTM identification results that were complementary to InsPecT and MODa, both of which are established PTM identification algorithms. In summary, ISPTM is a new and powerful tool for unbiased identification of many different PTMs with high confidence from complex proteome mixtures.

  10. ISPTM: an Iterative Search Algorithm for Systematic Identification of Post-translational Modifications from Complex Proteome Mixtures

    PubMed Central

    Huang, Xin; Huang, Lin; Peng, Hong; Guru, Ashu; Xue, Weihua; Hong, Sang Yong; Liu, Miao; Sharma, Seema; Fu, Kai; Caprez, Adam; Swanson, David; Zhang, Zhixin; Ding, Shi-Jian

    2013-01-01

    Identifying protein post-translational modifications (PTMs) from tandem mass spectrometry data of complex proteome mixtures is a highly challenging task. Here we present a new strategy, named iterative search for identifying PTMs (ISPTM), for tackling this challenge. The ISPTM approach consists of a basic search with no variable modification, followed by iterative searches of many PTMs using a small number of them (usually two) in each search. The performance of the ISPTM approach was evaluated on mixtures of 70 synthetic peptides with known modifications, on an 18-protein standard mixture with unknown modifications and on real, complex biological samples of mouse nuclear matrix proteins with unknown modifications. ISPTM revealed that many chemical PTMs were introduced by urea and iodoacetamide during sample preparation and many biological PTMs, including dimethylation of arginine and lysine, were significantly activated by Adriamycin treatment in NM associated proteins. ISPTM increased the MS/MS spectral identification rate substantially, displayed significantly better sensitivity for systematic PTM identification than the conventional all-in-one search approach and offered PTM identification results that were complementary to InsPecT and MODa, both of which are established PTM identification algorithms. In summary, ISPTM is a new and powerful tool for unbiased identification of many different PTMs with high confidence from complex proteome mixtures. PMID:23919725

  11. Calculation of earthquake rupture histories using a hybrid global search algorithm: Application to the 1992 Landers, California, earthquake

    USGS Publications Warehouse

    Hartzell, S.; Liu, P.

    1996-01-01

    A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a grid work of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
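
    As a schematic of what such a hybrid global search can look like (this is an analogue, not the authors' code), the sketch below alternates simulated-annealing steps with occasional downhill-simplex (Nelder-Mead) refinements on a toy multimodal objective standing in for the waveform misfit.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    """Toy multimodal misfit standing in for the waveform misfit of a rupture model."""
    return np.sum(x**2) + 10.0 * np.sum(1.0 - np.cos(2.0 * np.pi * x))  # Rastrigin-like

def hybrid_sa_simplex(f, x0, n_iter=3000, temp0=10.0, step=0.5, refine_every=500, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    best_x, best_f = x.copy(), fx
    for k in range(1, n_iter + 1):
        temp = temp0 * (1.0 - k / n_iter) + 1e-6          # simple linear cooling schedule
        cand = x + rng.normal(scale=step, size=x.size)    # random perturbation of the model
        fc = f(cand)
        if fc < fx or rng.random() < np.exp((fx - fc) / temp):   # Metropolis acceptance
            x, fx = cand, fc
        if k % refine_every == 0:                          # periodic downhill-simplex polish
            res = minimize(f, x, method="Nelder-Mead")
            if res.fun < fx:
                x, fx = res.x, res.fun
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

print(hybrid_sa_simplex(objective, x0=[3.2, -2.7]))
```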

  12. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    PubMed

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, they underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with Recurrent Neural Network. Cuckoo Search is used to search the best combination of regulators. Moreover, Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process. PMID:26989410

  13. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    PubMed

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, they underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with Recurrent Neural Network. Cuckoo Search is used to search the best combination of regulators. Moreover, Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process.

  14. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm

    PubMed Central

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K.

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, they underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with Recurrent Neural Network. Cuckoo Search is used to search the best combination of regulators. Moreover, Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process. PMID:26989410

  15. Algorithms for Regular Tree Grammar Network Search and Their Application to Mining Human-viral Infection Patterns.

    PubMed

    Smoly, Ilan; Carmel, Amir; Shemer-Avni, Yonat; Yeger-Lotem, Esti; Ziv-Ukelson, Michal

    2016-03-01

    Network querying is a powerful approach to mine molecular interaction networks. Most state-of-the-art network querying tools either confine the search to a prespecified topology in the form of some template subnetwork, or do not specify any topological constraints at all. Another approach is grammar-based queries, which are more flexible and expressive as they allow for expressing the topology of the sought pattern according to some grammar-based logic. Previous grammar-based network querying tools were confined to the identification of paths. In this article, we extend the patterns identified by grammar-based query approaches from paths to trees. For this, we adopt a higher order query descriptor in the form of a regular tree grammar (RTG). We introduce a novel problem and propose an algorithm to search a given graph for the k highest scoring subgraphs matching a tree accepted by an RTG. Our algorithm is based on the combination of dynamic programming with color coding, and includes an extension of previous k-best parsing optimization approaches to avoid isomorphic trees in the output. We implement the new algorithm and exemplify its application to mining viral infection patterns within molecular interaction networks. Our code is available online. PMID:26953875

  16. Algorithms for Regular Tree Grammar Network Search and Their Application to Mining Human-viral Infection Patterns.

    PubMed

    Smoly, Ilan; Carmel, Amir; Shemer-Avni, Yonat; Yeger-Lotem, Esti; Ziv-Ukelson, Michal

    2016-03-01

    Network querying is a powerful approach to mine molecular interaction networks. Most state-of-the-art network querying tools either confine the search to a prespecified topology in the form of some template subnetwork, or do not specify any topological constraints at all. Another approach is grammar-based queries, which are more flexible and expressive as they allow for expressing the topology of the sought pattern according to some grammar-based logic. Previous grammar-based network querying tools were confined to the identification of paths. In this article, we extend the patterns identified by grammar-based query approaches from paths to trees. For this, we adopt a higher order query descriptor in the form of a regular tree grammar (RTG). We introduce a novel problem and propose an algorithm to search a given graph for the k highest scoring subgraphs matching a tree accepted by an RTG. Our algorithm is based on the combination of dynamic programming with color coding, and includes an extension of previous k-best parsing optimization approaches to avoid isomorphic trees in the output. We implement the new algorithm and exemplify its application to mining viral infection patterns within molecular interaction networks. Our code is available online.

  17. A Parallel Genetic Algorithm for Automated Electronic Circuit Design

    NASA Technical Reports Server (NTRS)

    Lohn, Jason D.; Colombano, Silvano P.; Haith, Gary L.; Stassinopoulos, Dimitris; Norvig, Peter (Technical Monitor)

    2000-01-01

    We describe a parallel genetic algorithm (GA) that automatically generates circuit designs using evolutionary search. A circuit-construction programming language is introduced and we show how evolution can generate practical analog circuit designs. Our system allows circuit size (number of devices), circuit topology, and device values to be evolved. We present experimental results as applied to analog filter and amplifier design tasks.

  18. A hybrid artificial bee colony algorithm for numerical function optimization

    NASA Astrophysics Data System (ADS)

    Alqattan, Zakaria N.; Abdullah, Rosni

    2015-02-01

    The Artificial Bee Colony (ABC) algorithm is one of the swarm intelligence algorithms; it was introduced by Karaboga in 2005. It is a meta-heuristic optimization search algorithm inspired by the intelligent foraging behavior of honey bees in nature. Its unique search process has made it competitive with other search algorithms in the area of optimization, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). However, the ABC local search process and the bee movement (solution improvement) equation still have some weaknesses. The ABC is good at avoiding being trapped in local optima, but it spends much of its time searching around unpromising, randomly selected solutions. Inspired by PSO, we propose a Hybrid Particle-movement ABC algorithm, called HPABC, which adapts the particle movement process to improve the exploration of the original ABC algorithm. Numerical benchmark functions were used to test the HPABC algorithm experimentally. The results illustrate that the HPABC algorithm outperforms the ABC algorithm in most of the experiments (75% better in accuracy and over 3 times faster).

  19. Perfect state transfer by means of discrete-time quantum walk search algorithms on highly symmetric graphs

    NASA Astrophysics Data System (ADS)

    Štefaňák, M.; Skoupý, S.

    2016-08-01

    Perfect state transfer between two marked vertices of a graph by means of a discrete-time quantum walk is analyzed. We consider the quantum walk search algorithm with two marked vertices, sender and receiver. It is shown by explicit calculation that, for the coined quantum walks on a star graph and a complete graph with self-loops, perfect state transfer between the sender and receiver vertex is achieved for an arbitrary number of vertices N in O(√N) steps of the walk. Finally, we show that Szegedy's walk with queries on a complete graph allows for state transfer with unit fidelity in the limit of large N.

  20. The effect of neighborhood structures on tabu search algorithm in solving university course timetabling problem

    NASA Astrophysics Data System (ADS)

    Shakir, Ali; AL-Khateeb, Belal; Shaker, Khalid; Jalab, Hamid A.

    2014-12-01

    The design of course timetables for academic institutions is a very difficult job due to the huge number of possible feasible timetables for any given problem size. The process involves many constraints that must be taken into account and a large search space to be explored, even when the problem input is not especially large. Different heuristic approaches have been proposed in the literature to solve this kind of problem, and tabu search is one of the efficient solution methods. Different neighborhood structures, based on different types of move, have been defined in studies using tabu search. In this paper, the effect of different neighborhood structures on the operation of tabu search is examined. The performance of each neighborhood structure is tested on eleven benchmark datasets, and the results are compared, showing clear disparities between the neighborhood structures in terms of penalty cost.
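
    A compact illustration of how a neighborhood move and a tabu list interact is sketched below on a toy event-to-timeslot assignment; the single "reassign one event" neighborhood and the clash-count cost are chosen for brevity and are not the benchmark datasets or neighborhood structures examined in the paper.

```python
import random

def tabu_search(cost, n_events, n_slots, n_iter=2000, tenure=20, seed=0):
    """Minimal tabu search over assignments of events to timeslots.

    cost(assignment) -> penalty value; lower is better.
    The neighborhood is "move one event to a different timeslot".
    """
    rng = random.Random(seed)
    current = [rng.randrange(n_slots) for _ in range(n_events)]
    best, best_cost = current[:], cost(current)
    tabu = {}  # (event, slot) -> iteration until which the move is forbidden
    for it in range(n_iter):
        candidates = []
        for _ in range(50):                                   # sample 50 neighbors per iteration
            e, s = rng.randrange(n_events), rng.randrange(n_slots)
            if s == current[e]:
                continue
            neighbor = current[:]
            neighbor[e] = s
            c = cost(neighbor)
            forbidden = tabu.get((e, s), -1) > it
            if not forbidden or c < best_cost:                # aspiration: allow if new best
                candidates.append((c, e, s, neighbor))
        if not candidates:
            continue
        c, e, s, neighbor = min(candidates)
        tabu[(e, current[e])] = it + tenure                   # forbid moving the event back
        current = neighbor
        if c < best_cost:
            best, best_cost = neighbor[:], c
    return best, best_cost

# Toy cost: penalize events sharing a timeslot (a crude clash count).
clash = lambda a: sum(a.count(s) - 1 for s in set(a))
print(tabu_search(clash, n_events=30, n_slots=10))
```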

  1. Improving Limit Surface Search Algorithms in RAVEN Using Acceleration Schemes: Level II Milestone

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Sen, Ramazan Sonat; Smith, Curtis Lee

    2015-07-01

    The RAVEN code is becoming a comprehensive tool to perform Probabilistic Risk Assessment (PRA); Uncertainty Quantification (UQ) and Propagation; and Verification and Validation (V&V). The RAVEN code is being developed to support the Risk-Informed Safety Margin Characterization (RISMC) pathway by developing an advanced set of methodologies and algorithms for use in advanced risk analysis. The RISMC approach uses system simulator codes applied to stochastic analysis tools. The fundamental idea behind this coupling approach is to perturb (by employing sampling strategies) the timing and sequencing of events, the internal parameters of the system codes (i.e., uncertain parameters of the physics model), and the initial conditions, in order to estimate value ranges and associated probabilities of figures of merit of interest for engineering and safety (e.g., core damage probability). This approach, applied to complex systems such as nuclear power plants, requires performing a series of computationally expensive simulation runs. The large computational burden is caused by the large set of (uncertain) parameters characterizing those systems. Consequently, exploring the uncertain/parametric domain with a good level of confidence is generally not affordable, considering the limited computational resources that are currently available. In addition, the recent tendency to develop newer tools, characterized by higher accuracy and larger computational demands (compared with the presently used legacy codes, developed decades ago), has made this issue even more compelling. In order to overcome these limitations, the strategy for exploring the uncertain/parametric space needs to make the best use of the computational resources, focusing the computational effort on those regions of the uncertain/parametric space that are “interesting” (e.g., risk-significant regions of the input space) with respect to the targeted Figures Of Merit (FOM): for example, the failure of the system

  2. Impurity lattice sites after implantation of Te and Sb in GaAs: Search for the DX centre

    NASA Astrophysics Data System (ADS)

    Zhang, G. L.; Mo, D.; Liang, Z. N.; Niesen, L.

    1990-07-01

    119Sn Mössbauer spectroscopy has been applied to study the nearest environment of radioactive 119mTe and 119Sb atoms implanted into GaAs. After a low-dose implantation and annealing above 300°C the impurity atoms are found at As sites. High-dose implantation and annealing above 600°C results in the population of at least two additional sites; these are clearly different for Te and Sb. No evidence is found for the population of DX-centres. A likely possibility is the formation of coherent Ga2Te3 precipitates.

  3. Faster implementation of the hierarchical search algorithm for detection of gravitational waves from inspiraling compact binaries

    NASA Astrophysics Data System (ADS)

    Sengupta, Anand S.; Dhurandhar, Sanjeev; Lazzarini, Albert

    2003-04-01

    The first scientific runs of kilometer scale laser interferometric detectors such as LIGO are under way. Data from these detectors will be used to look for signatures of gravitational waves from astrophysical objects such as inspiraling neutron-star black-hole binaries using matched filtering. The computational resources required for online flat-search implementation of the matched filtering are large if searches are carried out for a small total mass. A flat search is implemented by constructing a single discrete grid of densely populated template waveforms spanning the dynamical parameters—masses, spins—which are correlated with the interferometer data. The correlations over the kinematical parameters can be maximized a priori without constructing a template bank over them. Mohanty and Dhurandhar showed that a significant reduction in computational resources can be accomplished by using a hierarchy of such template banks where candidate events triggered by a sparsely populated grid are followed up by the regular, dense flat-search grid. The estimated speedup in this method was a factor ~25 over the flat search. In this paper we report an improved implementation of the hierarchical search, wherein we extend the domain of hierarchy to an extra dimension—namely, the time of arrival of the signal in the bandwidth of the interferometer. This is accomplished by lowering the Nyquist sampling rate of the signal in the trigger stage. We show that this leads to further improvement in the efficiency of data analysis and speeds up the online computation by a factor of ~65-70 over the flat search. We also take into account and discuss issues related to template placement, trigger thresholds, and other peculiar problems that do not arise in earlier implementation schemes of the hierarchical search. We present simulation results for 2PN waveforms embedded in the noise expected for initial LIGO detectors.

  4. Multiple One-Dimensional Search (MODS) algorithm for fast optimization of laser-matter interaction by phase-only fs-laser pulse shaping

    NASA Astrophysics Data System (ADS)

    Galvan-Sosa, M.; Portilla, J.; Hernandez-Rueda, J.; Siegel, J.; Moreno, L.; Solis, J.

    2014-09-01

    In this work, we have developed and implemented a powerful search strategy for optimization of nonlinear optical effects by means of femtosecond pulse shaping, based on topological concepts derived from quantum control theory. Our algorithm [Multiple One-Dimensional Search (MODS)] is based on deterministic optimization of a single solution rather than pseudo-random optimization of entire populations as done by commonly used evolutionary algorithms. We have tested MODS against a genetic algorithm in a nontrivial problem consisting in optimizing the Kerr gating signal (self-interaction) of a shaped laser pulse in a detuned Michelson interferometer configuration. The obtained results show that our search method (MODS) strongly outperforms the genetic algorithm in terms of both convergence speed and quality of the solution. These findings demonstrate the applicability of concepts of quantum control theory to nonlinear laser-matter interaction problems, even in the presence of significant experimental noise.
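
    A loose analogue of the deterministic, single-solution idea (cycling one-dimensional searches over the parameters) is sketched below on a synthetic objective; it is only meant to convey the flavor of a multiple one-dimensional search and is not the MODS implementation used in the experiments.

```python
import numpy as np

def one_dimensional_search(f, x, dim, step, shrink=0.5, min_step=1e-3):
    """Greedy line search along a single coordinate, shrinking the step when stuck."""
    fx = f(x)
    while step > min_step:
        for direction in (+1.0, -1.0):
            trial = x.copy()
            trial[dim] += direction * step
            ft = f(trial)
            if ft > fx:                       # maximize the (possibly noisy) signal
                return trial, ft
        step *= shrink
    return x, fx

def mods_like(f, n_params, sweeps=20, step=0.5):
    """Cycle one-dimensional searches over all parameters, repeatedly."""
    x = np.zeros(n_params)
    fx = f(x)
    for _ in range(sweeps):
        for d in range(n_params):
            x, fx = one_dimensional_search(f, x, d, step)
    return x, fx

# Synthetic "Kerr gate signal": a smooth peak at a hidden optimal phase vector.
target = np.array([0.7, -1.2, 0.3, 1.5])
signal = lambda phases: np.exp(-np.sum((phases - target) ** 2))
print(mods_like(signal, n_params=4))
```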

  5. Reprint of "pFind-Alioth: A novel unrestricted database search algorithm to improve the interpretation of high-resolution MS/MS data".

    PubMed

    Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min

    2015-11-01

    Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, which is mainly due to unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine of pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and a single site of any modifications and mutations can be searched efficiently. A new re-ranking algorithm is used to distinguish the correct peptide-spectrum matches from random ones. The algorithm is tested on several HCD datasets and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested. This article is part of a Special Issue entitled: Computational Proteomics.

  6. pFind-Alioth: A novel unrestricted database search algorithm to improve the interpretation of high-resolution MS/MS data.

    PubMed

    Chi, Hao; He, Kun; Yang, Bing; Chen, Zhen; Sun, Rui-Xiang; Fan, Sheng-Bo; Zhang, Kun; Liu, Chao; Yuan, Zuo-Fei; Wang, Quan-Hui; Liu, Si-Qi; Dong, Meng-Qiu; He, Si-Min

    2015-07-01

    Database search is the dominant approach in high-throughput proteomic analysis. However, the interpretation rate of MS/MS spectra is very low in such a restricted mode, which is mainly due to unexpected modifications and irregular digestion types. In this study, we developed a new algorithm called Alioth, to be integrated into the search engine of pFind, for fast and accurate unrestricted database search on high-resolution MS/MS data. An ion index is constructed for both peptide precursors and fragment ions, by which arbitrary digestions and a single site of any modifications and mutations can be searched efficiently. A new re-ranking algorithm is used to distinguish the correct peptide-spectrum matches from random ones. The algorithm is tested on several HCD datasets and the interpretation rate of MS/MS spectra using Alioth is as high as 60%-80%. Peptides from semi- and non-specific digestions, as well as those with unexpected modifications or mutations, can be effectively identified using Alioth and confidently validated using other search engines. The average processing speed of Alioth is 5-10 times faster than some other unrestricted search engines and is comparable to or even faster than the restricted search algorithms tested.

  7. A case study on optimization of biomass flow during single screw extrusion cooking using Genetic Algorithm (GA) and Response Surface Method (RSM)

    SciTech Connect

    Tumuluru, J.S.; Sokhansanj, Shahabaddine

    2008-12-01

    Abstract In the present study, the response surface method (RSM) and a genetic algorithm (GA) were used to study the effects of process variables like screw speed, rpm (x1), L/D ratio (x2), barrel temperature (°C; x3), and feed mix moisture content (%; x4), on the flow rate of biomass during single-screw extrusion cooking. A second-order regression equation was developed for flow rate in terms of the process variables. The significance of the process variables based on a Pareto chart indicated that screw speed and feed mix moisture content had the most influence, followed by L/D ratio and barrel temperature, on the flow rate. RSM analysis indicated that a screw speed >80 rpm, L/D ratio >12, barrel temperature >80 °C, and feed mix moisture content >20% resulted in maximum flow rate. Increases in screw speed and L/D ratio increased the drag flow and also the path of traverse of the feed mix inside the extruder, resulting in more shear. The presence of lipids of about 35% in the biomass feed mix might have induced a lubrication effect and significantly influenced the flow rate. The second-order regression equations were further used as the objective function for optimization using the genetic algorithm. A population size of 100 and 100 iterations successfully led to convergence to the optimum. The maximum and minimum flow rates obtained using the GA were 13.19 × 10^-7 m3/s (x1 = 139.08 rpm, x2 = 15.90, x3 = 99.56 °C, and x4 = 59.72%) and 0.53 × 10^-7 m3/s (x1 = 59.65 rpm, x2 = 11.93, x3 = 68.98 °C, and x4 = 20.04%).
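
    To make the workflow concrete, the sketch below evaluates a generic second-order (quadratic) response-surface model of flow rate over four coded process variables and locates its maximum inside the variable bounds with a simple random search; the coefficients, the coding to [-1, 1], and the search method are placeholders, not the regression coefficients or GA settings reported in the study.

```python
import numpy as np

# Placeholder second-order model: y = b0 + sum(bi*xi) + sum(bii*xi^2) + selected bij*xi*xj.
b0 = 5.0
b_lin = np.array([1.2, 0.4, 0.3, 0.9])                 # screw speed, L/D, temperature, moisture
b_quad = np.array([-0.6, -0.2, -0.3, -0.5])
b_int = {(0, 3): 0.25, (1, 2): -0.10}                  # a couple of interaction terms

def flow_rate(x):
    """Evaluate the quadratic response surface at coded variable settings x (each in [-1, 1])."""
    y = b0 + b_lin @ x + b_quad @ (x * x)
    for (i, j), bij in b_int.items():
        y += bij * x[i] * x[j]
    return y

# Random search for the settings that maximize the modelled flow rate.
rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(20000, 4))
best = max(samples, key=flow_rate)
print("best coded settings:", np.round(best, 3), "predicted flow:", round(flow_rate(best), 3))
```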

  8. Multi-Objective Random Search Algorithm for Simultaneously Optimizing Wind Farm Layout and Number of Turbines

    NASA Astrophysics Data System (ADS)

    Feng, Ju; Shen, Wen Zhong; Xu, Chang

    2016-09-01

    A new algorithm for multi-objective wind farm layout optimization is presented. It formulates the wind turbine locations as continuous variables and is capable of optimizing the number of turbines and their locations in the wind farm simultaneously. Two objectives are considered. One is to maximize the total power production, which is calculated by considering the wake effects using the Jensen wake model combined with the local wind distribution. The other is to minimize the total electrical cable length. This length is assumed to be the total length of the minimal spanning tree that connects all turbines and is calculated by using Prim's algorithm. Constraints on wind farm boundary and wind turbine proximity are also considered. An ideal test case shows the proposed algorithm largely outperforms a famous multi-objective genetic algorithm (NSGA-II). In the real test case based on the Horn Rev 1 wind farm, the algorithm also obtains useful Pareto frontiers and provides a wide range of Pareto optimal layouts with different numbers of turbines for a real-life wind farm developer.
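
    The cable-length objective is the weight of a minimal spanning tree over the turbine positions. A compact Prim's-algorithm implementation of that sub-step (independent of the wake model and of the optimizer itself) might look like the following; the layout at the end is a toy example.

```python
import numpy as np

def mst_cable_length(positions):
    """Total length of the minimal spanning tree over turbine positions (Prim's algorithm).

    positions : (n, 2) array-like of turbine coordinates.
    """
    pos = np.asarray(positions, dtype=float)
    n = len(pos)
    if n < 2:
        return 0.0
    in_tree = np.zeros(n, dtype=bool)
    min_dist = np.full(n, np.inf)
    in_tree[0] = True
    min_dist[1:] = np.linalg.norm(pos[1:] - pos[0], axis=1)
    total = 0.0
    for _ in range(n - 1):
        # Pick the closest turbine not yet connected to the tree.
        candidates = np.where(~in_tree, min_dist, np.inf)
        k = int(np.argmin(candidates))
        total += candidates[k]
        in_tree[k] = True
        # Update distances of the remaining turbines to the growing tree.
        d = np.linalg.norm(pos - pos[k], axis=1)
        min_dist = np.minimum(min_dist, d)
    return total

layout = [(0, 0), (500, 0), (500, 400), (0, 400), (250, 200)]
print(mst_cable_length(layout))   # total cable length for a toy 5-turbine layout
```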

  9. Using genetic algorithms for solving heavy-atom sites.

    PubMed

    Chang, G; Lewis, M

    1994-09-01

    A novel procedure has been developed for locating heavy-atom positions in crystals of macromolecules. This method used genetic algorithms (GA's) to search for heavy-atom sites that are consistent with an observed difference Patterson function. The procedure is straightforward to apply, space-group independent, and particularly powerful for cases involving non-crystallographic symmetry of multiple heavy atoms in the asymmetric unit. In this paper, we introduce how GA's are used for determining the heavy-atom positions and show how this method is more efficient than a sequential search. PMID:15299364

  10. The Search for Effective Algorithms for Recovery from Loss of Separation

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Hagen, George E.; Maddalon, Jeffrey M.; Munoz, Cesar A.; Narawicz, Anthony J.

    2012-01-01

    Our previous work presented an approach for developing high confidence algorithms for recovering aircraft from loss of separation situations. The correctness theorems for the algorithms relied on several key assumptions, namely that state data for all local aircraft is perfectly known, that resolution maneuvers can be achieved instantaneously, and that all aircraft compute resolutions using exactly the same data. Experiments showed that these assumptions were adequate in cases where the aircraft are far away from losing separation, but are insufficient when the aircraft have already lost separation. This paper describes the results of this experimentation and proposes a new criteria specification for loss of separation recovery that preserves the formal safety properties of the previous criteria while overcoming some key limitations. Candidate algorithms that satisfy the new criteria are presented.

  11. A hybrid algorithm for clustering of time series data based on affinity search technique.

    PubMed

    Aghabozorgi, Saeed; Ying Wah, Teh; Herawan, Tutut; Jalab, Hamid A; Shaygan, Mohammad Amin; Jalali, Alireza

    2014-01-01

    Time series clustering is an important solution to various problems in numerous fields of research, including business, medical science, and finance. However, conventional clustering algorithms are not practical for time series data because they are essentially designed for static data. This impracticality results in poor clustering accuracy in several systems. In this paper, a new hybrid clustering algorithm is proposed based on the similarity in shape of time series data. Time series data are first grouped as subclusters based on similarity in time. The subclusters are then merged using the k-Medoids algorithm based on similarity in shape. This model has two contributions: (1) it is more accurate than other conventional and hybrid approaches and (2) it determines the similarity in shape among time series data with a low complexity. To evaluate the accuracy of the proposed model, the model is tested extensively using synthetic and real-world time series datasets.
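
    The merging step relies on a k-Medoids pass over a shape-based distance. A minimal k-Medoids sketch is shown below, using correlation distance as a stand-in for the paper's shape similarity; the distance choice and the synthetic series are illustrative assumptions.

```python
import numpy as np

def kmedoids(dist, k, n_iter=100, seed=0):
    """Plain k-Medoids clustering on a precomputed distance matrix `dist` (n x n)."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)       # assign each series to nearest medoid
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                # New medoid = member minimizing total distance to the other members.
                costs = dist[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(dist[:, medoids], axis=1)
    return labels, medoids

def correlation_distance_matrix(series):
    """1 - Pearson correlation between rows, a crude "similarity in shape"."""
    return 1.0 - np.corrcoef(series)

# Two groups of noisy series with opposite shapes.
rng = np.random.default_rng(1)
base = np.sin(np.linspace(0, 6.28, 50))
series = np.vstack([base + 0.1 * rng.standard_normal(50) for _ in range(20)] +
                   [-base + 0.1 * rng.standard_normal(50) for _ in range(20)])
labels, medoids = kmedoids(correlation_distance_matrix(series), k=2)
print(labels)
```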

  12. A genetic algorithm based global search strategy for population pharmacokinetic/pharmacodynamic model selection

    PubMed Central

    Sale, Mark; Sherer, Eric A

    2015-01-01

    The current algorithm for selecting a population pharmacokinetic/pharmacodynamic model is based on the well-established forward addition/backward elimination method. A central strength of this approach is the opportunity for a modeller to continuously examine the data and postulate new hypotheses to explain observed biases. This algorithm has served the modelling community well, but the model selection process has essentially remained unchanged for the last 30 years. During this time, more robust approaches to model selection have been made feasible by new technology and dramatic increases in computation speed. We review these methods, with emphasis on genetic algorithm approaches and discuss the role these methods may play in population pharmacokinetic/pharmacodynamic model selection. PMID:23772792

  13. Natural search algorithms as a bridge between organisms, evolution, and ecology.

    PubMed

    Hein, Andrew M; Carrara, Francesco; Brumley, Douglas R; Stocker, Roman; Levin, Simon A

    2016-08-23

    The ability to navigate is a hallmark of living systems, from single cells to higher animals. Searching for targets, such as food or mates in particular, is one of the fundamental navigational tasks many organisms must execute to survive and reproduce. Here, we argue that a recent surge of studies of the proximate mechanisms that underlie search behavior offers a new opportunity to integrate the biophysics and neuroscience of sensory systems with ecological and evolutionary processes, closing a feedback loop that promises exciting new avenues of scientific exploration at the frontier of systems biology. PMID:27496324

  14. Adaptive Process Control with Fuzzy Logic and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision-making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, an analysis element to recognize changes in the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  15. Adaptive process control using fuzzy logic and genetic algorithms

    NASA Technical Reports Server (NTRS)

    Karr, C. L.

    1993-01-01

    Researchers at the U.S. Bureau of Mines have developed adaptive process control systems in which genetic algorithms (GA's) are used to augment fuzzy logic controllers (FLC's). GA's are search algorithms that rapidly locate near-optimum solutions to a wide spectrum of problems by modeling the search procedures of natural genetics. FLC's are rule based systems that efficiently manipulate a problem environment by modeling the 'rule-of-thumb' strategy used in human decision making. Together, GA's and FLC's possess the capabilities necessary to produce powerful, efficient, and robust adaptive control systems. To perform efficiently, such control systems require a control element to manipulate the problem environment, and a learning element to adjust to the changes in the problem environment. Details of an overall adaptive control system are discussed. A specific laboratory acid-base pH system is used to demonstrate the ideas presented.

  16. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
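
    A bare-bones version of the concepts introduced above (tournament selection, single-point crossover, bit-flip mutation) is sketched below on the OneMax toy problem; it is a generic illustration, not the software tool described in the report.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=100,
                      crossover_rate=0.8, mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():                                       # tournament selection of a parent
            return max(rng.sample(pop, 3), key=fitness)
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < crossover_rate:               # single-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):                              # bit-flip mutation
                children.append([b ^ 1 if rng.random() < mutation_rate else b for b in c])
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)               # keep the best-so-far (elitism)
    return best, fitness(best)

# Toy "OneMax" problem: maximize the number of 1 bits in the string.
print(genetic_algorithm(fitness=sum))
```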

  17. A UAV routing and sensor control optimization algorithm for target search

    NASA Astrophysics Data System (ADS)

    Collins, Gaemus E.; Riehl, James R.; Vegdahl, Philip S.

    2007-04-01

    An important problem in unmanned air vehicle (UAV) and UAV-mounted sensor control is the target search problem: locating target(s) in minimum time. Current methods solve the optimization of UAV routing control and sensor management independently. While this decoupled approach makes the target search problem computationally tractable, it is suboptimal. In this paper, we explore the target search and classification problems by formulating and solving a joint UAV routing and sensor control optimization problem. The routing problem is solved on a graph using receding horizon optimal control. The graph is dynamically adjusted based on the target probability distribution function (PDF). The objective function for the routing optimization is the solution of a sensor control optimization problem. An optimal sensor schedule (in the sense of maximizing the viewed target probability mass) is constructed for each candidate flight path in the routing control problem. The PDF of the target state is represented with a particle filter and an "occupancy map" for any undiscovered targets. The tradeoff between searching for undiscovered targets and locating tracks is handled automatically and dynamically by the use of an appropriate objective function. In particular, the objective function is based on the expected amount of target probability mass to be viewed.

  18. Algorithms for recollection of search terms based on the Wikipedia category structure.

    PubMed

    Vandamme, Stijn; De Turck, Filip

    2014-01-01

    The common user interface for a search engine consists of a text field where the user can enter queries consisting of one or more keywords. Keyword-query-based search engines work well when the users have a clear vision of what they are looking for and are capable of articulating their query using the same terms as indexed. For our multimedia database containing 202,868 items with text descriptions, we supplement such a search engine with a category-based interface whose category structure is tailored to the content of the database. This facilitates browsing and offers the users the possibility to look for named entities, even if they have forgotten their names. We demonstrate that this approach allows users who fail to recollect the name of named entities to retrieve data with little effort. In all our experiments, it takes 1 query on a category and on average 2.49 clicks, compared to 5.68 queries on the database's traditional text search engine (for a 68.3% success probability), or 6.01 queries when the user also turns to Google (for a 97.1% success probability).

  19. Hybrid self organizing migrating algorithm - Scatter search for the task of capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Davendra, Donald; Zelinka, Ivan; Senkerik, Roman; Jasek, Roman; Bialic-Davendra, Magdalena

    2012-11-01

    One of the new emerging application strategies for optimization is the hybridization of existing metaheuristics. The research combines the unique paradigms of solution space sampling of SOMA and memory retention capabilities of Scatter Search for the task of capacitated vehicle routing problem. The new hybrid heuristic is tested on the Taillard sets and obtains good results.

  20. Methodological aspects of an adaptive multidirectional pattern search to optimize speech perception using three hearing-aid algorithms

    NASA Astrophysics Data System (ADS)

    Franck, Bas A. M.; Dreschler, Wouter A.; Lyzenga, Johannes

    2004-12-01

    In this study we investigated the reliability and convergence characteristics of an adaptive multidirectional pattern search procedure, relative to a nonadaptive multidirectional pattern search procedure. The procedure was designed to optimize three speech-processing strategies. These comprise noise reduction, spectral enhancement, and spectral lift. The search is based on a paired-comparison paradigm, in which subjects evaluated the listening comfort of speech-in-noise fragments. The procedural and nonprocedural factors that influence the reliability and convergence of the procedure are studied using various test conditions. The test conditions combine different tests, initial settings, background noise types, and step size configurations. Seven normal hearing subjects participated in this study. The results indicate that the reliability of the optimization strategy may benefit from the use of an adaptive step size. Decreasing the step size increases accuracy, while increasing the step size can be beneficial to create clear perceptual differences in the comparisons. The reliability also depends on starting point, stop criterion, step size constraints, background noise, algorithms used, as well as the presence of drifting cues and suboptimal settings. There appears to be a trade-off between reliability and convergence, i.e., when the step size is enlarged the reliability improves, but the convergence deteriorates.

  1. Using a hybrid Monte Carlo/ Slip Estimator-Genetic Algorithm (MCSE-GA) to produce high resolution estimates of paleoearthquakes from geodetic data

    NASA Astrophysics Data System (ADS)

    Lindsay, Anthony; McCloskey, John; Simão, Nuno; Murphy, Shane; Bhloscaidh, Mairead Nic

    2014-05-01

    Identifying fault sections where slip deficits have accumulated may provide a means for understanding sequences of large megathrust earthquakes. Stress accumulated during the interseismic period on an active megathrust is stored as potential slip, referred to as slip deficit, along locked sections of the fault. Analysis of the spatial distribution of slip during antecedent events along the fault will show where the locked plate has spent its stored slip. Areas of unreleased slip indicate where the potential for large events remains. The location of recent earthquakes and their distribution of slip can be estimated from instrumentally recorded seismic and geodetic data. However, long-term slip-deficit modelling requires detailed information on the size and distribution of slip for pre-instrumental events over hundreds of years covering more than one 'seismic cycle'. This requires the exploitation of proxy sources of data. Coral microatolls, growing in the intertidal zone of the outer island arc of the Sunda trench, present the possibility of reconstructing slip for a number of pre-instrumental earthquakes. Their growth is influenced by tectonic flexing of the continental plate beneath them; they act as long term recorders of the vertical component of deformation. However, the sparse distribution of data available using coral geodesy results in an underdetermined problem with non-unique solutions. Rather than accepting any one realisation as the definite model satisfying the coral displacement data, a Monte Carlo approach identifies a suite of models consistent with the observations. Using a Genetic Algorithm to accelerate the identification of desirable models, we have developed a Monte Carlo Slip Estimator-Genetic Algorithm (MCSE-GA) which exploits the full range of uncertainty associated with the displacements. Each iteration of the MCSE-GA samples different values from within the spread of uncertainties associated with each coral displacement. The Genetic

  2. A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Ortiz, Francisco

    2004-01-01

    COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied in non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions
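
    The penalty-function idea itself is easy to state in code: the constrained objective is replaced by a penalized one that any unconstrained optimizer, a GA included, can then minimize. The sketch below uses a simple static exterior quadratic penalty on a toy problem; it is illustrative and is not the COMETBOARDS formulation or any of the specific penalty functions analyzed in the study.

```python
def penalized(objective, inequality_constraints, weight=1000.0):
    """Wrap a constrained minimization problem as an unconstrained one.

    inequality_constraints: list of functions g(x) that must satisfy g(x) <= 0.
    A static quadratic exterior penalty is added for each violated constraint.
    """
    def f(x):
        penalty = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return objective(x) + weight * penalty
    return f

# Toy problem: minimize (x - 3)^2 subject to x <= 1.
obj = lambda x: (x[0] - 3.0) ** 2
g1 = lambda x: x[0] - 1.0            # x <= 1 rewritten as g(x) <= 0

fitness = penalized(obj, [g1])
print(fitness([0.5]), fitness([2.0]))   # feasible point vs. heavily penalized infeasible point

# Any unconstrained search can now be applied; here, a crude grid scan:
best = min(([x / 100.0] for x in range(-500, 500)), key=fitness)
print("approx. constrained minimum near x =", best[0])
```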

  3. Search for ferromagnetic ordering in Pd doped wide band gap semiconductors GaN and ZnO

    NASA Astrophysics Data System (ADS)

    Kessler, P.; Müller, K.; Geruschke, T.; Timmers, H.; Byrne, A. P.; Vianden, R.

    2010-04-01

    GaN and ZnO are possible candidates for dilute magnetic semiconductors with Curie temperatures above room temperature. Doping with transition metals like Co, Mn or Fe could be a simple way to create such systems. The perturbed angular correlation (PAC) probe 100Pd/100Rh is isoelectronic to cobalt and therefore a perfect tool to investigate the incorporation of transition metals into these compounds as well as the influence of other impurities on internal magnetic fields. The (0001) and (10-10) surfaces of ZnO single crystals, freestanding GaN films, and GaN thin films (6 μm) on sapphire substrates were recoil-implanted with the 100Pd/100Rh probe. The probe was produced using the fusion evaporation reaction 92Zr(12C, 4n)100Pd at a beam energy of 69 MeV. Subsequently, the incorporation of the probe was studied by PAC spectroscopy during an isochronal annealing program. First results without and with an applied external magnetic field are indicative of a strongly disturbed lattice vicinity of Pd impurities in both hosts. No signs of spontaneous ferromagnetic ordering were observed.

  4. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
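
    The weighted-sum scalarization mentioned above reduces a multi-objective problem to a family of single-objective ones. A minimal sketch on two toy objectives follows; the normal-boundary intersection method is more involved and is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Two competing toy objectives on a single design variable x in [0, 1].
f1 = lambda x: (x[0] - 0.0) ** 2          # prefers x = 0
f2 = lambda x: (x[0] - 1.0) ** 2          # prefers x = 1

def weighted_sum(w):
    """Scalarized objective with weight w on f1 and (1 - w) on f2."""
    return lambda x: w * f1(x) + (1.0 - w) * f2(x)

# Sweep the weight to trace an approximation of the Pareto front.
front = []
for w in np.linspace(0.0, 1.0, 11):
    res = minimize(weighted_sum(w), x0=[0.5], bounds=[(0.0, 1.0)])
    front.append((round(f1(res.x), 4), round(f2(res.x), 4)))
print(front)
```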

  5. An Efficient VLSI Architecture of the Enhanced Three Step Search Algorithm

    NASA Astrophysics Data System (ADS)

    Biswas, Baishik; Mukherjee, Rohan; Saha, Priyabrata; Chakrabarti, Indrajit

    2016-09-01

    The intense computational complexity of any video codec is largely due to the motion estimation unit. The Enhanced Three Step Search is a popular technique that can be adopted for fast motion estimation. This paper proposes a novel VLSI architecture for the implementation of the Enhanced Three Step Search Technique. A new addressing mechanism has been introduced which enhances the speed of operation and reduces the area requirements. The proposed architecture when implemented in Verilog HDL on Virtex-5 Technology and synthesized using Xilinx ISE Design Suite 14.1 achieves a critical path delay of 4.8 ns while the area comes out to be 2.9K gate equivalent. It can be incorporated in commercial devices like smart-phones, camcorders, video conferencing systems etc.
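
    As background for the underlying search, the following sketch implements the classic Three Step Search in software; the paper's contribution is a hardware architecture for an enhanced variant, so this block is only a behavioural illustration with an assumed 16x16 block size and a sum-of-absolute-differences (SAD) cost.

```python
# Software sketch of the classic Three Step Search block-matching algorithm.
import numpy as np

def sad(cur, ref, bx, by, dx, dy, n=16):
    """Sum of absolute differences between a block in `cur` and a shifted block in `ref`."""
    if by + dy < 0 or bx + dx < 0:
        return np.inf  # candidate falls outside the reference frame
    block = cur[by:by + n, bx:bx + n]
    cand = ref[by + dy:by + dy + n, bx + dx:bx + dx + n]
    if cand.shape != block.shape:
        return np.inf
    return np.abs(block.astype(int) - cand.astype(int)).sum()

def three_step_search(cur, ref, bx, by, step=4):
    """Return an estimated motion vector (dx, dy) for the block at (bx, by)."""
    best = (0, 0)
    while step >= 1:
        # Evaluate the centre and its eight neighbours at the current step size.
        candidates = [(best[0] + sx * step, best[1] + sy * step)
                      for sx in (-1, 0, 1) for sy in (-1, 0, 1)]
        best = min(candidates, key=lambda mv: sad(cur, ref, bx, by, *mv))
        step //= 2
    return best
```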

  6. Search for spinning black hole binaries in mock LISA data using a genetic algorithm

    SciTech Connect

    Petiteau, Antoine; Shang Yu; Babak, Stanislav; Feroz, Farhan

    2010-05-15

    Coalescing massive black hole binaries are the strongest and probably the most important gravitational wave sources in the LISA band. The spin and orbital precessions add complexity to the waveform and make the likelihood surface richer in structure as compared to the nonspinning case. We introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the mock LISA data challenge (MLDC3.2). The performance of this method is comparable to, if not better than, that of existing algorithms. We have found all five sources present in MLDC3.2 and recovered the coalescence time, chirp mass, mass ratio, and sky location with reasonable accuracy. As for the orbital angular momentum and two spins of the black holes, we have found a large number of widely separated modes in the parameter space with similar maximum likelihood values.

  7. Optimal design of groundwater remediation systems using a probabilistic multi-objective fast harmony search algorithm under uncertainty

    NASA Astrophysics Data System (ADS)

    Luo, Q.; Wu, J.; Qian, J.

    2013-12-01

    This study develops a new probabilistic multi-objective fast harmony search algorithm (PMOFHS) for optimal design of groundwater remediation systems under uncertainty associated with the hydraulic conductivity of aquifers. The PMOFHS integrates the previously developed deterministic multi-objective optimization method, namely the multi-objective fast harmony search algorithm (MOFHS), with a probabilistic Pareto domination ranking and a probabilistic niche technique to search for Pareto-optimal solutions to multi-objective optimization problems in a noisy hydrogeological environment arising from insufficient hydraulic conductivity data. The PMOFHS is then coupled with the commonly used flow and transport codes, MODFLOW and MT3DMS, to identify the optimal groundwater remediation system of a two-dimensional hypothetical test problem involving two objectives: (i) minimization of the total remediation cost through the engineering planning horizon, and (ii) minimization of the percentage of mass remaining in the aquifer at the end of the operational period, which uses the Pump-and-Treat (PAT) technology to clean up contaminated groundwater. Also, Monte Carlo (MC) analysis is used to demonstrate the effectiveness of the proposed methodology. The MC analysis is applied to each Pareto solution for every hydraulic conductivity (K) realization. Then the statistical mean and the upper and lower bounds of the 95% confidence intervals are calculated. The MC analysis results show that all of the Pareto-optimal solutions are located between the upper and lower bounds of the MC analysis. Moreover, the root mean square errors (RMSEs) between the Pareto-optimal solutions by the PMOFHS and the average values of optimal solutions by the MC analysis are 0.0204 for the first objective and 0.0318 for the second objective, considerably smaller than the RMSEs between the results by the existing probabilistic multi-objective genetic algorithm (PMOGA) and the MC analysis, 0.0384 and 0.0397, respectively. In

  8. Search for an optimized cyclic charging algorithm for valve-regulated lead-acid batteries

    NASA Astrophysics Data System (ADS)

    Nelson, R. F.; Sexton, E. D.; Olson, J. B.; Keyser, M.; Pesaran, A.

    Valve-regulated lead-acid (VRLA) batteries are characterized by relatively poor performance in cyclic applications of the order of two hundred to three hundred 100% depth-of-discharge (DoD) cycles. Failure is due to sulfation of the negative plate and softening of the positive active-material. It is felt that this failure mode arises from abnormally high levels of oxygen recombination that arise due to decreases in separator saturation levels as VRLA batteries age. Charging algorithms have been developed to address this changing condition throughout life. The key step is the finish of charge where, traditionally, low currents and low overcharge limits have been employed with poor results. It has been found that using high finishing currents in an alternating charge-rest algorithm results in proper recharge of the negative plate without creating unacceptable temperature increases. This has resulted in deep-discharge lifetimes of 800 to 1000 cycles, particularly when using a charging algorithm employing only partial recharges (97-100% return) interspersed with full conditioning recharges every 10th cycle. With such minimal average overcharge levels, deep-cycle lifetimes approaching 1000 cycles have been achieved without experiencing failure due to massive grid corrosion.

  9. Ultimately accurate SRAF replacement for practical phases using an adaptive search algorithm based on the optimal gradient method

    NASA Astrophysics Data System (ADS)

    Maeda, Shimon; Nosato, Hirokazu; Matsunawa, Tetsuaki; Miyairi, Masahiro; Nojima, Shigeki; Tanaka, Satoshi; Sakanashi, Hidenori; Murakawa, Masahiro; Saito, Tamaki; Higuchi, Tetsuya; Inoue, Soichi

    2010-04-01

    SRAF (Sub Resolution Assist Feature) techniques have been widely used for DOF enhancement. Below the 40 nm design node, even when the SRAF technique is used, the resolution limit is approached due to hyper-NA imaging or low-k1 lithography conditions, especially for the contact layer. As a result, complex or random layout patterns such as logic data or intermediate-pitch patterns become increasingly sensitive to photoresist pattern fidelity, so more accurate resolution techniques are needed to cope with patterning fidelity issues under low-k1 lithography conditions. To address these issues, new SRAF techniques such as model-based SRAF using an interference map or inverse lithography have been proposed. However, these approaches do not provide sufficient assurance of accuracy or performance, because the ideal mask they generate is lost when it is converted to a manufacturable mask with Manhattan structures; as a result, they can be hard to put into practice in a production flow. In this paper, we propose a novel method for extremely accurate SRAF placement using an adaptive search algorithm. In this method, the initial SRAF positions are generated by traditional SRAF placement such as rule-based SRAF, and they are then adjusted by an adaptive algorithm guided by lithography simulation. The method has three advantages: precision, efficiency and industrial applicability. First, the lithography simulation uses an actual computational model that accounts for the process window, so the proposed method can precisely adjust the SRAF positions and thereby obtain the best placements. Second, because the adaptive algorithm is based on the optimal gradient method, a very simple rectilinear search, the SRAF positions can be adjusted with high efficiency. Third, our proposed method, which utilizes the traditional SRAF placement, is

  10. Interpreting the cross-sectional flow field in a river bank based on a genetic-algorithm two-dimensional heat-transport method (GA-VS2DH)

    NASA Astrophysics Data System (ADS)

    Su, Xiaoru; Shu, Longcang; Chen, Xunhong; Lu, Chengpeng; Wen, Zhonghui

    2016-08-01

    Interactions between surface waters and groundwater are of great significance for evaluating water resources and protecting ecosystem health. Heat as a tracer method is widely used in determination of the interactive exchange with high precision, low cost and great convenience. The flow in a river-bank cross-section occurs in vertical and lateral directions. In order to depict the flow path and its spatial distribution in bank areas, a genetic algorithm (GA) two-dimensional (2-D) heat-transport nested-loop method for variably saturated sediments, GA-VS2DH, was developed based on Microsoft Visual Basic 6.0. VS2DH was applied to model a 2-D bank-water flow field and GA was used to calibrate the model automatically by minimizing the difference between observed and simulated temperatures in bank areas. A hypothetical model was developed to assess the reliability of GA-VS2DH in inverse modeling in a river-bank system. Some benchmark tests were conducted to recognize the capability of GA-VS2DH. The results indicated that the simulated seepage velocity and parameters associated with GA-VS2DH were acceptable and reliable. Then GA-VS2DH was applied to two field sites in China with different sedimentary materials, to verify the reliability of the method. GA-VS2DH could be applied in interpreting the cross-sectional 2-D water flow field. The estimates of horizontal hydraulic conductivity at the Dawen River and Qinhuai River sites are 1.317 and 0.015 m/day, which correspond to sand and clay sediment in the two sites, respectively.

  11. A Search Algorithm for Determination of Economic Order Quantity in a Two-Level Supply Chain System with Transportation Cost

    NASA Astrophysics Data System (ADS)

    Pirayesh Neghab, Mohammadali; Haji, Rasoul

    This study considers a two-level supply chain system consisting of one warehouse and a number of identical retailers. In this system, we incorporate transportation costs into inventory replenishment decisions. The transportation cost contains a fixed cost and a variable cost. We assume that the demand rate at each retailer is known and the demand is confined to a single item. First, we derive the total cost which is the sum of the holding and ordering cost at the warehouse and retailers as well as the transportation cost from the warehouse to retailers. Then, we propose a search algorithm to find the economic order quantities for the warehouse and retailers which minimize the total cost.
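
    A minimal sketch of the kind of cost-minimizing search the abstract describes is given below; the cost expression and all parameter values are assumptions for illustration and do not reproduce the paper's exact two-level model.

```python
# Hedged sketch: enumerate candidate order quantities for the retailers and a
# warehouse multiplier, and keep the pair with the lowest assumed total cost
# (holding + ordering + transportation).

def total_cost(q_retail, n, demand=1000.0, h_retail=2.0, h_wh=1.0,
               k_retail=50.0, k_wh=200.0, t_fixed=30.0, t_var=0.1):
    """Annual cost when each retailer orders q_retail and the warehouse orders n*q_retail."""
    ordering = demand / q_retail * (k_retail + t_fixed) + demand / (n * q_retail) * k_wh
    holding = 0.5 * q_retail * h_retail + 0.5 * (n - 1) * q_retail * h_wh
    transport = demand * t_var
    return ordering + holding + transport

best_q, best_n = min(((q, n) for q in range(10, 501, 10) for n in range(1, 11)),
                     key=lambda qn: total_cost(*qn))
```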

  12. Developing a Direct Search Algorithm for Solving the Capacitated Open Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Simbolon, Hotman

    2011-06-01

    In open vehicle routing problems, the vehicles are not required to return to the depot after completing service. In this paper, we present the first exact optimization algorithm for the open version of the well-known capacitated vehicle routing problem (CVRP). The strategy of releasing nonbasic variables from their bounds, combined with the "active constraint" method and the notion of superbasics, has been developed to meet efficiency requirements; this strategy is used to force the appropriate non-integer basic variables to move to neighboring integer points. A study of criteria for choosing a nonbasic variable to work with in the integerizing strategy has also been made.

  13. Searching for the Optimal Working Point of the MEIC at JLab Using an Evolutionary Algorithm

    SciTech Connect

    Balsa Terzic, Matthew Kramer, Colin Jarvis

    2011-03-01

    The Medium-energy Electron Ion Collider (MEIC) is a proposed medium-energy ring-ring electron-ion collider based on CEBAF at Jefferson Lab. The collider luminosity and stability are sensitive to the choice of a working point - the betatron and synchrotron tunes of the two colliding beams. Therefore, a careful selection of the working point is essential for stable operation of the collider, as well as for achieving high luminosity. Here we describe a novel approach for locating an optimal working point based on evolutionary algorithm techniques.

  14. A genetic algorithm based method for docking flexible molecules

    SciTech Connect

    Judson, R.S.; Jaeger, E.P.; Treasurywala, A.M.

    1993-11-01

    The authors describe a computational method for docking flexible molecules into protein binding sites. The method uses a genetic algorithm (GA) to search the combined conformation/orientation space of the molecule to find low energy conformation. Several techniques are described that increase the efficiency of the basic search method. These include the use of several interacting GA subpopulations or niches; the use of a growing algorithm that initially docks only a small part of the molecule; and the use of gradient minimization during the search. To illustrate the method, they dock Cbz-GlyP-Leu-Leu (ZGLL) into thermolysin. This system was chosen because a well refined crystal structure is available and because another docking method had previously been tested on this system. Their method is able to find conformations that lie physically close to and in some cases lower in energy than the crystal conformation in reasonable periods of time on readily available hardware.

  15. Genetic algorithm parameter optimization: applied to sensor coverage

    NASA Astrophysics Data System (ADS)

    Sahin, Ferat; Abbate, Giuseppe

    2004-08-01

    Genetic Algorithms are powerful tools, which when set upon a solution space will search for the optimal answer. These algorithms, though, have some inherent problems, such as premature convergence and lack of population diversity. These problems can be controlled with changes to certain parameters such as crossover, selection, and mutation. This paper attempts to tackle these problems in GA by having another GA controlling these parameters. The values for the crossover parameter are: one point, two point, and uniform. The values for the selection parameter are: best, worst, roulette wheel, inside 50%, outside 50%. The values for the mutation parameter are: random and swap. The system will include a control GA whose population will consist of different parameter settings. While this GA is attempting to find the best parameters it will be advancing into the search space of the problem and refining the population. As the population changes due to the search so will the optimal parameters. For every control GA generation each of the individuals in the population will be tested for fitness by being run through the problem GA with the assigned parameters. During these runs the population used in the next control generation is compiled. Thus, both the issue of finding the best parameters and the solution to the problem are attacked at the same time. The goal is to optimize the sensor coverage in a square field. The test case used was a 30 by 30 unit field with 100 sensor nodes. Each sensor node had a coverage area of 3 by 3 units. The algorithm attempts to optimize the sensor coverage in the field by moving the nodes. The results show that the control GA will provide better results when compared to a system with no parameter changes.
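
    The control-GA idea can be sketched as an outer loop whose individuals are parameter settings for an inner, problem-solving GA. The run_inner_ga stand-in below is hypothetical and would be replaced by the sensor-coverage GA described above.

```python
# Conceptual sketch of a control GA tuning a problem GA: each outer individual
# is a (crossover, selection, mutation) setting; its fitness is the quality
# reached by an inner GA run with those settings.
import random

CROSSOVER = ["one_point", "two_point", "uniform"]
SELECTION = ["best", "worst", "roulette", "inside50", "outside50"]
MUTATION = ["random", "swap"]

def run_inner_ga(params):
    # Placeholder: a real implementation would run the problem GA with these
    # settings and return the achieved coverage. Here it returns a dummy score.
    return random.random()

def control_ga(generations=10, pop_size=8):
    pop = [(random.choice(CROSSOVER), random.choice(SELECTION), random.choice(MUTATION))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=run_inner_ga, reverse=True)
        parents = scored[: pop_size // 2]
        # Recombine parameter settings from two parents and re-draw the mutation gene.
        children = [(random.choice(parents)[0], random.choice(parents)[1],
                     random.choice(MUTATION)) for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=run_inner_ga)
```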

  16. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high-dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600, when evaluating nsys > 1024 models planetary systems each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
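
    The numerical kernel being parallelized is the solution of Kepler's equation. A scalar CPU sketch of a standard Newton iteration is shown below purely to illustrate the arithmetic the GPU code repeats millions of times; the starting guess and tolerance are conventional choices, not values from the paper.

```python
# Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E,
# then convert E to a true anomaly for use in a radial-velocity model.
import math

def eccentric_anomaly(mean_anomaly, ecc, tol=1e-12, max_iter=50):
    E = mean_anomaly if ecc < 0.8 else math.pi  # standard starting guess
    for _ in range(max_iter):
        delta = (E - ecc * math.sin(E) - mean_anomaly) / (1.0 - ecc * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            break
    return E

# Example: mean anomaly 1.0 rad, eccentricity 0.3.
e = 0.3
E = eccentric_anomaly(1.0, e)
true_anomaly = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                                math.sqrt(1 - e) * math.cos(E / 2))
```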

  17. A supersecondary structure library and search algorithm for modeling loops in protein structures

    PubMed Central

    Fernandez-Fuentes, Narcis; Oliva, Baldomero; Fiser, András

    2006-01-01

    We present a fragment-search based method for predicting loop conformations in protein models. A hierarchical and multidimensional database has been set up that currently classifies 105 950 loop fragments and loop flanking secondary structures. Besides the length of the loops and types of bracing secondary structures the database is organized along four internal coordinates, a distance and three types of angles characterizing the geometry of stem regions. Candidate fragments are selected from this library by matching the length, the types of bracing secondary structures of the query and satisfying the geometrical restraints of the stems and subsequently inserted in the query protein framework where their fit is assessed by the root mean square deviation (r.m.s.d.) of stem regions and by the number of rigid body clashes with the environment. In the final step remaining candidate loops are ranked by a Z-score that combines information on sequence similarity and fit of predicted and observed ϕ/ψ main chain dihedral angle propensities. Confidence Z-score cut-offs were determined for each loop length that identify those predicted fragments that outperform a competitive ab initio method. A web server implements the method, regularly updates the fragment library and performs prediction. Predicted segments are returned, or optionally, these can be completed with side chain reconstruction and subsequently annealed in the environment of the query protein by conjugate gradient minimization. The prediction method was tested on artificially prepared search datasets where all trivial sequence similarities on the SCOP superfamily level were removed. Under these conditions it is possible to predict loops of length 4, 8 and 12 with coverage of 98, 78 and 28% with at least of 0.22, 1.38 and 2.47 Å of r.m.s.d. accuracy, respectively. In a head-to-head comparison on loops extracted from freshly deposited new protein folds the current method outperformed in a ∼5:1 ratio an

  18. A supersecondary structure library and search algorithm for modeling loops in protein structures.

    PubMed

    Fernandez-Fuentes, Narcis; Oliva, Baldomero; Fiser, András

    2006-01-01

    We present a fragment-search based method for predicting loop conformations in protein models. A hierarchical and multidimensional database has been set up that currently classifies 105,950 loop fragments and loop flanking secondary structures. Besides the length of the loops and types of bracing secondary structures the database is organized along four internal coordinates, a distance and three types of angles characterizing the geometry of stem regions. Candidate fragments are selected from this library by matching the length, the types of bracing secondary structures of the query and satisfying the geometrical restraints of the stems and subsequently inserted in the query protein framework where their fit is assessed by the root mean square deviation (r.m.s.d.) of stem regions and by the number of rigid body clashes with the environment. In the final step remaining candidate loops are ranked by a Z-score that combines information on sequence similarity and fit of predicted and observed phi/psi main chain dihedral angle propensities. Confidence Z-score cut-offs were determined for each loop length that identify those predicted fragments that outperform a competitive ab initio method. A web server implements the method, regularly updates the fragment library and performs prediction. Predicted segments are returned, or optionally, these can be completed with side chain reconstruction and subsequently annealed in the environment of the query protein by conjugate gradient minimization. The prediction method was tested on artificially prepared search datasets where all trivial sequence similarities on the SCOP superfamily level were removed. Under these conditions it is possible to predict loops of length 4, 8 and 12 with coverage of 98, 78 and 28% with at least of 0.22, 1.38 and 2.47 A of r.m.s.d. accuracy, respectively. In a head-to-head comparison on loops extracted from freshly deposited new protein folds the current method outperformed in a approximately 5

  19. Optimization of the collimation system for CSNS/RCS with the robust conjugate direction search algorithm

    NASA Astrophysics Data System (ADS)

    Ji, Hong-Fei; Jiao, Yi; Huang, Ming-Yang; Xu, Shou-Yan; Wang, Na; Wang, Sheng

    2016-09-01

    The Robust Conjugate Direction Search (RCDS) method is used to optimize the collimation system for the Rapid Cycling Synchrotron (RCS) of the China Spallation Neutron Source (CSNS). The parameters of secondary collimators are optimized for a better performance of the collimation system. To improve the efficiency of the optimization, the Objective Ring Beam Injection and Tracking (ORBIT) parallel module combined with MATLAB parallel computing is used, which can run multiple ORBIT instances simultaneously. This study presents a way to find an optimal parameter combination of the secondary collimators for a machine model in preparation for CSNS/RCS commissioning. Supported by National Natural Science Foundation of China (11475202, 11405187, 11205185) and Youth Innovation Promotion Association of Chinese Academy of Sciences (2015009)

  20. Application of artificial bee colony (ABC) algorithm in search of optimal release of Aswan High Dam

    NASA Astrophysics Data System (ADS)

    Hossain, Md S.; El-shafie, A.

    2013-04-01

    The paper presents a study on developing an optimum reservoir release policy using the ABC algorithm. The decision maker of a reservoir system always needs a guideline to operate the reservoir in an optimal way. Release curves have been developed for high, medium and low inflow categories that indicate how much water needs to be released in a month given the observed reservoir level (storage condition). The Aswan High Dam of Egypt is considered as the case study. Eighteen years of historical inflow data were used for simulation, and general system performance indices were measured. The application procedure and problem formulation of ABC are simple and can be used in optimizing reservoir systems. Using the actual historical inflows, the release policy succeeded in meeting demand for about 98% of the total time period.

  1. Design Principles of Regulatory Networks: Searching for the Molecular Algorithms of the Cell

    PubMed Central

    Lim, Wendell A.; Lee, Connie M.; Tang, Chao

    2013-01-01

    A challenge in biology is to understand how complex molecular networks in the cell execute sophisticated regulatory functions. Here we explore the idea that there are common and general principles that link network structures to biological functions, principles that constrain the design solutions that evolution can converge upon for accomplishing a given cellular task. We describe approaches for classifying networks based on abstract architectures and functions, rather than on the specific molecular components of the networks. For any common regulatory task, can we define the space of all possible molecular solutions? Such inverse approaches might ultimately allow the assembly of a design table of core molecular algorithms that could serve as a guide for building synthetic networks and modulating disease networks. PMID:23352241

  2. Software Trigger Algorithms to Search for Magnetic Monopoles with the NOvA Far Detector

    SciTech Connect

    Wang, Z.; Dukes, E.; Ehrlich, R.; Frank, M.; Group, C.; Norman, A.

    2014-01-01

    The NOvA far detector, due to its surface proximity, large size, good timing resolution, large energy dynamic range, and continuous readout, is sensitive to the detection of magnetic monopoles over a large range of velocities and masses. In order to record candidate magnetic monopole events with high efficiency we have designed a software-based trigger to make decisions based on the data recorded by the detector. The decisions must be fast, have high efficiency, and a large rejection factor for the over 100,000 cosmic rays that course through the detector every second. In this paper we briefly describe the simulation of magnetic monopoles, including the detector response, and then discuss the algorithms applied to identify magnetic monopole candidates. We also present the results of trigger efficiency and purity tests using simulated samples of magnetic monopoles with overlaid cosmic backgrounds and electronic noise.

  3. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm.

    PubMed

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by the hybrid algorithm combining particle swarm optimization (PSO) algorithm and gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it has slow convergence velocity and is easy to fall into local optimum. So in this paper, the velocity vector and position vector of GSA are adjusted by PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate to meet the online soft-sensor requirements of the real-time control in the flotation process.

  4. Feed-Forward Neural Network Soft-Sensor Modeling of Flotation Process Based on Particle Swarm Optimization and Gravitational Search Algorithm

    PubMed Central

    Wang, Jie-Sheng; Han, Shuang

    2015-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by the hybrid algorithm combining particle swarm optimization (PSO) algorithm and gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it has slow convergence velocity and is easy to fall into local optimum. So in this paper, the velocity vector and position vector of GSA are adjusted by PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate to meet the online soft-sensor requirements of the real-time control in the flotation process. PMID:26583034
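
    A hedged sketch of the PSO-adjusted GSA update described in these abstracts follows; the simplified mass and acceleration formulas and all constants are illustrative assumptions rather than the authors' exact formulation.

```python
# Simplified PSO-GSA step: the gravitational acceleration from GSA is blended
# with a PSO-style pull toward the global best to speed convergence.
import numpy as np

def gsa_acceleration(pos, fitness, g=1.0, eps=1e-9):
    """Gravitational acceleration on each agent from every other agent (minimization)."""
    worst = fitness.max()
    raw_mass = worst - fitness + eps
    mass = raw_mass / raw_mass.sum()
    acc = np.zeros_like(pos)
    for i in range(len(pos)):
        diff = pos - pos[i]
        dist = np.linalg.norm(diff, axis=1, keepdims=True) + eps
        acc[i] = (g * mass[:, None] * diff / dist).sum(axis=0)
    return acc

def psogsa_step(pos, vel, acc_gsa, gbest, w=0.6, c1=0.5, c2=1.5):
    """One velocity/position update for all agents (rows of `pos`)."""
    r1 = np.random.rand(*pos.shape)
    r2 = np.random.rand(*pos.shape)
    vel = w * vel + c1 * r1 * acc_gsa + c2 * r2 * (gbest - pos)
    return pos + vel, vel

# One illustrative step on a toy 2-D sphere function.
pos = np.random.rand(20, 2) * 10 - 5
vel = np.zeros_like(pos)
fit = (pos ** 2).sum(axis=1)
gbest = pos[fit.argmin()]
pos, vel = psogsa_step(pos, vel, gsa_acceleration(pos, fit), gbest)
```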

  5. Gravitation field algorithm and its application in gene cluster

    PubMed Central

    2010-01-01

    Background Searching for optima is one of the most challenging tasks in clustering genes from available experimental data or given functions. SA, GA, PSO and other similarly efficient global optimization methods are used by biotechnologists. All these algorithms are based on the imitation of natural phenomena. Results This paper proposes a novel searching optimization algorithm called the Gravitation Field Algorithm (GFA), which is derived from the Solar Nebular Disk Model (SNDM) of planetary formation in astronomy. GFA simulates the gravitation field and outperforms GA and SA on some multimodal function optimization problems, and it can also be used on unimodal functions. GFA clusters datasets from the Gene Expression Omnibus well. Conclusions The mathematical proof demonstrates that GFA converges to the global optimum with probability 1 under three conditions for mass functions of one independent variable. In addition to these results, the fundamental optimization concept in this paper is used to analyze how SA and GA affect the global search and the inherent defects in SA and GA. Some results and source code (in Matlab) are publicly available at http://ccst.jlu.edu.cn/CSBG/GFA. PMID:20854683

  6. Ranking inconsistencies in the assessment of digital breast tomosynthesis (DBT) reconstruction algorithms using a location-known task and a search task

    NASA Astrophysics Data System (ADS)

    He, Xin; Zeng, Rongping; Samuelson, Frank; Sahiner, Berkman

    2016-03-01

    In this work, we validated a task-based performance figure-of-merit (FOM) by investigating ranking inconsistencies due to lurking variable/factors. We applied a falsifiable search assessment theory to assessing digital breast tomosynthesis (DBT) image quality using a scanning channelized Hotelling observer (CHO) on a simulated DBT dataset. We compared the performance of five reconstruction algorithms: filter back projection (FBP), maximum likelihood (ML), simultaneous algebraic reconstruction technique (SART), total-variation regularized least square estimator (TVLS) with strong and mild regularization settings. The results showed that the location-known-exactly (LKE) detection performance was almost identical for the five reconstruction algorithms. However the search characteristic as described by effective set size (M*) and search AUC value, ranked them differently. To falsify/corroborate our evaluations on search characteristic and performance, we conducted an image-size test. This test demonstrated an agreement between theoretical predictions and empirically measured observer performance in absolute performance levels, except for the ML algorithm. We concluded that evidence corroborated our evaluations, except that for the ML algorithm where our evaluation was wrong. Further investigation of the wrong evaluation in the ML case revealed a lurking variable that affected system performance ranking in search when AUC value was used as the FOM. This further confirmed that our evaluation in its current form for the ML algorithm was indeed wrong. We also noted that the ranking inconsistencies exist even when the AUC value was used as the FOM, and the falsifiable nature of M* allowed such inconsistencies to be identified.

  7. Instrument design and optimization using genetic algorithms

    SciTech Connect

    Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter

    2006-10-15

    This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.

  8. Constraining planetary atmospheric density: application of heuristic search algorithms to aerodynamic modeling of impact ejecta trajectories

    NASA Astrophysics Data System (ADS)

    Liu, Z. Y. C.; Shirzaei, M.

    2015-12-01

    Impact craters on the terrestrial planets are typically surrounded by a continuous ejecta blanket whose initial emplacement occurs via ballistic sedimentation. Following an impact event, a significant volume of material is ejected and the falling debris surrounds the crater. Aerodynamics governs the flight paths and determines the spatial distribution of this ejecta. Thus, for planets with an atmosphere, the preserved ejecta deposit directly records the interaction between ejecta and atmosphere at the time of impact. In this study, we develop a new framework to establish links between the distribution of the ejecta, the age of the impact, and the properties of the local atmosphere. Given the radial extent of the continuous ejecta from the crater, an inverse aerodynamic modeling approach is employed to estimate the local atmospheric drag and density, as well as the lift forces, at the time of impact. Based on earlier studies, we incorporate reasonable value ranges for ejection angle, initial velocity, aerodynamic drag, and lift in the model. To solve the trajectory differential equations and obtain the best estimate of atmospheric density with its associated uncertainties, a genetic algorithm is applied. The method is validated using synthetic data sets as well as detailed maps of impact ejecta associated with five fresh Martian and two lunar impact craters, with diameters of 20-50 m and 10-20 m, respectively. The estimated air density for the Martian craters ranges from 0.014 to 0.028 kg/m3, consistent with recent surface atmospheric density measurements of 0.015-0.020 kg/m3. This consistency indicates the robustness of the presented methodology. The inversion results for the lunar craters yield air densities of 0.003-0.008 kg/m3, which suggests the inversion results are accurate to the second decimal place. This framework will be applied to older Martian craters with preserved ejecta blankets, which is expected to constrain the long-term evolution of the Martian atmosphere.
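
    To make the forward model concrete, the sketch below integrates a single ejecta trajectory under gravity and quadratic aerodynamic drag; it is the kind of model a genetic algorithm could invert for atmospheric density, and every parameter value here is an assumption for illustration.

```python
# Illustrative forward model for one ejecta particle under gravity and drag.
import math

def ejecta_range(v0, angle_deg, rho_air, mass=0.1, radius=0.02,
                 cd=1.0, g=3.71, dt=0.01):
    """Integrate a 2-D trajectory until impact; returns the horizontal range (m)."""
    area = math.pi * radius ** 2
    vx = v0 * math.cos(math.radians(angle_deg))
    vz = v0 * math.sin(math.radians(angle_deg))
    x = z = 0.0
    while z >= 0.0:
        v = math.hypot(vx, vz)
        drag = 0.5 * rho_air * cd * area * v   # drag force divided by speed
        vx += (-drag * vx / mass) * dt
        vz += (-g - drag * vz / mass) * dt
        x += vx * dt
        z += vz * dt
    return x

# Denser atmospheres shorten the ejecta range, which is what the inversion exploits.
print(ejecta_range(100.0, 45.0, rho_air=0.02), ejecta_range(100.0, 45.0, rho_air=0.0))
```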

  9. From Competence to Efficiency: A Tale of GA Progress

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.

    1996-01-01

    Genetic algorithms (GAs) - search procedures based on the mechanics of natural selection and genetics - have grown in popularity for the solution of difficult optimization problems. Concomitant with this growth has been a rising cacophony of complaint asserting that too much time must be spent by the GA practitioner diddling with codes, operators, and GA parameters; and even then, these GA Cassandras continue, the user is still unsure that the effort will meet with success. At the same time, there has been a rising interest in GA theory by a growing community - a theorocracy - of mathematicians and theoretical computer scientists, and these individuals have turned their efforts increasingly toward elegant abstract theorems and proofs that seem to the practitioner to offer little in the way of answers for GA design or practice. What both groups seem to have missed is the largely unheralded 1993 assembly of integrated, applicable theory and its experimental confirmation. This theory has done two key things. First, it has predicted that simple GAs are severely limited in the difficulty of problems they can solve, and these limitations have been confirmed experimentally. Second, it has shown the path to circumventing these limitations in nontraditional GA designs such as the fast messy GA. This talk surveys the history, methodology, and accomplishment of the 1993 applicable theory revolution. After arguing that these accomplishments open the door to universal GA competence, the paper shifts the discussion to the possibility of universal GA efficiency in the utilization of time and real estate through effective parallelization, temporal decomposition, hybridization, and relaxed function evaluation. The presentation concludes by suggesting that these research directions are quickly taking us to a golden age of adaptation.

  10. An Adaptive Niching Genetic Algorithm using a niche size equalization mechanism

    NASA Astrophysics Data System (ADS)

    Nagata, Yuichi

    Niching GAs have been widely investigated to apply genetic algorithms (GAs) to multimodal function optimization problems. In this paper, we suggest a new niching GA that attempts to form niches, each consisting of an equal number of individuals. The proposed GA can be applied also to combinatorial optimization problems by defining a distance metric in the search space. We apply the proposed GA to the job-shop scheduling problem (JSP) and demonstrate that the proposed niching method enhances the ability to maintain niches and improve the performance of GAs.

  11. A Filtered Database Search Algorithm for Endogenous Serum Protein Carbonyl Modifications in a Mouse Model of Inflammation*

    PubMed Central

    Slade, Peter G.; Williams, Michelle V.; Chiang, Alison; Iffrig, Elizabeth; Tannenbaum, Steven R.; Wishnok, John S.

    2011-01-01

    During inflammation, the resulting oxidative stress can damage surrounding host tissue, forming protein-carbonyls. The SJL mouse is an experimental animal model used to assess in vivo toxicological responses to reactive oxygen and nitrogen species from inflammation. The goals of this study were to identify the major serum proteins modified with a carbonyl functionality and to identify the types of carbonyl adducts. To select for carbonyl-modified proteins, serum proteins were reacted with an aldehyde reactive probe that biotinylated the carbonyl modification. Modified proteins were enriched by avidin affinity and identified by two-dimensional liquid chromatography tandem MS. To identify the carbonyl modification, tryptic peptides from serum proteins were subjected to avidin affinity and the enriched modified peptides were analyzed by liquid chromatography tandem MS. It was noted that the aldehyde reactive probe tag created tag-specific fragment ions and neutral losses, and these extra features in the mass spectra inhibited identification of the modified peptides by database searching. To enhance the identification of carbonyl-modified peptides, a program was written that used the tag-specific fragment ions as a fingerprint (in silico filter program) and filtered the mass spectrometry data to highlight only modified peptides. A de novo-like database search algorithm was written (biotin peptide identification program) to identify the carbonyl-modified peptides. Although written specifically for our experiments, this software can be adapted to other modification and enrichment systems. Using these routines, a number of lipid peroxidation-derived protein carbonyls and direct side-chain oxidation proteins carbonyls were identified in SJL mouse serum. PMID:21768395

  12. The rational search for selective anticancer derivatives of the peptide Trichogin GA IV: a multi-technique biophysical approach

    PubMed Central

    Dalzini, Annalisa; Bergamini, Christian; Biondi, Barbara; De Zotti, Marta; Panighel, Giacomo; Fato, Romana; Peggion, Cristina; Bortolus, Marco; Maniero, Anna Lisa

    2016-01-01

    Peptaibols are peculiar peptides produced by fungi as weapons against other microorganisms. Previous studies showed that peptaibols are promising peptide-based drugs because they act against cell membranes rather than a specific target, thus lowering the possibility of the onset of multi-drug resistance, and they possess non-coded α-amino acid residues that confer proteolytic resistance. Trichogin GA IV (TG) is a short peptaibol displaying antimicrobial and cytotoxic activity. In the present work, we studied thirteen TG analogues, adopting a multidisciplinary approach. We showed that the cytotoxicity is tuneable by single amino-acids substitutions. Many analogues maintain the same level of non-selective cytotoxicity of TG and three analogues are completely non-toxic. Two promising lead compounds, characterized by the introduction of a positively charged unnatural amino-acid in the hydrophobic face of the helix, selectively kill T67 cancer cells without affecting healthy cells. To explain the determinants of the cytotoxicity, we investigated the structural parameters of the peptides, their cell-binding properties, cell localization, and dynamics in the membrane, as well as the cell membrane composition. We show that, while cytotoxicity is governed by the fine balance between the amphipathicity and hydrophobicity, the selectivity depends also on the expression of negatively charged phospholipids on the cell surface. PMID:27039838

  13. The Sloan Digital Sky Survey-II Supernova Survey:Search Algorithm and Follow-up Observations

    SciTech Connect

    Sako, Masao; Bassett, Bruce; Becker, Andrew; Cinabro, David; DeJongh, Don Frederic; Depoy, D.L.; Doi, Mamoru; Garnavich, Peter M.; Hogan, Craig J.; Holtzman, Jon; Jha, Saurabh; Konishi, Kohki; Lampeitl, Hubert; Marriner, John; Miknaitis, Gajus; Nichol, Robert C.; Prieto, Jose Luis; Richmond, Michael W.; Schneider, Donald P.; Smith, Mathew; SubbaRao, Mark

    2007-09-14

    The Sloan Digital Sky Survey-II Supernova Survey has identified a large number of new transient sources in a 300 deg2 region along the celestial equator during its first two seasons of a three-season campaign. Multi-band (ugriz) light curves were measured for most of the sources, which include solar system objects, Galactic variable stars, active galactic nuclei, supernovae (SNe), and other astronomical transients. The imaging survey is augmented by an extensive spectroscopic follow-up program to identify SNe, measure their redshifts, and study the physical conditions of the explosions and their environment through spectroscopic diagnostics. During the survey, light curves are rapidly evaluated to provide an initial photometric type of the SNe, and a selected sample of sources are targeted for spectroscopic observations. In the first two seasons, 476 sources were selected for spectroscopic observations, of which 403 were identified as SNe. For the Type Ia SNe, the main driver for the Survey, our photometric typing and targeting efficiency is 90%. Only 6% of the photometric SN Ia candidates were spectroscopically classified as non-SN Ia instead, and the remaining 4% resulted in low signal-to-noise, unclassified spectra. This paper describes the search algorithm and the software, and the real-time processing of the SDSS imaging data. We also present the details of the supernova candidate selection procedures and strategies for follow-up spectroscopic and imaging observations of the discovered sources.

  14. Design of multiplier-less sharp transition width non-uniform filter banks using gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Bindiya T., S.; Elias, Elizabeth

    2015-01-01

    In this paper, multiplier-less near-perfect reconstruction tree-structured filter banks are proposed. Filters with sharp transition width are preferred in filter banks in order to reduce the aliasing between adjacent channels. When sharp transition width filters are designed as conventional finite impulse response filters, the order of the filters will become very high leading to increased complexity. The frequency response masking (FRM) method is known to result in linear-phase sharp transition width filters with low complexity. It is found that the proposed design method, which is based on FRM, gives better results compared to the earlier reported results, in terms of the number of multipliers when sharp transition width filter banks are needed. To further reduce the complexity and power consumption, the tree-structured filter bank is made totally multiplier-less by converting the continuous filter bank coefficients to finite precision coefficients in the signed power of two space. This may lead to performance degradation and calls for the use of a suitable optimisation technique. In this paper, gravitational search algorithm is proposed to be used in the design of the multiplier-less tree-structured uniform as well as non-uniform filter banks. This design method results in uniform and non-uniform filter banks which are simple, alias-free, linear phase and multiplier-less and have sharp transition width.

  15. Grid-based algorithm to search critical points, in the electron density, accelerated by graphics processing units.

    PubMed

    Hernández-Esparza, Raymundo; Mejía-Chica, Sol-Milena; Zapata-Escobar, Andy D; Guevara-García, Alfredo; Martínez-Melchor, Apolinar; Hernández-Pérez, Julio-M; Vargas, Rubicelia; Garza, Jorge

    2014-12-01

    Using a grid-based method to search for the critical points in the electron density, we show how to accelerate such a method with graphics processing units (GPUs). When the GPU implementation is contrasted with that used on central processing units (CPUs), we found a large difference between the time elapsed by both implementations: the smallest time is observed when GPUs are used. We tested two GPUs, one intended for video games and the other used for high-performance computing (HPC). On the CPU side, two processors were tested, one used in common personal computers and the other used for HPC, both of the latest generation. Although our parallel algorithm scales quite well on CPUs, the same implementation on GPUs runs around 10× faster than 16 CPUs, with any of the tested GPUs and CPUs. We have found that one GPU designed for video games can be used without any problem for our application, delivering remarkable performance; in fact, this GPU competes against the HPC GPU, in particular when single precision is used. PMID:25345784

  16. TESS: a geometric hashing algorithm for deriving 3D coordinate templates for searching structural databases. Application to enzyme active sites.

    PubMed

    Wallace, A C; Borkakoti, N; Thornton, J M

    1997-11-01

    It is well established that sequence templates such as those in the PROSITE and PRINTS databases are powerful tools for predicting the biological function and tertiary structure for newly derived protein sequences. The number of X-ray and NMR protein structures is increasing rapidly and it is apparent that a 3D equivalent of the sequence templates is needed. Here, we describe an algorithm called TESS that automatically derives 3D templates from structures deposited in the Brookhaven Protein Data Bank. While a new sequence can be searched for sequence patterns, a new structure can be scanned against these 3D templates to identify functional sites. As examples, 3D templates are derived for enzymes with an O-His-O "catalytic triad" and for the ribonucleases and lysozymes. When these 3D templates are applied to a large data set of nonidentical proteins, several interesting hits are located. This suggests that the development of a 3D template database may help to identify the function of new protein structures, if unknown, as well as to design proteins with specific functions.

  17. Realization of a quantum gate using gravitational search algorithm by perturbing three-dimensional harmonic oscillator with an electromagnetic field

    NASA Astrophysics Data System (ADS)

    Sharma, Navneet; Rawat, Tarun Kumar; Parthasarathy, Harish; Gautam, Kumar

    2016-06-01

    The aim of this paper is to design a current source obtained as a representation of p information symbols {I_k} so that the electromagnetic (EM) field generated interacts with a quantum atomic system, producing after a fixed duration T a unitary gate U(T) that is as close as possible to a given unitary gate U_g. The design procedure involves calculating the EM field produced by {I_k} and hence the perturbing Hamiltonian produced by {I_k}, finally resulting in the evolution operator produced by {I_k} up to cubic order based on the Dyson series expansion. The gate error energy is thus obtained as a cubic polynomial in {I_k}, which is minimized using the gravitational search algorithm. The signal-to-noise ratio (SNR) in the designed gate is higher than that obtained using a quadratic Dyson series expansion. The SNR is calculated as the ratio of the Frobenius norm square of the desired gate to that of the desired gate error.

  18. Global Warming: Predicting OPEC Carbon Dioxide Emissions from Petroleum Consumption Using Neural Network and Hybrid Cuckoo Search Algorithm

    PubMed Central

    Chiroma, Haruna; Abdul-kareem, Sameem; Khan, Abdullah; Nawi, Nazri Mohd.; Gital, Abdulsalam Ya’u; Shuib, Liyana; Abubakar, Adamu I.; Rahman, Muhammad Zubair; Herawan, Tutut

    2015-01-01

    Background Global warming is attracting attention from policy makers due to its impacts such as floods, extreme weather, increases in temperature by 0.7°C, heat waves, storms, etc. These disasters result in loss of human life and billions of dollars in property. Global warming is believed to be caused by the emissions of greenhouse gases due to human activities including the emissions of carbon dioxide (CO2) from petroleum consumption. Limitations of the previous methods of predicting CO2 emissions and lack of work on the prediction of the Organization of the Petroleum Exporting Countries (OPEC) CO2 emissions from petroleum consumption have motivated this research. Methods/Findings The OPEC CO2 emissions data were collected from the Energy Information Administration. Artificial Neural Network (ANN) adaptability and performance motivated its choice for this study. To improve effectiveness of the ANN, the cuckoo search algorithm was hybridised with accelerated particle swarm optimisation for training the ANN to build a model for the prediction of OPEC CO2 emissions. The proposed model predicts OPEC CO2 emissions for 3, 6, 9, 12 and 16 years with an improved accuracy and speed over the state-of-the-art methods. Conclusion An accurate prediction of OPEC CO2 emissions can serve as a reference point for propagating the reorganisation of economic development in OPEC member countries with the view of reducing CO2 emissions to Kyoto benchmarks—hence, reducing global warming. The policy implications are discussed in the paper. PMID:26305483

  19. Body-centered cubic and face-centered cubic meshes for optimization of turbulent flow systems based on pattern search algorithms

    NASA Astrophysics Data System (ADS)

    Bewley, Thomas R.

    2004-11-01

    Multidimensional optimization problems are most easily solved when derivative (that is, gradient, and sometimes Hessian) information can be computed or approximated. However, when the function to be minimized is inherently noisy, such as a statistical measure of a computer simulations of a turbulent flow, it is often not feasible to perform derivative-based optimizations. In such cases, derivative-free (i.e., function-based) optimization strategies are preferred. In such optimizations, a mesh is often used to coordinate the search. A general class of such optimization algorithms for which convergence proofs are available is called generalized pattern search (GPS) algorithms, in which a positive basis is used to define the local pattern of test points at each poll step of the iterative search. This positive basis is selected from many possible choices based on the several vectors from the current candidate minimum point (CMP) to the neighboring points on the N-dimensional mesh being used to coordinate the search. However, the positive basis so selected is often distributed nonuniformly in parameter space, based on points other than the nearest neighbors to the CMP, or requires more new function evaluations than necessary at any given poll step. Such shortcomings can significantly reduce the polling efficiency, thereby slowing convergence. The present paper proposes two new meshing strategies for GPS algorithms that are designed to mitigate these shortcomings.
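
    For readers unfamiliar with GPS methods, the sketch below shows a poll step using the common plus/minus coordinate-direction basis; the paper's proposal replaces this pattern with meshes derived from body-centered and face-centered cubic lattices, which are not reproduced here.

```python
# Compact sketch of the poll step in a generalized pattern search (GPS):
# evaluate the objective at mesh points around the current candidate minimum,
# move if any point improves, otherwise shrink the mesh.
import numpy as np

def pattern_search(f, x0, step=1.0, shrink=0.5, tol=1e-6, max_iter=500):
    x = np.asarray(x0, dtype=float)
    n = len(x)
    directions = np.vstack([np.eye(n), -np.eye(n)])  # +/- coordinate directions
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for d in directions:
            trial = x + step * d
            ft = f(trial)
            if ft < fx:               # successful poll: accept and keep the step
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= shrink            # unsuccessful poll: refine the mesh
            if step < tol:
                break
    return x, fx

# Example on a noisy quadratic, the kind of function where gradients are unreliable.
noisy = lambda x: float(np.sum((x - 1.0) ** 2) + 1e-3 * np.random.randn())
x_opt, f_opt = pattern_search(noisy, [0.0, 0.0])
```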

  20. An efficient fitness function in genetic algorithm classifier for Landuse recognition on satellite images.

    PubMed

    Yang, Ming-Der; Yang, Yeh-Fen; Su, Tung-Ching; Huang, Kai-Siang

    2014-01-01

    Genetic algorithm (GA) is designed to search the optimal solution via weeding out the worse gene strings based on a fitness function. GA had demonstrated effectiveness in solving the problems of unsupervised image classification, one of the optimization problems in a large domain. Many indices or hybrid algorithms as a fitness function in a GA classifier are built to improve the classification accuracy. This paper proposes a new index, DBFCMI, by integrating two common indices, DBI and FCMI, in a GA classifier to improve the accuracy and robustness of classification. For the purpose of testing and verifying DBFCMI, well-known indices such as DBI, FCMI, and PASI are employed as well for comparison. A SPOT-5 satellite image in a partial watershed of Shihmen reservoir is adopted as the examined material for landuse classification. As a result, DBFCMI acquires higher overall accuracy and robustness than the rest indices in unsupervised classification.

  1. An Efficient Fitness Function in Genetic Algorithm Classifier for Landuse Recognition on Satellite Images

    PubMed Central

    Yang, Yeh-Fen; Su, Tung-Ching; Huang, Kai-Siang

    2014-01-01

    Genetic algorithm (GA) is designed to search the optimal solution via weeding out the worse gene strings based on a fitness function. GA had demonstrated effectiveness in solving the problems of unsupervised image classification, one of the optimization problems in a large domain. Many indices or hybrid algorithms as a fitness function in a GA classifier are built to improve the classification accuracy. This paper proposes a new index, DBFCMI, by integrating two common indices, DBI and FCMI, in a GA classifier to improve the accuracy and robustness of classification. For the purpose of testing and verifying DBFCMI, well-known indices such as DBI, FCMI, and PASI are employed as well for comparison. A SPOT-5 satellite image in a partial watershed of Shihmen reservoir is adopted as the examined material for landuse classification. As a result, DBFCMI acquires higher overall accuracy and robustness than the rest indices in unsupervised classification. PMID:24701151

  2. Deconstructing Nowicki and Smutnicki's i-TSAB tabu search algorithm for the job-shop scheduling problem.

    SciTech Connect

    Whitley, L. Darrell; Watson, Jean-Paul; Howe, Adele E.

    2005-06-01

    Over the last decade and a half, tabu search algorithms for machine scheduling have gained a near-mythical reputation by consistently equaling or establishing state-of-the-art performance levels on a range of academic and real-world problems. Yet, despite these successes, remarkably little research has been devoted to developing an understanding of why tabu search is so effective on this problem class. In this paper, we report results that provide significant progress in this direction. We consider Nowicki and Smutnicki's i-TSAB tabu search algorithm, which represents the current state-of-the-art for the makespan-minimization form of the classical jobshop scheduling problem. Via a series of controlled experiments, we identify those components of i-TSAB that enable it to achieve state-of-the-art performance levels. In doing so, we expose a number of misconceptions regarding the behavior and/or benefits of tabu search and other local search metaheuristics for the job-shop problem. Our results also serve to focus future research, by identifying those specific directions that are most likely to yield further improvements in performance.

  3. Multidisciplinary design optimization using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1994-01-01

    Multidisciplinary design optimization (MDO) is an important step in the conceptual design and evaluation of launch vehicles since it can have a significant impact on performance and life cycle cost. The objective is to search the system design space to determine values of design variables that optimize the performance characteristic subject to system constraints. Gradient-based optimization routines have been used extensively for aerospace design optimization. However, one limitation of gradient-based optimizers is their need for gradient information; design problems that include discrete variables therefore cannot be studied. Such problems are common in launch vehicle design. For example, the number of engines and the material choices must be integers or assume only a few discrete values. In this study, genetic algorithms are investigated as an approach to MDO problems involving discrete variables and discontinuous domains. Optimization by genetic algorithms (GAs) uses a search procedure that is fundamentally different from gradient-based methods. Genetic algorithms seek to find good solutions in an efficient and timely manner rather than the single best solution. GAs are designed to mimic evolutionary selection. A population of candidate designs is evaluated at each iteration, and each individual's probability of reproduction (existence in the next generation) depends on its fitness value (related to the value of the objective function). Progress toward the optimum is achieved by the crossover and mutation operations. GAs are attractive since they use only objective function values in the search process, so gradient calculations are avoided; hence, GAs are able to deal with discrete variables. Studies report success in the use of GAs for aircraft design optimization, trajectory analysis, space structure design and control systems design. In these studies reliable convergence was achieved, but the number of function evaluations was large compared
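
    As a rough, hypothetical illustration of the GA mechanics summarized above (not the authors' MDO code), a minimal genetic algorithm over a discrete design space in Python might look like the following sketch; the design variables, toy fitness function and parameter values are all assumptions made for illustration.

        import random

        # Hypothetical discrete design space: number of engines (1-8) and a material index (0-3).
        ENGINES = list(range(1, 9))
        MATERIALS = list(range(4))

        def fitness(design):
            """Toy objective: higher is better. A real MDO study would call
            performance and cost models here."""
            engines, material = design
            return -((engines - 5) ** 2) - abs(material - 2)

        def select(population):
            """Tournament selection: reproduction probability grows with fitness."""
            a, b = random.sample(population, 2)
            return a if fitness(a) > fitness(b) else b

        def crossover(p1, p2):
            return (p1[0], p2[1]) if random.random() < 0.5 else (p2[0], p1[1])

        def mutate(design, rate=0.1):
            engines, material = design
            if random.random() < rate:
                engines = random.choice(ENGINES)
            if random.random() < rate:
                material = random.choice(MATERIALS)
            return (engines, material)

        population = [(random.choice(ENGINES), random.choice(MATERIALS)) for _ in range(20)]
        for generation in range(50):
            population = [mutate(crossover(select(population), select(population)))
                          for _ in range(len(population))]
        print(max(population, key=fitness))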

  4. Using and comparing metaheuristic algorithms for optimizing bidding strategy viewpoint of profit maximization of generators

    NASA Astrophysics Data System (ADS)

    Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan

    2015-12-01

    With the formation of competitive electricity markets around the world, optimization of bidding strategies has become one of the main topics in studies related to market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms to simulate how generators bid in the spot electricity market from the viewpoint of profit maximization, given the other generators' strategies: genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing genetic algorithm (HSAGA), and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is applied to a peak hour of Tehran's wholesale spot market in 2012. The simulation results show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computational stability, and the Nash equilibria computed by GA show less spread from run to run than those of the other algorithms.

  5. Features extraction of flotation froth images and BP neural network soft-sensor model of concentrate grade optimized by shuffled cuckoo searching algorithm.

    PubMed

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    For meeting the forecasting target of key technology indicators in the flotation process, a BP neural network soft-sensor model based on features extraction of flotation froth images and optimized by shuffled cuckoo search algorithm is proposed. Based on the digital image processing technique, the color features in HSI color space, the visual features based on the gray level cooccurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and learning time of BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization results and prediction accuracy. PMID:25133210
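
    The shuffled cuckoo search used to tune the BP network is not detailed in the abstract; as a rough sketch of the basic cuckoo-search idea it builds on (Lévy-flight style steps plus abandonment of the worst nests), applied to a toy objective rather than network weights, one might write the following, with every parameter value an illustrative assumption.

        import random

        def objective(x):
            """Toy objective to minimize; a soft-sensor application would instead
            return the prediction error of the BP network for weights x."""
            return sum(xi ** 2 for xi in x)

        def levy_step(scale=0.01):
            # Simple heavy-tailed step (a stand-in for a full Levy-flight draw).
            return scale * random.gauss(0, 1) / (abs(random.gauss(0, 1)) ** 0.5 + 1e-12)

        dim, n_nests, pa = 3, 15, 0.25
        nests = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
        for _ in range(200):
            # Generate a cuckoo by a Levy flight and replace a random nest if better.
            i = random.randrange(n_nests)
            cuckoo = [xi + levy_step() for xi in nests[i]]
            j = random.randrange(n_nests)
            if objective(cuckoo) < objective(nests[j]):
                nests[j] = cuckoo
            # Abandon a fraction pa of the worst nests and rebuild them randomly.
            nests.sort(key=objective)
            for k in range(int(pa * n_nests)):
                nests[-(k + 1)] = [random.uniform(-5, 5) for _ in range(dim)]
        print(min(nests, key=objective))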

  6. Features extraction of flotation froth images and BP neural network soft-sensor model of concentrate grade optimized by shuffled cuckoo searching algorithm.

    PubMed

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    For meeting the forecasting target of key technology indicators in the flotation process, a BP neural network soft-sensor model based on features extraction of flotation froth images and optimized by shuffled cuckoo search algorithm is proposed. Based on the digital image processing technique, the color features in HSI color space, the visual features based on the gray level cooccurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and learning time of BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization results and prediction accuracy.

  7. Reliability-based design optimization of reinforced concrete structures including soil-structure interaction using a discrete gravitational search algorithm and a proposed metamodel

    NASA Astrophysics Data System (ADS)

    Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.

    2013-10-01

    A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) to optimize the structural cost under deterministic and probabilistic constraints. The Monte Carlo simulation (MCS) method is considered the most reliable method for estimating the reliability probabilities. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) and a wavelet kernel function, which is called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.

  8. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    For meeting the forecasting target of key technology indicators in the flotation process, a BP neural network soft-sensor model based on features extraction of flotation froth images and optimized by shuffled cuckoo search algorithm is proposed. Based on the digital image processing technique, the color features in HSI color space, the visual features based on the gray level cooccurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and learning time of BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization results and prediction accuracy. PMID:25133210

  9. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.

  10. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    PubMed Central

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730

  11. Improved Power System Stability Using Backtracking Search Algorithm for Coordination Design of PSS and TCSC Damping Controller.

    PubMed

    Niamul Islam, Naz; Hannan, M A; Mohamed, Azah; Shareef, Hussain

    2016-01-01

    Power system oscillation is a serious threat to the stability of multimachine power systems. The coordinated control of power system stabilizers (PSS) and thyristor-controlled series compensation (TCSC) damping controllers is a commonly used technique to provide the required damping over different modes of growing oscillations. However, their coordinated design is a complex multimodal optimization problem that is very hard to solve using traditional tuning techniques. In addition, several limitations of traditionally used techniques prevent the optimum design of coordinated controllers. In this paper, an alternate technique for robust damping over oscillation is presented using the backtracking search algorithm (BSA). A 5-area 16-machine benchmark power system is considered to evaluate the design efficiency. The complete design process is conducted in a linear time-invariant (LTI) model of a power system. It includes the design formulation into a multi-objective function from the system eigenvalues. Later on, nonlinear time-domain simulations are used to compare the damping performances for different local and inter-area modes of power system oscillations. The performance of the BSA technique is compared against that of the popular particle swarm optimization (PSO) for coordinated design efficiency. Damping performances using different design techniques are compared in terms of settling time and overshoot of oscillations. The results obtained verify that the BSA-based design improves the system stability significantly. The stability of the multimachine power system is improved by up to 74.47% and 79.93% for an inter-area mode and a local mode of oscillation, respectively. Thus, the proposed technique for coordinated design has great potential to improve power system stability and to maintain its secure operation. PMID:26745265

  12. Improved Power System Stability Using Backtracking Search Algorithm for Coordination Design of PSS and TCSC Damping Controller

    PubMed Central

    Niamul Islam, Naz; Hannan, M. A.; Mohamed, Azah; Shareef, Hussain

    2016-01-01

    Power system oscillation is a serious threat to the stability of multimachine power systems. The coordinated control of power system stabilizers (PSS) and thyristor-controlled series compensation (TCSC) damping controllers is a commonly used technique to provide the required damping over different modes of growing oscillations. However, their coordinated design is a complex multimodal optimization problem that is very hard to solve using traditional tuning techniques. In addition, several limitations of traditionally used techniques prevent the optimum design of coordinated controllers. In this paper, an alternate technique for robust damping over oscillation is presented using the backtracking search algorithm (BSA). A 5-area 16-machine benchmark power system is considered to evaluate the design efficiency. The complete design process is conducted in a linear time-invariant (LTI) model of a power system. It includes the design formulation into a multi-objective function from the system eigenvalues. Later on, nonlinear time-domain simulations are used to compare the damping performances for different local and inter-area modes of power system oscillations. The performance of the BSA technique is compared against that of the popular particle swarm optimization (PSO) for coordinated design efficiency. Damping performances using different design techniques are compared in terms of settling time and overshoot of oscillations. The results obtained verify that the BSA-based design improves the system stability significantly. The stability of the multimachine power system is improved by up to 74.47% and 79.93% for an inter-area mode and a local mode of oscillation, respectively. Thus, the proposed technique for coordinated design has great potential to improve power system stability and to maintain its secure operation. PMID:26745265

  13. A non-device-specific approach to display characterization based on linear, nonlinear, and hybrid search algorithms.

    PubMed

    Ban, Hiroshi; Yamamoto, Hiroki

    2013-01-01

    In almost all of the recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization called "gamma correction" and the following linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, the standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee as to whether the method is applicable to the other types of display devices such as liquid crystal display and digital light processing. We therefore tested the applicability of the standard method to these kinds of new devices and found that the standard method was not valid for these new devices. To overcome this problem, we provide several novel approaches for vision experiments to characterize display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches never assume any internal models of display devices and will therefore be applicable to any display type. The evaluations and comparisons of chromaticity estimation accuracies based on these new methods with those of the standard procedure proved that our proposed methods largely improved the calibration efficiencies for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now available publicly for free.

  14. A non-device-specific approach to display characterization based on linear, nonlinear, and hybrid search algorithms.

    PubMed

    Ban, Hiroshi; Yamamoto, Hiroki

    2013-01-01

    In almost all of the recent vision experiments, stimuli are controlled via computers and presented on display devices such as cathode ray tubes (CRTs). Display characterization is a necessary procedure for such computer-aided vision experiments. The standard display characterization called "gamma correction" and the following linear color transformation procedure are established for CRT displays and widely used in the current vision science field. However, the standard two-step procedure is based on the internal model of CRT display devices, and there is no guarantee as to whether the method is applicable to the other types of display devices such as liquid crystal display and digital light processing. We therefore tested the applicability of the standard method to these kinds of new devices and found that the standard method was not valid for these new devices. To overcome this problem, we provide several novel approaches for vision experiments to characterize display devices, based on linear, nonlinear, and hybrid search algorithms. These approaches never assume any internal models of display devices and will therefore be applicable to any display type. The evaluations and comparisons of chromaticity estimation accuracies based on these new methods with those of the standard procedure proved that our proposed methods largely improved the calibration efficiencies for non-CRT devices. Our proposed methods, together with the standard one, have been implemented in a MATLAB-based integrated graphical user interface software named Mcalibrator2. This software can enhance the accuracy of vision experiments and enable more efficient display characterization procedures. The software is now available publicly for free. PMID:23729771

  15. Improved ant algorithms for software testing cases generation.

    PubMed

    Yang, Shunkun; Man, Tianlong; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software test case generation is a very popular topic in software testing engineering. However, the traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO) that combines all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations.
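
    The improved update rules themselves are not reproduced in the abstract; as a generic illustration of the two pheromone bookkeeping steps the paper modifies (a local update applied as ants traverse edges and a global update that reinforces the best path), a sketch could look like this, with the toy graph, evaporation rate and deposit constant chosen arbitrarily.

        # Generic ant-colony pheromone bookkeeping on a small directed graph.
        edges = {("A", "B"): 1.0, ("B", "C"): 1.0, ("A", "C"): 1.0}   # initial pheromone
        rho, Q = 0.1, 1.0                       # evaporation rate and deposit constant

        def local_update(edge):
            """Applied each time an ant traverses an edge: mild evaporation keeps
            early pheromone from dominating the search."""
            edges[edge] = (1 - rho) * edges[edge] + rho * 0.1

        def global_update(best_path, best_cost):
            """Applied once per iteration: evaporate everywhere, then reinforce the
            edges of the best path in proportion to its quality."""
            for e in edges:
                edges[e] *= (1 - rho)
            for e in best_path:
                edges[e] += Q / best_cost

        local_update(("A", "B"))
        global_update([("A", "B"), ("B", "C")], best_cost=2.0)
        print(edges)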

  16. Improved Ant Algorithms for Software Testing Cases Generation

    PubMed Central

    Yang, Shunkun; Xu, Jiaqi

    2014-01-01

    Ant colony optimization (ACO) for software test case generation is a very popular topic in software testing engineering. However, the traditional ACO has flaws: early-search pheromone is relatively scarce, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO variants for software test case generation: an improved local pheromone update strategy for ant colony optimization, an improved pheromone volatilization coefficient for ant colony optimization (IPVACO), and an improved global path pheromone update strategy for ant colony optimization (IGPACO). Finally, we put forward a comprehensively improved ant colony optimization (ACIACO) that combines all three methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391

  17. Adding energy minimization strategy to peptide-design algorithm enables better search for RNA-binding peptides: Redesigned λ N peptide binds boxB RNA.

    PubMed

    Xiao, Xingqing; Hung, Michelle E; Leonard, Joshua N; Hall, Carol K

    2016-10-15

    Our previously developed peptide-design algorithm was improved by adding an energy minimization strategy which allows the amino acid sidechains to move in a broad configuration space during sequence evolution. In this work, the new algorithm was used to generate a library of 21-mer peptides which could substitute for λ N peptide in binding to boxB RNA. Six potential peptides were obtained from the algorithm, all of which exhibited good binding capability with boxB RNA. Atomistic molecular dynamics simulations were then conducted to examine the ability of the λ N peptide and three best evolved peptides, viz. Pept01, Pept26, and Pept28, to bind to boxB RNA. Simulation results demonstrated that our evolved peptides are better at binding to boxB RNA than the λ N peptide. Sequence searches using the old (without energy minimization strategy) and new (with energy minimization strategy) algorithms confirm that the new algorithm is more effective at finding good RNA-binding peptides than the old algorithm. © 2016 Wiley Periodicals, Inc.

  18. BMI optimization by using parallel UNDX real-coded genetic algorithm with Beowulf cluster

    NASA Astrophysics Data System (ADS)

    Handa, Masaya; Kawanishi, Michihiro; Kanki, Hiroshi

    2007-12-01

    This paper deals with a global optimization algorithm for Bilinear Matrix Inequalities (BMIs) based on the Unimodal Normal Distribution Crossover (UNDX) GA. First, by analyzing the structure of BMIs, the existence of typical difficult structures is confirmed. Then, to improve the performance of the algorithm, based on the results of this structural analysis and on characteristic properties of BMIs, we propose an algorithm that uses a primary search direction obtained from a relaxed Linear Matrix Inequality (LMI) convex estimation. Moreover, within these algorithms we propose two types of evaluation methods for GA individuals based on LMI calculations that further exploit the characteristic properties of BMIs. In addition, to reduce computational time, we propose a parallelization of the real-coded GA (RCGA) using a Master-Worker paradigm with a cluster computing technique.

  19. OPC recipe optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Asthana, Abhishek; Wilkinson, Bill; Power, Dave

    2016-03-01

    Optimization of OPC recipes is not trivial due to multiple parameters that need tuning and their correlation. Usually, no standard methodologies exist for choosing the initial recipe settings, and in the keyword development phase, parameters are chosen either based on previous learning, vendor recommendations, or to resolve specific problems on particular special constructs. Such approaches fail to holistically quantify the effects of parameters on other or possible new designs, and to an extent are based on the keyword developer's intuition. In addition, when a quick fix is needed for a new design, numerous customization statements are added to the recipe, which makes it more complex. The present work demonstrates the application of the Genetic Algorithm (GA) technique for optimizing OPC recipes. GA is a search technique that mimics Darwinian natural selection and has applications in various science and engineering disciplines. In this case, the GA search heuristic is applied to two problems: (a) an overall OPC recipe optimization with respect to selected parameters and (b) application of GA to improve printing and via coverage at line-end geometries. As will be demonstrated, the optimized recipe significantly reduced the number of ORC violations for case (a). For case (b), line ends for various features showed significant improvement in printing and via filling.

  20. A hybrid, auto-adaptive and rule-based multi-agent approach using evolutionary algorithms for improved searching

    NASA Astrophysics Data System (ADS)

    Izquierdo, Joaquín; Montalvo, Idel; Campbell, Enrique; Pérez-García, Rafael

    2016-08-01

    Selecting the most appropriate heuristic for solving a specific problem is not easy, for many reasons. This article focuses on one of these reasons: traditionally, the solution search process has operated in a given manner regardless of the specific problem being solved, and the process has been the same regardless of the size, complexity and domain of the problem. To cope with this situation, search processes should mould the search into areas of the search space that are meaningful for the problem. This article builds on previous work in the development of a multi-agent paradigm using techniques derived from knowledge discovery (data-mining techniques) on databases of so-far visited solutions. The aim is to improve the search mechanisms, increase computational efficiency and use rules to enrich the formulation of optimization problems, while reducing the search space and catering to realistic problems.

  1. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; An Iterative Decoding Algorithm for Linear Block Codes Based on a Low-Weight Trellis Search

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.

  2. Multi-frequency based location search algorithm of small electromagnetic inhomogeneities embedded in two-layered medium

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang; Park, Taehoon

    2013-07-01

    In this paper, we consider the problem of finding the locations of electromagnetic inhomogeneities completely embedded in a homogeneous two-layered medium. For this purpose, we present a filter function operating at several frequencies and design an algorithm for finding the locations of such inhomogeneities. It is based on the fact that the collected Multi-Static Response (MSR) matrix can be modeled via a rigorous asymptotic expansion formula of the scattering amplitude due to the presence of such inhomogeneities. To show its effectiveness, we compare the proposed algorithm with the traditional MUltiple SIgnal Classification (MUSIC) algorithm and Kirchhoff migration. Various numerical results demonstrate that the proposed algorithm is robust with respect to random noise and yields more accurate locations than the MUSIC algorithm and Kirchhoff migration.

  3. Scheduling with genetic algorithms

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.

    1994-01-01

    In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GA's) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime, or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs. For a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focussed for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling and fixture (or, more generally, resource) requirements.
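
    As a small, hypothetical illustration of the kind of fitness evaluation such a precedence-constrained scheduler needs (not the PPS/POIC implementation), a candidate job order can be scored by its total flowtime while heavily penalizing any violation of job precedence; the jobs, durations and precedence pairs below are invented.

        # Hypothetical jobs with durations and precedence constraints.
        durations = {"a": 3, "b": 2, "c": 4, "d": 1}
        precedes = [("a", "c"), ("b", "d")]        # 'a' must finish before 'c', etc.

        def fitness(order):
            """Lower is better: total flowtime plus a large penalty for each
            job scheduled before one of its prerequisites."""
            position = {job: i for i, job in enumerate(order)}
            penalty = sum(1000 for before, after in precedes
                          if position[before] > position[after])
            finish, flowtime = 0, 0
            for job in order:
                finish += durations[job]
                flowtime += finish
            return flowtime + penalty

        print(fitness(["a", "b", "c", "d"]))   # feasible order
        print(fitness(["c", "a", "b", "d"]))   # violates a -> c, heavily penalized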

  4. Improved interpretation of satellite altimeter data using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Messa, Kenneth; Lybanon, Matthew

    1992-01-01

    Genetic algorithms (GA) are optimization techniques that are based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves in simulation of Darwin's 'survival of the fittest'. GA's have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves the representation of the ocean surface model as a string of parameters or coefficients from the model. The GA searches in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.

  5. Double global optimum genetic algorithm-particle swarm optimization-based welding robot path planning

    NASA Astrophysics Data System (ADS)

    Wang, Xuewu; Shi, Yingpan; Ding, Dongyan; Gu, Xingsheng

    2016-02-01

    Spot-welding robots have a wide range of applications in manufacturing industries. There are usually many weld joints in a welding task, and a reasonable welding path to traverse these weld joints has a significant impact on welding efficiency. Traditional manual path planning techniques can handle a few weld joints effectively, but when the number of weld joints is large, it is difficult to obtain the optimal path. The traditional manual path planning method is also time consuming and inefficient, and cannot guarantee optimality. Double global optimum genetic algorithm-particle swarm optimization (GA-PSO) based on the GA and PSO algorithms is proposed to solve the welding robot path planning problem, where the shortest collision-free paths are used as the criteria to optimize the welding path. Besides algorithm effectiveness analysis and verification, the simulation results indicate that the algorithm has strong searching ability and practicality, and is suitable for welding robot path planning.

  6. Phase Reconstruction from FROG Using Genetic Algorithms [Frequency-Resolved Optical Gating]

    SciTech Connect

    Omenetto, F.G.; Nicholson, J.W.; Funk, D.J.; Taylor, A.J.

    1999-04-12

    The authors describe a new technique for obtaining the phase and electric field from FROG measurements using genetic algorithms. Frequency-Resolved Optical Gating (FROG) has gained prominence as a technique for characterizing ultrashort pulses. FROG consists of a spectrally resolved autocorrelation of the pulse to be measured. Typically a combination of iterative algorithms is used, applying constraints from experimental data, and alternating between the time and frequency domain, in order to retrieve an optical pulse. The authors have developed a new approach to retrieving the intensity and phase from FROG data using a genetic algorithm (GA). A GA is a general parallel search technique that operates on a population of potential solutions simultaneously. Operators in a genetic algorithm, such as crossover, selection, and mutation are based on ideas taken from evolution.

  7. A hybrid genetic—instance based learning algorithm for CE-QUAL-W2 calibration

    NASA Astrophysics Data System (ADS)

    Ostfeld, Avi; Salomons, Shani

    2005-08-01

    This paper presents a calibration model for CE-QUAL-W2. CE-QUAL-W2 is a two-dimensional (2D) longitudinal/vertical hydrodynamic and water quality model for surface water bodies, modeling eutrophication processes such as temperature-nutrient-algae-dissolved oxygen-organic matter and sediment relationships. The proposed methodology is a combination of a 'hurdle race' and a hybrid Genetic-k-Nearest Neighbor algorithm (GA-kNN). The 'hurdle race' is formulated for accepting-rejecting a proposed set of parameters during a CE-QUAL-W2 simulation; the k-Nearest Neighbor algorithm (kNN)—for approximating the objective function response surface; and the Genetic Algorithm (GA)—for linking both. The proposed methodology overcomes the high, non-applicable, computational efforts required if a conventional calibration search technique was used, while retaining the quality of the final calibration results. Base runs and sensitivity analysis are demonstrated on two example applications: a synthetic hypothetical example calibrated for temperature, serving for tuning the GA-kNN parameters; and the Lower Columbia Slough case study in Oregon US calibrated for temperature and dissolved oxygen. The GA-kNN algorithm was found to be robust and reliable, producing similar results to those of a pure GA, while reducing running times and computational efforts significantly, and adding additional insights and flexibilities to the calibration process.

  8. A comparative reference study for the validation of HLA-matching algorithms in the search for allogeneic hematopoietic stem cell donors and cord blood units.

    PubMed

    Bochtler, W; Gragert, L; Patel, Z I; Robinson, J; Steiner, D; Hofmann, J A; Pingel, J; Baouz, A; Melis, A; Schneider, J; Eberhard, H-P; Oudshoorn, M; Marsh, S G E; Maiers, M; Müller, C R

    2016-06-01

    The accuracy of human leukocyte antigen (HLA)-matching algorithms is a prerequisite for the correct and efficient identification of optimal unrelated donors for patients requiring hematopoietic stem cell transplantation. The goal of this World Marrow Donor Association study was to validate established matching algorithms from different international donor registries by challenging them with simulated input data and subsequently comparing the output. This experiment addressed three specific aspects of HLA matching using different data sets for tasks of increasing complexity. The first two tasks targeted the traditional matching approach identifying discrepancies between patient and donor HLA genotypes by counting antigen and allele differences. Contemporary matching procedures predicting the probability for HLA identity using haplotype frequencies were addressed by the third task. In each task, the identified disparities between the results of the participating computer programs were analyzed, classified and quantified. This study led to a deep understanding of the algorithms participating and finally produced virtually identical results. The unresolved discrepancies total to less than 1%, 4% and 2% for the three tasks and are mostly because of individual decisions in the design of the programs. Based on these findings, reference results for the three input data sets were compiled that can be used to validate future matching algorithms and thus improve the quality of the global donor search process. PMID:27219013

  9. Optimization of filtering criterion for SEQUEST database searching to improve proteome coverage in shotgun proteomics

    PubMed Central

    Jiang, Xinning; Jiang, Xiaogang; Han, Guanghui; Ye, Mingliang; Zou, Hanfa

    2007-01-01

    Background: In proteomic analysis, MS/MS spectra acquired by a mass spectrometer are assigned to peptides by database searching algorithms such as SEQUEST. The assignments of peptides to MS/MS spectra by the SEQUEST searching algorithm are defined by several scores including Xcorr, ΔCn, Sp, Rsp, matched ion count and so on. A filtering criterion using several of the above scores is used to isolate correct identifications from random assignments. However, the filtering criterion has not been well optimized to date. Results: In this study, we implemented a machine learning approach known as the predictive genetic algorithm (GA) for the optimization of filtering criteria to maximize the number of identified peptides at a fixed false-discovery rate (FDR) for SEQUEST database searching. As the FDR was directly determined by the decoy database search scheme, the GA-based optimization approach did not require any prior knowledge of the characteristics of the data set, which represents a significant advantage over statistical approaches such as PeptideProphet. Compared with PeptideProphet, the GA-based approach can achieve similar performance in distinguishing true from false assignments with only 1/10 of the processing time. Moreover, the GA-based approach can easily be extended to process other database search results as it does not rely on any assumption about the data. Conclusion: Our results indicated that filtering criteria should be optimized individually for different samples. The newly developed software using GA provides a convenient and fast way to create tailored optimal criteria for different proteome samples to improve proteome coverage. PMID:17761002
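
    The key point above is that the GA's objective, the number of accepted peptides at a fixed FDR, can be computed directly from a target-decoy search without any model of the score distributions; a minimal sketch of such an objective (with invented score records and thresholds, not the published software) might look like this.

        # Each peptide-spectrum match carries SEQUEST-style scores and a decoy flag.
        # The records below are invented for illustration.
        psms = [
            {"xcorr": 3.2, "deltacn": 0.25, "decoy": False},
            {"xcorr": 2.1, "deltacn": 0.05, "decoy": True},
            {"xcorr": 2.8, "deltacn": 0.15, "decoy": False},
            {"xcorr": 1.9, "deltacn": 0.30, "decoy": False},
        ]

        def accepted_at(xcorr_min, deltacn_min, max_fdr=0.01):
            """Count target identifications passing the thresholds, but only if the
            decoy-estimated FDR stays below max_fdr; this is the quantity a GA
            would maximize over (xcorr_min, deltacn_min)."""
            passing = [p for p in psms
                       if p["xcorr"] >= xcorr_min and p["deltacn"] >= deltacn_min]
            targets = sum(not p["decoy"] for p in passing)
            decoys = len(passing) - targets
            fdr = decoys / max(targets, 1)
            return targets if fdr <= max_fdr else 0

        print(accepted_at(2.5, 0.1))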

  10. 3D resistivity inversion using an improved Genetic Algorithm based on control method of mutation direction

    NASA Astrophysics Data System (ADS)

    Liu, B.; Li, S. C.; Nie, L. C.; Wang, J.; L, X.; Zhang, Q. S.

    2012-12-01

    The traditional inversion method is the most commonly used procedure for three-dimensional (3D) resistivity inversion; it usually linearizes the problem and solves it by iterations. However, its accuracy often depends on the initial model, which can leave the inversion trapped in local optima or even produce a bad result. Non-linear methods are a feasible way to eliminate the dependence on the initial model. However, for large problems such as 3D resistivity inversion, with inversion parameters exceeding a thousand, the main challenges of non-linear methods are premature convergence and quite low search efficiency. To deal with these problems, we present an improved Genetic Algorithm (GA) method. In the improved GA method, a smoothness constraint and an inequality constraint are both applied to the objective function, which decreases the degree of non-uniqueness and ill-conditioning. Some measures are adopted from the literature to maintain the diversity and stability of the GA, e.g. real coding and the adaptive adjustment of crossover and mutation probabilities. A generation method for an approximately uniform initial population is then proposed, with which a uniformly distributed initial generation can be produced and the dependence on the initial model can be eliminated. Further, a mutation direction control method is presented based on a joint algorithm in which the linearization method is embedded in the GA. The update vector produced by the linearization method is used as the mutation increment to maintain a better search direction compared with a traditional GA with an uncontrolled mutation operation. By this method, the mutation direction is optimized and the search efficiency is greatly improved. The performance of the improved GA is evaluated by comparison with traditional inversion results in a synthetic example and with drilling columnar sections in a practical example. The synthetic and practical examples illustrate that with the improved GA method we can eliminate

  11. Comparing three stochastic search algorithms for computational protein design: Monte Carlo, replica exchange Monte Carlo, and a multistart, steepest-descent heuristic.

    PubMed

    Mignon, David; Simonson, Thomas

    2016-07-15

    Computational protein design depends on an energy function and an algorithm to search the sequence/conformation space. We compare three stochastic search algorithms: a heuristic, Monte Carlo (MC), and a Replica Exchange Monte Carlo method (REMC). The heuristic performs a steepest-descent minimization starting from thousands of random starting points. The methods are applied to nine test proteins from three structural families, with a fixed backbone structure, a molecular mechanics energy function, and with 1, 5, 10, 20, 30, or all amino acids allowed to mutate. Results are compared to an exact, "Cost Function Network" method that identifies the global minimum energy conformation (GMEC) in favorable cases. The designed sequences accurately reproduce experimental sequences in the hydrophobic core. The heuristic and REMC agree closely and reproduce the GMEC when it is known, with a few exceptions. Plain MC performs well for most cases, occasionally departing from the GMEC by 3-4 kcal/mol. With REMC, the diversity of the sequences sampled agrees with exact enumeration where the latter is possible: up to 2 kcal/mol above the GMEC. Beyond, room temperature replicas sample sequences up to 10 kcal/mol above the GMEC, providing thermal averages and a solution to the inverse protein folding problem. © 2016 Wiley Periodicals, Inc. PMID:27197555
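
    As a schematic of the plain Monte Carlo variant described above (not the authors' software), a Metropolis acceptance step over random single-position mutations of a sequence could be sketched as follows; the toy energy function and temperature are assumptions made purely for illustration.

        import math, random

        AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
        TARGET = "ACDE"                      # invented reference; lower energy = closer

        def energy(seq):
            """Toy energy: number of mismatches to a reference sequence. A real design
            run would evaluate a molecular-mechanics energy here."""
            return sum(a != b for a, b in zip(seq, TARGET))

        def metropolis_design(steps=1000, kT=0.5):
            seq = [random.choice(AMINO_ACIDS) for _ in TARGET]
            e = energy(seq)
            for _ in range(steps):
                pos = random.randrange(len(seq))
                trial = seq[:]
                trial[pos] = random.choice(AMINO_ACIDS)
                de = energy(trial) - e
                # Accept downhill moves always, uphill moves with Boltzmann probability.
                if de <= 0 or random.random() < math.exp(-de / kT):
                    seq, e = trial, e + de
            return "".join(seq), e

        print(metropolis_design())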

  12. Comparing three stochastic search algorithms for computational protein design: Monte Carlo, replica exchange Monte Carlo, and a multistart, steepest-descent heuristic.

    PubMed

    Mignon, David; Simonson, Thomas

    2016-07-15

    Computational protein design depends on an energy function and an algorithm to search the sequence/conformation space. We compare three stochastic search algorithms: a heuristic, Monte Carlo (MC), and a Replica Exchange Monte Carlo method (REMC). The heuristic performs a steepest-descent minimization starting from thousands of random starting points. The methods are applied to nine test proteins from three structural families, with a fixed backbone structure, a molecular mechanics energy function, and with 1, 5, 10, 20, 30, or all amino acids allowed to mutate. Results are compared to an exact, "Cost Function Network" method that identifies the global minimum energy conformation (GMEC) in favorable cases. The designed sequences accurately reproduce experimental sequences in the hydrophobic core. The heuristic and REMC agree closely and reproduce the GMEC when it is known, with a few exceptions. Plain MC performs well for most cases, occasionally departing from the GMEC by 3-4 kcal/mol. With REMC, the diversity of the sequences sampled agrees with exact enumeration where the latter is possible: up to 2 kcal/mol above the GMEC. Beyond, room temperature replicas sample sequences up to 10 kcal/mol above the GMEC, providing thermal averages and a solution to the inverse protein folding problem. © 2016 Wiley Periodicals, Inc.

  13. Quantum searching application in search based software engineering

    NASA Astrophysics Data System (ADS)

    Wu, Nan; Song, FangMin; Li, Xiangdong

    2013-05-01

    Search Based Software Engineering (SBSE) is widely used in software engineering for identifying optimal solutions. However, the traditional algorithms used in SBSE offer no polynomial-time solution, which makes the cost very high. In this paper, we analyze and compare several quantum search algorithms that could be applied to SBSE: the quantum adiabatic evolution search algorithm, fixed-point quantum search (FPQS), quantum walks, and a rapid modified Grover quantum search method. Grover's algorithm is considered the best choice for large-scale unstructured data searching, and theoretically it is applicable to any search-space structure and any type of search problem.

  14. A new damping factor algorithm based on line search of the local minimum point for inverse approach

    NASA Astrophysics Data System (ADS)

    Zhang, Yaqi; Liu, Weijie; Lu, Fang; Zhang, Xiangkui; Hu, Ping

    2013-05-01

    The influence of the damping factor on the convergence and computational efficiency of the inverse approach was studied through a series of practical examples. A new selection algorithm for the damping (relaxation) factor, which takes into account both robustness and calculation efficiency, is proposed; the computer program is then implemented and tested on Siemens PLM NX | One-Step. The result is compared with the traditional Armijo rule through six examples, such as a U-beam, a square box and a cylindrical cup, confirming the effectiveness of the proposed algorithm.
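
    The abstract compares the proposed damping-factor rule against the classical Armijo rule; for reference, a standard backtracking (Armijo) line search on a one-dimensional quadratic, purely illustrative and unrelated to the inverse-approach code, looks like this.

        def armijo_step(f, grad, x, direction, alpha=1.0, beta=0.5, c=1e-4):
            """Shrink the step length alpha until the sufficient-decrease condition
            f(x + alpha*d) <= f(x) + c*alpha*grad(x)*d holds (scalar case)."""
            while f(x + alpha * direction) > f(x) + c * alpha * grad(x) * direction:
                alpha *= beta
            return alpha

        f = lambda x: (x - 2.0) ** 2
        grad = lambda x: 2.0 * (x - 2.0)
        x0 = 0.0
        step = armijo_step(f, grad, x0, direction=-grad(x0))
        print(step, x0 - step * grad(x0))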

  15. Ameliorated GA approach for base station planning

    NASA Astrophysics Data System (ADS)

    Wang, Andong; Sun, Hongyue; Wu, Xiaomin

    2011-10-01

    In this paper, we aim to locate base stations (BSs) rationally so as to satisfy the most customers with the fewest BSs. An ameliorated GA is proposed to search for the optimum solution. In the algorithm, we mesh the area to be planned according to the least overlap length derived from the coverage radius, introduce an isometric grid encoding method to represent the BS distribution and number, and develop selection, crossover and mutation operators to serve our particular needs. We also construct a comprehensive objective function that combines coverage ratio, overlap ratio, population and geographical conditions. Finally, after importing an electronic map of the area to be planned, a recommended planning draft is exported correspondingly. We use Hong Kong, China, as a simulation case and obtain a satisfactory solution.
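
    As a toy version of the kind of comprehensive objective described above, the trade-off between coverage and overlap can be scored on a meshed area as in the sketch below; the grid size, coverage radius and weights are arbitrary assumptions, and a real planner would also fold in population and geographical layers.

        # Score a candidate set of base-station grid positions by coverage vs. overlap.
        GRID = 20          # the planning area is meshed into GRID x GRID cells
        RADIUS = 4         # coverage radius in cells

        def objective(stations, w_cover=1.0, w_overlap=0.5):
            covered_by = [[0] * GRID for _ in range(GRID)]
            for (sx, sy) in stations:
                for x in range(GRID):
                    for y in range(GRID):
                        if (x - sx) ** 2 + (y - sy) ** 2 <= RADIUS ** 2:
                            covered_by[x][y] += 1
            cells = GRID * GRID
            coverage = sum(c > 0 for row in covered_by for c in row) / cells
            overlap = sum(c > 1 for row in covered_by for c in row) / cells
            return w_cover * coverage - w_overlap * overlap

        print(objective([(5, 5), (14, 14)]))
        print(objective([(5, 5), (6, 6)]))      # heavy overlap scores worse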

  16. Optimizing coherent anti-Stokes Raman scattering by genetic algorithm controlled pulse shaping

    NASA Astrophysics Data System (ADS)

    Yang, Wenlong; Sokolov, Alexei

    2010-10-01

    Hybrid coherent anti-Stokes Raman scattering (CARS) has been successfully applied to fast, chemically sensitive detection. With the development of femtosecond pulse-shaping techniques, it is of great interest to find the optimum pulse shapes for CARS. The optimum pulse shapes should minimize the non-resonant four-wave mixing (NRFWM) background and maximize the CARS signal. A genetic algorithm (GA) is developed to perform a heuristic search for optimized pulse shapes, which give the best signal-to-background ratio. The GA is shown to be able to rediscover the hybrid CARS scheme and to find optimized pulse shapes for customized applications on its own.

  17. Genetic Algorithm for Optimization: Preprocessing with n Dimensional Bisection and Error Estimation

    NASA Technical Reports Server (NTRS)

    Sen, S. K.; Shaykhian, Gholam Ali

    2006-01-01

    A knowledge of the appropriate values of the parameters of a genetic algorithm (GA) such as the population size, the shrunk search space containing the solution, crossover and mutation probabilities is not available a priori for a general optimization problem. Recommended here is a polynomial-time preprocessing scheme that includes an n-dimensional bisection and that determines the foregoing parameters before deciding upon an appropriate GA for all problems of similar nature and type. Such a preprocessing is not only fast but also enables us to get the global optimal solution and its reasonably narrow error bounds with a high degree of confidence.

  18. Load Frequency Control of a Two-Area Thermal-Hybrid Power System Using a Novel Quasi-Opposition Harmony Search Algorithm

    NASA Astrophysics Data System (ADS)

    Mahto, Tarkeshwar; Mukherjee, V.

    2016-09-01

    In the present work, a two-area thermal-hybrid interconnected power system, consisting of a thermal unit in one area and a hybrid wind-diesel unit in the other area, is considered. Capacitive energy storage (CES) and CES with a static synchronous series compensator (SSSC) are connected to the studied two-area model to compensate for varying load demand, intermittent output power and area frequency oscillation. A novel quasi-opposition harmony search (QOHS) algorithm is proposed and applied to tune the various tunable parameters of the studied power system model. The simulation study reveals that inclusion of a CES unit in both areas yields superb damping performance for frequency and tie-line power deviations. The simulation results further reveal that inclusion of the SSSC is not viable from either a technical or an economic point of view, as no considerable improvement in transient performance is observed when it is included in the tie-line of the studied power system model. The results presented in this paper demonstrate the potential of the proposed QOHS algorithm and show its effectiveness and robustness for solving frequency and power drift problems of the studied power systems. A binary-coded genetic algorithm is used for the sake of comparison.
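
    The quasi-opposition variant is not detailed in the abstract; the basic harmony-search improvisation step it builds on (memory consideration, pitch adjustment and random re-initialization) can be sketched roughly as follows, with all rates and bounds chosen arbitrarily.

        import random

        LOW, HIGH = 0.0, 10.0          # search bounds for each tunable parameter
        HMCR, PAR, BW = 0.9, 0.3, 0.5  # memory rate, pitch-adjust rate, bandwidth

        def improvise(harmony_memory, dim):
            """Build one new candidate: each component is taken from memory (and
            possibly pitch-adjusted) with probability HMCR, otherwise drawn at random."""
            new = []
            for d in range(dim):
                if random.random() < HMCR:
                    value = random.choice(harmony_memory)[d]
                    if random.random() < PAR:
                        value += random.uniform(-BW, BW)
                else:
                    value = random.uniform(LOW, HIGH)
                new.append(min(max(value, LOW), HIGH))
            return new

        memory = [[random.uniform(LOW, HIGH) for _ in range(3)] for _ in range(5)]
        print(improvise(memory, 3))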

  19. Robust quantum spatial search

    NASA Astrophysics Data System (ADS)

    Tulsi, Avatar

    2016-07-01

    Quantum spatial search has been widely studied with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1/√(ln N)), which is exponentially better than quantum walk algorithms, for which the tolerable error size is only O(ln N/√N). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.

  20. Evaluation of a Particle Swarm Algorithm For Biomechanical Optimization

    PubMed Central

    Schutte, Jaco F.; Koh, Byung; Reinbolt, Jeffrey A.; Haftka, Raphael T.; George, Alan D.; Fregly, Benjamin J.

    2006-01-01

    Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently-developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm’s global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms—a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms. For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units. PMID:16060353
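
    For readers comparing the algorithms named above, the core particle-swarm velocity and position update can be written in a few lines; the toy objective, swarm size and coefficients below are assumptions for illustration, not the published implementation.

        import random

        def objective(x):
            return sum(xi ** 2 for xi in x)          # toy function to minimize

        dim, n_particles, iters = 2, 10, 100
        w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration weights

        pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
        vel = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        gbest = min(pbest, key=objective)

        for _ in range(iters):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    vel[i][d] = (w * vel[i][d]
                                 + c1 * r1 * (pbest[i][d] - pos[i][d])
                                 + c2 * r2 * (gbest[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                if objective(pos[i]) < objective(pbest[i]):
                    pbest[i] = pos[i][:]
            gbest = min(pbest, key=objective)
        print(gbest, objective(gbest))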

  1. A variant constrained genetic algorithm for solving conditional nonlinear optimal perturbations

    NASA Astrophysics Data System (ADS)

    Zheng, Qin; Sha, Jianxin; Shu, Hang; Lu, Xiaoqing

    2014-01-01

    A variant constrained genetic algorithm (VCGA) for effective tracking of conditional nonlinear optimal perturbations (CNOPs) is presented. Compared with traditional constraint handling methods, the treatment of the constraint condition in VCGA is relatively easy to implement. Moreover, it does not require adjustments to indefinite parameters. Using a hybrid crossover operator and the newly developed multi-ply mutation operator, VCGA improves the performance of GAs. To demonstrate the capability of VCGA to catch CNOPs in non-smooth cases, a partial differential equation, which has "on-off" switches in its forcing term, is employed as the nonlinear model. To search for global CNOPs of the nonlinear model, numerical experiments using VCGA, the traditional gradient descent algorithm based on the adjoint method (ADJ), and a GA using the tournament selection operation and the niching technique (GA-DEB) were performed. The results with various initial reference states showed that, in smooth cases, all three optimization methods are able to catch global CNOPs. Nevertheless, in non-smooth situations, a large proportion of CNOPs captured by ADJ are local. Compared with ADJ, the performance of GA-DEB shows considerable improvement, but it remains well below that of VCGA. Further, the impacts of population sizes on both VCGA and GA-DEB were investigated. The results were used to estimate the computation time of VCGA and GA-DEB in obtaining CNOPs. The computational costs for VCGA, GA-DEB and ADJ to catch CNOPs of the nonlinear model are also compared.

  2. Pathway detection from protein interaction networks and gene expression data using color-coding methods and A∗ search algorithms.

    PubMed

    Yeh, Cheng-Yu; Yeh, Hsiang-Yuan; Arias, Carlos Roberto; Soo, Von-Wun

    2012-01-01

    With the wide availability of protein interaction networks and supporting microarray data, identifying linear paths of biological significance as candidate pathways is a challenging issue. We propose a color-coding method based on the topological characteristics of biological networks and apply heuristic search to speed up the color-coding method. In the experiments, we tested our methods on two datasets: yeast and human prostate cancer networks together with gene expression data. Comparisons of our method with other existing methods on known yeast MAPK pathways, in terms of precision and recall, show that it recovers the maximum number of pathway proteins and performs comparably well. Moreover, our method is more efficient than previous ones and detects paths of length 10 within 40 seconds on an Intel 1.73 GHz CPU with 1 GB of main memory running the Windows operating system. PMID:22577352

  3. Search of the Orion spur for continuous gravitational waves using a loosely coherent algorithm on data from LIGO interferometers

    NASA Astrophysics Data System (ADS)

    Aasi, J.; Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Amariutei, D. V.; Andersen, M.; Anderson, S. B.; Anderson, W. G.; Arai, K.; Araya, M. C.; Arceneaux, C. C.; Areeda, J. S.; Arnaud, N.; Ashton, G.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Babak, S.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; Barclay, S. E.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Bartlett, J.; Barton, M. A.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Behnke, B.; Bejger, M.; Belczynski, C.; Bell, A. S.; Berger, B. K.; Bergman, J.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Birch, J.; Birney, R.; Biscans, S.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blair, C. D.; Blair, D.; Bloemen, S.; Bock, O.; Bodiya, T. P.; Boer, M.; Bogaert, G.; Bojtos, P.; Bond, C.; Bondu, F.; Bonnand, R.; Bork, R.; Born, M.; Boschi, V.; Bose, Sukanta; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; Branco, V.; Brau, J. E.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; Brooks, A. F.; Brown, D. A.; Brown, D.; Brown, D. D.; Brown, N. M.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Bustillo, J. Calderón; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, J.; Capano, C. D.; Capocasa, E.; Carbognani, F.; Caride, S.; Diaz, J. Casanueva; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Celerier, C.; Cella, G.; Cepeda, C.; Baiardi, L. Cerboni; Cerretani, G.; Cesarini, E.; Chakraborty, R.; Chalermsongsak, T.; Chamberlin, S. J.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Chen, X.; Chen, Y.; Cheng, C.; Chincarini, A.; Chiummo, A.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Colombini, M.; Constancio, M.; Conte, A.; Conti, L.; Cook, D.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; Craig, K.; Creighton, J. D. E.; Creighton, T.; Cripe, J.; Crowder, S. G.; Cumming, A.; Cunningham, L.; Cuoco, E.; Canton, T. Dal; Damjanic, M. D.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Darman, N. S.; Dattilo, V.; Dave, I.; Daveloza, H. P.; Davier, M.; Davies, G. S.; Daw, E. J.; Day, R.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dereli, H.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Lieto, A.; Di Palma, I.; Di Virgilio, A.; Dojcinoski, G.; Dolique, V.; Dominguez, E.; Donovan, F.; Dooley, K. L.; Doravari, S.; Douglas, R.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; Dwyer, S. E.; Edo, T. B.; Edwards, M. C.; Edwards, M.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J. M.; Eikenberry, S. S.; Essick, R. C.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fairhurst, S.; Fang, Q.; Farinon, S.; Farr, B.; Farr, W. M.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Feldbaum, D.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fisher, R. P.; Flaminio, R.; Fournier, J.-D.; Franco, S.; Frasca, S.; Frasconi, F.; Frede, M.; Frei, Z.; Freise, A.; Frey, R.; Fricke, T. T.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H. A. G.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gatto, A.; Gehrels, N.; Gemme, G.; Gendre, B.; Genin, E.; Gennai, A.; Gergely, L. Á.; Germain, V.; Ghosh, A.; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gleason, J. R.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez, J.; Gopakumar, A.; Gordon, N. A.; Gorodetsky, M. L.; Gossan, S. E.; Gosselin, M.; Goßler, S.; Gouaty, R.; Graef, C.; Graff, P. B.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Groot, P.; Grote, H.; Grover, K.; Grunewald, S.; Guidi, G. M.; Guido, C. J.; Guo, X.; Gupta, A.; Gupta, M. K.; Gushwa, K. E.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammer, D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hodge, K. A.; Hoelscher-Obermaier, J.; Hofman, D.; Hollitt, S. E.; Holt, K.; Hopkins, P.; Hosken, D. J.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huang, S.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh, M.; Huynh-Dinh, T.; Idrisy, A.; Indik, N.; Ingram, D. R.; Inta, R.; Islas, G.; Isler, J. C.; Isogai, T.; Iyer, B. R.; Izumi, K.; Jacobson, M. B.; Jang, H.; Jaranowski, P.; Jawahar, S.; Ji, Y.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Haris, K.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karlen, J. L.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kawazoe, F.; Kéfélian, F.; Kehl, M. S.; Keitel, D.; Kelecsenyi, N.; Kelley, D. B.; Kells, W.; Kerrigan, J.; Key, J. S.; Khalili, F. Y.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, C.; Kim, K.; Kim, N. G.; Kim, N.; Kim, Y.-M.; King, E. J.; King, P. J.; Kinzel, D. L.; Kissel, J. S.; Klimenko, S.; Kline, J. T.; Koehlenbeck, S. M.; Kokeyama, K.; Koley, S.; Kondrashov, V.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Kringel, V.; Krishnan, B.; Królak, A.; Krueger, C.; Kuehn, G.; Kumar, A.; Kumar, P.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lantz, B.; Lasky, P. D.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, J.; Lee, J. P.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Levine, B. M.; Lewis, J. B.; Li, T. G. F.; Libson, A.; Lin, A. C.; Littenberg, T. B.; Lockerbie, N. A.; Lockett, V.; Lodhia, D.; Logue, J.; Lombardi, A. L.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lubinski, M. J.; Lück, H.; Lundgren, A. P.; Luo, J.; Lynch, R.; Ma, Y.; Macarthur, J.; Macdonald, E. P.; MacDonald, T.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Madden-Fong, D. X.; Magaña-Sandoval, F.; Magee, R. M.; Mageswaran, M.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandel, I.; Mandic, V.; Mangano, V.; Mangini, N. 
M.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martin, R. M.; Martynov, D. V.; Marx, J. N.; Mason, K.; Masserot, A.; Massinger, T. J.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; Mazzolo, G.; McCarthy, R.; McClelland, D. E.; McCormick, S.; McGuire, S. C.; McIntyre, G.; McIver, J.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Mehmet, M.; Meidam, J.; Meinders, M.; Melatos, A.; Mendell, G.; Mercer, R. A.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; Mikhailov, E. E.; Milano, L.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moe, B.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mossavi, K.; Mours, B.; Mow-Lowry, C. M.; Mueller, C. L.; Mueller, G.; Mukherjee, A.; Mukherjee, S.; Mullavey, A.; Munch, J.; Murphy, D. J.; Murray, P. G.; Mytidis, A.; Nagy, M. F.; Nardecchia, I.; Naticchioni, L.; Nayak, R. K.; Necula, V.; Nedkova, K.; Nelemans, G.; Neri, M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nitz, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Ochsner, E.; O'Dell, J.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Okounkova, M.; Oppermann, P.; Oram, R.; O'Reilly, B.; Ortega, W. E.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Padilla, C. T.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pan, Y.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Papa, M. A.; Paris, H. R.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patrick, Z.; Pedraza, M.; Pekowsky, L.; Pele, A.; Penn, S.; Perreca, A.; Phelps, M.; Piccinni, O.; Pichot, M.; Pickenpack, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poeld, J. H.; Poggiani, R.; Post, A.; Powell, J.; Prasad, J.; Predoi, V.; Premachandra, S. S.; Prestegard, T.; Price, L. R.; Prijatelj, M.; Principe, M.; Privitera, S.; Prix, R.; Prodi, G. A.; Prokhorov, L.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qin, J.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Rácz, I.; Radkins, H.; Raffai, P.; Raja, S.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Reed, C. M.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Ricci, F.; Riles, K.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rodger, A. S.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, R.; Romanov, G.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Saleem, M.; Salemi, F.; Sammut, L.; Sanchez, E.; Sandberg, V.; Sanders, J. R.; Santiago-Prieto, I.; Sassolas, B.; Saulson, P. R.; Savage, R.; Sawadsky, A.; Schale, P.; Schilling, R.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Scott, J.; Scott, S. M.; Sellers, D.; Sentenac, D.; Sequino, V.; Sergeev, A.; Serna, G.; Sevigny, A.; Shaddock, D. A.; Shaffery, P.; Shah, S.; Shahriar, M. S.; Shaltev, M.; Shao, Z.; Shapiro, B.; Shawhan, P.; Shoemaker, D. H.; Sidery, T. L.; Siellez, K.; Siemens, X.; Sigg, D.; Silva, A. D.; Simakov, D.; Singer, A.; Singer, L. P.; Singh, R.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, J. R.; Smith, N. D.; Smith, R. J. E.; Son, E. 
J.; Sorazu, B.; Souradeep, T.; Srivastava, A. K.; Staley, A.; Stebbins, J.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Steplewski, S.; Stevenson, S. P.; Stone, R.; Strain, K. A.; Straniero, N.; Strauss, N. A.; Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sutton, P. J.; Swinkels, B. L.; Szczepanczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Tarabrin, S. P.; Taracchini, A.; Taylor, R.; Theeg, T.; Thirugnanasambandam, M. P.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thorne, K. S.; Thrane, E.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Tomlinson, C.; Tonelli, M.; Torres, C. V.; Torrie, C. I.; Travasso, F.; Traylor, G.; Trifirò, D.; Tringali, M. C.; Tse, M.; Turconi, M.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; Vallisneri, M.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; van den Broeck, C.; van der Schaaf, L.; van der Sluys, M. V.; van Heijningen, J.; van Veggel, A. A.; Vansuch, G.; Vardaro, M.; Vass, S.; Vasúth, M.; Vaulin, R.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Verkindt, D.; Vetrano, F.; Viceré, A.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; Wade, M.; Wade, L. E.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, X.; Ward, R. L.; Warner, J.; Was, M.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Welborn, T.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; White, D. J.; Whiting, B. F.; Williams, K. J.; Williams, L.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Worden, J.; Yablon, J.; Yakushin, I.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yvert, M.; ZadroŻny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zhang, Fan; Zhang, L.; Zhang, M.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhu, X. J.; Zucker, M. E.; Zuraw, S. E.; Zweizig, J.

    2016-02-01

    We report results of a wideband search for periodic gravitational waves from isolated neutron stars within the Orion spur towards both the inner and outer regions of our Galaxy. As gravitational waves interact very weakly with matter, the search is unimpeded by dust and concentrations of stars. One search disk (A) is 6.87° in diameter and centered on 20h10m54.71s +33°33'25.29'', and the other (B) is 7.45° in diameter and centered on 8h35m20.61s -46°49'25.151''. We explored the frequency range of 50-1500 Hz and frequency derivatives from 0 to -5 × 10^-9 Hz/s. A multistage, loosely coherent search program allowed probing more deeply than before in these two regions, while increasing coherence length with every stage. Rigorous follow-up parameters have winnowed the initial coincidence set to only 70 candidates, to be examined manually. None of those 70 candidates proved to be consistent with an isolated gravitational-wave emitter, and 95% confidence level upper limits were placed on continuous-wave strain amplitudes. Near 169 Hz we achieve our lowest 95% C.L. upper limit on the worst-case linearly polarized strain amplitude h0 of 6.3 × 10^-25, while at the high end of our frequency range we achieve a worst-case upper limit of 3.4 × 10^-24 for all polarizations and sky locations.

  4. Improvement of the performances of the genetic algorithms by using an adaptive search space reduction and the transformation

    NASA Astrophysics Data System (ADS)

    Yousfi, L.; Mansouri, N.

    2008-06-01

    The aim of this paper is the identification of parameters in systems modeled by nonlinear differential equations. The proposed method is based on genetic algorithms with domain-reduction and transformation strategies. The studied problems are successively solved using the transformation technique, domain reduction, and a combination of the two strategies. The results obtained with all these methods are comparable. The good results obtained by the transformation appear to be related to the high degree of diversity that this mechanism introduces into the population.
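
    The abstract does not spell out the reduction rule, so the sketch below only illustrates one common form of adaptive search-space reduction: periodically re-centering and shrinking each variable's interval around the best solution found so far. The function name, shrink factor, and example values are hypothetical and are not the authors' strategy.

```python
import numpy as np

def shrink_bounds(lo, hi, best, factor=0.5, min_width=1e-6):
    """Re-center each variable's interval on the current best solution and
    shrink its width by `factor`, clipping back into the original box."""
    width = np.maximum((hi - lo) * factor, min_width)
    new_lo = np.clip(best - width / 2, lo, hi)
    new_hi = np.clip(best + width / 2, lo, hi)
    return new_lo, new_hi

# Example: after a GA run reports best = [1.8, -0.3] in the box [-5, 5]^2,
# the next run searches a box half as wide centered on that point.
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
print(shrink_bounds(lo, hi, np.array([1.8, -0.3])))
```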

  5. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms

    PubMed Central

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M.K.

    2015-01-01

    Reducing energy consumption is becoming very important in order to preserve battery life and lower overall operational costs for heterogeneous real-time multiprocessor systems. In this paper, we first formulate this task as a combinatorial optimization problem. Then, a successful meta-heuristic, called the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local-optimum avoidance techniques are proposed to avoid premature convergence and improve the solution quality, and convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times less than that of ACO and GA, respectively, for finding the optimal solution. PMID:26110406
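
    For readers unfamiliar with SFLA, here is a minimal continuous-domain sketch of the shuffled frog leaping idea: sort the frogs, deal them into memeplexes, move each memeplex's worst frog toward its local best and then toward the global best, otherwise replace it randomly, and reshuffle. It is a generic illustration, not the discrete energy-aware scheduling formulation of the paper, and all parameter values are assumptions.

```python
import numpy as np

def sfla(objective, lo, hi, n_frogs=30, n_memeplexes=5, memetic_steps=5,
         n_shuffles=50, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lo)
    frogs = rng.uniform(lo, hi, size=(n_frogs, dim))
    for _ in range(n_shuffles):
        fit = np.array([objective(f) for f in frogs])
        order = np.argsort(fit)                        # best first (minimization)
        frogs, fit = frogs[order], fit[order]
        gbest = frogs[0].copy()
        for m in range(n_memeplexes):
            idx = np.arange(m, n_frogs, n_memeplexes)  # deal frogs into memeplexes
            for _ in range(memetic_steps):
                sub = idx[np.argsort(fit[idx])]
                b, w = sub[0], sub[-1]                 # memeplex best / worst
                for target in (frogs[b], gbest):       # try local best, then global best
                    step = rng.random(dim) * (target - frogs[w])
                    cand = np.clip(frogs[w] + step, lo, hi)
                    if objective(cand) < fit[w]:
                        frogs[w], fit[w] = cand, objective(cand)
                        break
                else:                                  # no improvement: random frog
                    frogs[w] = rng.uniform(lo, hi)
                    fit[w] = objective(frogs[w])
    return frogs[np.argmin([objective(f) for f in frogs])]

sphere = lambda x: float(np.sum(x**2))
print(sfla(sphere, np.full(4, -10.0), np.full(4, 10.0)))
```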

  6. Solving Energy-Aware Real-Time Tasks Scheduling Problem with Shuffled Frog Leaping Algorithm on Heterogeneous Platforms.

    PubMed

    Zhang, Weizhe; Bai, Enci; He, Hui; Cheng, Albert M K

    2015-06-11

    Reducing energy consumption is becoming very important in order to preserve battery life and lower overall operational costs for heterogeneous real-time multiprocessor systems. In this paper, we first formulate this task as a combinatorial optimization problem. Then, a successful meta-heuristic, called the Shuffled Frog Leaping Algorithm (SFLA), is proposed to reduce the energy consumption. Precocity remission and local-optimum avoidance techniques are proposed to avoid premature convergence and improve the solution quality, and convergence acceleration significantly reduces the search time. Experimental results show that the SFLA-based energy-aware meta-heuristic uses 30% less energy than the Ant Colony Optimization (ACO) algorithm, and 60% less energy than the Genetic Algorithm (GA). Remarkably, the running time of the SFLA-based meta-heuristic is 20 and 200 times less than that of ACO and GA, respectively, for finding the optimal solution.

  7. A swarm intelligence based memetic algorithm for task allocation in distributed systems

    NASA Astrophysics Data System (ADS)

    Sarvizadeh, Raheleh; Haghi Kashani, Mostafa

    2011-12-01

    This paper proposes a swarm-intelligence-based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search for optimal solutions over the entire solution space. However, these existing approaches scan the entire solution space without techniques that can reduce the complexity of the optimization, and spending too much time on scheduling is their main shortcoming. Therefore, in this paper a memetic algorithm is used to cope with this shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extensive experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
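
    A memetic algorithm is essentially a GA whose offspring are refined by a local search before rejoining the population. The sketch below illustrates that structure on a toy bit-string problem; a one-bit hill climber stands in for the paper's BCO local-search phase, and all names and parameters are illustrative assumptions.

```python
import random

def memetic_ga(fitness, dim, pop_size=40, generations=100,
               p_mut=0.02, local_steps=10, seed=0):
    """Generic memetic loop: GA global search plus a local-search refinement.
    (The paper embeds Bee Colony Optimization as the local-search phase;
    a simple bit-flip hill climber stands in for it here.)"""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(pop_size)]

    def local_search(ind):
        best, best_f = ind[:], fitness(ind)
        for _ in range(local_steps):
            cand = best[:]
            cand[rng.randrange(dim)] ^= 1          # flip one bit
            f = fitness(cand)
            if f > best_f:
                best, best_f = cand, f
        return best

    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)
            child = a[:cut] + b[cut:]              # one-point crossover
            child = [g ^ 1 if rng.random() < p_mut else g for g in child]
            children.append(local_search(child))   # memetic refinement
        pop = children
    return max(pop, key=fitness)

# Toy fitness: maximize the number of ones.
print(sum(memetic_ga(lambda ind: sum(ind), dim=30)))
```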

  8. A swarm intelligence based memetic algorithm for task allocation in distributed systems

    NASA Astrophysics Data System (ADS)

    Sarvizadeh, Raheleh; Haghi Kashani, Mostafa

    2012-01-01

    This paper proposes a swarm-intelligence-based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be an NP-complete problem. Hence, many genetic algorithms have been proposed to search for optimal solutions over the entire solution space. However, these existing approaches scan the entire solution space without techniques that can reduce the complexity of the optimization, and spending too much time on scheduling is their main shortcoming. Therefore, in this paper a memetic algorithm is used to cope with this shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extensive experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.

  9. comets (Constrained Optimization of Multistate Energies by Tree Search): A Provable and Efficient Protein Design Algorithm to Optimize Binding Affinity and Specificity with Respect to Sequence.

    PubMed

    Hallen, Mark A; Donald, Bruce R

    2016-05-01

    Practical protein design problems require designing sequences with a combination of affinity, stability, and specificity requirements. Multistate protein design algorithms model multiple structural or binding "states" of a protein to address these requirements. comets provides a new level of versatile, efficient, and provable multistate design. It provably returns the minimum with respect to sequence of any desired linear combination of the energies of multiple protein states, subject to constraints on other linear combinations. Thus, it can target nearly any combination of affinity (to one or multiple ligands), specificity, and stability (for multiple states if needed). Empirical calculations on 52 protein design problems showed comets is far more efficient than the previous state of the art for provable multistate design (exhaustive search over sequences). comets can handle a very wide range of protein flexibility and can enumerate a gap-free list of the best constraint-satisfying sequences in order of objective function value. PMID:26761641

  10. A hybrid color space for skin detection using genetic algorithm heuristic search and principal component analysis technique.

    PubMed

    Maktabdar Oghaz, Mahdi; Maarof, Mohd Aizaini; Zainal, Anazida; Rohani, Mohd Foad; Yaghoubyan, S Hadi

    2015-01-01

    Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and False Positive Rate of 0.0482 which outperformed the existing color spaces in terms of pixel wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel wise skin detection applications. PMID:26267377
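
    The core recipe described above, a GA that selects a subset of candidate color components followed by a PCA projection to a low-dimensional space, can be sketched roughly as follows. Synthetic data and a Fisher-style separability score stand in for real pixels and the paper's classifier-based skin-detection accuracy, so this is only an illustration of the pipeline, not the SKN construction itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 "skin" and 200 "non-skin" pixels described by 12
# candidate color components (the paper draws components from 17 color spaces).
skin = rng.normal(0.6, 0.1, size=(200, 12))
nonskin = rng.normal(0.4, 0.15, size=(200, 12))
X = np.vstack([skin, nonskin])
y = np.array([1] * 200 + [0] * 200)

def fisher_score(mask):
    """Stand-in fitness: class separability of the selected components
    (the paper instead uses pixel-wise skin-detection accuracy)."""
    if mask.sum() == 0:
        return -np.inf
    Xs = X[:, mask.astype(bool)]
    m1, m0 = Xs[y == 1].mean(0), Xs[y == 0].mean(0)
    v1, v0 = Xs[y == 1].var(0), Xs[y == 0].var(0)
    return float(np.sum((m1 - m0) ** 2 / (v1 + v0 + 1e-9)))

# Tiny GA over component subsets (bit mask per candidate component).
pop = rng.integers(0, 2, size=(30, X.shape[1]))
for _ in range(60):
    fit = np.array([fisher_score(m) for m in pop])
    parents = pop[np.argsort(fit)[-15:]]
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 15, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)
best = pop[np.argmax([fisher_score(m) for m in pop])]

# PCA projects the selected components down to a 3-D hybrid space.
Xs = X[:, best.astype(bool)]
Xc = Xs - Xs.mean(0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
hybrid = Xc @ vt[:min(3, vt.shape[0])].T
print(best, hybrid.shape)
```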

  11. A hybrid color space for skin detection using genetic algorithm heuristic search and principal component analysis technique.

    PubMed

    Maktabdar Oghaz, Mahdi; Maarof, Mohd Aizaini; Zainal, Anazida; Rohani, Mohd Foad; Yaghoubyan, S Hadi

    2015-01-01

    Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and False Positive Rate of 0.0482 which outperformed the existing color spaces in terms of pixel wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel wise skin detection applications.

  12. A Hybrid Color Space for Skin Detection Using Genetic Algorithm Heuristic Search and Principal Component Analysis Technique

    PubMed Central

    2015-01-01

    Color is one of the most prominent features of an image and used in many skin and face detection applications. Color space transformation is widely used by researchers to improve face and skin detection performance. Despite the substantial research efforts in this area, choosing a proper color space in terms of skin and face classification performance which can address issues like illumination variations, various camera characteristics and diversity in skin color tones has remained an open issue. This research proposes a new three-dimensional hybrid color space termed SKN by employing the Genetic Algorithm heuristic and Principal Component Analysis to find the optimal representation of human skin color in over seventeen existing color spaces. Genetic Algorithm heuristic is used to find the optimal color component combination setup in terms of skin detection accuracy while the Principal Component Analysis projects the optimal Genetic Algorithm solution to a less complex dimension. Pixel wise skin detection was used to evaluate the performance of the proposed color space. We have employed four classifiers including Random Forest, Naïve Bayes, Support Vector Machine and Multilayer Perceptron in order to generate the human skin color predictive model. The proposed color space was compared to some existing color spaces and shows superior results in terms of pixel-wise skin detection accuracy. Experimental results show that by using Random Forest classifier, the proposed SKN color space obtained an average F-score and True Positive Rate of 0.953 and False Positive Rate of 0.0482 which outperformed the existing color spaces in terms of pixel wise skin detection accuracy. The results also indicate that among the classifiers used in this study, Random Forest is the most suitable classifier for pixel wise skin detection applications. PMID:26267377

  13. Active Solution Space and Search on Job-shop Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo

    In this paper we propose a new search method within a Genetic Algorithm for the Job-shop Scheduling Problem (JSP). The encoding represents job numbers that determine the priority with which jobs are placed on the Gantt chart (called the ordinal representation with a priority), and an active schedule is then created by using left shifts. We first define an active solution: a solution that yields an active schedule without using left shifts; the set of such solutions defines the active solution space. Next, we propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create active solutions while solutions are being evaluated, in order to search the active solution space effectively. We applied it to several benchmark problems and compared it with other methods. The experimental results show good performance.
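
    For context, the standard job-repetition (operation-based) encoding used in many GA approaches to the JSP decodes as below: each gene is a job number, and the job's next operation is started as soon as both the job and its machine are free. This yields a semi-active schedule; GA-asol's construction of active solutions is more involved and is not reproduced here. The instance data are made up.

```python
from collections import defaultdict

# Each job is a list of (machine, processing_time) operations, in order.
jobs = {
    0: [(0, 3), (1, 2), (2, 2)],
    1: [(0, 2), (2, 1), (1, 4)],
    2: [(1, 4), (2, 3), (0, 1)],
}

def decode(chromosome):
    """Decode a job-repetition chromosome (each job id appears once per
    operation) into a schedule: take jobs left to right and start the job's
    next operation as soon as both the job and its machine are free."""
    next_op = defaultdict(int)          # next operation index per job
    job_ready = defaultdict(int)        # time the job becomes free
    mach_ready = defaultdict(int)       # time the machine becomes free
    schedule = []
    for j in chromosome:
        machine, duration = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready[machine])
        end = start + duration
        schedule.append((j, next_op[j], machine, start, end))
        job_ready[j], mach_ready[machine] = end, end
        next_op[j] += 1
    makespan = max(end for *_, end in schedule)
    return schedule, makespan

# One valid chromosome: job ids in any order, each appearing three times.
sched, mk = decode([0, 1, 2, 1, 0, 2, 2, 0, 1])
print(mk)
for row in sched:
    print(row)
```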

  14. An investigation of messy genetic algorithms

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Deb, Kalyanmoy; Korb, Bradley

    1990-01-01

    Genetic algorithms (GAs) are search procedures based on the mechanics of natural selection and natural genetics. They combine the use of string codings or artificial chromosomes and populations with the selective and juxtapositional power of reproduction and recombination to motivate a surprisingly powerful search heuristic in many problems. Despite their empirical success, there has been a long standing objection to the use of GAs in arbitrarily difficult problems. A new approach was launched. Results to a 30-bit, order-three-deception problem were obtained using a new type of genetic algorithm called a messy genetic algorithm (mGAs). Messy genetic algorithms combine the use of variable-length strings, a two-phase selection scheme, and messy genetic operators to effect a solution to the fixed-coding problem of standard simple GAs. The results of the study of mGAs in problems with nonuniform subfunction scale and size are presented. The mGA approach is summarized, both its operation and the theory of its use. Experiments on problems of varying scale, varying building-block size, and combined varying scale and size are presented.

  15. Automatic 3D image registration using voxel similarity measurements based on a genetic algorithm

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Sullivan, John M., Jr.; Kulkarni, Praveen; Murugavel, Murali

    2006-03-01

    An automatic 3D non-rigid body registration system based upon the genetic algorithm (GA) process is presented. The system has been successfully applied to 2D and 3D situations using both rigid-body and affine transformations. Conventional optimization techniques and gradient search strategies generally require a good initial start location. The GA approach avoids the local minima/maxima traps of conventional optimization techniques. Based on the principles of Darwinian natural selection (survival of the fittest), the genetic algorithm has two basic steps: 1. Randomly generate an initial population. 2. Repeatedly apply the natural selection operation until a termination measure is satisfied. The natural selection process selects individuals based on their fitness to participate in the genetic operations, and it creates new individuals by inheritance from both parents, genetic recombination (crossover) and mutation. Once the termination criteria are satisfied, the optimum is selected from the population. The algorithm was applied to 2D and 3D magnetic resonance images (MRI). It does not require any preprocessing such as thresholding, smoothing, segmentation, or definition of base points or edges. To evaluate the performance of the GA registration, the results were compared with results of the Automatic Image Registration technique (AIR) and manual registration, which was used as the gold standard. Results showed that our GA implementation is a robust algorithm and gives results very close to the gold standard. A pre-cropping strategy was also discussed as an efficient preprocessing step to enhance the registration accuracy.
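
    The two basic steps listed above can be illustrated with a deliberately small toy: a GA searching for the integer translation that registers a synthetic 2D image to a shifted copy of itself, using a sum-of-squared-differences similarity. This is only a schematic of the GA loop, not the paper's 3D MRI pipeline, affine model, or voxel similarity measure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "fixed" image (a smooth blob) and a "moving" image shifted by a known offset.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 100.0)
true_shift = (5, -7)
moving = np.roll(fixed, true_shift, axis=(0, 1))

def similarity(shift):
    """Voxel similarity (negative sum of squared differences) after applying
    an integer translation to the moving image."""
    aligned = np.roll(moving, (-shift[0], -shift[1]), axis=(0, 1))
    return -float(np.sum((aligned - fixed) ** 2))

# Step 1: randomly generate an initial population of candidate translations.
pop = rng.integers(-10, 11, size=(20, 2))
# Step 2: repeatedly apply selection, recombination, and mutation.
for _ in range(40):
    fit = np.array([similarity(s) for s in pop])
    parents = pop[np.argsort(fit)[-10:]]
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, 2)]
        child = np.round((a + b) / 2).astype(int)   # recombination by averaging
        child += rng.integers(-1, 2, 2)             # small mutation
        children.append(np.clip(child, -10, 10))
    pop = np.array(children)
best = pop[np.argmax([similarity(s) for s in pop])]
print(best)   # should recover a shift close to (5, -7)
```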

  16. Null Steering of Adaptive Beamforming Using Linear Constraint Minimum Variance Assisted by Particle Swarm Optimization, Dynamic Mutated Artificial Immune System, and Gravitational Search Algorithm

    PubMed Central

    Sieh Kiong, Tiong; Tariqul Islam, Mohammad; Ismail, Mahamod; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam towards the target user precisely and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through conventional empirical approaches. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA into LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique for LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in Matlab. PMID:25147859
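
    For reference, the classical LCMV weight computation that the heuristics above attempt to improve upon is w = R^-1 C (C^H R^-1 C)^-1 f, where R is the interference-plus-noise covariance, C the constraint (steering) matrix, and f the desired response. The small numpy sketch below assumes a uniform linear array, one desired direction, and one interferer; all values are illustrative and the PSO/DM-AIS/GSA refinement is not shown.

```python
import numpy as np

def steering(theta_deg, n_elements=8, d=0.5):
    """Steering vector of a uniform linear array (element spacing d in wavelengths)."""
    k = np.arange(n_elements)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

n = 8
desired, interferer = 0.0, 40.0
a_d, a_i = steering(desired, n), steering(interferer, n)

# Interference-plus-noise covariance (assumed known here for simplicity).
R = 10.0 * np.outer(a_i, a_i.conj()) + np.eye(n)

# Classic LCMV weights: w = R^-1 C (C^H R^-1 C)^-1 f,
# with a single distortionless constraint toward the desired direction.
C = a_d[:, None]
f = np.array([1.0])
Rinv = np.linalg.inv(R)
w = Rinv @ C @ np.linalg.inv(C.conj().T @ Rinv @ C) @ f

print(abs(w.conj() @ a_d))   # ~1: unit gain toward the desired user
print(abs(w.conj() @ a_i))   # small: the interferer is strongly suppressed
```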

  17. Null steering of adaptive beamforming using linear constraint minimum variance assisted by particle swarm optimization, dynamic mutated artificial immune system, and gravitational search algorithm.

    PubMed

    Darzi, Soodabeh; Kiong, Tiong Sieh; Islam, Mohammad Tariqul; Ismail, Mahamod; Kibria, Salehin; Salem, Balasem

    2014-01-01

    Linear constraint minimum variance (LCMV) is one of the adaptive beamforming techniques commonly applied to cancel interfering signals and steer or produce a strong beam towards the desired signal through its computed weight vectors. However, the weights computed by LCMV are usually unable to form the radiation beam towards the target user precisely and are not good enough to reduce the interference by placing nulls at the interference sources. It is difficult to improve and optimize the LCMV beamforming technique through conventional empirical approaches. To provide a solution to this problem, artificial intelligence (AI) techniques are explored in order to enhance the LCMV beamforming ability. In this paper, particle swarm optimization (PSO), dynamic mutated artificial immune system (DM-AIS), and gravitational search algorithm (GSA) are incorporated into the existing LCMV technique in order to improve the weights of LCMV. The simulation results demonstrate that the received signal to interference and noise ratio (SINR) of the target user can be significantly improved by the integration of PSO, DM-AIS, and GSA into LCMV through the suppression of interference in undesired directions. Furthermore, the proposed GSA can be applied as a more effective technique for LCMV beamforming optimization compared to the PSO technique. The algorithms were implemented in Matlab.

  18. Toward Developing Genetic Algorithms to Aid in Critical Infrastructure Modeling

    SciTech Connect

    Not Available

    2007-05-01

    Today’s society relies upon an array of complex national and international infrastructure networks such as transportation, telecommunication, financial and energy. Understanding these interdependencies is necessary in order to protect our critical infrastructure. The Critical Infrastructure Modeling System, CIMS©, examines the interrelationships between infrastructure networks. CIMS© development is sponsored by the National Security Division at the Idaho National Laboratory (INL) in its ongoing mission for providing critical infrastructure protection and preparedness. A genetic algorithm (GA) is an optimization technique based on Darwin’s theory of evolution. A GA can be coupled with CIMS© to search for optimum ways to protect infrastructure assets. This includes identifying optimum assets to enforce or protect, testing the addition of or change to infrastructure before implementation, or finding the optimum response to an emergency for response planning. This paper describes the addition of a GA to infrastructure modeling for infrastructure planning. It first introduces the CIMS© infrastructure modeling software used as the modeling engine to support the GA. Next, the GA techniques and parameters are defined. Then a test scenario illustrates the integration with CIMS© and the preliminary results.

  19. Interfacial oxygen under TiO{sub 2} supported Au clusters revealed by a genetic algorithm search

    SciTech Connect

    Vilhelmsen, Lasse B.; Hammer, Bjørk

    2013-11-28

    We present a density functional theory study of the oxidation of 1D periodic rods supported along the [001] direction on the rutile TiO{sub 2}(110) surface. The study shows evidence for an oxidation of the interface between the supported Au and the TiO{sub 2} crystal. The added O atoms adsorb at the 5f-Ti atoms in the trough under the Au rod and are stabilized by charge transfer from the nearest Au atoms. Despite an extensive search, we find no low energy barrier pathways for CO oxidation involving CO adsorbed on Au and O at the perimeter of the Au/TiO{sub 2} interface. This is in part attributed to the weak adsorption of CO on cationic Au at the perimeter.

  20. In Search of Archean Biomarkers: Re-analysis of 2.7 Ga Metasediments from the Abitibi Greenstone Belt, Ontario, Canada.

    NASA Astrophysics Data System (ADS)

    Pasterski, M. J.; Kenig, F. P. H.; Ventura, G. T.; Hanley, L.; Barry, G.

    2015-12-01

    Biomarkers in Archean sediments are now generally considered contaminants, either incorporated into the sediments after deposition or introduced during coring, sample collection, or laboratory analysis. Not all previously studied Archean formations have yet been re-analyzed using techniques that take a more stringent approach to account for contamination. Here we re-analyze 2.676 - 2.703 billion-year-old (Ga) lower greenschist metasediments collected from the Abitibi Greenstone Belt in Ontario Canada to determine if previously observed biomarkers are actually syndepositional or artifacts. One method to assure the syngeneity of biomarkers with their host rock is to study hydrocarbons trapped within carbon-rich inclusions of specific mineral phases. We analyzed observed carbon-rich inclusions within the Abitibi metasediments to either verify or nullify the results of Ventura et al. 2007, which found evidence for archaeal, bacterial, and eukaryotic life present during primary deposition, and possible archaeal and bacterial life present during post deposition subsurface hydrothermal activity. 16 samples collected from 8 locations within the Tisdale and Porcupine Groups in Timmins, ON, Canada, were made into 75 - 100 μm thick slides, photographed, and petrographically analyzed for carbon-rich inclusions. Carbon-rich inclusions up to 10 - 20 μm in diameter were positively identified within 2.703 Ga greywackes. The 10 - 20 μm inclusions occur within the interstices of occluded quartz grains and within secondary mineral phases such as large (100 - 300 μm) ankerite grains which formed during the precipitation of minerals from hydrothermal fluid at 2.670 ± .007 Ga. The next step in analyzing these inclusions is to use full spectrum-Laser Desorption Postionization-Mass Spectrometry (fs-LDPI-MS), which has the capability of analyzing samples to the 2 μm scale to determine the hydrocarbon composition of the carbon-rich inclusions.

  1. GUESS-ing Polygenic Associations with Multiple Phenotypes Using a GPU-Based Evolutionary Stochastic Search Algorithm

    PubMed Central

    Hastie, David I.; Zeller, Tanja; Liquet, Benoit; Newcombe, Paul; Yengo, Loic; Wild, Philipp S.; Schillert, Arne; Ziegler, Andreas; Nielsen, Sune F.; Butterworth, Adam S.; Ho, Weang Kee; Castagné, Raphaële; Munzel, Thomas; Tregouet, David; Falchi, Mario; Cambien, François; Nordestgaard, Børge G.; Fumeron, Fredéric; Tybjærg-Hansen, Anne; Froguel, Philippe; Danesh, John; Petretto, Enrico; Blankenberg, Stefan; Tiret, Laurence; Richardson, Sylvia

    2013-01-01

    Genome-wide association studies (GWAS) yielded significant advances in defining the genetic architecture of complex traits and disease. Still, a major hurdle of GWAS is narrowing down multiple genetic associations to a few causal variants for functional studies. This becomes critical in multi-phenotype GWAS where detection and interpretability of complex SNP(s)-trait(s) associations are complicated by complex Linkage Disequilibrium patterns between SNPs and correlation between traits. Here we propose a computationally efficient algorithm (GUESS) to explore complex genetic-association models and maximize genetic variant detection. We integrated our algorithm with a new Bayesian strategy for multi-phenotype analysis to identify the specific contribution of each SNP to different trait combinations and study genetic regulation of lipid metabolism in the Gutenberg Health Study (GHS). Despite the relatively small size of GHS (n = 3,175), when compared with the largest published meta-GWAS (n>100,000), GUESS recovered most of the major associations and was better at refining multi-trait associations than alternative methods. Amongst the new findings provided by GUESS, we revealed a strong association of SORT1 with TG-APOB and LIPC with TG-HDL phenotypic groups, which were overlooked in the larger meta-GWAS and not revealed by competing approaches, associations that we replicated in two independent cohorts. Moreover, we demonstrated the increased power of GUESS over alternative multi-phenotype approaches, both Bayesian and non-Bayesian, in a simulation study that mimics real-case scenarios. We showed that our parallel implementation based on Graphics Processing Units outperforms alternative multi-phenotype methods. Beyond multivariate modelling of multi-phenotypes, our Bayesian model employs a flexible hierarchical prior structure for genetic effects that adapts to any correlation structure of the predictors and increases the power to identify associated variants. This

  2. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for a large number of customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem that cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), one-stage approach (OSA), two-phase heuristic method (TPHM), tabu search algorithm (TSA), genetic algorithm (GA) and hierarchical multiplex structure (HIMS). Most of the methods mentioned above are time consuming and have a high risk of ending in a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.
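
    The abstract does not detail its AIS operators, so the sketch below shows only the generic clonal-selection pattern (clone good antibodies, hypermutate the clones, keep the improvements) on a continuous toy function. The MDVSP itself would need a permutation-style encoding and routing-specific mutation; everything here is an illustrative assumption.

```python
import numpy as np

def clonalg(objective, lo, hi, pop_size=20, n_select=5, clones_per=5,
            generations=100, seed=0):
    """Generic clonal-selection sketch: clone the best antibodies, hypermutate
    the clones (larger mutation for worse-ranked parents), keep improvements."""
    rng = np.random.default_rng(seed)
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    for _ in range(generations):
        fit = np.array([objective(p) for p in pop])
        selected = pop[np.argsort(fit)[:n_select]]        # best first (minimization)
        clones = []
        for rank, ab in enumerate(selected):
            scale = 0.1 * (rank + 1) * (hi - lo)          # worse rank -> larger mutation
            for _ in range(clones_per):
                clones.append(np.clip(ab + rng.normal(0, scale), lo, hi))
        clones = np.array(clones)
        clone_fit = np.array([objective(c) for c in clones])
        best_clones = clones[np.argsort(clone_fit)[: pop_size - n_select - 2]]
        fresh = rng.uniform(lo, hi, size=(2, dim))        # receptor editing / diversity
        pop = np.vstack([selected, best_clones, fresh])
    return pop[np.argmin([objective(p) for p in pop])]

sphere = lambda x: float(np.sum(x**2))
print(clonalg(sphere, np.full(3, -10.0), np.full(3, 10.0)))
```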

  3. Using Search Algorithms and Probabilistic Graphical Models to Understand the Influence of Atmospheric Circulation on Western US Drought

    NASA Astrophysics Data System (ADS)

    Malevich, S. B.; Woodhouse, C. A.

    2015-12-01

    This work explores a new approach to quantifying cool-season mid-latitude circulation dynamics as they relate to western US streamflow variability and drought. This information is used to probabilistically associate patterns of synoptic atmospheric circulation with spatial patterns of drought in western US streamflow. Cool-season storms transport moisture from the Pacific Ocean and are a primary source of western US streamflow. Studies over the past several decades have emphasized that the western US hydroclimate is influenced by the intensity and phasing of ocean and atmosphere dynamics and teleconnections, such as ENSO and North Pacific variability. These complex interactions are realized in atmospheric circulation along the west coast of North America. The region's atmospheric circulation can encourage a preferential flow in winter storm tracks from the Pacific, and thus influence the moisture conditions of a given river basin over the course of the cool season. These dynamics have traditionally been measured with atmospheric indices based on values from fixed points in space or principal component loadings. This study uses collective search agents to quantify the position and intensity of potentially non-stationary atmospheric features in climate reanalysis datasets, relative to regional hydrology. Results underline the spatio-temporal relationship between semi-permanent atmospheric characteristics and naturalized streamflow from major river basins of the western US. A probabilistic graphical model quantifies this relationship while accounting for uncertainty from noisy climate processes and, eventually, limitations from dataset length. This yields probabilities for semi-permanent atmospheric features that we hope to associate with extreme droughts of the paleo record, based on our understanding of atmosphere-streamflow relations observed in the instrumental record.

  4. Varied Definitions of Nasolabial Angle: Searching for Consensus Among Rhinoplasty Surgeons and an Algorithm for Selecting the Ideal Method

    PubMed Central

    Harris, Ryan; Nagarkar, Purushottam

    2016-01-01

    Background: The nasolabial angle (NLA) is an important aesthetic metric for nasal assessment and correction. Although the literature offers many definitions, none has garnered universal acceptance. Methods: To gauge the consensus level among practitioners, surveys were administered to a convenience sample of rhinoplasty surgeons soliciting practice characteristics, self-assessment of rhinoplasty experience and expertise, and preferred NLA definition. Choices of NLA definition included the angle between: (A) columella and line intersecting subnasale and labrale superius; (B) columella and line tangent to philtrum; (C) nostril long axis and Frankfort perpendicular; and (D) nostril long axis and vertical facial plane. Results: Of the 82 total respondents, mean age was 50 years (range, 30–80years), and mean professional experience was 17 years (range, 0–67 years). Nineteen described themselves as novice rhinoplasty surgeons, 27 as intermediates, and 36 as experts. Mean number of lifetime rhinoplasties performed was 966 (range, 0–10,000). Twenty respondents (24%) agreed with definition A, 27 (33%) with B, 16 (20%) with C, and 13 (16%) with D. Six chose “other,” offering their own explanations of NLA. Self-identified novices were more likely to prefer definition D than were experts (P = 0.009). Conclusions: No majority consensus was reached regarding the definition of NLA. Each method has its benefits and drawbacks, and establishing a single one may be unnecessary and even counterproductive in some cases. Having options available means that surgeons can tailor to each encounter, as long as they adopt a systematic methodology. We submit an algorithm to facilitate this effort. PMID:27482491

  5. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    NASA Astrophysics Data System (ADS)

    Li, Yuzhong

    Using a GA to solve the winner determination problem (WDP) with large numbers of bids and items, run under different distributions, is difficult: the search space is large, the constraints are complex, and infeasible solutions are easily produced, all of which degrade the efficiency and quality of the algorithm. This paper presents an improved MKGA, including three operators (preprocessing, insert-bid, and exchange recombination) and a Monkey-King elite preservation strategy. Experimental results show that the improved MKGA outperforms a standard GA (SGA) with respect to population size and computation. Problems that the traditional branch-and-bound algorithm can hardly solve can be solved by the improved MKGA with better results.
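
    One simple way to keep a GA for the winner determination problem feasible, related to the infeasibility issue noted above, is to decode every chromosome through a repair step that drops conflicting bids. The toy instance and greedy price-ordered repair below are illustrative assumptions, not the MKGA's preprocessing, insert-bid, or exchange-recombination operators.

```python
# Toy winner-determination instance: each bid is (set of items, price).
bids = [
    ({"a", "b"}, 9.0),
    ({"b", "c"}, 8.0),
    ({"c"}, 4.0),
    ({"a"}, 3.0),
    ({"d"}, 5.0),
]

def repair(accept):
    """Drop conflicting bids (shared items), keeping higher-priced bids first,
    so every chromosome decodes to a feasible allocation."""
    order = sorted((i for i, on in enumerate(accept) if on),
                   key=lambda i: -bids[i][1])
    taken, feasible = set(), [0] * len(accept)
    for i in order:
        items, _ = bids[i]
        if not (items & taken):
            feasible[i] = 1
            taken |= items
    return feasible

def revenue(accept):
    return sum(bids[i][1] for i, on in enumerate(accept) if on)

# A GA would evolve the 0/1 accept vector; fitness = revenue(repair(vector)).
raw = [1, 1, 1, 1, 1]          # infeasible as-is: items overlap
print(repair(raw), revenue(repair(raw)))
```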

  6. Virus evolutionary genetic algorithm for task collaboration of logistics distribution

    NASA Astrophysics Data System (ADS)

    Ning, Fanghua; Chen, Zichen; Xiong, Li

    2005-12-01

    In order to achieve JIT (Just-In-Time) performance and maximum client satisfaction in logistics collaboration, a Virus Evolutionary Genetic Algorithm (VEGA) is put forward under the double constraints of logistics resources and operation sequence. Based on a mathematical description of a multiple-objective function, the algorithm is designed to schedule logistics tasks with different due dates and allocate them to network members. By introducing a penalty term, makespan and customer satisfaction are expressed in the fitness function, and a dynamic adaptive probability of infection is used to improve the performance of the local search. Compared to a standard Genetic Algorithm (GA), the experimental results illustrate the performance superiority of VEGA. The VEGA can therefore provide a powerful decision-making technique for optimizing resource configuration in a logistics network.

  7. An Evaluation of Potentials of Genetic Algorithm in Shortest Path Problem

    NASA Astrophysics Data System (ADS)

    Hassany Pazooky, S.; Rahmatollahi Namin, Sh; Soleymani, A.; Samadzadegan, F.

    2009-04-01

    One of the most typical issues in combinatorial problems on transportation networks is the shortest path problem. In such networks, routing has a significant impact on the network's performance. Due to the natural complexity of transportation networks and the strong impact of routing on different fields of decision making, such as traffic management and the vehicle routing problem (VRP), appropriate solutions to this problem are crucial. In recent years, different solutions have been proposed to solve the shortest path problem. These techniques are divided into two categories: classic and evolutionary approaches. Two well-known classic algorithms are Dijkstra and A*. Dijkstra is known as a robust but time-consuming algorithm for the shortest path problem. A* is another algorithm, very similar to Dijkstra, less robust but with higher performance. On the other hand, genetic algorithms are among the most widely applicable evolutionary algorithms. A Genetic Algorithm searches several parts of the domain in parallel and is not trapped in local optima. In this paper, the potential of the Genetic Algorithm for finding the shortest path is evaluated by comparing it with the classic algorithms (Dijkstra and A*). Evaluation of these techniques on a transportation network in an urban area shows that, owing to the limitation of the classic methods to a small search space, the GA performed better in finding the shortest path.
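
    For comparison, the classic baseline named in the abstract is straightforward to state; a minimal Dijkstra implementation on a toy weighted graph (data made up) looks like this. The GA approach in the paper instead evolves candidate paths in parallel, which is not shown here.

```python
import heapq

def dijkstra(graph, source, target):
    """Classic shortest-path baseline: returns the shortest path and its length
    for a graph given as {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        if u == target:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return path[::-1], dist[target]

graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A", "D"))   # (['A', 'B', 'C', 'D'], 4.0)
```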

  8. A genetic algorithm for slope stability analyses with concave slip surfaces using custom operators

    NASA Astrophysics Data System (ADS)

    Jurado-Piña, Rafael; Jimenez, Rafael

    2015-04-01

    Heuristic methods are popular tools to find critical slip surfaces in slope stability analyses. A new genetic algorithm (GA) is proposed in this work that has a standard structure but a novel encoding and generation of individuals with custom-designed operators for mutation and crossover that produce kinematically feasible slip surfaces with a high probability. In addition, new indices to assess the efficiency of operators in their search for the minimum factor of safety (FS) are proposed. The proposed GA is applied to traditional benchmark examples from the literature, as well as to a new practical example. Results show that the proposed GA is reliable, flexible and robust: it provides good minimum FS estimates that are not very sensitive to the number of nodes and that are very similar for different replications.

  9. Integrated fuzzy logic and genetic algorithms for multi-objective control of structures using MR dampers

    NASA Astrophysics Data System (ADS)

    Yan, Gang; Zhou, Lily L.

    2006-09-01

    This study presents a design strategy based on genetic algorithms (GA) for semi-active fuzzy control of structures that have magnetorheological (MR) dampers installed to prevent damage from severe dynamic loads such as earthquakes. The control objective is to minimize both the maximum displacement and acceleration responses of the structure. Interactive relationships between structural responses and input voltages of MR dampers are established by using a fuzzy controller. GA is employed as an adaptive method for design of the fuzzy controller, which is here known as a genetic adaptive fuzzy (GAF) controller. The multi-objectives are first converted to a fitness function that is used in standard genetic operations, i.e. selection, crossover, and mutation. The proposed approach generates an effective and reliable fuzzy logic control system by powerful searching and self-learning adaptive capabilities of GA. Numerical simulations for single and multiple damper cases are given to show the effectiveness and efficiency of the proposed intelligent control strategy.

  10. Fuzzy logic and genetic algorithms for intelligent control of structures using MR dampers

    NASA Astrophysics Data System (ADS)

    Yan, Gang; Zhou, Lily L.

    2004-07-01

    Fuzzy logic control (FLC) and genetic algorithms (GA) are integrated into a new approach for the semi-active control of structures installed with MR dampers against severe dynamic loadings such as earthquakes. The interactive relationship between the structural response and the input voltage of MR dampers is established by using a fuzzy controller rather than the traditional way of introducing an ideal active control force. GA is employed as an adaptive method for optimization of parameters and for selection of fuzzy rules of the fuzzy control system, respectively. The maximum structural displacement is selected and used as the objective function to be minimized. The objective function is then converted to a fitness function to form the basis of genetic operations, i.e. selection, crossover, and mutation. The proposed integrated architecture is expected to generate an effective and reliable fuzzy control system by GA's powerful searching and self-learning adaptive capability.

  11. Determining hypocentral parameters for local earthquakes under ill conditions using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Kim, Woohan; Hahm, In-Kyeong; Kim, Won-Young; Lee, Jung Mo

    2010-10-01

    We demonstrate that GA-MHYPO determines accurate hypocentral parameters for local earthquakes under ill conditions, such as limited number of stations (phase data), large azimuthal gap, and noisy data. The genetic algorithm (GA) in GA-MHYPO searches for the optimal 1-D velocity structure which provides the minimum traveltime differences between observed (true) and calculated P and S arrivals within prescribed ranges. GA-MHYPO is able to determine hypocentral parameters more accurately in many circumstances than conventional methods which rely on an a priori (and possibly incorrect) 1-D velocity model. In our synthetic tests, the accuracy of hypocentral parameters obtained by GA-MHYPO given ill conditions is improved by more than a factor of 20 for error-free data, and by a factor of five for data with errors, compared to that obtained by conventional methods such as HYPOINVERSE. In the case of error-free data, GA-MHYPO yields less than 0.1 km errors in focal depths and hypocenters without strong dependence on azimuthal coverage up to 45°. Errors are less than 1 km for data with errors of a 0.1-s standard deviation. To test the performance using real data, a well-recorded earthquake in the New Madrid seismic zone and earthquakes recorded under ill conditions in the High Himalaya are relocated by GA-MHYPO. The hypocentral parameters determined by GA-MHYPO under both good and ill conditions show similar computational results, which suggest that GA-MHYPO is robust and yields more reliable hypocentral parameters than standard methods under ill conditions for natural earthquakes.
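
    The quantity such a search minimizes is a travel-time misfit. The sketch below shows an RMS P-arrival misfit under a deliberately simplified assumption of a single uniform velocity (GA-MHYPO additionally searches over a 1-D layered velocity structure); the station coordinates and picks are made-up values.

```python
import numpy as np

# Station coordinates (km) and observed P arrival times (s) -- illustrative values.
stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 25.0], [-15.0, 20.0]])
t_obs = np.array([3.1, 5.4, 4.6, 4.9])

def misfit(params, vp=6.0):
    """RMS difference between observed and predicted P arrivals for a trial
    hypocenter (x, y, depth z, origin time t0), assuming a uniform P velocity vp."""
    x, y, z, t0 = params
    dist = np.sqrt((stations[:, 0] - x) ** 2 + (stations[:, 1] - y) ** 2 + z ** 2)
    t_pred = t0 + dist / vp
    return float(np.sqrt(np.mean((t_pred - t_obs) ** 2)))

# A GA (or any global optimizer) would search (x, y, z, t0) -- and, in
# GA-MHYPO's case, the 1-D velocity model as well -- to minimize this misfit.
print(misfit([5.0, 10.0, 8.0, 1.0]))
```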

  12. Experimental design for estimating unknown hydraulic conductivity in an aquifer using a genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2015-12-01

    We develop an experimental design algorithm to select locations for a network of observation wells that provide the maximum robust information about unknown hydraulic conductivity in a confined, anisotropic aquifer. Since the information that a design provides depends on the aquifer's hydraulic conductivity, a robust design is one that provides the maximum information in the worst-case scenario. The design can be formulated as a max-min optimization problem, which is generally non-convex, non-differentiable, and contains integer variables. We use a genetic algorithm (GA) to perform the combinatorial search and employ proper orthogonal decomposition (POD) to reduce the dimension of the groundwater model, thereby reducing the computational burden of employing a GA. The GA exhaustively searches for the robust design across a set of hydraulic conductivities and finds an approximate design (called the High Frequency Observation Well Design) through a Monte Carlo-type search. The results from a small-scale 1-D test case validate the proposed methodology. We then apply the methodology to a realistically scaled 2-D test case.
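
    The max-min evaluation can be sketched as follows: a candidate well set is scored by its worst-case information over a set of conductivity scenarios, and the search keeps the design with the largest worst-case score. The random "sensitivity" table and the random sampling below are placeholders for the POD-reduced groundwater model and the GA's combinatorial search.

```python
import random

N_CANDIDATE_WELLS = 12
WELLS_TO_PICK = 4
SCENARIOS = 5

# sensitivity[s][w] stands in for how informative well w is under conductivity scenario s
sensitivity = [[random.random() for _ in range(N_CANDIDATE_WELLS)]
               for _ in range(SCENARIOS)]

def worst_case_information(design):
    """Robust (minimum over scenarios) information of a set of well indices."""
    return min(sum(sensitivity[s][w] for w in design) for s in range(SCENARIOS))

def random_design():
    return tuple(sorted(random.sample(range(N_CANDIDATE_WELLS), WELLS_TO_PICK)))

# Stand-in for the GA's combinatorial search: keep the best of many random designs.
best = max((random_design() for _ in range(500)), key=worst_case_information)
print(best, worst_case_information(best))
```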

  13. Guiding rational reservoir flood operation using penalty-type genetic algorithm

    NASA Astrophysics Data System (ADS)

    Chang, Li-Chiu

    2008-06-01

    Real-time flood control of a multi-purpose reservoir should both decrease the flood peak stage downstream and store floodwaters for future use during typhoon seasons. This study proposes a reservoir flood control optimization model with a linguistic description of the requirements and existing regulations for rational operating decisions. The approach formulates reservoir flood operation as an optimization problem and uses the genetic algorithm (GA) as the search engine. The formulation is expressed not only through mathematical objective functions and constraints but also through requirements that have no analytic expression in terms of the parameters. GA is used to search for a global optimum of this mixture of mathematical and non-mathematical formulations. Because of the large number of constraints and flood control requirements, it is difficult to reach a solution without violating constraints. To tackle this bottleneck, a suitable penalty strategy for each parameter is proposed to guide the GA search process. The proposed approach is applied to the Shihmen reservoir in northern Taiwan as a case study to find rational releases and desired storage. Hourly historical data sets for 29 typhoon events that have hit the area in the last thirty years are investigated by the proposed method. To demonstrate the effectiveness of the proposed approach, the simplex method was also applied for comparison. The results demonstrate that a penalty-type genetic algorithm can effectively provide rational hydrographs to reduce flood damage during the flood operation and to increase final storage for future use.
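
    The penalty idea can be illustrated with a sketch in which constraint violations (storage capacity, release limit, release ramping) are added to the objective so that infeasible release hydrographs are progressively discouraged by the GA; all limits, weights, and the damage surrogate below are illustrative assumptions, not Shihmen reservoir regulations.

```python
def penalized_cost(releases, inflows, s0,
                   s_max=3000.0, r_max=800.0, ramp_max=200.0,
                   w_storage=10.0, w_release=10.0, w_ramp=5.0):
    """Objective plus penalties; a GA would minimize this over release hydrographs."""
    storage, cost = s0, 0.0
    prev_r = releases[0]
    for q_in, r in zip(inflows, releases):
        storage += q_in - r
        cost += r * r                                  # surrogate for downstream peak-stage damage
        if storage > s_max:                            # overtopping penalty
            cost += w_storage * (storage - s_max)
        if r > r_max:                                  # outlet-capacity penalty
            cost += w_release * (r - r_max)
        if abs(r - prev_r) > ramp_max:                 # release-change (ramping) penalty
            cost += w_ramp * (abs(r - prev_r) - ramp_max)
        prev_r = r
    return cost

print(penalized_cost([300, 500, 900, 700], [400, 800, 1000, 600], s0=1500.0))
```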

  14. Searching with probabilities

    SciTech Connect

    Palay, A.J.

    1985-01-01

    This book examines how probability distributions can be used as a knowledge representation technique. It presents a mechanism that can be used to guide a selective search algorithm to solve a variety of tactical chess problems. Topics covered include probabilities and searching, the B* algorithm and chess, probabilities in practice, examples, results, and future work.

  15. Optimal groundwater remediation using artificial neural networks and the genetic algorithm

    SciTech Connect

    Rogers, L.L.

    1992-08-01

    An innovative computational approach for the optimization of groundwater remediation is presented which uses artificial neural networks (ANNs) and the genetic algorithm (GA). In this approach, the ANN is trained to predict an aspect of the outcome of a flow and transport simulation. The GA then searches through realizations or patterns of pumping and uses the trained network to predict the outcome of each realization. This approach offers the advantages of parallel processing of the groundwater simulations and the ability to "recycle" or reuse the base of knowledge formed by these simulations. These advantages reduce the computational burden of the groundwater simulations relative to a more conventional approach that uses nonlinear programming (NLP) with a quasi-Newtonian search. The modular nature of this approach also facilitates substitution of different groundwater simulation models.
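
    A minimal sketch of the surrogate/GA coupling: the GA proposes on/off pumping patterns and a cheap stand-in for the trained ANN predicts whether each pattern meets the cleanup target; the surrogate rule, cost model, and GA settings are placeholders rather than anything trained on real flow-and-transport simulations.

```python
import random

N_WELLS = 10

def surrogate_success(pattern):
    """Stand-in for the trained ANN's prediction of remediation success."""
    return sum(pattern) >= 4                  # pretend that >= 4 active wells achieve cleanup

def fitness(pattern):
    cost = sum(pattern)                       # pumping cost grows with the number of wells on
    return -cost if surrogate_success(pattern) else -1000.0

def crossover(a, b):
    cut = random.randrange(1, N_WELLS)
    return a[:cut] + b[cut:]

def mutate(p, rate=0.1):
    return [1 - g if random.random() < rate else g for g in p]

population = [[random.randint(0, 1) for _ in range(N_WELLS)] for _ in range(30)]
for _ in range(40):                           # GA generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]
print(max(population, key=fitness))
```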

  16. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  17. Genetic algorithm approach for adaptive power and subcarrier allocation in multi-user OFDM systems

    NASA Astrophysics Data System (ADS)

    Reddy, Y. B.; Naraghi-Pour, Mort

    2007-04-01

    In this paper, a novel genetic algorithm application is proposed for adaptive power and subcarrier allocation in multi-user Orthogonal Frequency Division Multiplexing (OFDM) systems. To test the application, a simple genetic algorithm was implemented in the MATLAB language. With the goal of minimizing the overall transmit power while ensuring that each user's rate and bit error rate (BER) requirements are met, the proposed algorithm finds the required allocation through genetic search. The simulations were run for BER requirements from 0.1 to 0.00001, a data rate of 256 bits per OFDM block, and a chromosome length of 128. The results show that the genetic algorithm outperforms the results in [3] in subcarrier allocation. The GA model with 8 users and 128 subcarriers achieves a lower power requirement than that in [4] but converges more slowly.
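
    A hedged sketch of the allocation problem: a chromosome assigns each subcarrier to one user, a toy power model estimates the transmit power needed on each subcarrier, and unmet rate requirements are penalized so the GA minimizes total power; the channel gains, BER target, rate requirements, and power model are illustrative assumptions, not the system model of the paper.

```python
import math
import random

USERS, SUBCARRIERS = 4, 16
gain = [[random.uniform(0.2, 1.0) for _ in range(SUBCARRIERS)] for _ in range(USERS)]
rate_required = [4, 4, 4, 4]              # subcarriers needed per user (assumed)
bits_per_subcarrier = 2                   # fixed modulation for the sketch

def required_power(u, sc, ber=1e-3):
    """Toy power model: a worse channel gain or stricter BER target needs more power."""
    return -math.log(ber) * (2 ** bits_per_subcarrier - 1) / gain[u][sc]

def cost(chromosome, penalty=100.0):
    """Total transmit power plus penalties for users whose rate requirement is unmet."""
    power = sum(required_power(chromosome[sc], sc) for sc in range(SUBCARRIERS))
    for u in range(USERS):
        shortfall = rate_required[u] - chromosome.count(u)
        if shortfall > 0:
            power += penalty * shortfall
    return power                          # the GA minimizes this cost

chromosome = [random.randrange(USERS) for _ in range(SUBCARRIERS)]
print(cost(chromosome))
```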

  18. Identification of a Robust Lichen Index for the Deconvolution of Lichen and Rock Mixtures Using Pattern Search Algorithm (case Study: Greenland)

    NASA Astrophysics Data System (ADS)

    Salehi, S.; Karami, M.; Fensholt, R.

    2016-06-01

    Lichens are the dominant autotrophs of polar and subpolar ecosystems and commonly encrust rock outcrops. Spectral mixing of lichens and bare rock can shift the diagnostic spectral features of materials of interest, leading to misinterpretation and false positives when mapping is based on perfect spectral matching methodologies. Therefore, the ability to distinguish lichen coverage from rock and to decompose a mixed pixel into a collection of pure reflectance spectra can improve the applicability of hyperspectral methods for mineral exploration. The objective of this study is to propose a robust lichen index that can be used to estimate lichen coverage, regardless of the mineral composition of the underlying rocks. The performance of three index structures (ratio, normalized ratio, and subtraction) is investigated using synthetic linear mixtures of pure rock and lichen spectra with prescribed mixing ratios. Laboratory spectroscopic data are obtained from lichen-covered samples collected from the Karrat, Liverpool Land, and Sisimiut regions in Greenland. The spectra are then resampled to Hyperspectral Mapper (HyMAP) resolution in order to further investigate the functionality of the indices for the airborne platform. At both resolutions, a pattern search (PS) algorithm is used to identify the optimal band wavelengths and bandwidths for the lichen index. The results of our band optimization procedure reveal that the ratio between R894-1246 and R1110 explains most of the variability in the hyperspectral data at the original laboratory resolution (R2=0.769), whereas the normalized index incorporating R1106-1121 and R904-1251 yields the best results at the HyMAP resolution (R2=0.765).
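
    In the spirit of the band-optimization step, the sketch below runs a simple pattern search over two band centers, polling plus/minus step moves and contracting the step when no move improves the score; the scoring function and wavelength grid are synthetic placeholders, not the R2 against lichen fraction computed from the Greenland spectra.

```python
import random

WAVELENGTHS = list(range(450, 2451, 10))            # nm grid for starting points (assumed)

def score(b1, b2):
    """Stand-in for the goodness of a band pair; a real run would compute R^2
    between the index built from bands b1, b2 and the known lichen fractions."""
    return -((b1 - 1100) ** 2 + (b2 - 950) ** 2)    # toy optimum near (1100, 950)

def pattern_search(b1, b2, step=160, min_step=10):
    """Poll +/- step moves on each band center; halve the step when stuck."""
    best = score(b1, b2)
    while step >= min_step:
        improved = False
        for db1, db2 in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            cand = score(b1 + db1, b2 + db2)
            if cand > best:
                b1, b2, best, improved = b1 + db1, b2 + db2, cand, True
                break
        if not improved:
            step //= 2                              # contract the poll step
    return b1, b2, best

print(pattern_search(random.choice(WAVELENGTHS), random.choice(WAVELENGTHS)))
```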

  19. Improved limited discrepancy search

    SciTech Connect

    Korf, R.E.

    1996-12-31

    We present an improvement to Harvey and Ginsberg's limited discrepancy search algorithm that eliminates much of the redundancy in the original by generating each path from the root to the maximum search depth only once. For a complete binary tree of depth d, this reduces the asymptotic complexity from O(((d+2)/2) 2^d) to O(2^d). The savings are much smaller in a partial tree search or in a heavily pruned tree. The overhead of the improved algorithm on a complete binary tree is only a factor of b/(b-1) compared to depth-first search. While this constant factor is greater on a heavily pruned tree, the improvement makes limited discrepancy search a viable alternative to depth-first search whenever the entire tree may not be searched. Finally, we present both positive and negative empirical results on the utility of limited discrepancy search for the problem of number partitioning.
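
    The idea of generating each path only once can be sketched on a binary tree: iteration k yields exactly the leaves whose paths contain k discrepancies (branches against the heuristic's preferred child), so no leaf is regenerated by later iterations. This is a minimal illustration of that principle, not Korf's full algorithm.

```python
def ilds(depth, k, path=""):
    """Yield all leaf paths of a depth-`depth` binary tree containing exactly k
    discrepancies; '0' follows the heuristic's preferred left child, '1' does not."""
    remaining = depth - len(path)
    if remaining == 0:
        if k == 0:
            yield path
        return
    if k < remaining:                 # can still spend all k discrepancies below this node
        yield from ilds(depth, k, path + "0")
    if k > 0:                         # spend one discrepancy at this level
        yield from ilds(depth, k - 1, path + "1")

# Outer loop: leaves appear in order of increasing discrepancies, each produced exactly once.
for k in range(4):
    for leaf in ilds(3, k):
        print(k, leaf)
```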

  20. Particle Swarm Optimization Method Based on Chaotic Local Search and Roulette Wheel Mechanism

    NASA Astrophysics Data System (ADS)

    Xia, Xiaohua

    Combining the particle swarm optimization (PSO) technique with chaotic local search (CLS) and a roulette wheel mechanism (RWM), an efficient method for solving constrained nonlinear optimization problems is presented in this paper. PSO serves as the global optimizer, while the CLS and RWM are employed for the local search, which increases the possibility of finding the global minimum in problems with many local optima. The search continues until a termination criterion is satisfied. Benefiting from the fast global convergence of PSO and the effective local search of the CLS and RWM, the proposed method quickly obtains the global optimum, as tested on six benchmark optimization problems. Its improved performance compared with the standard PSO and a genetic algorithm (GA) confirms its validity.
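
    A hedged sketch combining a canonical PSO update with a logistic-map chaotic local search around the current global best; the roulette-wheel mechanism of the paper is omitted for brevity, and the sphere test function, coefficients, and perturbation radius are assumptions.

```python
import random

def f(x):                                    # sphere test function (assumed objective)
    return sum(v * v for v in x)

DIM, SWARM, ITERS = 5, 20, 100
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=f)[:]

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):                 # canonical PSO velocity and position update
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i][:]
            if f(pos[i]) < f(gbest):
                gbest = pos[i][:]
    z = random.random()                      # chaotic local search around the global best
    for _ in range(10):
        z = 4.0 * z * (1.0 - z)              # logistic map generating values in (0, 1)
        cand = [g + 0.1 * (2 * z - 1) for g in gbest]
        if f(cand) < f(gbest):
            gbest = cand

print(f(gbest))
```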