Sample records for parameterized NP-hard problems

  1. Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.

    PubMed

    Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian

    2017-01-01

    Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.

  2. Alternative Parameterizations for Cluster Editing

    NASA Astrophysics Data System (ADS)

    Komusiewicz, Christian; Uhlmann, Johannes

    Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.
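
    The solution-size parameterization that this record moves away from is still the easiest to illustrate: a graph is a disjoint union of cliques exactly when it contains no induced path on three vertices, so branching on such a "conflict triple" yields the classical O(3^k)-time search tree for Cluster Editing. A minimal sketch of that textbook branching, not the authors' algorithms for the alternative parameters:

    ```python
    from itertools import combinations

    def find_p3(n, edges):
        """Return an induced P3 (u, v, w): uv, vw edges, uw a non-edge."""
        adj = {v: set() for v in range(n)}
        for a, b in edges:
            adj[a].add(b); adj[b].add(a)
        for v in range(n):
            for u, w in combinations(adj[v], 2):
                if w not in adj[u]:
                    return u, v, w
        return None

    def cluster_editing(n, edges, k):
        """Decide whether <= k edge edits turn the graph into a cluster graph."""
        triple = find_p3(n, edges)
        if triple is None:
            return True                       # already a union of cliques
        if k == 0:
            return False
        u, v, w = triple
        e = lambda a, b: frozenset((a, b))
        E = {e(a, b) for a, b in edges}
        # Branch on the three edits that destroy the conflict triple.
        return (cluster_editing(n, E - {e(u, v)}, k - 1)     # delete uv
                or cluster_editing(n, E - {e(v, w)}, k - 1)  # delete vw
                or cluster_editing(n, E | {e(u, w)}, k - 1)) # insert uw

    # A path on three vertices needs exactly one edit.
    print(cluster_editing(3, {(0, 1), (1, 2)}, 1))  # True
    ```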

  3. Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs

    NASA Astrophysics Data System (ADS)

    Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes

    We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.

  4. On Making a Distinguished Vertex Minimum Degree by Vertex Deletion

    NASA Astrophysics Data System (ADS)

    Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes

    For directed and undirected graphs, we study the problem to make a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". On the contrary, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding nonexistence results when replacing vertex cover number by treewidth or feedback vertex set number.

  5. Parameterized Complexity Results for General Factors in Bipartite Graphs with an Application to Constraint Programming

    NASA Astrophysics Data System (ADS)

    Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders

    The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V|, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.
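
    The decision problem itself is simple to state and, on toy instances, to check exhaustively. A brute-force verifier (for intuition only; the paper's algorithm instead reduces the instance to a bounded number of acyclic instances solved by dynamic programming):

    ```python
    from itertools import combinations

    def has_general_factor(vertices, edges, lists):
        """Brute force: is there a spanning subgraph in which every vertex v
        has a degree contained in lists[v]?  Exponential in the edge count."""
        for r in range(len(edges) + 1):
            for subset in combinations(edges, r):
                deg = {v: 0 for v in vertices}
                for a, b in subset:
                    deg[a] += 1; deg[b] += 1
                if all(deg[v] in lists[v] for v in vertices):
                    return True
        return False

    # Bipartite example: U = {0, 1} with lists {1}, V = {2} with list {2}.
    print(has_general_factor([0, 1, 2], [(0, 2), (1, 2)],
                             {0: {1}, 1: {1}, 2: {2}}))  # True
    ```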

  6. Better approximation guarantees for job-shop scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, L.A.; Paterson, M.; Srinivasan, A.

    1997-06-01

    Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein & Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work, and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.

  7. On the Complexity of Delaying an Adversary’s Project

    DTIC Science & Technology

    2005-01-01

    We develop interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete, or NP-hard.

  8. Phase Transitions in Planning Problems: Design and Analysis of Parameterized Families of Hard Planning Problems

    NASA Technical Reports Server (NTRS)

    Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide

    2014-01-01

    There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning problems to QUBOs, the form of input required for a quantum annealing machine such as the D-Wave II.

  9. Solving NP-Hard Problems with Physarum-Based Ant Colony System.

    PubMed

    Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li

    2017-01-01

    NP-hard problems arise in many real-world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for such problems, but their performance is significantly reduced by premature convergence, weak robustness, and related issues. With these observations in mind, this paper proposes a Physarum-based pheromone-matrix optimization strategy in the ant colony system (ACS) for solving NP-hard problems such as the traveling salesman problem (TSP) and the 0/1 knapsack problem (0/1 KP). One unique characteristic of the Physarum-inspired mathematical model is that critical tubes can be preserved during network evolution. The optimized updating strategy employs this feature and accelerates the positive feedback process in ACS, which contributes to quick convergence to the optimal solution. Experiments were conducted using both benchmark and real datasets. The results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs, while its convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.

  10. Parameterized Complexity of k-Anonymity: Hardness and Tractability

    NASA Astrophysics Data System (ADS)

    Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri

    The problem of publishing personal data without giving up privacy is becoming increasingly important. One precise formalization that has recently been proposed is k-anonymity, where the rows of a table are partitioned into clusters of size at least k and all rows in a cluster become the same tuple after the suppression of some entries. The natural optimization problem, where the goal is to minimize the number of suppressed entries, is hard even when the stored values are over a binary alphabet or the table consists of a bounded number of columns. In this paper we study how the complexity of the problem is influenced by different parameters. First we show that the problem is W[1]-hard when parameterized by the value of the solution (and k). Then we exhibit a fixed-parameter algorithm when the problem is parameterized by the number of columns and the number of different values in any column.
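
    The objective is concrete: once rows are grouped into clusters of size at least k, every column on which a cluster disagrees must be suppressed in all rows of that cluster. A small sketch evaluating the cost of a given partition (a hypothetical helper for intuition, not one of the paper's algorithms):

    ```python
    def suppression_cost(rows, clusters, k):
        """Cost of a partition into clusters (lists of row indices): a column
        whose values differ within a cluster is suppressed in all of its rows.
        Returns None if some cluster violates the size-k requirement."""
        total = 0
        for cluster in clusters:
            if len(cluster) < k:
                return None
            for col in range(len(rows[0])):
                if len({rows[i][col] for i in cluster}) > 1:
                    total += len(cluster)   # suppress this entry in every row
        return total

    table = [(0, 1, 1), (0, 1, 0), (1, 0, 1), (1, 0, 1)]
    print(suppression_cost(table, [[0, 1], [2, 3]], k=2))  # 2
    ```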

  11. Global Optimal Trajectory in Chaos and NP-Hardness

    NASA Astrophysics Data System (ADS)

    Latorre, Vittorio; Gao, David Yang

    This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of the direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to intrinsic numerical error accumulation. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit, and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.

  12. Parameterizing by the Number of Numbers

    NASA Astrophysics Data System (ADS)

    Fellows, Michael R.; Gaspers, Serge; Rosamond, Frances A.

    The usefulness of parameterized algorithmics has often depended on what Niedermeier has called "the art of problem parameterization". In this paper we introduce and explore a novel but general form of parameterization: the number of numbers. Several classic numerical problems, such as Subset Sum, Partition, 3-Partition, Numerical 3-Dimensional Matching, and Numerical Matching with Target Sums, have multisets of integers as input. We initiate the study of parameterizing these problems by the number of distinct integers in the input. We rely on an FPT result for Integer Linear Programming Feasibility to show that all the above-mentioned problems are fixed-parameter tractable when parameterized in this way. In various applied settings, problem inputs often consist in part of multisets of integers or multisets of weighted objects (such as edges in a graph, or jobs to be scheduled). Such number-of-numbers parameterized problems often reduce to subproblems about transition systems of various kinds, parameterized by the size of the system description. We consider several core problems of this kind relevant to number-of-numbers parameterization. Our main hardness result considers the problem: given a non-deterministic Mealy machine M (a finite state automaton outputting a letter on each transition), an input word x, and a census requirement c for the output word specifying how many times each letter of the output alphabet should be written, decide whether there exists a computation of M reading x that outputs a word y that meets the requirement c. We show that this problem is hard for W[1]. If the question is whether there exists an input word x such that a computation of M on x outputs a word that meets c, the problem becomes fixed-parameter tractable.
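
    The flavor of the parameterization is already visible for Subset Sum: if the input multiset has only d distinct values with multiplicities m_1, ..., m_d, a subset is determined by how many copies of each value it takes, yielding an integer program in just d variables; the FPT result then follows from the fixed-parameter tractability of ILP Feasibility. The naive sketch below simply enumerates the multiplicities (illustrative only; unlike the ILP route, it is not FPT in d alone):

    ```python
    from collections import Counter
    from itertools import product

    def subset_sum_by_distinct_values(multiset, target):
        """Subset Sum through the number-of-numbers lens: pick x_i copies of
        each distinct value v_i with 0 <= x_i <= m_i so that
        sum(x_i * v_i) == target."""
        counts = Counter(multiset)               # d distinct values
        values = list(counts)
        for choice in product(*(range(counts[v] + 1) for v in values)):
            if sum(x * v for x, v in zip(choice, values)) == target:
                return True
        return False

    print(subset_sum_by_distinct_values([5, 5, 5, 9, 9, 2], 19))  # True: 5+5+9
    ```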

  13. Thread Graphs, Linear Rank-Width and Their Algorithmic Applications

    NASA Astrophysics Data System (ADS)

    Ganian, Robert

    The introduction of tree-width by Robertson and Seymour [7] was a breakthrough in the design of graph algorithms. A lot of research since then has focused on obtaining a width measure that is more general yet still allows efficient algorithms for a wide range of NP-hard problems on graphs of bounded width. To this end, Oum and Seymour proposed rank-width, which allows the solution of many such hard problems on less restricted graph classes (see e.g. [3,4]). But what about problems that are NP-hard even on graphs of bounded tree-width, or even on trees? The parameter used most often for these exceptionally hard problems is path-width; however, it is extremely restrictive: for example, the graphs of path-width 1 are exactly the paths.

  14. Public channel cryptography: chaos synchronization and Hilbert's tenth problem.

    PubMed

    Kanter, Ido; Kopelowitz, Evi; Kinzel, Wolfgang

    2008-08-22

    The synchronization process of two mutually delayed coupled deterministic chaotic maps is demonstrated both analytically and numerically. The synchronization is preserved when the mutually transmitted signals are concealed by two commutative private filters, a convolution of the truncated time-delayed output signals or some powers of the delayed output signals. The task of a passive attacker is mapped onto Hilbert's tenth problem, solving a set of nonlinear Diophantine equations, which was proven to be in the class of NP-complete problems [problems that are both NP (verifiable in nondeterministic polynomial time) and NP-hard (any NP problem can be translated into this problem)]. This bridge between nonlinear dynamics and NP-complete problems opens a horizon for new types of secure public-channel protocols.

  15. A Linear Kernel for Co-Path/Cycle Packing

    NASA Astrophysics Data System (ADS)

    Chen, Zhi-Zhong; Fellows, Michael; Fu, Bin; Jiang, Haitao; Liu, Yang; Wang, Lusheng; Zhu, Binhai

    Bounded-Degree Vertex Deletion is a fundamental problem in graph theory that has new applications in computational biology. In this paper, we address a special case of Bounded-Degree Vertex Deletion, the Co-Path/Cycle Packing problem, which asks to delete as few vertices as possible such that the graph of the remaining (residual) vertices is composed of disjoint paths and simple cycles. The problem falls into the well-known class of 'node-deletion problems with hereditary properties', is hence NP-complete, and is unlikely to admit a polynomial-time approximation algorithm with approximation factor smaller than 2. In the framework of parameterized complexity, we present a kernelization algorithm that produces a kernel with at most 37k vertices, improving on the super-linear kernel of Fellows et al.'s general theorem for Bounded-Degree Vertex Deletion. Using this kernel and the method of bounded search trees, we devise an FPT algorithm that runs in time O*(3.24^k). On the negative side, we show that the problem is APX-hard and unlikely to have a kernel smaller than 2k by a reduction from Vertex Cover.
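
    Because a graph is a disjoint union of paths and simple cycles exactly when its maximum degree is at most two, a depth-bounded search tree with branching factor four already decides the problem in FPT time: any solution must delete a vertex of degree at least three or one of three of its neighbours. A sketch of this simple branching (for intuition; it is neither the 37k-vertex kernel nor the O*(3.24^k) algorithm of the paper):

    ```python
    def copath_cycle_packing(adj, k):
        """Decide whether deleting <= k vertices leaves maximum degree <= 2.
        adj: dict vertex -> set of neighbours (undirected graph)."""
        bad = next((v for v, nb in adj.items() if len(nb) > 2), None)
        if bad is None:
            return True               # already a union of paths and cycles
        if k == 0:
            return False
        # A solution deletes bad or one of three of its neighbours,
        # otherwise bad would keep degree >= 3.
        for v in [bad] + list(adj[bad])[:3]:
            smaller = {u: nb - {v} for u, nb in adj.items() if u != v}
            if copath_cycle_packing(smaller, k - 1):
                return True
        return False

    star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}   # K_{1,3}
    print(copath_cycle_packing(star, 1))  # True: delete the centre
    ```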

  16. NP-hardness of the cluster minimization problem revisited

    NASA Astrophysics Data System (ADS)

    Adib, Artur B.

    2005-10-01

    The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.

  17. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    NASA Astrophysics Data System (ADS)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the process of tuning optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem, and an evolutionary metaheuristic is chosen to perform the meta-optimization; the approach proposed in this work can thus be called "meta-metaheuristic". A computational experiment proving the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.

  18. On the complexity and approximability of some Euclidean optimal summing problems

    NASA Astrophysics Data System (ADS)

    Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.

    2016-10-01

    The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.

  19. New Hardness Results for Diophantine Approximation

    NASA Astrophysics Data System (ADS)

    Eisenbrand, Friedrich; Rothvoß, Thomas

    We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ɛ, and a denominator bound N ∈ ℕ+. One has to decide whether there exists an integer Q, called the denominator, with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ɛ. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ+ such that the distances of Q·α_i to the nearest integer are bounded by ɛ is hard to approximate within a factor 2^n unless P = NP.
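
    The decision version is compact enough to state directly in code; the brute-force check below tries every denominator, and its running time is exponential in the input size because N is encoded in binary, so it does not contradict NP-completeness:

    ```python
    from fractions import Fraction

    def dist_to_nearest_int(x):
        return abs(x - round(x))

    def simultaneous_approx(alpha, eps, N):
        """Return a denominator 1 <= Q <= N with dist(Q*alpha_i, Z) <= eps
        for every i, or None if none exists.  Brute force over Q."""
        for Q in range(1, N + 1):
            if all(dist_to_nearest_int(Q * a) <= eps for a in alpha):
                return Q
        return None

    alpha = [Fraction(1, 3), Fraction(2, 7)]
    print(simultaneous_approx(alpha, Fraction(1, 10), 25))  # 21 (= lcm(3, 7))
    ```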

  20. Solving Hard Computational Problems Efficiently: Asymptotic Parametric Complexity 3-Coloring Algorithm

    PubMed Central

    Martín H., José Antonio

    2013-01-01

    Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In the life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems, one needs to prove a hypothesis about a certain property of an object that is present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure); however, none of the standard approaches can reject the hypothesis when no solution is found, since none can provide a proof that no admissible structure exists. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by a parameter. Nevertheless, it is proved here that the probability of requiring a large value of this parameter to obtain a solution for a random graph decreases exponentially, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expected results. PMID:23349711

  1. Matrix Interdiction Problem

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, Shiva Prasad; Pan, Feng

    In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes, in the residual matrix, the sum of the row values, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study its computational complexity. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
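
    A brute-force reference implementation pins down the definitions (exponential in k, for intuition only; the record's point is precisely that efficient algorithms with good guarantees are unlikely):

    ```python
    from itertools import combinations

    def matrix_interdiction(matrix, k):
        """Remove k columns so as to minimize the sum over rows of the
        largest remaining entry.  Exhaustive search over column subsets."""
        n_cols = len(matrix[0])
        best = None
        for removed in combinations(range(n_cols), k):
            keep = [j for j in range(n_cols) if j not in removed]
            value = sum(max(row[j] for j in keep) for row in matrix)
            if best is None or value < best[0]:
                best = (value, removed)
        return best

    M = [[3, 1, 4],
         [1, 5, 9],
         [2, 6, 5]]
    print(matrix_interdiction(M, 1))  # (14, (2,)): drop the last column
    ```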

  2. On the Parameterized Complexity of Some Optimization Problems Related to Multiple-Interval Graphs

    NASA Astrophysics Data System (ADS)

    Jiang, Minghui

    We show that for any constant t ≥ 2, k-Independent Set and k-Dominating Set in t-track interval graphs are W[1]-hard. This settles an open question recently raised by Fellows, Hermelin, Rosamond, and Vialette. We also give an FPT algorithm for k-Clique in t-interval graphs, parameterized by both k and t, with running time max{t^O(k), 2^O(k log k)}·poly(n), where n is the number of vertices in the graph. This slightly improves the previous FPT algorithm by Fellows, Hermelin, Rosamond, and Vialette. Finally, we use the W[1]-hardness of k-Independent Set in t-track interval graphs to obtain the first parameterized intractability result for a recent bioinformatics problem called Maximal Strip Recovery (MSR). We show that MSR-d is W[1]-hard for any constant d ≥ 4 when the parameter is either the total length of the strips, or the total number of adjacencies in the strips, or the number of strips in the optimal solution.

  3. An Efficient Rank Based Approach for Closest String and Closest Substring

    PubMed Central

    2012-01-01

    This paper aims to present a new genetic approach that uses rank distance for solving two known NP-hard problems, and to compare rank distance with other distance measures for strings. The two NP-hard problems we are trying to solve are closest string and closest substring. For each problem we build a genetic algorithm and we describe the genetic operations involved. Both genetic algorithms use a fitness function based on rank distance. We compare our algorithms with other genetic algorithms that use different distance measures, such as Hamming distance or Levenshtein distance, on real DNA sequences. Our experiments show that the genetic algorithms based on rank distance have the best results. PMID:22675483

  4. Kernelization

    NASA Astrophysics Data System (ADS)

    Fomin, Fedor V.

    Preprocessing (data reduction or kernelization) as a strategy for coping with hard problems is used in almost every implementation. The history of preprocessing, such as applying reduction rules to simplify truth functions, can be traced back to the 1950's [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this anomaly is that if we start with an instance I of an NP-hard problem and can show that in polynomial time we can replace it with an equivalent smaller instance I', with |I'| < |I|, then that would imply P = NP in classical complexity.
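
    The parameterized-complexity escape from this anomaly is to measure the reduced instance against a parameter k instead of the input size. The classic illustration is Buss's kernel for Vertex Cover (textbook material, not taken from this record): a vertex of degree greater than k must lie in every vertex cover of size at most k, and once such vertices are exhausted, a yes-instance can retain at most k^2 edges.

    ```python
    def buss_kernel(edges, k):
        """Kernelize Vertex Cover(edges, k): return (reduced_edges, k') or
        False when the instance is certainly a no-instance."""
        edges = {frozenset(e) for e in edges}
        changed = True
        while changed and k >= 0:
            changed = False
            deg = {}
            for e in edges:
                for v in e:
                    deg[v] = deg.get(v, 0) + 1
            for v, d in deg.items():
                if d > k:              # v belongs to every cover of size <= k
                    edges = {e for e in edges if v not in e}
                    k -= 1
                    changed = True
                    break
        if k < 0 or len(edges) > k * k:
            return False               # no vertex cover of size <= k exists
        return edges, k

    print(buss_kernel({(0, i) for i in range(1, 6)}, 1))  # (set(), 0)
    ```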

  5. Applying Graph Theory to Problems in Air Traffic Management

    NASA Technical Reports Server (NTRS)

    Farrahi, Amir Hossein; Goldberg, Alan; Bagasol, Leonard Neil; Jung, Jaewoo

    2017-01-01

    Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem are NP-hard, in general, under several simple workload models. Second, using a polynomial time reduction from maximum independent set in graphs, it is shown that for any fixed e, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n1-e of the optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.

  6. Parallel-Batch Scheduling and Transportation Coordination with Waiting Time Constraint

    PubMed Central

    Gong, Hua; Chen, Daheng; Xu, Ke

    2014-01-01

    This paper addresses a parallel-batch scheduling problem that incorporates transportation of raw materials or semifinished products before processing, with a waiting time constraint. The orders located at different suppliers are transported by vehicles to a manufacturing facility for further processing. One vehicle can load only one order in one shipment. Each order arriving at the facility must be processed within a limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of the batch is the largest processing time of the orders in it. The goal is to find a schedule that minimizes the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also demonstrate that the problem with equal processing times on the machine is NP-hard. Furthermore, a pseudopolynomial-time dynamic programming algorithm is provided to establish its ordinary NP-hardness. An optimal polynomial-time algorithm is presented for a special case with equal processing times and equal transportation times for each order. PMID:24883385

  7. Fast optimization algorithms and the cosmological constant

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad

    2017-11-01

    Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.

  8. Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.

    PubMed

    Nallaperuma, Samadhi; Neumann, Frank; Sudholt, Dirk

    2017-01-01

    Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to our theoretical understanding. Recently, randomized search heuristics have been examined regarding their achievable progress within a fixed-time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed-time budget. In particular, we analyze Manhattan and Euclidean TSP instances and Randomized Local Search (RLS), (1+1) EA and (1+[Formula: see text]) EA algorithms for the TSP in a smoothed complexity setting, and derive the lower bounds of the expected fitness gain for a specified number of generations.
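
    The fixed-budget viewpoint is easy to reproduce empirically: run a heuristic for exactly B steps and record the achieved fitness (tour-length) gain. A minimal RLS with a 2-opt (segment reversal) neighbourhood on a random Euclidean instance, an illustrative experiment rather than the paper's smoothed analysis:

    ```python
    import math, random

    def tour_length(pts, tour):
        return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                   for i in range(len(tour)))

    def rls_fixed_budget(pts, budget, seed=0):
        """Randomized Local Search: one random 2-opt move per step,
        accepted iff it does not worsen the tour; returns the fitness gain."""
        rng = random.Random(seed)
        n = len(pts)
        tour = list(range(n))
        rng.shuffle(tour)
        start = best = tour_length(pts, tour)
        for _ in range(budget):
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            c = tour_length(pts, cand)
            if c <= best:
                tour, best = cand, c
        return start - best

    rng = random.Random(42)
    pts = [(rng.random(), rng.random()) for _ in range(30)]
    print(rls_fixed_budget(pts, budget=2000))   # gain within the fixed budget
    ```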

  9. Scheduling of hybrid types of machines with two-machine flowshop as the first type and a single machine as the second type

    NASA Astrophysics Data System (ADS)

    Hsiao, Ming-Chih; Su, Ling-Huey

    2018-02-01

    This research addresses the problem of scheduling hybrid machine types, in which one type is a two-machine flowshop and the other is a single machine. A job is processed either on the two-machine flowshop or on the single machine. The objective is to determine a production schedule for all jobs that minimizes the makespan. The problem is NP-hard, since the two-parallel-machine problem was proved to be NP-hard. Simulated annealing (SA) algorithms are developed to solve the problem. A mixed integer program (MIP) is developed and used to evaluate the performance of the two SAs. Computational experiments demonstrate the efficiency of the simulated annealing algorithms; their solution quality is also reported.

  10. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes this research approach difficult using standard brute force on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored: a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and satisfiability modulo theories (SMT) with corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
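
    An identifying code is a vertex subset C such that every vertex's closed neighbourhood intersects C in a nonempty set that differs from that of every other vertex. A brute-force baseline (exponential, for small graphs only) against which the parallel, annealing, and SMT formulations can be sanity-checked:

    ```python
    from itertools import combinations

    def min_identifying_code(adj):
        """Smallest C with N[v] ∩ C nonempty and pairwise distinct.
        adj: dict vertex -> set of neighbours."""
        vertices = list(adj)
        closed = {v: adj[v] | {v} for v in vertices}
        for size in range(1, len(vertices) + 1):
            for C in combinations(vertices, size):
                ids = [frozenset(closed[v] & set(C)) for v in vertices]
                if all(ids) and len(set(ids)) == len(ids):
                    return set(C)
        return None

    cycle6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    print(min_identifying_code(cycle6))  # a size-3 code such as {0, 2, 4}
    ```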

  11. NP-hardness of decoding quantum error-correction codes

    NASA Astrophysics Data System (ADS)

    Hsieh, Min-Hsiu; Le Gall, François

    2011-05-01

    Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as for their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects degeneracy to simplify decoding, since two different errors might not, and need not, be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for the general quantum decoding problem, and it suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.

  12. Learning Search Control Knowledge for Deep Space Network Scheduling

    NASA Technical Reports Server (NTRS)

    Gratch, Jonathan; Chien, Steve; DeJong, Gerald

    1993-01-01

    While the general class of most scheduling problems is NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to run in much better than exponential time.

  13. Approaches for Subgrid Parameterization: Does Scaling Help?

    NASA Astrophysics Data System (ADS)

    Yano, Jun-Ichi

    2016-04-01

    Arguably, scaling behavior is a well-established fact in many geophysical systems, and there are already many theoretical studies elucidating the issue. However, scaling laws have been slow to enter "operational" geophysical modelling, notably weather forecast and climate projection models. The main purpose of this presentation is to ask why, and to try to answer the question.

    As a reference point, the presentation reviews the three major approaches to traditional subgrid parameterization: moment expansion, PDF (probability density function) methods, and mode decomposition. The moment expansion is a standard method for describing subgrid-scale turbulent flows in both the atmosphere and the oceans. The PDF approach is intuitively appealing, as it deals with the distribution of subgrid-scale variables in a more direct manner. The third category, originally proposed by Aubry et al. (1988) in the context of wall boundary-layer turbulence, is specifically designed to represent coherencies compactly by a low-dimensional dynamical system. The original proposal adopts the proper orthogonal decomposition (POD, or empirical orthogonal functions, EOF) as the mode-decomposition basis, but the methodology generalizes easily to any basis. The mass-flux formulation currently adopted in the majority of atmospheric models for parameterizing convection can also be considered a special case of mode decomposition, with segmentally-constant modes as the expansion basis. Mode decomposition can, furthermore, be re-interpreted as a type of Galerkin approach to numerically modelling subgrid-scale processes; extrapolating this re-interpretation suggests that subgrid parameterization may be viewed as a type of mesh-refinement problem in numerical modelling, and along this line there is also a link between subgrid parameterization and downscaling.

    The mode decomposition approach would also be the best framework for linking traditional parameterizations with scaling perspectives. However, seeing the link more clearly also exposes the strengths and weaknesses of introducing scaling perspectives into parameterizations. Any diagnosis under a mode decomposition immediately reveals the power-law nature of the spectrum, but exploiting this knowledge in an operational parameterization is a different story. It is symbolic that POD studies have focused on representing the largest-scale coherency within a grid box under a high truncation; this problem is already hard enough. Looked at differently, a scaling law is a very concise way of characterizing many subgrid-scale variabilities. One may even argue that a scaling law provides almost complete subgrid-scale information for constructing a parameterization, but with a major missing link: its amplitude must be specified by an additional condition. That condition is called "closure" in the parameterization problem and is known to be a tough one. Studies of scaling behavior also tend to be statistical, in the sense that they hardly provide complete information for constructing a parameterization: can the coefficients of all the decomposition modes be specified perfectly by a scaling law once the first few leading modes are given?

    Arguably, the renormalization group (RNG) is a very powerful tool for reducing a system with scaling behavior to a low dimension, say under an appropriate mode decomposition procedure. However, RNG is an analytical tool, and it is extremely hard to apply to real, complex geophysical systems. It appears that there is still a long way to go before scaling laws can be exploited to construct operational subgrid parameterizations effectively.

  14. GALAXY: A new hybrid MOEA for the optimal design of Water Distribution Systems

    NASA Astrophysics Data System (ADS)

    Wang, Q.; Savić, D. A.; Kapelan, Z.

    2017-03-01

    A new hybrid optimizer, called genetically adaptive leaping algorithm for approximation and diversity (GALAXY), is proposed for the discrete, combinatorial, multiobjective design of Water Distribution Systems (WDSs), which is NP-hard and computationally intensive. The merit of GALAXY is its ability to alleviate, to a great extent, the parameterization issue and the high computational overhead. It follows the generational framework of Multiobjective Evolutionary Algorithms (MOEAs) and includes six search operators and several important strategies. The operators are selected based on their leaping ability in the objective space from the global and local search perspectives; the strategies steer the optimization and balance exploration and exploitation simultaneously. A highlighted feature of GALAXY is that it eliminates the majority of parameters, making it robust and easy to use. Comparative studies between GALAXY and three representative MOEAs on five benchmark WDS design problems confirm its competitiveness: GALAXY identifies better converged and distributed boundary solutions efficiently and consistently, indicating a much more balanced capability between global and local search. Moreover, its advantages over other MOEAs become more substantial as the complexity of the design problem increases.

  15. Scheduling in the Face of Uncertain Resource Consumption and Utility

    NASA Technical Reports Server (NTRS)

    Koga, Dennis (Technical Monitor); Frank, Jeremy; Dearden, Richard

    2003-01-01

    We discuss the problem of scheduling tasks that consume a resource with known capacity, where the tasks have varying utility. We consider problems in which the resource consumption and utility of each activity are described by probability distributions. In these circumstances, we would like to find schedules that exceed a lower bound on the expected utility when executed. We first show that while some of these problems are NP-complete, others are only NP-hard. We then describe various heuristic search algorithms to solve these problems, and their drawbacks. Finally, we present empirical results that characterize the behavior of these heuristics over a variety of problem classes.

  16. Near-field self-interference cancellation and quality of service multicast beamforming in full-duplex

    NASA Astrophysics Data System (ADS)

    Wu, Fei; Shao, Shihai; Tang, Youxi

    2016-10-01

    We consider enabling simultaneous multicast downlink transmit and receive operations on the same frequency band, that is, full-duplex links between an access point and mobile users. The problem of minimizing the total power of multicast transmit beamforming is considered from the viewpoint of ensuring a prescribed amount of suppression of the near-field line-of-sight self-interference while guaranteeing a prescribed minimum signal-to-interference-plus-noise ratio (SINR) at each receiver of the multicast groups. Based on earlier results for multicast group beamforming, the joint problem is easily shown to be NP-hard. A semidefinite relaxation (SDR) technique with a linear-programming power-adjustment method is proposed to solve the NP-hard problem. Simulation shows that the proposed method is feasible even when the local receive antenna in the near field and the mobile user in the far field lie in the same direction.

  17. Finding Minimal Addition Chains with a Particle Swarm Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    León-Javier, Alejandro; Cruz-Cortés, Nareli; Moreno-Armendáriz, Marco A.; Orantes-Jiménez, Sandra

    Addition chains of minimal length are the basic building block for the optimal computation of finite field exponentiations, with important applications in the areas of error-correcting codes and cryptography. However, obtaining a shortest addition chain for a given exponent is an NP-hard problem. In this work we propose the adaptation of a Particle Swarm Optimization algorithm to deal with this problem. Our proposal is tested on several exponents whose addition chains are considered hard to find. We obtained very promising results.
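
    For small exponents, provably minimal chains can still be found exactly by iterative-deepening search, which gives a correctness baseline for heuristics such as the PSO above. A sketch, using the standard fact that a shortest addition chain may be assumed strictly increasing:

    ```python
    def shortest_addition_chain(n):
        """Minimum-length chain 1 = a_0, ..., a_r = n in which every term is
        the sum of two (not necessarily distinct) earlier terms."""
        def extend(chain, depth):
            if chain[-1] == n:
                return chain
            if len(chain) > depth:
                return None
            # Larger sums first: greedy doubling tends to reach n sooner.
            for s in sorted({a + b for a in chain for b in chain}, reverse=True):
                if chain[-1] < s <= n:
                    found = extend(chain + [s], depth)
                    if found:
                        return found
            return None

        depth = 1
        while True:
            chain = extend([1], depth)
            if chain:
                return chain
            depth += 1

    print(shortest_addition_chain(15))  # e.g. [1, 2, 4, 5, 10, 15]: 5 additions
    ```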

  18. A restricted Steiner tree problem is solved by Geometric Method II

    NASA Astrophysics Data System (ADS)

    Lin, Dazhi; Zhang, Youlin; Lu, Xiaoxu

    2013-03-01

    The minimum Steiner tree problem has a wide application background, including transportation systems, communication networks, pipeline design, and VLSI. Unfortunately, the problem is NP-hard, so it is common to consider restricted special cases. In this paper, we first put forward a restricted Steiner tree problem in which the fixed vertices lie on the same side of a line L and we seek a vertex on L such that the length of the tree is minimal. By the definition and the complexity of the Steiner tree problem, this restricted problem is also NP-complete. In Part I we considered the restricted Steiner tree problem with two fixed vertices; here we naturally consider three fixed vertices, again using the geometric method to solve the problem.
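
    For the two-fixed-vertex case of Part I, the geometric method is the classical reflection trick: reflecting one terminal across L turns the shortest network through a point of L into a straight segment, so the optimal vertex on L has a closed form. A sketch, under the assumption that L is the x-axis and both terminals lie strictly above it:

    ```python
    def best_point_on_line(p, q):
        """Minimize |p - s| + |q - s| over points s = (x, 0) on the x-axis:
        reflect q to (qx, -qy); the optimum is where the segment from p to
        the reflection crosses the axis."""
        (px, py), (qx, qy) = p, q
        t = py / (py + qy)          # segment parameter where y reaches 0
        return (px + t * (qx - px), 0.0)

    print(best_point_on_line((0.0, 1.0), (4.0, 3.0)))  # (1.0, 0.0)
    ```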

  19. Graphical models for optimal power flow

    DOE PAGES

    Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; ...

    2016-09-13

    Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.

  1. An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud

    NASA Astrophysics Data System (ADS)

    Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.

    2017-08-01

    Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to effectively utilize their available resources and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be tackled with heuristic algorithms. In this paper, Ant Colony Optimization based virtual machine placement is proposed. Our proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple cloud provider environment; the response time of each cloud provider is monitored periodically so as to minimize the delay in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm reduces the cost, the response time, and the number of migrations.

  2. A Model and Algorithms For a Software Evolution Control System

    DTIC Science & Technology

    1993-12-01

    Dynamic scheduling approaches can be found in [67]. Task scheduling can also be characterized as preemptive and nonpreemptive. A task is preemptive ... is NP-hard for both the preemptive and nonpreemptive cases [67] [84]. Scheduling nonpreemptive tasks with arbitrary ready times is NP-hard in both multiprocessor and ...

  3. Evolutionary Computation with Spatial Receding Horizon Control to Minimize Network Coding Resources

    PubMed Central

    Leeson, Mark S.

    2014-01-01

    The minimization of network coding resources, such as coding nodes and links, is a challenging task, not only because it is an NP-hard problem, but also because the problem scale is huge; networks in the real world may have thousands or even millions of nodes and links. Genetic algorithms (GAs) have good potential for resolving NP-hard problems like the network coding problem (NCP), but as population-based algorithms they often confront serious scalability and applicability problems when applied to large- or huge-scale systems. Inspired by temporal receding horizon control in control engineering, this paper proposes a novel spatial receding horizon control (SRHC) strategy as a network partitioning technology, and then designs an efficient GA to tackle the NCP. Traditional network partitioning methods can be viewed as a special case of the proposed SRHC, namely one-step-wide SRHC, whilst the method in this paper is a generalized N-step-wide SRHC, which makes better use of global information about network topologies. Besides the SRHC strategy, some other useful designs are also reported. The advantages of the proposed SRHC and GA for the NCP are illustrated by extensive experiments, and they have good potential to be extended to other large-scale complex problems. PMID:24883371

  4. Robust quantum optimizer with full connectivity.

    PubMed

    Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P

    2017-04-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.
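
    Number partitioning, the benchmark used here, asks for a split of a multiset into two parts whose sums differ as little as possible. A standard classical baseline to compare annealing outcomes against is the polynomial-time Karmarkar-Karp differencing heuristic, sketched below (not from the paper, and not exact):

    ```python
    import heapq

    def karmarkar_karp(numbers):
        """Differencing heuristic: repeatedly commit the two largest numbers
        to opposite sides, replacing them by their difference.  Returns the
        achieved partition difference (not necessarily optimal)."""
        heap = [-x for x in numbers]        # max-heap via negation
        heapq.heapify(heap)
        while len(heap) > 1:
            a = -heapq.heappop(heap)
            b = -heapq.heappop(heap)
            heapq.heappush(heap, -(a - b))
        return -heap[0] if heap else 0

    print(karmarkar_karp([8, 7, 6, 5, 4]))  # 2; the optimum is 0 ({8,7} vs {6,5,4})
    ```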

  5. The complexity of divisibility.

    PubMed

    Bausch, Johannes; Cubitt, Toby

    2016-09-01

    We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.

  6. Nanoparticle hardness controls the internalization pathway for drug delivery

    NASA Astrophysics Data System (ADS)

    Li, Ye; Zhang, Xianren; Cao, Dapeng

    2015-01-01

    Nanoparticle (NP)-based drug delivery systems offer fundamental advantages over current therapeutic agents that commonly display a longer circulation time, lower toxicity, specific targeted release, and greater bioavailability. For successful NP-based drug delivery it is essential that the drug-carrying nanocarriers can be internalized by the target cells and transported to specific sites, and the inefficient internalization of nanocarriers is often one of the major sources for drug resistance. In this work, we use the dissipative particle dynamics simulation to investigate the effect of NP hardness on their internalization efficiency. Three simplified models of NP platforms for drug delivery, including polymeric NP, liposome and solid NP, are designed here to represent increasing nanocarrier hardness. Simulation results indicate that NP hardness controls the internalization pathway for drug delivery. Rigid NPs can enter the cell by a pathway of endocytosis, whereas for soft NPs the endocytosis process can be inhibited or frustrated due to wrapping-induced shape deformation and non-uniform ligand distribution. Instead, soft NPs tend to find one of three penetration pathways to enter the cell membrane via rearranging their hydrophobic and hydrophilic segments. Finally, we show that the interaction between nanocarriers and drug molecules is also essential for effective drug delivery.

  7. Efficient Sum of Outer Products Dictionary Learning (SOUP-DIL) and Its Application to Inverse Problems.

    PubMed

    Ravishankar, Saiprasad; Nadakuditi, Raj Rao; Fessler, Jeffrey A

    2017-12-01

    The sparsity of signals in a transform domain or dictionary has been exploited in applications such as compression, denoising and inverse problems. More recently, data-driven adaptation of synthesis dictionaries has shown promise compared to analytical dictionary models. However, dictionary learning problems are typically non-convex and NP-hard, and the usual alternating minimization approaches for these problems are often computationally expensive, with the computations dominated by the NP-hard synthesis sparse coding step. This paper exploits the ideas that drive algorithms such as K-SVD, and investigates in detail efficient methods for aggregate sparsity penalized dictionary learning by first approximating the data with a sum of sparse rank-one matrices (outer products) and then using a block coordinate descent approach to estimate the unknowns. The resulting block coordinate descent algorithms involve efficient closed-form solutions. Furthermore, we consider the problem of dictionary-blind image reconstruction, and propose novel and efficient algorithms for adaptive image reconstruction using block coordinate descent and sum of outer products methodologies. We provide a convergence study of the algorithms for dictionary learning and dictionary-blind image reconstruction. Our numerical experiments show the promising performance and speedups provided by the proposed methods over previous schemes in sparse data representation and compressed sensing-based image reconstruction.
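
    The sum-of-outer-products idea can be sketched compactly: with data Y ≈ Σ_j d_j c_j^T, block coordinate descent cycles over the atoms, updating each coefficient vector by hard thresholding (an l0-type penalty) and each atom as a normalized residual-coefficient product. The sweep below is a simplified sketch assembled from the abstract's description, with details such as atom re-initialization omitted; it should not be read as the paper's exact update rules:

    ```python
    import numpy as np

    def soup_sweep(Y, D, C, lam):
        """One block-coordinate-descent sweep for Y ≈ D @ C.T.
        D: (n, J) unit-norm atoms; C: (N, J) sparse coefficients."""
        for j in range(D.shape[1]):
            # Residual with the contribution of atom j removed.
            E = Y - D @ C.T + np.outer(D[:, j], C[:, j])
            # Coefficient update: hard-threshold the correlations.
            c = E.T @ D[:, j]
            c[np.abs(c) < np.sqrt(lam)] = 0.0
            C[:, j] = c
            # Atom update: normalized residual-coefficient product.
            d = E @ c
            norm = np.linalg.norm(d)
            if norm > 0:
                D[:, j] = d / norm
        return D, C

    rng = np.random.default_rng(0)
    Y = rng.standard_normal((16, 100))
    D = rng.standard_normal((16, 8)); D /= np.linalg.norm(D, axis=0)
    C = np.zeros((100, 8))
    for _ in range(10):
        D, C = soup_sweep(Y, D, C, lam=0.1)
    print(np.linalg.norm(Y - D @ C.T))      # reconstruction residual
    ```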

  8. Semiclassical approach to finite-temperature quantum annealing with trapped ions

    NASA Astrophysics Data System (ADS)

    Raventós, David; Graß, Tobias; Juliá-Díaz, Bruno; Lewenstein, Maciej

    2018-05-01

    Recently it has been demonstrated that an ensemble of trapped ions may serve as a quantum annealer for the number-partitioning problem [Nat. Commun. 7, 11524 (2016), 10.1038/ncomms11524]. This hard computational problem may be addressed by employing a tunable spin-glass architecture. Following the proposal of the trapped-ion annealer, we study here its robustness against thermal effects; that is, we investigate the role played by thermal phonons. For an efficient description of the system, we use a semiclassical approach and benchmark it against the exact quantum evolution. The aim is to better understand and characterize how the quantum device approaches a solution of an otherwise hard-to-solve NP-hard problem.

  9. Comparing genetic algorithm and particle swarm optimization for solving capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Iswari, T.; Asih, A. M. S.

    2018-04-01

    In a logistics system, transportation connects every element of the supply chain, but it can also produce the greatest cost. It is therefore important to keep transportation costs as low as possible, and one way to do so is to optimize the routing of vehicles; this is the Vehicle Routing Problem (VRP). The most common variant of the VRP is the Capacitated Vehicle Routing Problem (CVRP), in which each vehicle has a capacity and the total demand of the customers on a route must not exceed it. The CVRP belongs to the class of NP-hard problems, so exact algorithms become highly time-consuming as problem size increases; for large-scale instances, as typically found in industrial applications, finding an optimal solution is not practicable. This paper therefore applies two metaheuristics, Genetic Algorithm and Particle Swarm Optimization, to the CVRP and compares their performance. The results show that both algorithms perform well but still leave room for improvement; in the algorithm tests and numerical example, Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
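
    Both metaheuristics ultimately score a candidate permutation of customers with the same CVRP cost function. A minimal sketch (the greedy capacity split and all names are illustrative assumptions): treat the permutation as a "giant tour", start a new route whenever the vehicle capacity would be exceeded, and sum the route distances through the depot.

    ```python
    import math

    def route_cost(tour, demand, coords, capacity, depot=0):
        """Split a customer permutation into capacity-feasible routes and
        return the total travel distance (each route starts/ends at the depot)."""
        dist = lambda a, b: math.dist(coords[a], coords[b])
        total, load, prev = 0.0, 0, depot
        for c in tour:
            if load + demand[c] > capacity:    # vehicle full: return to depot
                total += dist(prev, depot)
                load, prev = 0, depot
            total += dist(prev, c)
            load += demand[c]
            prev = c
        return total + dist(prev, depot)       # close the final route
    ```

    A GA evaluates its permutation chromosomes with such a function directly; PSO typically first maps continuous particle positions to a permutation, for example by ranking the position components.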

  10. Fully polynomial-time approximation scheme for a special case of a quadratic Euclidean 2-clustering problem

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Khandeev, V. I.

    2016-02-01

    The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities), minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers, is considered. It is assumed that the center of one of the sought clusters is specified at a given (arbitrary) point of space (without loss of generality, at the origin), while the center of the other is unknown and determined as the mean value over all elements of that cluster. It is shown that, unless P = NP, there is no fully polynomial-time approximation scheme for this problem in general, while such a scheme is constructed for the case of fixed space dimension.
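
    In symbols (notation assumed from the verbal statement above), with the first center fixed at the origin, the problem is to choose a cluster C of the prescribed cardinality from the input set Y so as to minimize

    ```latex
    f(\mathcal{C}) \;=\; \sum_{y \in \mathcal{C}} \bigl\| y - \bar{y}(\mathcal{C}) \bigr\|^2
    \;+\; \sum_{y \in \mathcal{Y} \setminus \mathcal{C}} \| y \|^2 ,
    \qquad
    \bar{y}(\mathcal{C}) \;=\; \frac{1}{|\mathcal{C}|} \sum_{y \in \mathcal{C}} y .
    ```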

  11. A Decomposition Approach for Shipboard Manpower Scheduling

    DTIC Science & Technology

    2009-01-01

    generalizes the bin-packing problem with no conflicts (BPP), which is known to be NP-hard (Garey and Johnson 1979). Hence our focus is to obtain a lower... to the BPP; while the so-called constrained packing lower bound also takes conflict constraints into account. Their computational study indicates...

  12. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    PubMed

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight set of edges connecting all the vertices of a given undirected graph. It is a classical problem in graph theory and applied mathematics with numerous real-life applications. In previous studies, DNA molecular operations were usually used to solve NP-complete head-to-tail path search problems, and rarely for problems with multi-lateral path solutions such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain solutions of the MST problem in a proper length range with O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
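
    For contrast with the molecular approach: on a conventional computer the MST is not a hard problem and is solvable in near-linear time. A compact Kruskal's algorithm with union-find (standard textbook material, not taken from the paper):

    ```python
    def kruskal_mst(n, edges):
        """edges: list of (weight, u, v) for an undirected graph on 0..n-1.
        Returns MST edges, assuming the graph is connected."""
        parent = list(range(n))

        def find(x):                          # union-find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst = []
        for w, u, v in sorted(edges):         # scan edges by increasing weight
            ru, rv = find(u), find(v)
            if ru != rv:                      # (u, v) joins two components
                parent[ru] = rv
                mst.append((u, v, w))
        return mst
    ```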

  13. Approximation algorithms for the min-power symmetric connectivity problem

    NASA Astrophysics Data System (ADS)

    Plotnikov, Roman; Erzin, Adil; Mladenovic, Nenad

    2016-10-01

    We consider the NP-hard problem of synthesizing an optimal spanning communication subgraph in a given arbitrary simple edge-weighted graph. This problem arises in wireless networks when minimizing the total transmission power consumption. We propose several new heuristics based on the variable neighborhood search metaheuristic for the approximate solution of the problem. In a numerical experiment, all proposed algorithms were executed on randomly generated test samples; for these instances, our algorithms outperform the previously known heuristics on average.

  14. Scheduling with non-decreasing deterioration jobs and variable maintenance activities on a single machine

    NASA Astrophysics Data System (ADS)

    Zhang, Xingong; Yin, Yunqiang; Wu, Chin-Chia

    2017-01-01

    There is a situation found in many manufacturing systems, such as steel rolling mills, fire fighting, or single-server cycle-queues, where a job processed later consumes more time than the same job processed earlier. Machine maintenance can counteract this worsening of processing conditions: after a maintenance activity, the machine is restored. The maintenance duration is a positive, non-decreasing, differentiable convex function of the total processing time of the jobs between maintenance activities. Motivated by this observation, the makespan and total completion time minimization problems for scheduling jobs with non-decreasing rates of job processing time on a single machine are considered in this article. It is shown that both the makespan and the total completion time minimization problems are NP-hard in the strong sense when the number of maintenance activities is arbitrary, while the makespan minimization problem is NP-hard in the ordinary sense when the number of maintenance activities is fixed. If the deterioration rates of the jobs are identical and the maintenance duration is a linear function of the total processing time of the jobs between maintenance activities, then the group balance principle is satisfied for the makespan minimization problem. Furthermore, two polynomial-time algorithms are presented for solving the makespan problem and the total completion time problem under identical deterioration rates, respectively.

  15. Finding the probability of infection in an SIR network is NP-Hard

    PubMed Central

    Shapiro, Michael; Delgado-Eckert, Edgar

    2012-01-01

    It is the purpose of this article to review results that have long been known to communications network engineers and have direct application to epidemiology on networks. A common approach in epidemiology is to study the transmission of a disease in a population where each individual is initially susceptible (S), may become infective (I) and then removed or recovered (R), playing no further epidemiological role. Much of the recent work gives explicit consideration to the network of social interactions or disease-transmitting contacts and the attendant probability of transmission for each interacting pair. The state of such a network is an assignment of the values {S, I, R} to its members. Given such a network, an initial state and a particular susceptible individual, we would like to compute their probability of becoming infected in the course of an epidemic. It turns out that this and related problems are NP-hard. In particular, they belong to a class of problems for which no efficient algorithms are known; finding an efficient algorithm for any problem in this class would entail a major breakthrough in theoretical computer science. PMID:22824138
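
    Although the exact probability is NP-hard to compute, it can be estimated to any fixed accuracy by Monte Carlo simulation of outbreaks. A hedged sketch, assuming a discrete-time percolation-style SIR model (one common formulation, not necessarily the article's) and illustrative names:

    ```python
    import random

    def infection_probability(neighbors, p_transmit, seeds, target,
                              trials=10000, seed=1):
        """Estimate P(target is ever infected) in a discrete SIR cascade.

        neighbors: dict node -> list of contacts; p_transmit: dict keyed by
        ordered pair (u, v). Each infective node gets one chance to infect
        each contact and is then removed (recovered)."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(trials):
            infected, frontier = set(seeds), list(seeds)
            while frontier:
                nxt = []
                for u in frontier:
                    for v in neighbors[u]:
                        if v not in infected and rng.random() < p_transmit[(u, v)]:
                            infected.add(v)
                            nxt.append(v)
                frontier = nxt
            hits += target in infected
        return hits / trials
    ```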

  16. Computational Study for Planar Connected Dominating Set Problem

    NASA Astrophysics Data System (ADS)

    Marzban, Marjan; Gu, Qian-Ping; Jia, Xiaohua

    The connected dominating set (CDS) problem is a well-studied NP-hard problem with many important applications. Dorn et al. [ESA 2005, LNCS 3669, pp. 95-106] introduced a new technique to generate 2^{O(sqrt{n})}-time and fixed-parameter algorithms for a number of non-local hard problems, including the CDS problem in planar graphs. The practical performance of this algorithm had yet to be evaluated, and we perform a computational study for such an evaluation. The results show that the size of instances that can be solved by the algorithm depends mainly on the branchwidth of the instances, coinciding with the theoretical result. For graphs with small or moderate branchwidth, CDS problem instances with up to a few thousand edges can be solved in practical time and memory space. This suggests that branch-decomposition based algorithms can be practical for the planar CDS problem.

  17. Computing quantum discord is NP-complete

    NASA Astrophysics Data System (ADS)

    Huang, Yichen

    2014-03-01

    We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable.

  18. Using Online Algorithms to Solve NP-Hard Problems More Efficiently in Practice

    DTIC Science & Technology

    2007-12-01

    bounds. For the openstacks, TPP, and pipesworld domains, our results were qualitatively different: most instances in these domains were either easy... between our results in these two sets of domains. For most instances in the openstacks domain we found no k values that elicited a "yes" answer in...

  19. Portfolios of quantum algorithms.

    PubMed

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms, analogous to those of finance, can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  1. On unified modeling, theory, and method for solving multi-scale global optimization problems

    NASA Astrophysics Data System (ADS)

    Gao, David Yang

    2016-10-01

    A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way so as to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play an important role in correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.

  2. Robust quantum optimizer with full connectivity

    PubMed Central

    Nigg, Simon E.; Lörch, Niels; Tiwari, Rakesh P.

    2017-01-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation. PMID:28435880

  3. Approximate solution of the p-median minimization problem

    NASA Astrophysics Data System (ADS)

    Il'ev, V. P.; Il'eva, S. D.; Navrotskaya, A. A.

    2016-09-01

    A version of the facility location problem (the well-known p-median minimization problem) and its generalization—the problem of minimizing a supermodular set function—is studied. These problems are NP-hard, and they are approximately solved by a gradient algorithm that is a discrete analog of the steepest descent algorithm. A priori bounds on the worst-case behavior of the gradient algorithm for the problems under consideration are obtained. As a consequence, a bound on the performance guarantee of the gradient algorithm for the p-median minimization problem in terms of the production and transportation cost matrix is obtained.

  4. A Winner Determination Algorithm for Combinatorial Auctions Based on Hybrid Artificial Fish Swarm Algorithm

    NASA Astrophysics Data System (ADS)

    Zheng, Genrang; Lin, ZhengChun

    The winner determination problem in combinatorial auctions is a hot topic in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines a First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, based on the theory of AFSA. Experimental results show that HAFSA is a fast and efficient algorithm for winner determination. Compared with an Ant Colony Optimization algorithm, it performs well and has broad application prospects.
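
    In its basic form, winner determination is weighted set packing over the submitted bids, which a brute-force reference implementation makes explicit (exponential in the number of bids, which is exactly why heuristics such as HAFSA are used; all names are illustrative):

    ```python
    from itertools import combinations

    def winner_determination(bids):
        """bids: list of (items, price) with items a frozenset.
        Returns the revenue-maximizing set of non-overlapping bids."""
        best_value, best_set = 0, []
        for r in range(1, len(bids) + 1):
            for combo in combinations(bids, r):
                item_sets = [b[0] for b in combo]
                # Pairwise disjoint iff sizes add up to the size of the union.
                if sum(map(len, item_sets)) == len(frozenset().union(*item_sets)):
                    value = sum(b[1] for b in combo)
                    if value > best_value:
                        best_value, best_set = value, list(combo)
        return best_value, best_set
    ```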

  5. On the scaling behavior of hardness with ligament diameter of nanoporous-Au: Constrained motion of dislocations along the ligaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viswanath, R. N.; Polaki, S. R.; Rajaraman, R.

    The scaling behavior of hardness with ligament diameter and vacancy defect concentration in nanoporous Au (np-Au) has been investigated using a combination of Vickers hardness, scanning electron microscopy, and positron lifetime measurements. It is shown that for np-Au, the hardness scales with the ligament diameter with an exponent of −0.3, at variance with the conventional Hall-Petch exponent of −0.5 for bulk systems, as seen in controlled experiments on cold-worked Au with varying grain size. The hardness of np-Au correlates with the vacancy concentration C_V within the ligaments, as estimated from positron lifetime experiments, and scales as C_V^(1/2), pointing to the interaction of dislocations with vacancies. The distinctive Hall-Petch exponent of −0.3 seen for np-Au, with ligament diameters in the range of 5-150 nm, is rationalized by invoking the constrained motion of dislocations along the ligaments.

  6. Pricing and location decisions in multi-objective facility location problem with M/M/m/k queuing systems

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin

    2017-01-01

    This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. The model considers situations in which immobile service facilities are congested by stochastic demand following M/M/m/k queues. It belongs to the class of mixed-integer nonlinear programming models and NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. In order to tune the algorithm's parameters, the Taguchi approach using a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and the non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO on large-sized problems.

  7. Intrinsic optimization using stochastic nanomagnets

    PubMed Central

    Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo

    2017-01-01

    This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets. PMID:28295053

  8. On the Hardness of Subset Sum Problem from Different Intervals

    NASA Astrophysics Data System (ADS)

    Kogure, Jun; Kunihiro, Noboru; Yamamoto, Hirosuke

    The subset sum problem, often called the knapsack problem, is known to be NP-hard, and there are several cryptosystems based on it. Assuming an oracle for the shortest vector problem in lattices, the low-density attack algorithm by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the "density" of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, the weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that the weights are chosen from different intervals, and analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when the data size of public keys is reduced.
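
    For reference, with weights a_1, ..., a_n the density in the Lagarias-Odlyzko sense is usually defined as

    ```latex
    d \;=\; \frac{n}{\log_2 \max_i a_i} ,
    ```

    and the attack provably succeeds for almost all instances with d below roughly 0.64 (roughly 0.94 for the improved variant of Coster et al.); these guarantees presuppose the single-interval uniform model that the paper relaxes.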

  9. Heuristic methods for the single machine scheduling problem with different ready times and a common due date

    NASA Astrophysics Data System (ADS)

    Birgin, Ernesto G.; Ronconi, Débora P.

    2012-10-01

    The single machine scheduling problem with a common due date and non-identical ready times for the jobs is examined in this work. Performance is measured by the minimization of the weighted sum of earliness and tardiness penalties of the jobs. Since this problem is NP-hard, the application of constructive heuristics that exploit specific characteristics of the problem to improve their performance is investigated. The proposed approaches are examined through a computational comparative study on a set of 280 benchmark test problems with up to 1000 jobs.
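
    In the usual notation (assumed here), with a common due date d, ready times r_j and completion times C_j, the objective is the weighted earliness-tardiness sum

    ```latex
    \min \sum_{j=1}^{n} \bigl( \alpha_j E_j + \beta_j T_j \bigr),
    \qquad E_j = \max(0,\, d - C_j), \quad T_j = \max(0,\, C_j - d),
    ```

    subject to no job starting before its ready time.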

  10. The TSP-approach to approximate solving the m-Cycles Cover Problem

    NASA Astrophysics Data System (ADS)

    Gimadi, Edward Kh.; Rykov, Ivan; Tsidulko, Oxana

    2016-10-01

    In the m-Cycles Cover problem, it is required to find a collection of m vertex-disjoint cycles that covers all vertices of the graph such that the total weight of the edges in the cover is minimum (or maximum). The problem is a generalization of the Traveling Salesman Problem and is strongly NP-hard. We discuss a TSP-approach that gives polynomial-time approximate solutions for this problem by transforming an approximation TSP algorithm into an approximation m-CCP algorithm. In this paper we present a number of successful transformations with proven performance guarantees for the obtained solutions.

  11. Tracking Multiple People Online and in Real Time

    DTIC Science & Technology

    2015-12-21

    We cast the problem of tracking several people as a graph partitioning problem that takes the form of an NP-hard binary...

  12. On the complexity of some quadratic Euclidean 2-clustering problems

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Pyatkin, A. V.

    2016-03-01

    Some problems of partitioning a finite set of points of Euclidean space into two clusters are considered. In these problems, the following criteria are minimized: (1) the sum over both clusters of the sums of squared pairwise distances between the elements of the cluster and (2) the sum of the (multiplied by the cardinalities of the clusters) sums of squared distances from the elements of the cluster to its geometric center, where the geometric center (or centroid) of a cluster is defined as the mean value of the elements in that cluster. Additionally, another problem close to (2) is considered, where the desired center of one of the clusters is given as input, while the center of the other cluster is unknown (is the variable to be optimized) as in problem (2). Two variants of the problems are analyzed, in which the cardinalities of the clusters are (1) parts of the input or (2) optimization variables. It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).

  13. Solving the Container Stowage Problem (CSP) using Particle Swarm Optimization (PSO)

    NASA Astrophysics Data System (ADS)

    Matsaini; Santosa, Budi

    2018-04-01

    The Container Stowage Problem (CSP) is the problem of arranging containers on ships subject to rules such as total weight, stack weight, destination, equilibrium, and placement of containers on the vessel. It is a combinatorial, NP-hard problem that is impractical to solve by enumeration, so metaheuristics are preferred. The objective is to minimize the amount of shifting so that unloading time is minimized. Particle Swarm Optimization (PSO) is proposed to solve the problem. The implementation of PSO is combined with stack position change rules, stack changes based on destination, and stack changes based on the weight class of the stacks (light, medium, and heavy). The proposed method was applied to five different cases, and the results were compared to Bee Swarm Optimization (BSO) and a heuristic method. PSO achieved a mean gap of 0.87% to the heuristic with a mean run time of 60 seconds, while BSO achieved a mean gap of 2.98% with 459.6 seconds.

  14. A New Approach for Solving the Generalized Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Pop, P. C.; Matei, O.; Sabo, C.

    The generalized traveling salesman problem (GTSP) is an extension of the classical traveling salesman problem. The GTSP is known to be NP-hard and has many interesting applications. In this paper we present a local-global approach for the generalized traveling salesman problem. Based on this approach we describe a novel hybrid metaheuristic algorithm for solving the problem using genetic algorithms. Computational results are reported for Euclidean TSPlib instances and compared with existing ones. The results indicate that our hybrid algorithm is an appropriate method to explore the search space of this complex problem and leads to good solutions in a reasonable amount of time.

  15. Antimicrobial acrylic materials with in situ generated silver nanoparticles.

    PubMed

    Oei, James D; Zhao, William W; Chu, Lianrui; DeSilva, Mauris N; Ghimire, Abishek; Rawls, H Ralph; Whang, Kyumin

    2012-02-01

    Polymethyl methacrylate (PMMA) is widely used to treat traumatic head injuries (cranioplasty) and orthopedic injuries (bone cement), but implant-centered infections remain a problem. With organisms such as Acinetobacter baumannii and methicillin-resistant Staphylococcus aureus developing resistance to antibiotics, there is a need for novel antimicrobial delivery mechanisms without the risk of developing resistant organisms. The aim was to develop a novel antimicrobial implant material by generating silver nanoparticles (AgNPs) in situ in PMMA. All PMMA samples with AgNPs (AgNP-PMMA) released Ag(+) ions in vitro for over 28 days. In vitro antimicrobial assays revealed that these samples (even those with the slowest release rate) inhibited 99.9% of bacteria for four different bacterial strains, and a long-term antimicrobial assay showed a continued antibacterial effect past 28 days. Some AgNP-loaded PMMA groups had Durometer-D hardness (a measure of degree of cure) and modulus comparable to control PMMA, but all experimental groups had slightly lower ultimate transverse strengths. AgNP-PMMA demonstrated a remarkably broad-spectrum, long- to intermediate-term antimicrobial effect with mechanical properties comparable to control PMMA. Current efforts are focused on further improving mechanical properties by reducing AgNP loading and assessing fatigue properties. Copyright © 2011 Wiley Periodicals, Inc.

  16. A new graph-based method for pairwise global network alignment

    PubMed Central

    Klau, Gunnar W

    2009-01-01

    Background In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162

  17. A similarity score-based two-phase heuristic approach to solve the dynamic cellular facility layout for manufacturing systems

    NASA Astrophysics Data System (ADS)

    Kumar, Ravi; Singh, Surya Prakash

    2017-11-01

    The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that efficient design of the DCFLP reduces the manufacturing cost of products by maintaining minimum material flow among all machines in all cells, as material flow contributes around 10-30% of the total product cost. However, being NP-hard, the DCFLP is very difficult to solve optimally in reasonable time. Therefore, this article proposes a novel similarity score-based two-phase heuristic approach to solve the DCFLP, considering multiple products over multiple time periods to be manufactured in the layout. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase, which minimizes inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can solve the DCFLP optimally in reasonable time.

  18. Monkey search algorithm for ECE components partitioning

    NASA Astrophysics Data System (ADS)

    Kuliev, Elmar; Kureichik, Vladimir; Kureichik, Vladimir, Jr.

    2018-05-01

    The paper considers one of the important design problems: partitioning of electronic computer equipment (ECE) components (blocks). The problem belongs to the NP-hard class and has a combinatorial and logical nature. The partitioning problem is formulated as the partition of a graph into parts. To solve it, the authors suggest a bioinspired approach based on a monkey search algorithm. Computational experiments with the developed software show the algorithm's efficiency, as well as its recommended settings for obtaining more effective solutions in comparison with a genetic algorithm.

  19. A parallel-machine scheduling problem with two competing agents

    NASA Astrophysics Data System (ADS)

    Lee, Wen-Chiung; Chung, Yu-Hsiang; Wang, Jen-Ya

    2017-06-01

    Scheduling with two competing agents has become popular in recent years. Most of the research has focused on single-machine problems. This article considers a parallel-machine problem, the objective of which is to minimize the total completion time of jobs from the first agent given that the maximum tardiness of jobs from the second agent cannot exceed an upper bound. The NP-hardness of this problem is also examined. A genetic algorithm equipped with local search is proposed to search for the near-optimal solution. Computational experiments are conducted to evaluate the proposed genetic algorithm.

  1. Grover Search and the No-Signaling Principle

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Bouland, Adam; Jordan, Stephen P.

    2016-09-01

    Two of the key properties of quantum physics are the no-signaling principle and the Grover search lower bound. That is, despite admitting stronger-than-classical correlations, quantum mechanics does not imply superluminal signaling, and despite a form of exponential parallelism, quantum mechanics does not imply polynomial-time brute force solution of NP-complete problems. Here, we investigate the degree to which these two properties are connected. We examine four classes of deviations from quantum mechanics, for which we draw inspiration from the literature on the black hole information paradox. We show that in these models, the physical resources required to send a superluminal signal scale polynomially with the resources needed to speed up Grover's algorithm. Hence the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed.

  2. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.

    PubMed

    Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem of assigning a set of nurses to shifts per day while considering both hard and soft constraints, and a novel metaheuristic technique is required for solving it. This work proposes a metaheuristic technique called Directed Bee Colony Optimization using the Modified Nelder-Mead Method for solving the NRP. To solve the NRP, the authors use a multiobjective mathematical programming model and propose a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully for solving multiobjective scheduling optimization problems; it integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.

  3. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job Shop Scheduling is a state-space search problem in the NP-hard category, owing to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve Job Shop Scheduling Problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve Job Shop Scheduling Problems. About 250 benchmark instances have been used to evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving Job Shop Scheduling Problems are outlined.

  4. A hybrid heuristic for the multiple choice multidimensional knapsack problem

    NASA Astrophysics Data System (ADS)

    Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd

    2013-08-01

    In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.

  5. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
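
    A hedged sketch of the decomposition idea (illustrative, not the authors' exact heuristic), using networkx: repeatedly delete the highest-betweenness vertex until the graph splits, then recurse on components that are still too large. Note that this simplified version drops the deleted separator vertices.

    ```python
    import networkx as nx

    def decompose_by_betweenness(G, max_size):
        """Split G into subgraphs of at most max_size nodes by deleting
        high-betweenness vertices; returns a list of node sets."""
        if G.number_of_nodes() <= max_size:
            return [set(G.nodes)]
        H = G.copy()
        while nx.number_connected_components(H) == 1:
            bc = nx.betweenness_centrality(H)
            H.remove_node(max(bc, key=bc.get))     # cut at the main bottleneck
        parts = []
        for comp in nx.connected_components(H):
            parts.extend(decompose_by_betweenness(G.subgraph(comp).copy(), max_size))
        return parts
    ```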

  6. Optical solver of combinatorial problems: nanotechnological approach.

    PubMed

    Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor

    2013-09-01

    We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising avenue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponential-sized masks with exponential space complexity produced in polynomial-time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscale density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.

  7. Number Partitioning via Quantum Adiabatic Computation

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadim N.; Toussaint, Udo

    2002-01-01

    We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of a system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with the direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
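
    The encoding that makes number partitioning amenable to such devices is the Ising cost E(s) = (sum_i a_i s_i)^2 over spins s_i = ±1; the ground state energy is zero exactly when the numbers split into two equal-sum subsets. A brute-force check for small instances (illustrative only; the quantum algorithm's whole point is to avoid this exhaustive search):

    ```python
    from itertools import product

    def ground_state(a):
        """Exact minimum of E(s) = (sum_i a[i]*s[i])**2 over s in {-1, +1}^n."""
        best = min(product((-1, 1), repeat=len(a)),
                   key=lambda s: sum(x * y for x, y in zip(a, s)) ** 2)
        return best, sum(x * y for x, y in zip(a, best)) ** 2

    # {4, 5, 6} vs {7, 8}: both sum to 15, so the ground state energy is 0.
    print(ground_state([4, 5, 6, 7, 8]))
    ```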

  8. On-Orbit Range Set Applications

    NASA Astrophysics Data System (ADS)

    Holzinger, M.; Scheeres, D.

    2011-09-01

    The history and methodology of Δv range set computation are briefly reviewed, followed by a short summary of the Δv-optimal spacecraft servicing problem literature. Service vehicle placement is approached from a Δv range set viewpoint, providing a framework under which the analysis becomes quite geometric and intuitive. The optimal servicing tour design problem is shown to be a specific instantiation of the metric Traveling Salesman Problem (TSP), which in general is an NP-hard problem. The Δv-TSP is argued to be quite similar to the Euclidean TSP, for which approximate optimal solutions may be found in polynomial time. Applications of range sets are demonstrated using analytical and simulation results.

  9. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.

  10. Fabrication of natural pumice/hydroxyapatite composite for biomedical engineering.

    PubMed

    Komur, Baran; Lohse, Tim; Can, Hatice Merve; Khalilova, Gulnar; Geçimli, Zeynep Nur; Aydoğdu, Mehmet Onur; Kalkandelen, Cevriye; Stan, George E; Sahin, Yesim Muge; Sengil, Ahmed Zeki; Suleymanoglu, Mediha; Kuruca, Serap Erdem; Oktar, Faik Nuzhet; Salman, Serdar; Ekren, Nazmi; Ficai, Anton; Gunduz, Oguzhan

    2016-07-07

    We evaluated the bovine hydroxyapatite (BHA) structure. BHA powder was admixed with 5 and 10 wt% natural pumice (NP). Compression strength, Vickers microhardness, Fourier transform infrared spectroscopy, scanning electron microscopy (SEM) and X-ray diffraction studies were performed on the final NP-BHA composite products. Cell proliferation was investigated by MTT assay and SEM, and the antimicrobial activity of NP-BHA samples was interrogated. Varying the sintering temperature (for 5 wt% NP composites) between 1000 and 1300 °C reveals about a 700% increase in microhardness (~100 and 775 HV, respectively). Composites prepared at 1300 °C demonstrate the greatest compression strength, with the best result for 5 wt% NP content (87 MPa), significantly better than for 10 wt% NP and for samples without any NP (below 60 MPa). The results suggest optimal parameters for the preparation of NP-BHA composites with increased mechanical properties and biocompatibility. Microhardness and compression strength can be tailored by tuning the NP concentration and sintering temperature. NP-BHA composites have demonstrated remarkable potential for biomedical engineering applications such as bone grafts and implants.

  11. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  12. Solving Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) using BRKGA with local search

    NASA Astrophysics Data System (ADS)

    Prasetyo, H.; Alfatsani, M. A.; Fauza, G.

    2018-05-01

    The main issue in the vehicle routing problem (VRP) is finding the shortest route for product distribution from the depot to outlets so as to minimize the total distribution cost. The Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) is a variant of the VRP that accommodates vehicle capacity and the distribution period. Since the CCVRPTW is NP-hard, it requires an efficient and effective algorithm. This study develops a Biased Random Key Genetic Algorithm (BRKGA) combined with local search to solve the CCVRPTW; the algorithm was implemented in MATLAB. Using numerical tests, optimal algorithm parameters were set, and the method was compared with a heuristic method and standard BRKGA on a case study of soft drink distribution. Results showed that BRKGA combined with local search achieved a lower total distribution cost than the heuristic method, and the developed algorithm successfully improved on the performance of standard BRKGA.
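
    The characteristic ingredient of a BRKGA is the decoder that maps a chromosome of random keys to a solution; crossover and mutation then act only on the keys. A minimal sketch of one standard decoder style (assumed, generic; not necessarily the authors' design): sort customers by their keys to obtain a visiting order, then split it greedily into capacity-feasible routes.

    ```python
    import random

    def decode(keys, demand, capacity):
        """BRKGA decoder sketch: keys in [0, 1), one per customer."""
        order = sorted(range(len(keys)), key=lambda i: keys[i])
        routes, cur, load = [], [], 0
        for c in order:
            if load + demand[c] > capacity:    # open a new route
                routes.append(cur)
                cur, load = [], 0
            cur.append(c)
            load += demand[c]
        if cur:
            routes.append(cur)
        return routes

    chromosome = [random.random() for _ in range(8)]   # a random-key individual
    print(decode(chromosome, demand=[3, 4, 2, 5, 3, 4, 2, 1], capacity=10))
    ```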

  13. Hardness and Elastic Modulus on Six-Fold Symmetry Gold Nanoparticles

    PubMed Central

    Ramos, Manuel; Ortiz-Jordan, Luis; Hurtado-Macias, Abel; Flores, Sergio; Elizalde-Galindo, José T.; Rocha, Carmen; Torres, Brenda; Zarei-Chaleshtori, Maryam; Chianelli, Russell R.

    2013-01-01

    The chemical synthesis of gold nanoparticles (NPs) using gold(III) chloride trihydrate (HAuCl4·3H2O) and sodium citrate as a reducing agent in aqueous conditions at 100 °C is presented here. Gold nanoparticles are formed by a galvanic replacement mechanism as described by Lee and Meisel. The morphology of the gold NPs was analyzed by high-resolution transmission electron microscopy; results indicate a six-fold icosahedral symmetry with an average size distribution of 22 nm. In order to understand mechanical behaviors such as hardness and elastic modulus, the gold NPs were subjected to nanoindentation measurements, obtaining a hardness value of 1.72 GPa and an elastic modulus of 100 GPa at 3-5 nm of displacement into the nanoparticle's surface. PMID:28809302

  14. Two-Stage orders sequencing system for mixed-model assembly

    NASA Astrophysics Data System (ADS)

    Zemczak, M.; Skolud, B.; Krenczyk, D.

    2015-11-01

    In the paper, the authors focus on the NP-hard problem of order sequencing, formulated similarly to the Car Sequencing Problem (CSP). The object of the research is an assembly line in an automotive industry company, on which a few different models of products, each in a number of versions, are assembled on shared resources set in a line. Such production is usually described as mixed-model production, and it arose from the necessity of manufacturing customized products on the basis of very specific orders from single clients. Producers are nowadays obliged to let each client determine a huge number of features of the product they are willing to buy, as competition in the automotive market is strong. Because the problem is NP-hard, only satisfactory solutions are sought in the given time period, as no method for finding optimal solutions is known. Most researchers who applied inexact methods (e.g. evolutionary algorithms) to sequencing problems abandoned the research after the testing phase, as they were not able to obtain reproducible results and met problems while determining the quality of the received solutions. Therefore a new approach to solving the problem, presented in this paper as a sequencing system, is being developed. The sequencing system consists of a set of rules implemented in a computer environment and works in two stages. The first stage determines the place in the storage buffer to which certain production orders should be sent. In the second stage, precise sets of sequences are determined and evaluated for certain parts of the storage buffer under certain criteria.

  15. Algorithmics - Is There Hope for a Unified Theory?

    NASA Astrophysics Data System (ADS)

    Hromkovič, Juraj

    Computer science was born with the formal definition of the notion of an algorithm. This definition provides clear limits of automatization, separating problems into algorithmically solvable problems and algorithmically unsolvable ones. The second big bang of computer science was the development of the concept of computational complexity. People recognized that problems that do not admit efficient algorithms are not solvable in practice. The search for a reasonable, clear and robust definition of the class of practically solvable algorithmic tasks started with the notion of the class P and of NP-completeness. In spite of the fact that this robust concept is still fundamental for judging the hardness of computational problems, a variety of approaches was developed for solving instances of NP-hard problems in many applications. Our short 40-year attempt to fix the fuzzy border between the practically solvable problems and the practically unsolvable ones partially reminds one of the never-ending search for the definition of "life" in biology or for the definitions of matter and energy in physics. Can the search for the formal notion of "practical solvability" also become a never-ending story, or is there hope for a well-accepted, robust definition of it? It is hopefully not surprising that we are not able to answer this question in this invited talk. But dealing with this question is of crucial importance, because only through enormous effort do scientists get a better and better feeling for what fundamental notions of science like life and energy mean. In the flow of numerous technical results, we must not forget that most of the essential revolutionary contributions to science were made by defining new concepts and notions.

  16. The checkpoint ordering problem

    PubMed Central

    Hungerländer, P.

    2017-01-01

    We suggest a new variant of a row layout problem: find an ordering of n departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is of both theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem, and a variant of parallel machine scheduling. In this paper we study the complexity of the COP and its special cases. The general version of the COP with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the COP. Our computational experiments indicate that the COP is hard to solve in practice. While the run time of the dynamic programming algorithm strongly depends on the lengths of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality. PMID:29170574

  18. Improved dynamic MRI reconstruction by exploiting sparsity and rank-deficiency.

    PubMed

    Majumdar, Angshul

    2013-06-01

    In this paper we address the problem of dynamic MRI reconstruction from partially sampled K-space data. Our work is motivated by previous studies in this area that proposed exploiting the spatiotemporal correlation of the dynamic MRI sequence by posing the reconstruction problem as a least squares minimization regularized by sparsity and low-rank penalties. Ideally the sparsity and low-rank penalties should be represented by the l(0)-norm and the rank of a matrix; however, both are NP-hard penalties. The previous studies used the convex l(1)-norm as a surrogate for the l(0)-norm and the non-convex Schatten-q norm (0 < q < 1) as a surrogate for the rank of a matrix.
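    Setting the paper's non-convex Schatten-q penalty aside, the two convex building blocks behind such regularized reconstructions are easy to state: soft-thresholding is the proximal operator of the l(1)-norm, and singular value thresholding plays the same role for the nuclear norm, the standard convex surrogate for rank. A minimal sketch with illustrative thresholds and random data follows; it shows the operators only, not the paper's reconstruction algorithm.

```python
# The two "shrinkage" operators behind sparsity- and rank-regularized
# least squares: elementwise soft-thresholding (l1 surrogate) and
# singular value thresholding (nuclear-norm surrogate for rank).
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||x||_1 (elementwise shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Proximal operator of tau * nuclear norm: shrink singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((6, 4))
    # shrinking singular values reduces the (numerical) rank
    print(np.linalg.matrix_rank(singular_value_threshold(X, tau=1.5)))
```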

  19. A multilevel probabilistic beam search algorithm for the shortest common supersequence problem.

    PubMed

    Gallardo, José E

    2012-01-01

    The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, artificial intelligence and especially bioinformatics. Due to its NP-hardness, we cannot expect to solve this problem efficiently using conventional exact techniques. This paper presents a heuristic to tackle the problem based on using, at different levels, a probabilistic variant of a classical heuristic known as Beam Search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better-quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but the required execution times may increase considerably.
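    As a reference point, here is a minimal deterministic Beam Search for the shortest common supersequence, the classical heuristic on which the paper's probabilistic multilevel variant is built. The beam width and the greedy score (total characters left to cover) are illustrative choices, not the paper's.

```python
# Minimal deterministic Beam Search for the Shortest Common
# Supersequence problem. A search state is one pointer per input
# string; appending a character advances every pointer it matches.
def scs_beam_search(strings, beam_width=10):
    start = tuple(0 for _ in strings)          # one pointer per string
    beam = {start: ""}                         # pointer state -> supersequence
    goal = tuple(len(s) for s in strings)
    while goal not in beam:
        candidates = {}
        for state, seq in beam.items():
            next_chars = {s[p] for s, p in zip(strings, state) if p < len(s)}
            for c in next_chars:
                new_state = tuple(p + 1 if p < len(s) and s[p] == c else p
                                  for s, p in zip(strings, state))
                if new_state not in candidates:
                    candidates[new_state] = seq + c
        # keep the beam_width states with the fewest characters left to cover
        ranked = sorted(candidates.items(),
                        key=lambda kv: sum(len(s) - p
                                           for s, p in zip(strings, kv[0])))
        beam = dict(ranked[:beam_width])
    return beam[goal]

if __name__ == "__main__":
    print(scs_beam_search(["ACGT", "AGT", "CGA"]))
```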

  20. On the asymptotic optimality and improved strategies of SPTB heuristic for open-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai

    2014-08-01

    This article investigates the open-shop scheduling problem with the criterion of minimising the sum of quadratic completion times. For this NP-hard problem, the asymptotic optimality of the shortest processing time block (SPTB) heuristic is proven in the limit. Moreover, three different improvements, namely, the job-insert scheme, tabu search and genetic algorithm, are introduced to enhance the quality of the original solution generated by the SPTB heuristic. At the end of the article, a series of numerical experiments demonstrates the convergence of the heuristic, the performance of the improvements and the effectiveness of the quadratic objective.

  1. Investigation of Zero Knowledge Proof Approaches Based on Graph Theory

    DTIC Science & Technology

    2011-02-01

    that appears frequently in the literature is a metaheuristic algorithm called the Pilot Method. The Pilot Method improves upon another heuristic...Annual ACM-SIAM Symposium on Discrete Algorithms. Miami: ACM, 2006. 1-10. Voß, S., and C. Duin. "Look Ahead Features in Metaheuristics." MIC2003...The Fifth Metaheuristics International Conference, 2003: 79-1 - 79-7. Woeginger, G.J. "Exact Algorithms for NP-Hard Problems: A Survey." Lecture

  2. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  3. Characterizing L1-norm best-fit subspaces

    NASA Astrophysics Data System (ADS)

    Brooks, J. Paul; Dulá, José H.

    2017-05-01

    Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.

  4. Quantum Computing: Solving Complex Problems

    ScienceCinema

    DiVincenzo, David

    2018-05-22

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  5. Genetic algorithms for the vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Volna, Eva

    2016-06-01

    The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists of designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization that have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions, provided they can be found fast enough and are sufficiently accurate for the purpose. In this paper we present an experimental study that indicates the suitability of genetic algorithms for the vehicle routing problem.
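    A minimal sketch of how a genetic algorithm can be applied to a toy capacitated VRP: chromosomes are customer permutations decoded into capacity-feasible routes, with order crossover and swap mutation. All data, operators and parameters are illustrative choices, not those of the paper.

```python
# Toy permutation-encoded GA for a capacitated VRP with one depot.
import random, math

CUSTOMERS = {1: (2, 3), 2: (5, 1), 3: (1, 7), 4: (6, 5), 5: (3, 8)}  # id -> (x, y)
DEMAND = {1: 4, 2: 3, 3: 5, 4: 2, 5: 4}
DEPOT, CAPACITY = (0, 0), 8

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def decode(chromosome):
    """Split a customer permutation into capacity-feasible routes."""
    routes, route, load = [], [], 0
    for c in chromosome:
        if load + DEMAND[c] > CAPACITY:
            routes.append(route); route, load = [], 0
        route.append(c); load += DEMAND[c]
    routes.append(route)
    return routes

def cost(chromosome):
    total = 0.0
    for route in decode(chromosome):
        stops = [DEPOT] + [CUSTOMERS[c] for c in route] + [DEPOT]
        total += sum(dist(a, b) for a, b in zip(stops, stops[1:]))
    return total

def order_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    fill = [c for c in p2 if c not in child]
    for k in range(len(child)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def ga(pop_size=30, generations=200, mutation_rate=0.2):
    pop = [random.sample(list(CUSTOMERS), len(CUSTOMERS)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        nxt = pop[:2]                               # elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)     # truncation selection
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:     # swap mutation
                a, b = random.sample(range(len(child)), 2)
                child[a], child[b] = child[b], child[a]
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

if __name__ == "__main__":
    best = ga()
    print(decode(best), round(cost(best), 2))
```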

  6. Fitness Probability Distribution of Bit-Flip Mutation.

    PubMed

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
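    For the Onemax case the distribution admits an elementary derivation: from a parent with k ones out of n bits, flipping each bit independently with probability p turns X ~ Bin(k, p) ones off and Y ~ Bin(n - k, p) zeros on, so the offspring fitness is k - X + Y. The short sketch below computes this exactly by convolution; it covers only the Onemax case, not the paper's Krawtchouk-polynomial machinery.

```python
# Exact fitness distribution of uniform bit-flip mutation on Onemax.
from math import comb

def binom_pmf(m, p):
    return [comb(m, i) * p**i * (1 - p)**(m - i) for i in range(m + 1)]

def onemax_mutation_distribution(n, k, p):
    """Distribution of offspring fitness k - X + Y, X~Bin(k,p), Y~Bin(n-k,p)."""
    px, py = binom_pmf(k, p), binom_pmf(n - k, p)
    dist = {}
    for x, probx in enumerate(px):
        for y, proby in enumerate(py):
            f = k - x + y
            dist[f] = dist.get(f, 0.0) + probx * proby
    return dist

if __name__ == "__main__":
    d = onemax_mutation_distribution(n=10, k=6, p=0.1)
    print({f: round(pr, 4) for f, pr in sorted(d.items())})
    print(sum(d.values()))  # ~1.0
```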

  7. Sequential Quadratic Programming Algorithms for Optimization

    DTIC Science & Technology

    1989-08-01

    quadratic programming (SQP) seems to be regarded as the best choice for the solution of small, dense problems...Nevertheless, many of these problems are considered hard to solve. Moreover, for some of these problems the assumptions made in Chapter 2 to establish the

  8. Nash Social Welfare in Multiagent Resource Allocation

    NASA Astrophysics Data System (ADS)

    Ramezani, Sara; Endriss, Ulle

    We study different aspects of the multiagent resource allocation problem when the objective is to find an allocation that maximizes Nash social welfare, the product of the utilities of the individual agents. The Nash solution is an important welfare criterion that combines efficiency and fairness considerations. We show that the problem of finding an optimal outcome is NP-hard for a number of different languages for representing agent preferences; we establish new results regarding convergence to Nash-optimal outcomes in a distributed negotiation framework; and we design and test algorithms similar to those applied in combinatorial auctions for computing such an outcome directly.
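    As a toy illustration of the objective, the following sketch computes a Nash-optimal allocation of indivisible goods by brute force under additive utilities, one simple preference language among those one could consider. It is exponential in the number of goods and only meant to make the product-of-utilities criterion concrete, not to reflect the paper's algorithms.

```python
# Brute-force Nash social welfare maximization for indivisible goods
# under additive utilities. Utilities are made-up toy data.
from itertools import product
from math import prod

UTILITIES = [          # UTILITIES[agent][good]
    [3, 1, 2],
    [1, 4, 1],
]

def nash_welfare(assignment, utilities):
    """Product of agents' total utilities under a good->agent assignment."""
    totals = [0] * len(utilities)
    for good, agent in enumerate(assignment):
        totals[agent] += utilities[agent][good]
    return prod(totals)

def best_allocation(utilities):
    n_agents, n_goods = len(utilities), len(utilities[0])
    return max(product(range(n_agents), repeat=n_goods),
               key=lambda a: nash_welfare(a, utilities))

if __name__ == "__main__":
    best = best_allocation(UTILITIES)
    print(best, nash_welfare(best, UTILITIES))   # -> (0, 1, 0) 20
```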

  9. Numerical taxonomy on data: Experimental results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cohen, J.; Farach, M.

    1997-12-01

    The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. In earlier work, the first positive result for numerical taxonomy was presented: if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T [L∞(T − D)], then it is possible to construct a tree T such that L∞(T − D) ≤ 3e; that is, a 3-approximation algorithm was given for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.

  10. Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems

    PubMed Central

    Fonseca Guerra, Gabriel A.; Furber, Steve B.

    2017-01-01

    Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration effectively causing a restart. PMID:29311791

  11. Enabling Next-Generation Multicore Platforms in Embedded Applications

    DTIC Science & Technology

    2014-04-01

    mapping to sets 129-256) to the second page in memory, color 2 (sets 257-384) to the third page, and so on. Then, after the 32nd page, all 2^12 sets...the Real-Time Nested Locking Protocol (RNLP) [56], a recently developed multiprocessor real-time locking protocol that optimally supports the...In general, the problems of optimally assigning tasks to processors and colors to tasks are both NP-hard in the

  12. Theoretical Analysis of Local Search and Simple Evolutionary Algorithms for the Generalized Travelling Salesperson Problem.

    PubMed

    Pourhassan, Mojgan; Neumann, Frank

    2018-06-22

    The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.

  13. Hybrid Metaheuristics for Solving a Fuzzy Single Batch-Processing Machine Scheduling Problem

    PubMed Central

    Molla-Alizadeh-Zavardehi, S.; Tavakkoli-Moghaddam, R.; Lotfi, F. Hosseinzadeh

    2014-01-01

    This paper deals with a problem of minimizing total weighted tardiness of jobs in a real-world single batch-processing machine (SBPM) scheduling in the presence of fuzzy due date. In this paper, first a fuzzy mixed integer linear programming model is developed. Then, due to the complexity of the problem, which is NP-hard, we design two hybrid metaheuristics called GA-VNS and VNS-SA applying the advantages of genetic algorithm (GA), variable neighborhood search (VNS), and simulated annealing (SA) frameworks. Besides, we propose three fuzzy earliest due date heuristics to solve the given problem. Through computational experiments with several random test problems, a robust calibration is applied on the parameters. Finally, computational results on different-scale test problems are presented to compare the proposed algorithms. PMID:24883359

  14. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness for one or many machines and of minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  15. Multiple-variable neighbourhood search for the single-machine total weighted tardiness problem

    NASA Astrophysics Data System (ADS)

    Chung, Tsui-Ping; Fu, Qunjie; Liao, Ching-Jong; Liu, Yi-Ting

    2017-07-01

    The single-machine total weighted tardiness (SMTWT) problem is a typical discrete combinatorial optimization problem in the scheduling literature. This problem has been proved to be NP-hard and thus provides a challenging area for metaheuristics, especially the variable neighbourhood search algorithm. In this article, a multiple variable neighbourhood search (m-VNS) algorithm with multiple neighbourhood structures is proposed to solve the problem. Special mechanisms named matching and strengthening operations are employed in the algorithm, which has an auto-revising local search procedure to explore the solution space beyond local optimality. Two aspects, searching direction and searching depth, are considered, and neighbourhood structures are systematically exchanged. Experimental results show that the proposed m-VNS algorithm outperforms all the compared algorithms in solving the SMTWT problem.
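    A minimal sketch of the ingredients such a search is built from: evaluating the total weighted tardiness of a job sequence and taking best-improvement steps in a single swap neighbourhood. The data and the choice of a single neighbourhood are illustrative; the paper's m-VNS systematically exchanges several neighbourhood structures.

```python
# Total weighted tardiness of a sequence plus best-improvement local
# search in the pairwise-swap neighbourhood (toy data).
def total_weighted_tardiness(seq, p, d, w):
    t, cost = 0, 0
    for j in seq:
        t += p[j]
        cost += w[j] * max(0, t - d[j])
    return cost

def best_swap(seq, p, d, w):
    """Best-improvement step over all pairwise swaps."""
    best, best_cost = seq, total_weighted_tardiness(seq, p, d, w)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            cand = list(seq)
            cand[i], cand[j] = cand[j], cand[i]
            c = total_weighted_tardiness(cand, p, d, w)
            if c < best_cost:
                best, best_cost = cand, c
    return best, best_cost

if __name__ == "__main__":
    p = [3, 2, 4, 1]   # processing times
    d = [4, 3, 6, 2]   # due dates
    w = [2, 1, 3, 2]   # weights
    seq = [0, 1, 2, 3]
    while True:
        new, c = best_swap(seq, p, d, w)
        if new == seq:
            break
        seq = new
    print(seq, c)
```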

  16. The functional dissection of the plasma corona of SiO₂-NPs spots histidine rich glycoprotein as a major player able to hamper nanoparticle capture by macrophages.

    PubMed

    Fedeli, Chiara; Segat, Daniela; Tavano, Regina; Bubacco, Luigi; De Franceschi, Giorgia; de Laureto, Patrizia Polverino; Lubian, Elisa; Selvestrel, Francesco; Mancin, Fabrizio; Papini, Emanuele

    2015-11-14

    A coat of strongly-bound host proteins, or hard corona, may influence the biological and pharmacological features of nanotheranostics by altering their cell-interaction selectivity and macrophage clearance. With the goal of identifying specific corona-effectors, we investigated how the capture of amorphous silica nanoparticles (SiO2-NPs; Ø = 26 nm; zeta potential = -18.3 mV) by human lymphocytes, monocytes and macrophages is modulated by the prominent proteins of their plasma corona. LC MS/MS analysis, western blotting and quantitative SDS-PAGE densitometry show that Histidine Rich Glycoprotein (HRG) is the most abundant component of the SiO2-NP hard corona in excess plasma from humans (HP) and mice (MP), together with minor amounts of the homologous Kininogen-1 (Kin-1), while it is remarkably absent in their Foetal Calf Serum (FCS)-derived corona. HRG binds with high affinity to SiO2-NPs (HRG Kd ∼2 nM) and competes with other plasma proteins for the NP surface, so forming a stable and quite homogeneous corona inhibiting nanoparticles binding to the macrophage membrane and their subsequent uptake. Conversely, in the case of lymphocytes and monocytes not only HRG but also several common plasma proteins can interchange in this inhibitory activity. The depletion of HRG and Kin-1 from HP or their plasma exhaustion by increasing NP concentration (>40 μg ml(-1) in 10% HP) lead to a heterogeneous hard corona, mostly formed by fibrinogen (Fibr), HDLs, LDLs, IgGs, Kallikrein and several minor components, allowing nanoparticle binding to macrophages. Consistently, the FCS-derived SiO2-NP hard corona, mainly formed by hemoglobin, α2 macroglobulin and HDLs but lacking HRG, permits nanoparticle uptake by macrophages. Moreover, purified HRG competes with FCS proteins for the NP surface, inhibiting their recruitment in the corona and blocking NP macrophage capture. HRG, the main component of the plasma-derived SiO2-NPs' hard corona, has antiopsonin characteristics and uniquely confers to these particles the ability to evade macrophage capture.

  17. An impatient evolutionary algorithm with probabilistic tabu search for unified solution of some NP-hard problems in graph and set theory via clique finding.

    PubMed

    Guturu, Parthasarathy; Dantu, Ram

    2008-06-01

    Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.

  18. Visualizing Phylogenetic Treespace Using Cartographic Projections

    NASA Astrophysics Data System (ADS)

    Sundberg, Kenneth; Clement, Mark; Snell, Quinn

    Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-Hard problem. The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger datasets.

  19. Integer Linear Programming in Computational Biology

    NASA Astrophysics Data System (ADS)

    Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut

    Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
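    As a generic flavour of the modelling style (not one of the group's bioinformatics models), the textbook ILP for Minimum Vertex Cover, a problem appearing elsewhere in these records, reads:

```latex
\min \sum_{v \in V} x_v
\quad \text{s.t.} \quad x_u + x_v \ge 1 \quad \forall \{u, v\} \in E,
\qquad x_v \in \{0, 1\} \quad \forall v \in V.
```

Relaxing the integrality constraints to 0 ≤ x_v ≤ 1 gives the LP relaxation whose optimum lower-bounds the ILP optimum, which is the basic mechanism branch-and-bound ILP solvers exploit.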

  20. Differential geometric treewidth estimation in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Jonckheere, Edmond; Brun, Todd

    2016-10-01

    The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture. The latter problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth based on the differential geometric concept of Ollivier-Ricci curvature. The latter runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.

  1. Traffic routing for multicomputer networks with virtual cut-through capability

    NASA Technical Reports Server (NTRS)

    Kandlur, Dilip D.; Shin, Kang G.

    1992-01-01

    Consideration is given to the problem of selecting routes for interprocess communication in a network with virtual cut-through capability, while balancing the network load and minimizing the number of times that a message gets buffered. An approach is proposed that formulates the route selection problem as a minimization problem with a link cost function that depends upon the traffic through the link. The form of this cost function is derived using the probability of establishing a virtual cut-through route. The route selection problem is shown to be NP-hard, and an algorithm is developed to incrementally reduce the cost by rerouting the traffic. The performance of this algorithm is exemplified by two network topologies: the hypercube and the C-wrapped hexagonal mesh.

  2. Robust Algorithms for Max Independent Set on Minor-Free Graphs Based on the Sherali-Adams Hierarchy

    NASA Astrophysics Data System (ADS)

    Magen, Avner; Moharrami, Mohammad

    This work provides a Linear Programming-based Polynomial Time Approximation Scheme (PTAS) for two classical NP-hard problems on graphs when the input graph is guaranteed to be planar, or more generally minor-free. The algorithm applies a sufficiently large number of rounds of the so-called Sherali-Adams Lift-and-Project system, some function f(ε) of the ε for which a (1+ε)-approximation is required, where f depends only on the graph that should be avoided as a minor. The problems we discuss are the well-studied Maximum Independent Set and Minimum Vertex Cover problems. A curious fact we expose is that in the world of minor-free graphs, Vertex Cover is harder in some sense than Independent Set.

  3. Maximum likelihood of phylogenetic networks.

    PubMed

    Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir

    2006-11-01

    Horizontal gene transfer (HGT) is believed to be ubiquitous among bacteria, and plays a major role in their genome diversification as well as their ability to develop resistance to antibiotics. In light of its evolutionary significance and implications for human health, developing accurate and efficient methods for detecting and reconstructing HGT is imperative. In this article we provide a new HGT-oriented likelihood framework for many problems that involve phylogeny-based HGT detection and reconstruction. Besides the formulation of various likelihood criteria, we show that most of these problems are NP-hard, and offer heuristics for efficient and accurate reconstruction of HGT under these criteria. We implemented our heuristics and used them to analyze biological as well as synthetic data. In both cases, our criteria and heuristics exhibited very good performance with respect to identifying the correct number of HGT events as well as inferring their correct location on the species tree. Implementation of the criteria as well as heuristics and hardness proofs are available from the authors upon request. Hardness proofs can also be downloaded at http://www.cs.tau.ac.il/~tamirtul/MLNET/Supp-ML.pdf

  4. Protein local structure alignment under the discrete Fréchet distance.

    PubMed

    Zhu, Binhai

    2007-12-01

    Protein structure alignment is a fundamental problem in computational and structural biology. While there have been many experimental/heuristic methods and empirical results, very few results are known regarding the algorithmic/complexity aspects of the problem, especially for protein local structure alignment. A well-known measure to characterize the similarity of two polygonal chains is the famous Fréchet distance, and in protein-related research a related discrete Fréchet distance has recently been used. In this paper, following the recent work of Jiang et al., we investigate the protein local structural alignment problem under bounded discrete Fréchet distance. Given m proteins (or protein backbones, which are 3D polygonal chains), each of length O(n), our main results are summarized as follows: * If the number of proteins, m, is not part of the input, then the problem is NP-complete; moreover, under bounded discrete Fréchet distance it is NP-hard to approximate the maximum size common local structure within a factor of n(1-epsilon). These results hold both when all the proteins are static and when translation/rotation are allowed. * If the number of proteins, m, is a constant, then there is a polynomial time solution for the problem.
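    The discrete Fréchet distance between two fixed chains is itself computable by a standard O(mn) dynamic program, which makes the contrast with the NP-hardness of the multi-chain alignment problem above concrete. A self-contained sketch on toy 3D chains:

```python
# Standard O(mn) dynamic program for the discrete Frechet distance
# between two polygonal chains, here given as toy 3D point lists.
from math import dist  # Euclidean distance (Python 3.8+)

def discrete_frechet(P, Q):
    m, n = len(P), len(Q)
    INF = float("inf")
    ca = [[INF] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            else:
                prev = min(ca[i-1][j] if i > 0 else INF,
                           ca[i][j-1] if j > 0 else INF,
                           ca[i-1][j-1] if i > 0 and j > 0 else INF)
                ca[i][j] = max(prev, d)   # bottleneck of the best coupling
    return ca[m-1][n-1]

if __name__ == "__main__":
    P = [(0, 0, 0), (1, 0, 0), (2, 1, 0)]
    Q = [(0, 1, 0), (1, 1, 0), (2, 2, 0)]
    print(discrete_frechet(P, Q))
```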

  5. Asymptotic analysis of online algorithms and improved scheme for the flow shop scheduling problem with release dates

    NASA Astrophysics Data System (ADS)

    Bai, Danyu

    2015-08-01

    This paper discusses the flow shop scheduling problem to minimise the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation is focused on the performance of two online algorithms based on the Shortest Processing Time among Available jobs rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale is sufficiently large. To further enhance the quality of the original solutions, the improvement scheme is provided for these algorithms. A new lower bound with performance guarantee is provided, and computational experiments show the effectiveness of these heuristics. Moreover, several results of the single-machine TQCT problem with release dates are also obtained for the deduction of the main theorem.

  6. Immune allied genetic algorithm for Bayesian network structure learning

    NASA Astrophysics Data System (ADS)

    Song, Qin; Lin, Feng; Sun, Wei; Chang, KC

    2012-06-01

    Bayesian network (BN) structure learning is an NP-hard problem. In this paper, we present an improved approach to enhance the efficiency of BN structure learning. To avoid the premature convergence of the traditional single-population genetic algorithm (GA), we propose an immune allied genetic algorithm (IAGA) in which multiple-population and allied strategies are introduced. Moreover, in the algorithm, we apply prior knowledge by injecting an immune operator into individuals, which can effectively prevent degeneration. To illustrate the effectiveness of the proposed technique, we present some experimental results.

  7. Site Partitioning for Redundant Arrays of Distributed Disks

    NASA Technical Reports Server (NTRS)

    Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.

    1996-01-01

    Redundant arrays of distributed disks (RADD) can be used in a distributed computing system or database system to provide recovery in the presence of disk crashes and temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites of a distributed storage system into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-hard. We then propose and evaluate several heuristic algorithms for finding approximate solutions. Simulation results show that significant reduction in remote parity update costs can be achieved by optimizing the site partitioning scheme.

  8. On Profit-Maximizing Pricing for the Highway and Tollbooth Problems

    NASA Astrophysics Data System (ADS)

    Elbassioni, Khaled; Raman, Rajiv; Ray, Saurabh; Sitters, René

    In the tollbooth problem on trees, we are given a tree T= (V,E) with n edges, and a set of m customers, each of whom is interested in purchasing a path on the graph. Each customer has a fixed budget, and the objective is to price the edges of T such that the total revenue made by selling the paths to the customers that can afford them is maximized. An important special case of this problem, known as the highway problem, is when T is restricted to be a line. For the tollbooth problem, we present an O(logn)-approximation, improving on the current best O(logm)-approximation. We also study a special case of the tollbooth problem, when all the paths that customers are interested in purchasing go towards a fixed root of T. In this case, we present an algorithm that returns a (1 - ɛ)-approximation, for any ɛ> 0, and runs in quasi-polynomial time. On the other hand, we rule out the existence of an FPTAS by showing that even for the line case, the problem is strongly NP-hard. Finally, we show that in the discount model, when we allow some items to be priced below zero to improve the overall profit, the problem becomes even APX-hard.

  9. Smell Detection Agent Based Optimization Algorithm

    NASA Astrophysics Data System (ADS)

    Vinod Chandra, S. S.

    2016-09-01

    In this paper, a novel nature-inspired optimization algorithm is presented in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and the subsequent iteration of the agents in resolving a path. The algorithm can be applied to path-based problems under different computational constraints, and an implementation can be treated as a shortest-path solver for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery as well as many practical optimization problems, and it can be extended to other shortest-path problems.

  10. Complex network problems in physics, computer science and biology

    NASA Astrophysics Data System (ADS)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics in order to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have become interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to obtain predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground-state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of a spin glass on the Bethe lattice at zero temperature, and we then apply this formalism to the K-SAT problem defined in Chapter 1. The phase transition that physicists study often corresponds to a change in the computational complexity of the corresponding computer science problem. Chapter 3 presents phase transitions which are specific to the problems discussed in Chapter 1, as well as known results for the K-SAT problem. We discuss the replica method and experimental evidence of replica symmetry breaking. The physics approach to hard problems is based on replica methods, which are difficult to understand. In Chapter 4 we develop novel methods for studying hard problems using methods similar to the message-passing techniques discussed in Chapter 2. Although we concentrated on the symmetric case, cavity methods show promise for generalizing our methods to the asymmetric case. As has been highlighted by John Hopfield, several key features of biological systems are not shared by physical systems. Although living entities follow the laws of physics and chemistry, the fact that organisms adapt and reproduce introduces an essential ingredient that is missing in the physical sciences. Many algorithms have been developed to extract information from networks. In Chapter 5 we apply polynomial algorithms such as minimum spanning tree to study and construct gene regulatory networks from experimental data. As future work we propose the use of algorithms such as min-cut/max-flow and Dijkstra's algorithm for understanding key properties of these networks.

  11. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute-force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.

  12. A Benders based rolling horizon algorithm for a dynamic facility location problem

    DOE PAGES

    Marufuzzaman,, Mohammad; Gedik, Ridvan; Roni, Mohammad S.

    2016-06-28

    This study presents a well-known capacitated dynamic facility location problem (DFLP) that satisfies the customer demand at a minimum cost by determining the time period for opening, closing, or retaining an existing facility in a given location. To solve this challenging NP-hard problem, this paper develops a unique hybrid solution algorithm that combines a rolling horizon algorithm with an accelerated Benders decomposition algorithm. Extensive computational experiments are performed on benchmark test instances to evaluate the hybrid algorithm’s efficiency and robustness in solving the DFLP problem. Computational results indicate that the hybrid Benders based rolling horizon algorithm consistently offers high quality feasible solutions in a much shorter computational time period than the standalone rolling horizon and accelerated Benders decomposition algorithms in the experimental range.

  13. Statistical physics of hard combinatorial optimization: Vertex cover problem

    NASA Astrophysics Data System (ADS)

    Zhao, Jin-Hua; Zhou, Hai-Jun

    2014-07-01

    Typical-case computation complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computation complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem of wide application, as an example to introduce the statistical physical methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. A reader unfamiliar with the field will be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to solving other optimization problems.

  14. Efficient bounding schemes for the two-center hybrid flow shop scheduling problem with removal times.

    PubMed

    Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly

    2014-01-01

    We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-Hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases. The first one is a constructive phase in which an initial feasible solution is provided, while the second phase is an improvement one. Intensive computational experiments have been conducted to confirm the good performance of the proposed procedures.

  16. Open shop scheduling problem to minimize total weighted completion time

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian

    2017-01-01

    A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.

  17. Polynomial-Time Approximation Algorithm for the Problem of Cardinality-Weighted Variance-Based 2-Clustering with a Given Center

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Motkova, A. V.

    2018-01-01

    A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.
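    Written out in one natural notation (with the given centre denoted c, the centroid of the other cluster written as ȳ(C₂), and the cardinalities |C₁|, |C₂| fixed as part of the input), the criterion reads:

```latex
F(C_1, C_2) = |C_1| \sum_{y \in C_1} \lVert y - c \rVert^2
            + |C_2| \sum_{y \in C_2} \lVert y - \bar{y}(C_2) \rVert^2 \to \min,
\qquad \bar{y}(C_2) = \frac{1}{|C_2|} \sum_{y \in C_2} y.
```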

  18. On the use of cartographic projections in visualizing phylogenetic tree space

    PubMed Central

    2010-01-01

    Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-Hard problem. The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger data sets. PMID:20529355

  19. Extended Islands of Tractability for Parsimony Haplotyping

    NASA Astrophysics Data System (ADS)

    Fleischer, Rudolf; Guo, Jiong; Niedermeier, Rolf; Uhlmann, Johannes; Wang, Yihui; Weller, Mathias; Wu, Xi

    Parsimony haplotyping is the problem of finding a smallest size set of haplotypes that can explain a given set of genotypes. The problem is NP-hard, and many heuristic and approximation algorithms as well as polynomial-time solvable special cases have been discovered. We propose improved fixed-parameter tractability results with respect to the parameter "size of the target haplotype set" k by presenting an O*(k^{4k})-time algorithm. This also applies to the practically important constrained case, where we can only use haplotypes from a given set. Furthermore, we show that the problem becomes polynomial-time solvable if the given set of genotypes is complete, i.e., contains all possible genotypes that can be explained by the set of haplotypes.

  20. On computations of variance, covariance and correlation for interval data

    NASA Astrophysics Data System (ADS)

    Kishida, Masako

    2017-02-01

    In many practical situations, the data on which statistical analysis is to be performed is only known with interval uncertainty. Different combinations of values from the interval data usually lead to different values of variance, covariance, and correlation. Hence, it is desirable to compute the endpoints of possible values of these statistics. This problem is, however, NP-hard in general. This paper shows that the problem of computing the endpoints of possible values of these statistics can be rewritten as the problem of computing skewed structured singular values ν, for which there exist feasible (polynomial-time) algorithms that compute reasonably tight bounds in most practical cases. This allows one to find tight intervals of the aforementioned statistics for interval data.
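    One reason the upper endpoint is the more tractable side: the sample variance is a convex quadratic form in the data vector, so its maximum over the box of intervals is attained at a corner. Enumerating all 2^n corners is therefore exact, though exponential, which is consistent with the NP-hardness of the general problem. A minimal sketch with made-up intervals:

```python
# Exact upper endpoint of the (population) variance over interval data
# by enumerating the 2^n corners of the box -- valid because variance
# is convex in each data point, so the maximum sits at a corner.
from itertools import product

def variance(xs):
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

def max_variance(intervals):
    return max(variance(corner) for corner in product(*intervals))

if __name__ == "__main__":
    data = [(1.0, 2.0), (3.0, 3.5), (0.0, 4.0)]   # interval measurements
    print(max_variance(data))
```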

  1. Computational complexity in entanglement transformations

    NASA Astrophysics Data System (ADS)

    Chitambar, Eric A.

    In physics, systems having three parts are typically much more difficult to analyze than those having just two. Even in classical mechanics, predicting the motion of three interacting celestial bodies remains an insurmountable challenge while the analogous two-body problem has an elementary solution. It is as if just by adding a third party, a fundamental change occurs in the structure of the problem that renders it unsolvable. In this thesis, we demonstrate how such an effect is likewise present in the theory of quantum entanglement. In fact, the complexity differences between two-party and three-party entanglement become quite conspicuous when comparing the difficulty in deciding what state changes are possible for these systems when no additional entanglement is consumed in the transformation process. We examine this entanglement transformation question and its variants in the language of computational complexity theory, a powerful subject that formalizes the concept of problem difficulty. Since deciding feasibility of a specified bipartite transformation is relatively easy, this task belongs to the complexity class P. On the other hand, for tripartite systems, we find the problem to be NP-Hard, meaning that its solution is at least as hard as the solution to some of the most difficult problems humans have encountered. One can then rigorously defend the assertion that a fundamental complexity difference exists between bipartite and tripartite entanglement since unlike the former, the full range of forms realizable by the latter is incalculable (assuming P≠NP). However, similar to the three-body celestial problem, when one examines a special subclass of the problem---invertible transformations on systems having at least one qubit subsystem---we prove that the problem can be solved efficiently. As a hybrid of the two questions, we find that the question of tripartite to bipartite transformations can be solved by an efficient randomized algorithm. Our results are obtained by encoding well-studied computational problems such as polynomial identity testing and tensor rank into questions of entanglement transformation. In this way, entanglement theory provides a physical manifestation of some of the most puzzling and abstract classical computation questions.

  2. Surfactant titration of nanoparticle-protein corona.

    PubMed

    Maiolo, Daniele; Bergese, Paolo; Mahon, Eugene; Dawson, Kenneth A; Monopoli, Marco P

    2014-12-16

    Nanoparticles (NP), when exposed to biological fluids, are coated by specific proteins that form the so-called protein corona. While some adsorbing proteins exchange with the surroundings on a short time scale, described as a "dynamic" corona, others with higher affinity and long-lived interaction with the NP surface form a "hard" corona (HC), which is believed to mediate NP interaction with cellular machineries. In-depth NP protein corona characterization is therefore a necessary step in understanding the relationship between surface layer structure and biological outcomes. In the present work, we evaluate the protein composition and stability over time and we systematically challenge the formed complexes with surfactants. Each challenge is characterized through different physicochemical measurements (dynamic light scattering, ζ-potential, and differential centrifugal sedimentation) alongside proteomic evaluation in titration type experiments (surfactant titration). 100 nm silicon oxide (Si) and 100 nm carboxylated polystyrene (PS-COOH) NPs cloaked by human plasma HC were titrated with 3-[(3-Cholamidopropyl) dimethylammonio]-1-propanesulfonate (CHAPS, zwitterionic), Triton X-100 (nonionic), sodium dodecyl sulfate (SDS, anionic), and dodecyltrimethylammonium bromide (DTAB, cationic) surfactants. Composition and density of HC together with size and ζ-potential of NP-HC complexes were tracked at each step after surfactant titration. Results on Si NP-HC complexes showed that SDS removes most of the HC, while DTAB induces NP agglomeration. Analogous results were obtained for PS NP-HC complexes. Interestingly, CHAPS and Triton X-100, thanks to similar surface binding preferences, enable selective extraction of apolipoprotein AI (ApoAI) from Si NP hard coronas, leaving unaltered the dispersion physicochemical properties. These findings indicate that surfactant titration can enable the study of NP-HC stability through surfactant variation and also selective separation of certain proteins from the HC. This approach thus has an immediate analytical value as well as potential applications in HC engineering.

  3. A hybrid genetic-simulated annealing algorithm for the location-inventory-routing problem considering returns under e-supply chain environment.

    PubMed

    Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business. Much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results on numerical examples show that the HGSAA outperforms a GA in computing time, solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions under an e-supply chain environment.

  4. Interval versions of statistical techniques with applications to environmental analysis, bioinformatics, and privacy in statistical databases

    NASA Astrophysics Data System (ADS)

    Kreinovich, Vladik; Longpre, Luc; Starks, Scott A.; Xiang, Gang; Beck, Jan; Kandathi, Raj; Nayak, Asis; Ferson, Scott; Hajagos, Janos

    2007-02-01

    In many areas of science and engineering, it is desirable to estimate statistical characteristics (mean, variance, covariance, etc.) under interval uncertainty. For example, we may want to use the measured values x(t) of a pollution level in a lake at different moments of time to estimate the average pollution level; however, we do not know the exact values x(t)--e.g., if one of the measurement results is 0, this simply means that the actual (unknown) value of x(t) can be anywhere between 0 and the detection limit (DL). We must, therefore, modify the existing statistical algorithms to process such interval data. Such a modification is also necessary to process data from statistical databases, where, in order to maintain privacy, we only keep interval ranges instead of the actual numeric data (e.g., a salary range instead of the actual salary). Most resulting computational problems are NP-hard--which means, crudely speaking, that in general, no computationally efficient algorithm can solve all particular cases of the corresponding problem. In this paper, we overview practical situations in which computationally efficient algorithms exist: e.g., situations when measurements are very accurate, or when all the measurements are done with one (or few) instruments. As a case study, we consider a practical problem from bioinformatics: to discover the genetic difference between the cancer cells and the healthy cells, we must process the measurements results and find the concentrations c and h of a given gene in cancer and in healthy cells. This is a particular case of a general situation in which, to estimate states or parameters which are not directly accessible by measurements, we must solve a system of equations in which coefficients are only known with interval uncertainty. We show that in general, this problem is NP-hard, and we describe new efficient algorithms for solving this problem in practically important situations.

  5. Statistical mechanics of the vertex-cover problem

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2003-10-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.
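
    As a concrete baseline for the kind of simple algorithm whose typical-case behavior such analyses address, the textbook maximal-matching heuristic gives a 2-approximation for VC (a generic sketch, independent of the replica machinery):

    ```python
    def vertex_cover_2approx(edges):
        """Greedily take both endpoints of a maximal matching; every edge
        is covered, and any optimal cover must contain at least one
        endpoint of each matched edge, hence the factor 2."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    print(sorted(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)])))  # [0, 1, 2, 3]
    ```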

  6. Coarse grained modeling of directed assembly to form functional nanoporous films

    NASA Astrophysics Data System (ADS)

    Al Khatib, Amir

    Coarse-grained (CG) simulations of polyethylene glycol (PEG) and polymethylsilsesquioxane nanoparticles (PMSSQ, referred to as NP) at different sizes and concentrations were performed using the Martini CG force field. The interactions between CG PEG and CG NP were parameterized from the chemical composition of each molecule based on the Martini force field. NP particles migrate to the surface of the substrate, in agreement with experimental results at a high temperature of 800 K. This demonstrates that a nanoparticle-polymer film can be directed to self-assemble into a systematic spatial pattern using the substrate surface energy as the key gating parameter. The model was validated by comparing molecular dynamics simulations with experimental data collected in a previous study. NP interactions with the substrate at low interaction energies, modeled with a Lennard-Jones potential, were able to direct the NPs to self-assemble in a hexagonal arrangement up to four layers above the substrate. This thesis established that substrate surface energy is a key gating parameter to direct the collective behavior of functional nanoparticles and form thin nanoporous films with spatially predetermined optical/dielectric constants.

  7. Accurate detection of hierarchical communities in complex networks based on nonlinear dynamical evolution

    NASA Astrophysics Data System (ADS)

    Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng

    2018-04-01

    One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a "game-change" type of approach to addressing the problem of community detection in complex networks.

  8. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. The proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) a mutation operation with a tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms from the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving this combinatorial optimization problem. It outperforms all the state-of-the-art algorithms on all benchmark problems in terms of both the ability to reach the optimal solution and computational time.
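
    For reference, standard (crisp) roulette-wheel selection is sketched below; the paper's fuzzy variant reshapes these selection probabilities with fuzzy membership functions and is not reproduced here:

    ```python
    import random

    def roulette_wheel(population, fitness_values):
        """Fitness-proportionate selection: an individual's chance of
        being picked equals its share of the total fitness."""
        total = sum(fitness_values)
        pick = random.uniform(0, total)
        cumulative = 0.0
        for individual, fit in zip(population, fitness_values):
            cumulative += fit
            if pick <= cumulative:
                return individual
        return population[-1]  # guard against floating-point rounding
    ```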

  9. Pathgroups, a dynamic data structure for genome reconstruction problems.

    PubMed

    Zheng, Chunfang

    2010-07-01

    Ancestral gene order reconstruction problems, including the median problem, quartet construction, small phylogeny, guided genome halving and genome aliquoting, are NP-hard. Available heuristics dedicated to each of these problems are computationally costly for even small instances. We present a data structure enabling rapid heuristic solution of all these ancestral genome reconstruction problems. A generic greedy algorithm with look-ahead, based on an automatically generated priority system, suffices for all the problems using this data structure. The efficiency of the algorithm is due to fast updating of the structure during run time and to the simplicity of the priority scheme. We illustrate with the first rapid algorithm for quartet construction and apply it to a set of yeast genomes to corroborate a recent gene sequence-based phylogeny. The software is available at http://albuquerque.bioinformatics.uottawa.ca/pathgroup/Quartet.html; supplementary data are available at Bioinformatics online.

  10. The Ordered Clustered Travelling Salesman Problem: A Hybrid Genetic Algorithm

    PubMed Central

    Ahmed, Zakir Hussain

    2014-01-01

    The ordered clustered travelling salesman problem is a variation of the usual travelling salesman problem in which a set of vertices (except the starting vertex) of the network is divided into some prespecified clusters. The objective is to find the least cost Hamiltonian tour in which vertices of any cluster are visited contiguously and the clusters are visited in the prespecified order. The problem is NP-hard, and it arises in practical transportation and sequencing problems. This paper develops a hybrid genetic algorithm using sequential constructive crossover, 2-opt search, and a local search for obtaining heuristic solutions to the problem. The efficiency of the algorithm has been examined against two existing algorithms for some asymmetric and symmetric TSPLIB instances of various sizes. The computational results show that the proposed algorithm is very effective in terms of solution quality and computational time. Finally, we present solutions to some more symmetric TSPLIB instances. PMID:24701148
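
    The 2-opt component is the easiest piece to make concrete (a generic sketch for a plain symmetric tour; it does not enforce the cluster-contiguity and cluster-order constraints that the paper's hybrid must respect):

    ```python
    def two_opt(tour, dist):
        """Repeatedly reverse tour segments while that shortens the tour;
        stops at a 2-opt local optimum. `dist[i][j]` is the symmetric
        edge cost, `tour` a list of city indices."""
        improved = True
        n = len(tour)
        while improved:
            improved = False
            for i in range(1, n - 1):
                for j in range(i + 1, n):
                    a, b = tour[i - 1], tour[i]
                    c, d = tour[j], tour[(j + 1) % n]
                    # Replace edges (a,b) and (c,d) by (a,c) and (b,d).
                    if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                        tour[i:j + 1] = reversed(tour[i:j + 1])
                        improved = True
        return tour
    ```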

  11. Projected power iteration for network alignment

    NASA Astrophysics Data System (ADS)

    Onaran, Efe; Villar, Soledad

    2017-08-01

    The network alignment problem asks for the best correspondence between two given graphs, so that the largest possible number of edges are matched. This problem appears in many scientific problems (like the study of protein-protein interactions) and is very closely related to the quadratic assignment problem, which has graph isomorphism, traveling salesman and minimum bisection problems as particular cases. The graph matching problem is NP-hard in general. However, under some restrictive models for the graphs, algorithms can approximate the alignment efficiently. In that spirit, the recent work by Feizi and collaborators introduces EigenAlign, a fast spectral method with convergence guarantees for Erdős-Rényi graphs. In this work we propose the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign. We numerically show it improves the recovery rates of EigenAlign, and we describe the theory that may be used to provide performance guarantees for Projected Power Alignment.
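
    The general scheme can be sketched as follows (a hedged illustration of the projected-power-iteration idea, not necessarily the paper's exact update rule; A and B are assumed to be symmetric adjacency matrices of equal size):

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def projected_power_alignment(A, B, iters=30):
        """Alternate a power step X = A P B (a gradient-like step for the
        alignment objective trace(A P B P^T)) with a projection of X onto
        the permutation matrices via maximum-weight bipartite matching."""
        n = A.shape[0]
        P = np.eye(n)
        for _ in range(iters):
            X = A @ P @ B
            rows, cols = linear_sum_assignment(-X)  # negate to maximize
            P = np.zeros((n, n))
            P[rows, cols] = 1.0
        return P
    ```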

  12. Gaussian Mean Field Lattice Gas

    NASA Astrophysics Data System (ADS)

    Scoppola, Benedetto; Troiani, Alessio

    2018-03-01

    We study rigorously a lattice gas version of the Sherrington-Kirkpatrick spin glass model. In the discrete optimization literature this problem is known as unconstrained binary quadratic programming, and it is NP-hard. We prove that the fluctuations of the ground state energy tend to vanish in the thermodynamic limit, and we give a lower bound on this ground state energy. We then present a heuristic algorithm, based on a probabilistic cellular automaton, which appears able to find configurations with energy very close to the minimum, even for quite large instances.

  13. Qutrit witness from the Grothendieck constant of order four

    NASA Astrophysics Data System (ADS)

    Diviánszky, Péter; Bene, Erika; Vértesi, Tamás

    2017-07-01

    In this paper, we prove that K_G(3)

  14. Analog Approach to Constraint Satisfaction Enabled by Spin Orbit Torque Magnetic Tunnel Junctions.

    PubMed

    Wijesinghe, Parami; Liyanagedera, Chamika; Roy, Kaushik

    2018-05-02

    Boolean satisfiability (k-SAT) is an NP-complete problem (for k ≥ 3) that constitutes one of the hardest classes of constraint satisfaction problems. In this work, we provide a proof-of-concept hardware-based analog k-SAT solver built using Magnetic Tunnel Junctions (MTJs). The inherent physics of MTJs, enhanced by device-level modifications, is harnessed here to emulate the intricate dynamics of an analog satisfiability (SAT) solver. In the presence of thermal noise, the MTJ-based system can successfully solve Boolean satisfiability problems. Most importantly, our results show that the proposed MTJ-based hardware SAT solver is capable of finding a solution to a significant fraction (at least 85%) of hard 3-SAT problems, within a time that scales polynomially with the number of variables (<50).

  15. Weighted LCS

    NASA Astrophysics Data System (ADS)

    Amir, Amihood; Gotthilf, Zvi; Shalom, B. Riva

    The Longest Common Subsequence (LCS) of two strings A and B is a well studied problem having a wide range of applications. When each symbol of the input strings is assigned a positive weight the problem becomes the Heaviest Common Subsequence (HCS) problem. In this paper we consider a different version of weighted LCS on Position Weight Matrices (PWM). The Position Weight Matrix was introduced as a tool to handle a set of sequences that are not identical, yet have many local similarities. Such a weighted sequence is a 'statistical image' of this set where we are given the probability of every symbol's occurrence at every text location. We consider two possible definitions of LCS on PWM. For the first, we solve the weighted LCS problem of z sequences in time O(zn^{z+1}). For the second, we prove NP-hardness and provide an approximation algorithm.
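
    For orientation, the classic unweighted dynamic program that both weighted variants generalize (textbook recurrence, nothing paper-specific):

    ```python
    def lcs_length(a, b):
        """O(|a||b|) dynamic program: L[i][j] is the LCS length of the
        prefixes a[:i] and b[:j]."""
        L = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                L[i][j] = L[i - 1][j - 1] + 1 if x == y \
                    else max(L[i - 1][j], L[i][j - 1])
        return L[len(a)][len(b)]

    print(lcs_length("AGGTAB", "GXTXAYB"))  # 4 ("GTAB")
    ```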

  16. Diameter-Constrained Steiner Tree

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Lin, Guohui; Xue, Guoliang

    Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D_0, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D_0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves in the tree. Such a problem is called the minimum cost diameter-constrained Steiner tree problem. This problem is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.

  17. A computational analysis of lower bounds for the economic lot sizing problem in remanufacturing with separate setups

    NASA Astrophysics Data System (ADS)

    Aishah Syed Ali, Sharifah

    2017-09-01

    This paper considers the economic lot sizing problem in remanufacturing with separate setups (ELSRs), where remanufactured and new products are produced on dedicated production lines. Since this problem is NP-hard in general, and standard formulations lead to computationally inefficient, low-quality solutions, we present (a) a multicommodity formulation and (b) a strengthened formulation based on a priori addition of valid inequalities in the space of the original variables, which are then compared with the Wagner-Whitin based formulation available in the literature. Computational experiments on a large number of test data sets are performed to evaluate the different approaches. The numerical results show that our strengthened formulation outperforms all the other tested approaches in terms of linear relaxation bounds. Finally, we conclude with future research directions.

  18. An Integrated Method Based on PSO and EDA for the Max-Cut Problem.

    PubMed

    Lin, Geng; Guan, Jian

    2016-01-01

    The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithms (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and estimation of distribution algorithms. To enhance the performance of PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best performing algorithms, PSO-EDA finds very competitive results in terms of solution quality.
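
    Fast local search for max-cut is typically of the one-flip variety sketched below (an illustrative baseline under that assumption, not the paper's exact procedure):

    ```python
    def max_cut_local_search(n, edges, side):
        """Move a vertex to the other side whenever that increases the
        cut; stops at a one-flip local optimum. `side[v]` is 0 or 1,
        `edges` is a list of (u, v, weight) triples."""
        adj = [[] for _ in range(n)]
        for u, v, w in edges:
            adj[u].append((v, w))
            adj[v].append((u, w))
        improved = True
        while improved:
            improved = False
            for v in range(n):
                # Flipping v turns its uncut edges into cut ones and vice versa.
                gain = sum(w if side[v] == side[u] else -w for u, w in adj[v])
                if gain > 0:
                    side[v] = 1 - side[v]
                    improved = True
        return side
    ```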

  19. An ILP based memetic algorithm for finding minimum positive influence dominating sets in social networks

    NASA Astrophysics Data System (ADS)

    Lin, Geng; Guan, Jian; Feng, Huibin

    2018-06-01

    The positive influence dominating set problem is a variant of the minimum dominating set problem and has many applications in social networks. It is NP-hard and has received increasing attention. Various methods have been proposed to solve the positive influence dominating set problem. However, most of the existing work has focused on greedy algorithms, and the solution quality needs to be improved. In this paper, we formulate the minimum positive influence dominating set problem as an integer linear program (ILP) and propose an ILP-based memetic algorithm (ILPMA) for solving the problem. The ILPMA integrates a greedy randomized adaptive construction procedure, a crossover operator, a repair operator, and a tabu search procedure. The performance of ILPMA is validated on nine real-world social networks with up to 36,692 nodes. The results show that ILPMA significantly improves the solution quality and is robust.
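
    Under the common definition of a positive influence dominating set, every vertex must have at least half of its neighbors selected; assuming that definition (the abstract does not restate it), the ILP is compact. A sketch using the PuLP modeling library:

    ```python
    import math
    import pulp

    def pids_ilp(adj):
        """Minimize the number of selected vertices subject to: every
        vertex v has at least ceil(deg(v)/2) selected neighbors.
        `adj` maps each vertex to its list of neighbors."""
        prob = pulp.LpProblem("PIDS", pulp.LpMinimize)
        x = {v: pulp.LpVariable(f"x_{v}", cat="Binary") for v in adj}
        prob += pulp.lpSum(x.values())  # objective: size of the set
        for v, nbrs in adj.items():
            prob += pulp.lpSum(x[u] for u in nbrs) >= math.ceil(len(nbrs) / 2)
        prob.solve()
        return [v for v in adj if x[v].value() >= 0.5]
    ```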

  20. A Hybrid Genetic-Simulated Annealing Algorithm for the Location-Inventory-Routing Problem Considering Returns under E-Supply Chain Environment

    PubMed Central

    Guo, Hao; Fu, Jing

    2013-01-01

    Facility location, inventory control, and vehicle route scheduling are critical and closely related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects, so it can reenter sales channels after a simple repackaging process. Addressing this problem in e-commerce logistics systems, we formulate a location-inventory-routing model with returns of non-defective goods. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a GA in computing time, solution quality, and computing stability. The proposed model can help managers make the right decisions in an e-supply chain environment. PMID:24489489

  1. Honey bee-inspired algorithms for SNP haplotype reconstruction problem

    NASA Astrophysics Data System (ADS)

    PourkamaliAnaraki, Maryam; Sadeghi, Mehdi

    2016-03-01

    Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been considerable interest in this field because haplotypes have been shown to contain promising data for disease association research. Haplotype reconstruction in the Minimum Error Correction model is proved to be an NP-hard problem. Therefore, several methods such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches have been proposed in order to solve this problem in appropriate time. In this paper, we focus on various evolutionary clustering techniques and aim to find an efficient technique for solving the haplotype reconstruction problem. Our experiments indicate that clustering methods inspired by the behaviour of honey bee colonies in nature, specifically the bees algorithm and artificial bee colony methods, are expected to yield more efficient solutions. An application implementing the methods is available at http://www.bioinf.cs.ipm.ir/software/haprs/

  2. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator

    PubMed Central

    Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization, based on the survival-of-the-fittest idea. These methods do not guarantee optimal solutions; however, they usually give good approximations in reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. A genetic algorithm depends on its selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. The approach is based on path representation, which is the most natural way to represent a legal tour. Computational results are reported for some traditional path representation methods, such as partially mapped and order crossovers, along with the new cycle crossover operator on some benchmark TSPLIB instances, and improvements are found. PMID:29209364

  3. Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator.

    PubMed

    Hussain, Abid; Muhammad, Yousaf Shad; Nauman Sajid, M; Hussain, Ijaz; Mohamd Shoukry, Alaa; Gani, Showkat

    2017-01-01

    Genetic algorithms are evolutionary techniques used for optimization, based on the survival-of-the-fittest idea. These methods do not guarantee optimal solutions; however, they usually give good approximations in reasonable time. Genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. A genetic algorithm depends on its selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. The approach is based on path representation, which is the most natural way to represent a legal tour. Computational results are reported for some traditional path representation methods, such as partially mapped and order crossovers, along with the new cycle crossover operator on some benchmark TSPLIB instances, and improvements are found.
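
    For contrast with the modified operator (which is not reproduced here), the classical cycle crossover (CX) on permutations works as follows:

    ```python
    def cycle_crossover(p1, p2):
        """Classical CX: partition the positions into cycles induced by
        the two parents, then copy alternate cycles from p1 and p2, so
        every city keeps a position it held in one of the parents."""
        n = len(p1)
        child = [None] * n
        pos_in_p1 = {city: i for i, city in enumerate(p1)}
        take_from_p1 = True
        for start in range(n):
            if child[start] is not None:
                continue
            i = start
            while child[i] is None:  # trace one complete cycle
                child[i] = p1[i] if take_from_p1 else p2[i]
                i = pos_in_p1[p2[i]]
            take_from_p1 = not take_from_p1
        return child

    print(cycle_crossover([1, 2, 3, 4, 5, 6, 7, 8],
                          [8, 5, 2, 1, 3, 6, 4, 7]))
    # [1, 5, 2, 4, 3, 6, 7, 8]
    ```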

  4. Solving optimization problems by the public goods game

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto

    2017-09-01

    We introduce a method based on the Public Goods Game for solving optimization tasks. In particular, we focus on the Traveling Salesman Problem, an NP-hard problem whose search space grows exponentially with the number of cities. The proposed method considers a population whose agents are provided with a random solution to the given problem. Agents then interact by playing the Public Goods Game, using the fitness of their solutions as the currency of the game. Notably, agents with better solutions provide higher contributions, while those with worse ones tend to imitate the solutions of richer agents to increase their fitness. Numerical simulations show that the proposed method computes exact solutions, as well as suboptimal ones, in the considered search spaces. As a result, beyond proposing a new heuristic for combinatorial optimization problems, our work aims to highlight the potential of evolutionary game theory beyond its current horizons.

  5. Complexity and approximability for a problem of intersecting of proximity graphs with minimum number of equal disks

    NASA Astrophysics Data System (ADS)

    Kobylkin, Konstantin

    2016-10-01

    Computational complexity and approximability are studied for the problem of intersecting a set of straight line segments with the smallest cardinality set of disks of fixed radii r > 0, where the set of segments forms a straight line embedding of a possibly non-planar geometric graph. This problem arises in physical network security analysis for telecommunication, wireless and road networks represented by specific geometric graphs defined by Euclidean distances between their vertices (proximity graphs). It can be formulated as the well-known Hitting Set problem over a set of Euclidean r-neighbourhoods of segments. Although of interest, the computational complexity and approximability of Hitting Set over such structured sets of geometric objects have not received much attention in the literature. Strong NP-hardness of the problem is reported for special classes of proximity graphs, namely Delaunay triangulations, some of their connected subgraphs, half-θ6 graphs and non-planar unit disk graphs, and APX-hardness is given for non-planar geometric graphs at different scales of r with respect to the longest graph edge length. A simple constant factor approximation algorithm is presented for the case where r is at the same scale as the longest edge length.

  6. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^{(1-ɛ)/2} lower bound, for any ɛ > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  7. Spatially selective Au nanoparticle growth in laser-quality glass controlled by UV-induced phosphate-chain cross-linkage

    NASA Astrophysics Data System (ADS)

    Sigaev, Vladimir N.; Savinkov, Vitaly I.; Lotarev, Sergey V.; Shakhgildyan, Georgiy Yu; Lorenzi, Roberto; Paleari, Alberto

    2013-06-01

    Herein we describe how UV excitation of localized electronic states in phosphate glasses can activate structural rearrangements that influence the kinetics of Au nanoparticle (NP) thermal growth in Au-doped glass. The results suggest a novel strategy to address the problem of controlling nano-assembly processes of metal NP patterns in fully inorganic and chemically stable hard materials, such as laser-quality glasses. We show that the mechanism is promoted by opening and subsequent cross-linkage of phosphate chains under UV excitation of non-bridging groups in the amorphous network of the glass, with a consequent modification of Au diffusion and metal NP growth. Importantly, the micro-Raman mapping of the UV-induced modifications demonstrates that the process is restricted within the beam waist region of the focused UV laser beam. This fact is consistent with the need for more than one excitation event, close in time and in space, in order to promote structural cross-linkage and Au diffusion confinement. The stability of the photo-induced modifications makes it possible to design new metal patterning approaches for the fabrication of three-dimensional metal structures in laser-quality materials for high-power nonlinear applications.

  8. Spatially selective Au nanoparticle growth in laser-quality glass controlled by UV-induced phosphate-chain cross-linkage.

    PubMed

    Sigaev, Vladimir N; Savinkov, Vitaly I; Lotarev, Sergey V; Shakhgildyan, Georgiy Yu; Lorenzi, Roberto; Paleari, Alberto

    2013-06-07

    Herein we describe how UV excitation of localized electronic states in phosphate glasses can activate structural rearrangements that influence the kinetics of Au nanoparticle (NP) thermal growth in Au-doped glass. The results suggest a novel strategy to address the problem of controlling nano-assembly processes of metal NP patterns in fully inorganic and chemically stable hard materials, such as laser-quality glasses. We show that the mechanism is promoted by opening and subsequent cross-linkage of phosphate chains under UV excitation of non-bridging groups in the amorphous network of the glass, with a consequent modification of Au diffusion and metal NP growth. Importantly, the micro-Raman mapping of the UV-induced modifications demonstrates that the process is restricted within the beam waist region of the focused UV laser beam. This fact is consistent with the need for more than one excitation event, close in time and in space, in order to promote structural cross-linkage and Au diffusion confinement. The stability of the photo-induced modifications makes it possible to design new metal patterning approaches for the fabrication of three-dimensional metal structures in laser-quality materials for high-power nonlinear applications.

  9. A linear programming approach to max-sum problem: a review.

    PubMed

    Werner, Tomás

    2007-07-01

    The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.

  10. An iterative method for tri-level quadratic fractional programming problems using fuzzy goal programming approach

    NASA Astrophysics Data System (ADS)

    Kassa, Semu Mitiku; Tsegay, Teklay Hailay

    2017-08-01

    Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of the hierarchy. Such problems are common in management, engineering design and decision making in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm has been proposed by applying a fuzzy goal programming approach and by reformulating the fractional constraints as equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. The numerical results on various illustrative examples demonstrate that the proposed algorithm is promising and can also be used to solve larger as well as n-level problems of similar structure.

  11. Robust optimization with transiently chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Sumi, R.; Molnár, B.; Ercsey-Ravasz, M.

    2014-05-01

    Efficiently solving hard optimization problems has been a strong motivation for progress in analog computing. In a recent study we presented a continuous-time dynamical system for solving the NP-complete Boolean satisfiability (SAT) problem, with a one-to-one correspondence between its stable attractors and the SAT solutions. While physical implementations could offer great efficiency, the transiently chaotic dynamics raises the question of operability in the presence of noise, unavoidable on analog devices. Here we show that the probability of finding solutions is robust to noise intensities well above those present on real hardware. We also developed a cellular neural network model realizable with analog circuits, which tolerates even larger noise intensities. These methods represent an opportunity for robust and efficient physical implementations.

  12. Development a heuristic method to locate and allocate the medical centers to minimize the earthquake relief operation time.

    PubMed

    Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan

    2013-01-01

    Location-allocation is a combinatorial optimization problem and is NP-hard. Due to the complexity of the problem, solution methods must therefore shift from exact to heuristic or metaheuristic approaches. Locating medical centers and allocating the injured of an earthquake to them is highly important in earthquake disaster management, so developing a proper method will reduce relief operation time and consequently decrease the number of fatalities. This paper presents the development of a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem using the abilities of Geographic Information Systems (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm is used to optimize the resource allocation. The final outcome of the implemented method includes the spatial locations of the new required medical centers. The method also calculates how many of the injured at each demand point should be taken to each of the existing and new medical centers. The results of the proposed method showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation, when injured people have to be taken to medical centers in a reasonable time.

  13. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help solve this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be viewed as delivery cost or time consumption. The MDVSP is NP-hard and cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithms (EA), the one-stage approach (OSA), the two-phase heuristic method (TPHM), the tabu search algorithm (TSA), the genetic algorithm (GA) and the hierarchical multiplex structure (HIMS). Most of the methods mentioned above are time-consuming and risk getting trapped in local optima. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP instances.

  14. A local search for a graph clustering problem

    NASA Astrophysics Data System (ADS)

    Navrotskaya, Anna; Il'ev, Victor

    2016-10-01

    In clustering problems one has to partition a given set of objects (a data set) into subsets (called clusters) taking into consideration only the similarity of the objects. One of the most intuitive formalizations of clustering is graph clustering, that is, grouping the vertices of a graph into clusters taking into consideration the edge structure of the graph, whose vertices are objects and whose edges represent similarities between the objects. In the graph k-clustering problem the number of clusters does not exceed k, and the goal is to minimize the number of edges between clusters plus the number of missing edges within clusters. This problem is NP-hard for any k ≥ 2. We propose a polynomial time (2k-1)-approximation algorithm for graph k-clustering. We then apply a local search procedure to the feasible solution found by this algorithm and report experimental results for the resulting heuristics.

  15. Target Coverage in Wireless Sensor Networks with Probabilistic Sensors

    PubMed Central

    Shan, Anxing; Xu, Xianghua; Cheng, Zongmao

    2016-01-01

    Sensing coverage is a fundamental problem in wireless sensor networks (WSNs), which has attracted considerable attention. Conventional research on this topic focuses on the 0/1 coverage model, which is only a coarse approximation of the practical sensing model. In this paper, we study the target coverage problem, where the objective is to find the least number of sensor nodes in randomly-deployed WSNs based on the probabilistic sensing model. We analyze the joint detection probability of a target with multiple sensors. Based on the theoretical analysis of the detection probability, we formulate the minimum ϵ-detection coverage problem. We prove that the minimum ϵ-detection coverage problem is NP-hard and present an approximation algorithm called the Probabilistic Sensor Coverage Algorithm (PSCA) with provable approximation ratios. To evaluate our design, we analyze the performance of PSCA theoretically and also perform extensive simulations to demonstrate the effectiveness of our proposed algorithm. PMID:27618902

  16. A constraint optimization based virtual network mapping method

    NASA Astrophysics Data System (ADS)

    Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen

    2013-03-01

    The virtual network mapping problem, which maps different virtual networks onto a shared substrate network, is extremely challenging. This paper proposes a constraint optimization based mapping method for solving the virtual network mapping problem. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for solving the two phases, respectively. The node mapping algorithm follows a greedy strategy and mainly considers two factors: the available resources supplied by the nodes and the distance between the nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs very well.

  17. Restarting and recentering genetic algorithm variations for DNA fragment assembly: The necessity of a multi-strategy approach.

    PubMed

    Hughes, James Alexander; Houghten, Sheridan; Ashlock, Daniel

    2016-12-01

    DNA fragment assembly, an NP-hard problem, is one of the major steps in DNA sequencing. Multiple strategies have been used for this problem, including greedy graph-based algorithms, de Bruijn graphs, and the overlap-layout-consensus approach. This study focuses on the overlap-layout-consensus approach. Heuristics and computational intelligence methods are combined to exploit their respective benefits. These algorithm combinations were able to produce high quality results, surpassing the best results obtained by a number of competitive algorithms specially designed and tuned for this problem on thirteen of sixteen popular benchmarks. This work also reinforces the necessity of using multiple search strategies, as it is clearly observed that algorithm performance depends on the problem instance; without a deeper look into many searches, top solutions could be missed entirely.

  18. Solving Set Cover with Pairs Problem using Quantum Annealing

    NASA Astrophysics Data System (ADS)

    Cao, Yudong; Jiang, Shuxian; Perouli, Debbie; Kais, Sabre

    2016-09-01

    Here we consider using quantum annealing to solve Set Cover with Pairs (SCP), an NP-hard combinatorial optimization problem that plays an important role in networking, computational biology, and biochemistry. We show an explicit construction of Ising Hamiltonians whose ground states encode the solution of SCP instances. We numerically simulate the time-dependent Schrödinger equation in order to test the performance of quantum annealing for random instances and compare it with that of simulated annealing. We also discuss explicit embedding strategies for realizing our Hamiltonian construction on the D-Wave type restricted Ising Hamiltonian based on Chimera graphs. Our embedding on the Chimera graph preserves the structure of the original SCP instance and, in particular, the embedding for general complete bipartite graphs and logical disjunctions may be of broader use than the specific problem we deal with.

  19. Influence of hardness on the bioavailability of silver to a freshwater snail after waterborne exposure to silver nitrate and silver nanoparticles.

    PubMed

    Stoiber, Tasha; Croteau, Marie-Noële; Römer, Isabella; Tejamaya, Mila; Lead, Jamie R; Luoma, Samuel N

    2015-01-01

    The release of Ag nanoparticles (AgNPs) into the aquatic environment is likely, but the influence of water chemistry on their impacts and fate remains unclear. Here, we characterize the bioavailability of Ag from AgNO3 and from AgNPs capped with polyvinylpyrrolidone (PVP AgNP) and thiolated polyethylene glycol (PEG AgNP) in the freshwater snail, Lymnaea stagnalis, after short waterborne exposures. Results showed that water hardness, AgNP capping agents, and metal speciation affected the uptake rate of Ag from AgNPs. Comparison of the results from organisms of similar weight showed that water hardness affected the uptake of Ag from AgNPs, but not that from AgNO3. Transformation (dissolution and aggregation) of the AgNPs was also influenced by water hardness and the capping agent. Bioavailability of Ag from AgNPs was, in turn, correlated to these physical changes. Water hardness increased the aggregation of AgNPs, especially for PEG AgNPs, reducing the bioavailability of Ag from PEG AgNPs to a greater degree than from PVP AgNPs. Higher dissolved Ag concentrations were measured for the PVP AgNPs (15%) compared to PEG AgNPs (3%) in moderately hard water, enhancing Ag bioavailability of the former. Multiple drivers of bioavailability yielded differences in Ag influx between very hard and deionized water where the uptake rate constants (k_uw, L g^-1 d^-1 ± SE) varied from 3.1 ± 0.7 to 0.2 ± 0.01 for PEG AgNPs and from 2.3 ± 0.02 to 1.3 ± 0.01 for PVP AgNPs. Modeling bioavailability of Ag from NPs revealed that Ag influx into L. stagnalis comprised uptake from the NPs themselves and from newly dissolved Ag.

  20. Influence of hardness on the bioavailability of silver to a freshwater snail after waterborne exposure to silver nitrate and silver nanoparticles

    USGS Publications Warehouse

    Stoiber, Tasha L.; Croteau, Marie-Noele; Romer, Isabella; Tejamaya, Mila; Lead, Jamie R.; Luoma, Samuel N.

    2015-01-01

    The release of Ag nanoparticles (AgNPs) into the aquatic environment is likely, but the influence of water chemistry on their impacts and fate remains unclear. Here, we characterize the bioavailability of Ag from AgNO3 and from AgNPs capped with polyvinylpyrrolidone (PVP AgNP) and thiolated polyethylene glycol (PEG AgNP) in the freshwater snail, Lymnaea stagnalis, after short waterborne exposures. Results showed that water hardness, AgNP capping agents, and metal speciation affected the uptake rate of Ag from AgNPs. Comparison of the results from organisms of similar weight showed that water hardness affected the uptake of Ag from AgNPs, but not that from AgNO3. Transformation (dissolution and aggregation) of the AgNPs was also influenced by water hardness and the capping agent. Bioavailability of Ag from AgNPs was, in turn, correlated to these physical changes. Water hardness increased the aggregation of AgNPs, especially for PEG AgNPs, reducing the bioavailability of Ag from PEG AgNPs to a greater degree than from PVP AgNPs. Higher dissolved Ag concentrations were measured for the PVP AgNPs (15%) compared to PEG AgNPs (3%) in moderately hard water, enhancing Ag bioavailability of the former. Multiple drivers of bioavailability yielded differences in Ag influx between very hard and deionized water where the uptake rate constants (k_uw, L g^-1 d^-1 ± SE) varied from 3.1 ± 0.7 to 0.2 ± 0.01 for PEG AgNPs and from 2.3 ± 0.02 to 1.3 ± 0.01 for PVP AgNPs. Modeling bioavailability of Ag from NPs revealed that Ag influx into L. stagnalis comprised uptake from the NPs themselves and from newly dissolved Ag.

  1. Automatically Generated Algorithms for the Vertex Coloring Problem

    PubMed Central

    Contreras Bolton, Carlos; Gatica, Gustavo; Parada, Víctor

    2013-01-01

    The vertex coloring problem is a classical problem in combinatorial optimization that consists of assigning a color to each vertex of a graph such that no adjacent vertices share the same color, minimizing the number of colors used. Despite the various practical applications of this problem, its NP-hardness still represents a computational challenge. Some of the best computational results obtained for this problem come from hybridizing the various known heuristics. Automatically searching the space formed by combining these techniques for the most adequate combination has received less attention. In this paper, we propose exploring the heuristics space for the vertex coloring problem using evolutionary algorithms. We automatically generate three new algorithms by combining elementary heuristics. To evaluate the new algorithms, a computational experiment was performed that compared them numerically with existing heuristics. The obtained algorithms present an average relative error of 29.97%, while four other heuristics selected from the literature present a 59.73% error, on 29 of the more difficult instances in the DIMACS benchmark. PMID:23516506
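
    One of the elementary heuristics such generated algorithms typically combine is sequential greedy coloring (a generic sketch; the largest-degree-first default order is one common choice, not necessarily the paper's):

    ```python
    def greedy_coloring(adj, order=None):
        """Visit vertices in the given order and assign each the smallest
        color not used by an already-colored neighbor. `adj` maps each
        vertex to its list of neighbors."""
        color = {}
        for v in (order or sorted(adj, key=lambda u: -len(adj[u]))):
            taken = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in taken:
                c += 1
            color[v] = c
        return color
    ```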

  2. Extremal Optimization for Quadratic Unconstrained Binary Problems

    NASA Astrophysics Data System (ADS)

    Boettcher, S.

    We present an implementation of τ-EO for quadratic unconstrained binary optimization (QUBO) problems. To this end, we transform QUBO from its conventional Boolean representation into a spin glass with a random external field on each site. These fields tend to be rather large compared to the typical coupling, presenting EO with a challenging two-scale problem: exploring smaller differences in couplings effectively while sufficiently aligning with those strong external fields. However, we also find a simple solution to that problem, which indicates that those external fields apparently tilt the energy landscape to such a degree that global minima become easier to find than those of spin glasses without (or with very small) fields. We explore the impact of the weight distributions of the QUBO formulations found in the operations research literature and analyze their meaning in spin-glass language. This is significant because QUBO problems are considered among the main contenders for NP-hard problems that could be solved efficiently on a quantum computer such as the D-Wave machine.
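
    The Boolean-to-spin change of variables alluded to above is mechanical, and it is exactly what generates the site-dependent external fields (a generic derivation via x_i = (1 + s_i)/2, independent of any particular QUBO weight distribution):

    ```python
    import numpy as np

    def qubo_to_ising(Q):
        """Rewrite E(x) = x^T Q x with x in {0,1}^n as an Ising energy
        E(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i + offset,
        s in {-1,+1}^n, using x_i = (1 + s_i)/2 and x_i^2 = x_i.
        The fields h_i are the 'random external field on each site'."""
        Q = np.asarray(Q, dtype=float)
        S = (Q + Q.T) / 2.0              # symmetrized off-diagonal couplings
        np.fill_diagonal(S, 0.0)
        lin = np.diag(Q).copy()          # linear terms sit on the diagonal
        J = np.triu(S, 1) / 2.0
        h = lin / 2.0 + S.sum(axis=1) / 2.0
        offset = lin.sum() / 2.0 + np.triu(S, 1).sum() / 2.0
        return J, h, offset
    ```

    Note how each coupling Q_ij leaks a term into both h_i and h_j, so the fields accumulate contributions from n couplings each; this is consistent with the abstract's observation that the fields tend to dwarf the typical coupling.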

  3. Entanglement and the fermion sign problem in auxiliary field quantum Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Broecker, Peter; Trebst, Simon

    2016-08-01

    Quantum Monte Carlo simulations of fermions are hampered by the notorious sign problem whose most striking manifestation is an exponential growth of sampling errors with the number of particles. With the sign problem known to be an NP-hard problem and any generic solution thus highly elusive, the Monte Carlo sampling of interacting many-fermion systems is commonly thought to be restricted to a small class of model systems for which a sign-free basis has been identified. Here we demonstrate that entanglement measures, in particular the so-called Rényi entropies, can intrinsically exhibit a certain robustness against the sign problem in auxiliary-field quantum Monte Carlo approaches and possibly allow for the identification of global ground-state properties via their scaling behavior even in the presence of a strong sign problem. We corroborate these findings via numerical simulations of fermionic quantum phase transitions of spinless fermions on the honeycomb lattice at and below half filling.

  4. Optimizing a realistic large-scale frequency assignment problem using a new parallel evolutionary approach

    NASA Astrophysics Data System (ADS)

    Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.

    2011-08-01

    This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here focuses on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as the frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real-data FAP instances are very difficult to solve due to the NP-hard nature of the problem; therefore, an efficient parallel approach that makes the most of different evolutionary strategies is a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, the results prove that the proposed approach obtains very high-quality solutions for the FAP and beats all previously published results.

  5. Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control

    NASA Astrophysics Data System (ADS)

    Kamyar, Reza

    In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors, and analyze systems with 100+ dimensional state-space. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control such as Hinfinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.

  6. Ant colony optimization for solving university facility layout problem

    NASA Astrophysics Data System (ADS)

    Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin

    2013-04-01

    The Quadratic Assignment Problem (QAP) is classified as NP-hard. It has been used to model many problems in several areas such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Travelling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristics and metaheuristic approaches to solve QAP instances. The QAP is widely applied to facility layout problems (FLP). In this paper we use the QAP to model a university facility layout problem, in which 8 facilities must be assigned to 8 locations. Hence we have modeled a QAP instance with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the product of flows and distances is minimized. Flow is the movement from one facility to another, whereas distance is measured between the locations of the facilities. For this instance, the objective of the QAP is to minimize the total walking (flow) of lecturers between their destinations (distance).
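
    The QAP objective itself is a two-line computation, and an n = 8 instance is small enough to check heuristics against exhaustive enumeration (an illustrative sketch; the flow and distance matrices of the actual case study are not reproduced):

    ```python
    from itertools import permutations

    def qap_cost(flow, dist, assign):
        """Cost of placing facility i at location assign[i]: the sum over
        all pairs of flow(i, j) * distance(assign[i], assign[j])."""
        n = len(assign)
        return sum(flow[i][j] * dist[assign[i]][assign[j]]
                   for i in range(n) for j in range(n))

    def qap_brute_force(flow, dist):
        """Exact optimum by enumeration; 8! = 40320 assignments, so an
        n = 8 instance is still exhaustively checkable."""
        n = len(flow)
        return min(permutations(range(n)),
                   key=lambda p: qap_cost(flow, dist, p))
    ```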

  7. A Generalization of SAT and #SAT for Robust Policy Evaluation

    DTIC Science & Technology

    2014-06-30

    Corollary 1. PP^NP[1] ⊆ NP^PP. Proof: follows from Toda's theorem [Toda, 1991] (the middle inclusion): PP^NP[1] ⊆ PP^PH ⊆ P^PP ⊆ NP^PP. (Reference: S. Toda. PP is as hard as the polynomial-time hierarchy. SIAM Journal on Computing, 20:865.)

  8. Learning from Bees: An Approach for Influence Maximization on Viral Campaigns

    PubMed Central

    Sankar, C. Prem; S., Asharaf

    2016-01-01

    Maximisation of influence propagation is a key ingredient of any viral marketing or socio-political campaign. However, it is an NP-hard problem, and the various approximate algorithms suggested to address it have not been largely successful. In this paper, we propose a bio-inspired approach to selecting the initial set of nodes, which is significant for rapid convergence towards a sub-optimal solution in minimal runtime. The performance of the algorithm is evaluated using the re-tweet network of the hashtag #KissofLove on Twitter, associated with the non-violent protest against moral policing that spread to many parts of India. Compared with existing centrality-based node ranking approaches, the proposed method shows significant improvement in influence propagation. The proposed algorithm is one of the few bio-inspired algorithms in network theory. We also report the results of an exploratory analysis of the kiss of love campaign network. PMID:27992472

  9. The effect of hardness on the stability of citrate-stabilized gold nanoparticles and their uptake by Daphnia magna.

    PubMed

    Lee, Byung-Tae; Ranville, James F

    2012-04-30

    The stability and uptake by Daphnia magna of citrate-stabilized gold nanoparticles (AuNPs) in three different hardness-adjusted synthetic waters were investigated. Negatively charged AuNPs were found to aggregate and settle in synthetic waters within 24 h. Sedimentation rates depended on the initial particle concentrations of 0.02, 0.04, and 0.08 nM AuNPs. The hardness of the synthetic waters affected the aggregation of AuNPs, which is explained by the compression of the diffuse double layer of the AuNPs due to the increasing ionic strength. The fractal dimension of AuNPs in the reaction-limited regime of synthetic waters averaged 2.228±0.126, implying rigid aggregate structures driven by the collision of small particles with the growing aggregates. Four-day-old D. magna accumulated more than 90% of AuNPs in 0.04 nM AuNP suspensions without any observed mortality. Exposure to pre-aggregated AuNPs for 48 h in hard water did not show any significant difference in uptake, suggesting D. magna can also ingest settled AuNP aggregates. D. magna exposed to AuNPs shed their exoskeletons, whereas the controls did not generate any molts over 48 h. This implies that D. magna removed AuNPs from their exoskeleton by producing molts to decrease any adverse effects of the adhered AuNPs.

  10. Minimizing makespan in a two-stage flow shop with parallel batch-processing machines and re-entrant jobs

    NASA Astrophysics Data System (ADS)

    Huang, J. D.; Liu, J. J.; Chen, Q. X.; Mao, N.

    2017-06-01

    Against a background of heat-treatment operations in mould manufacturing, a two-stage flow-shop scheduling problem is described for minimizing makespan with parallel batch-processing machines and re-entrant jobs. The weights and release dates of jobs are non-identical, but job processing times are equal. A mixed-integer linear programming model is developed and tested on small-scale scenarios. Given that the problem is NP-hard, three heuristic construction methods with polynomial complexity are proposed. The worst case of the new constructive heuristic is analysed in detail, and a method for computing lower bounds is proposed to test heuristic performance. Heuristic efficiency is tested on sets of scenarios; compared with the two improved heuristics, the performance of the new constructive heuristic is superior.

  11. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    NASA Astrophysics Data System (ADS)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

    We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and the lots have to be assigned to unrelated parallel machines for processing. In one version of the problem the maximum machine completion time is to be minimized; in another, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small and are therefore not considered. We derive optimal polynomial-time algorithms for several special cases of the problem, and show that an NP-hard case admits a fully polynomial time approximation scheme. An application of the problem to energy-efficient processor scheduling is considered.

  12. An approximation algorithm for the Noah's Ark problem with random feature loss.

    PubMed

    Hickey, Glenn; Blanchette, Mathieu; Carmi, Paz; Maheshwari, Anil; Zeh, Norbert

    2011-01-01

    The phylogenetic diversity (PD) of a set of species is a measure of their evolutionary distinctness based on a phylogenetic tree. PD is increasingly being adopted as an index of biodiversity in ecological conservation projects. The Noah's Ark Problem (NAP) is an NP-hard optimization problem that abstracts a fundamental conservation challenge: maximizing the expected PD of a set of taxa given a fixed budget, where each taxon is associated with a cost of conservation and a probability of extinction. Only simplified instances of the problem, where one or more parameters are fixed as constants, have so far been addressed in the literature. Furthermore, it has been argued that PD is not an appropriate metric for models that allow information to be lost along paths in the tree. We therefore generalize the NAP to incorporate a proposed model of feature loss according to an exponential distribution, and term this problem NAP with Loss (NAPL). In this paper, we present a pseudopolynomial time approximation scheme for NAPL.
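
    To make the PD measure underlying the NAP concrete, the sketch below computes the PD of a taxon subset as the total branch length of the smallest subtree connecting those taxa to the root. The toy tree and branch lengths are hypothetical, not data from the paper.

      def phylogenetic_diversity(parent, length, taxa):
          # parent[v]: parent of node v (the root maps to None)
          # length[v]: length of the edge from v up to parent[v]
          counted = set()
          pd = 0.0
          for leaf in taxa:
              v = leaf
              while parent[v] is not None and v not in counted:
                  counted.add(v)  # count each edge only once
                  pd += length[v]
                  v = parent[v]
          return pd

      # Hypothetical 4-taxon tree with root r and internal nodes u, w
      parent = {"r": None, "u": "r", "w": "r", "A": "u", "B": "u", "C": "w", "D": "w"}
      length = {"u": 1.0, "w": 2.0, "A": 0.5, "B": 0.5, "C": 1.5, "D": 1.0}
      print(phylogenetic_diversity(parent, length, {"A", "C"}))  # 0.5 + 1.0 + 1.5 + 2.0 = 5.0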

  13. Job shop scheduling problem with late work criterion

    NASA Astrophysics Data System (ADS)

    Piroozfard, Hamed; Wong, Kuan Yew

    2015-05-01

    Scheduling is considered a key task in many industries, covering project scheduling, crew scheduling, flight scheduling, machine scheduling, etc. In the machine scheduling area, job shop scheduling problems are considered important and highly complex; they are characterized as NP-hard. This paper addresses job shop scheduling problems with the late work criterion and non-preemptive jobs. Late work is a fairly new objective function that, unlike classical objective functions, measures only the late parts of the jobs. In this work, simulated annealing is used to solve the scheduling problem; an operation-based representation encodes solutions, and a neighbourhood search structure generates new candidate solutions. The case studies are Lawrence instances taken from the Operations Research Library. Computational results of this probabilistic meta-heuristic algorithm are compared with a conventional genetic algorithm, and conclusions are drawn about the algorithm and problem.
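
    As a minimal illustration of the late work criterion under its standard definition (the job data below are made up): the late work of a job is the part of its processing performed after the due date, capped at the full processing time.

      def total_late_work(jobs):
          # jobs: list of (completion_time, due_date, processing_time) tuples
          total = 0
          for c, d, p in jobs:
              total += min(max(0, c - d), p)  # work executed after the due date
          return total

      # Hypothetical example with three jobs
      print(total_late_work([(10, 8, 5), (7, 9, 4), (20, 5, 6)]))  # 2 + 0 + 6 = 8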

  14. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, a restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. They outperform their original editions and the benchmarked methods, and are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.

  15. Solving a supply chain scheduling problem with non-identical job sizes and release times by applying a novel effective heuristic algorithm

    NASA Astrophysics Data System (ADS)

    Pei, Jun; Liu, Xinbao; Pardalos, Panos M.; Fan, Wenjuan; Wang, Ling; Yang, Shanlin

    2016-03-01

    Motivated by applications in the manufacturing industry, we consider a supply chain scheduling problem where each job is characterised by a non-identical size, a different release time and an unequal processing time. The objective is to minimise the makespan by making batching and sequencing decisions. The problem is formalised as a mixed integer programming model and proved to be strongly NP-hard. Some structural properties are presented for both the general case and a special case. Based on these properties, a lower bound is derived, and a novel two-phase heuristic (TP-H) is developed to solve the problem, which guarantees a worst-case performance ratio of ?. Computational experiments with a set of random instances of different sizes are conducted to evaluate the proposed approach TP-H, which is superior to two other heuristics proposed in the literature. Furthermore, the experimental results indicate that TP-H can effectively and efficiently solve large-size problems in a reasonable time.

  16. Approximability of the d-dimensional Euclidean capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Khachay, Michael; Dubinin, Roman

    2016-10-01

    The Capacitated Vehicle Routing Problem (CVRP) is a well-known intractable combinatorial optimization problem, which remains NP-hard even in the Euclidean plane. Since the introduction of this problem in the middle of the 20th century, many researchers have been involved in the study of its approximability. Most of the results obtained in this field are based on the well-known Iterated Tour Partition heuristic proposed by M. Haimovich and A. Rinnooy Kan in their celebrated paper, where they construct the first Polynomial Time Approximation Scheme (PTAS) for the single-depot CVRP in ℝ². For decades, this result was extended by many authors to numerous useful modifications of the problem taking into account multiple depots, pick-up and delivery options, time window restrictions, etc. But, to the best of our knowledge, almost none of these results go beyond the Euclidean plane. In this paper, we try to bridge this gap and propose an EPTAS for the Euclidean CVRP in any fixed dimension.

  17. Exploring biological interaction networks with tailored weighted quasi-bicliques

    PubMed Central

    2012-01-01

    Background: Biological networks provide fundamental insights into the functional characterization of genes and their products, the characterization of DNA-protein interactions, the identification of regulatory mechanisms, and other biological tasks. Due to experimental and biological complexity, their computational exploitation faces many algorithmic challenges. Results: We introduce novel weighted quasi-biclique problems to identify functional modules in biological networks represented by bipartite graphs. In contrast to previous quasi-biclique problems, we account for biological interaction levels by using edge-weighted quasi-bicliques. While we prove that our problems are NP-hard, we also describe IP formulations to compute exact solutions for moderately sized networks. Conclusions: We verify the effectiveness of our IP solutions using both simulation and empirical data. The simulation shows high quasi-biclique recall rates, and the empirical data corroborate the abilities of our weighted quasi-bicliques in extracting features and recovering missing interactions from biological networks. PMID:22759421

  18. Tag SNP selection via a genetic algorithm.

    PubMed

    Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh

    2010-10-01

    Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for complex human diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, algorithms that construct full haplotype patterns from the small amount of available data (the tag SNP selection problem) are convenient and attractive. This problem is proved to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find reasonable solutions within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on a brute-force approach, the results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.

  19. An improved genetic algorithm and its application in the TSP problem

    NASA Astrophysics Data System (ADS)

    Li, Zheng; Qin, Jinlei

    2011-12-01

    The concept and current state of research on genetic algorithms are introduced in detail. A simple genetic algorithm and an improved variant are then described and applied to an instance of the TSP, where the advantage of genetic algorithms for this NP-hard problem is clearly shown. In addition, the partial matching crossover operator is extended so that crossover can be performed between random positions of two random individuals, without being restricted by position on the chromosome, which improves efficiency when solving the TSP. Finally, a nine-city TSP is solved using the improved genetic algorithm with the extended crossover method; the solution process is considerably more efficient and the optimal solution is found faster.
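
    For reference, the sketch below implements a standard partially matched crossover (PMX) of the kind the extended operator builds on; the helper name and the example tours are ours, not the paper's.

      import random

      def pmx(parent1, parent2):
          # Partially matched crossover for permutation-encoded tours
          n = len(parent1)
          a, b = sorted(random.sample(range(n), 2))
          child = [None] * n
          child[a:b + 1] = parent1[a:b + 1]        # copy a segment from parent 1
          for i in range(a, b + 1):
              gene = parent2[i]
              if gene not in child[a:b + 1]:
                  pos = i
                  while a <= pos <= b:             # follow the PMX mapping
                      pos = parent2.index(parent1[pos])
                  child[pos] = gene
          for i in range(n):                       # fill the rest from parent 2
              if child[i] is None:
                  child[i] = parent2[i]
          return child

      random.seed(1)
      print(pmx([0, 1, 2, 3, 4, 5, 6, 7], [3, 7, 5, 0, 6, 2, 1, 4]))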

  20. Optimal matching for prostate brachytherapy seed localization with dimension reduction.

    PubMed

    Lee, Junghoon; Labat, Christian; Jain, Ameet K; Song, Danny Y; Burdette, Everette C; Fichtinger, Gabor; Prince, Jerry L

    2009-01-01

    In prostate brachytherapy, x-ray fluoroscopy has been used for intra-operative dosimetry to provide qualitative assessment of implant quality. More recent developments have made possible 3D localization of the implanted radioactive seeds. This is usually modeled as an assignment problem and solved by resolving the correspondence of seeds. It is, however, NP-hard, and the problem is even harder in practice due to the significant number of hidden seeds. In this paper, we propose an algorithm that can find an optimal solution from multiple projection images with hidden seeds. It solves an equivalent problem with reduced dimensional complexity, thus allowing us to find an optimal solution in polynomial time. Simulation results show the robustness of the algorithm. It was validated on 5 phantom and 18 patient datasets, successfully localizing the seeds with a detection rate of ≥97.6% and a reconstruction error of ≤1.2 mm. This is considered to be clinically excellent performance.
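
    The correspondence step described above is at its core an assignment problem. As a simplified two-view illustration (ignoring hidden seeds and the paper's dimension-reduction machinery, and using made-up coordinates), projected seed positions can be matched by minimizing total pairwise distance with the Hungarian algorithm:

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # Hypothetical seed coordinates extracted from two projection images
      seeds_a = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 1.0]])
      seeds_b = np.array([[2.9, 1.1], [0.1, 0.2], [1.1, 1.8]])

      # Cost matrix: Euclidean distance between every candidate pair
      cost = np.linalg.norm(seeds_a[:, None, :] - seeds_b[None, :, :], axis=2)
      rows, cols = linear_sum_assignment(cost)  # optimal one-to-one matching
      print(list(zip(rows, cols)), cost[rows, cols].sum())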

  1. A review on simple assembly line balancing type-e problem

    NASA Astrophysics Data System (ADS)

    Jusop, M.; Rashid, M. F. F. Ab

    2015-12-01

    Simple assembly line balancing (SALB) is the attempt to assign tasks to the workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithms are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on the Type-E problem (SALB-E), since it is the most general and complex variant: SALB-E considers the number of workstations and the cycle time simultaneously in order to maximise line efficiency. This paper reviews previous work on optimising SALB-E, including the Genetic Algorithm approaches that have been used for it. The review reveals that none of the existing works address resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to improving productivity in real industrial applications.

  2. Harmony search algorithm: application to the redundancy optimization problem

    NASA Astrophysics Data System (ADS)

    Nahas, Nabil; Thien-My, Dao

    2010-09-01

    The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a recent nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm. The first is limited to the binary series-parallel system, where the problem consists of selecting elements and redundancy levels to maximize system reliability given various system-level constraints; the second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results show that the HSA can provide very good solutions when compared to those obtained through other approaches.
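
    A minimal sketch of the harmony search loop described above, applied to an unconstrained continuous test function; the parameter values (harmony memory size, HMCR, PAR, bandwidth) are illustrative defaults, not those used in the article.

      import random

      def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
          lo, hi = bounds
          memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
          memory.sort(key=f)
          for _ in range(iters):
              new = []
              for d in range(dim):
                  if random.random() < hmcr:        # memory consideration
                      x = random.choice(memory)[d]
                      if random.random() < par:     # pitch adjustment
                          x += random.uniform(-bw, bw)
                  else:                             # random selection
                      x = random.uniform(lo, hi)
                  new.append(min(max(x, lo), hi))
              if f(new) < f(memory[-1]):            # replace the worst harmony
                  memory[-1] = new
                  memory.sort(key=f)
          return memory[0]

      # Usage: minimize the sphere function on [-5, 5]^3
      print(harmony_search(lambda v: sum(x * x for x in v), 3, (-5.0, 5.0)))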

  3. Fully Decomposable Split Graphs

    NASA Astrophysics Data System (ADS)

    Broersma, Hajo; Kratsch, Dieter; Woeginger, Gerhard J.

    We discuss various questions around partitioning a split graph into connected parts. Our main result is a polynomial time algorithm that decides whether a given split graph is fully decomposable, i.e., whether it can be partitioned into connected parts of order α 1,α 2,...,α k for every α 1,α 2,...,α k summing up to the order of the graph. In contrast, we show that the decision problem whether a given split graph can be partitioned into connected parts of order α 1,α 2,...,α k for a given partition α 1,α 2,...,α k of the order of the graph, is NP-hard.

  4. A proof of the log-concavity conjecture related to the computation of the ergodic capacity of MIMO channels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurvits, Leonid

    2009-01-01

    An upper bound on the ergodic capacity of MIMO channels was introduced recently in [1]. This upper bound amounts to the maximization on the simplex of some multilinear polynomial p(λ_1, ..., λ_n) with non-negative coefficients. In general, such maximization problems are NP-hard. But if, say, the functional log(p) is concave on the simplex and can be efficiently evaluated, then the maximization can also be done efficiently. Such log-concavity was conjectured in [1]. We give in this paper a self-contained proof of the conjecture, based on the theory of H-stable polynomials.

  5. BetaSCPWeb: side-chain prediction for protein structures using Voronoi diagrams and geometry prioritization

    PubMed Central

    Ryu, Joonghyun; Lee, Mokwon; Cha, Jehyun; Laskowski, Roman A.; Ryu, Seong Eon; Kim, Deok-Soo

    2016-01-01

    Many applications, such as protein design, homology modeling, flexible docking, etc. require the prediction of a protein's optimal side-chain conformations from just its amino acid sequence and backbone structure. Side-chain prediction (SCP) is an NP-hard energy minimization problem. Here, we present BetaSCPWeb which efficiently computes a conformation close to optimal using a geometry-prioritization method based on the Voronoi diagram of spherical atoms. Its outputs are visual, textual and PDB file format. The web server is free and open to all users at http://voronoi.hanyang.ac.kr/betascpweb with no login requirement. PMID:27151195

  6. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI), using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained which satisfies the robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  7. Mechanical properties of nanoporous GaN and its application for separation and transfer of GaN thin films.

    PubMed

    Huang, Shanjin; Zhang, Yu; Leung, Benjamin; Yuan, Ge; Wang, Gang; Jiang, Hao; Fan, Yingmin; Sun, Qian; Wang, Jianfeng; Xu, Ke; Han, Jung

    2013-11-13

    Nanoporous (NP) gallium nitride (GaN) as a new class of GaN material has many interesting properties that the conventional GaN material does not have. In this paper, we focus on the mechanical properties of NP GaN and on the detailed physical mechanism of porous GaN in lift-off applications. A decrease in elastic modulus and hardness was identified in NP GaN compared to the conventional GaN film. The promising application of NP GaN as a release layer in the mechanical lift-off of GaN thin films and devices was systematically studied. A phase diagram was generated to correlate the initial NP GaN profiles with the as-overgrown morphologies of the NP structures. The fracture toughness of the NP GaN release layer was studied in terms of the voided-space ratio. It is shown that the transformed morphologies and fracture toughness of the NP GaN layer after overgrowth strongly depend on the initial porosity of the NP GaN templates. The mechanical separation and transfer of a GaN film over a 2 in. wafer were demonstrated, which proves that this technique is useful in practical applications.

  8. Using Ant Colony Optimization for Routing in VLSI Chips

    NASA Astrophysics Data System (ADS)

    Arora, Tamanna; Moses, Melanie

    2009-04-01

    Rapid advances in VLSI technology have increased the number of transistors that fit on a single chip to about two billion. A frequent problem in the design of such high-performance, high-density VLSI layouts is that of routing the wires that connect such large numbers of components. Most wire-routing problems are computationally hard, and the quality of any routing algorithm is judged by the extent to which it satisfies routing constraints and design objectives. Some of the broader design objectives include minimizing total routed wire length and minimizing total capacitance induced in the chip, both of which serve to minimize the power consumed by the chip. Ant Colony Optimization (ACO) algorithms provide a multi-agent framework for combinatorial optimization by combining memory, stochastic decisions and strategies of collective and distributed learning by ant-like agents. This paper applies ACO to the NP-hard problem of finding optimal routes for interconnect routing on VLSI chips. The constraints on interconnect routing are used by the ants as heuristics which guide their search process. We found that ACO algorithms were able to successfully incorporate multiple constraints and route interconnects on a suite of benchmark chips. On average, the algorithm routed with a total wire length 5.5% less than that of other established routing algorithms.

  9. GreedyMAX-type Algorithms for the Maximum Independent Set Problem

    NASA Astrophysics Data System (ADS)

    Borowiecki, Piotr; Göring, Frank

    The maximum independent set problem for a simple graph G = (V,E) is to find the largest subset of pairwise nonadjacent vertices. The problem is known to be NP-hard and is also hard to approximate. In this article we introduce a non-negative integer-valued function p defined on the vertex set V(G), called a potential function of a graph G, with P(G) = max_{v ∈ V(G)} p(v) called the potential of G. For any graph P(G) ≤ Δ(G), where Δ(G) is the maximum degree of G; moreover, Δ(G) - P(G) may be arbitrarily large. The potential of a vertex gives closer insight into the properties of its neighborhood, which leads to the definition of the family of GreedyMAX-type algorithms, with the classical GreedyMAX algorithm as their origin. We establish a lower bound of 1/(P(G)+1) for the performance ratio of GreedyMAX-type algorithms, which compares favorably with the bound 1/(Δ(G)+1) known to hold for GreedyMAX. The cardinality of an independent set generated by any GreedyMAX-type algorithm is at least Σ_{v ∈ V(G)} (p(v)+1)^{-1}, which strengthens the bounds of Turán and Caro-Wei stated in terms of vertex degrees.
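
    For context, the classical minimum-degree greedy heuristic below attains the Caro-Wei bound Σ_{v ∈ V(G)} 1/(d(v)+1) mentioned above; it is a standard baseline, not the GreedyMAX-type algorithm of the article.

      def greedy_independent_set(adj):
          # adj: dict mapping each vertex to the set of its neighbours
          remaining = {v: set(nbrs) for v, nbrs in adj.items()}
          independent = []
          while remaining:
              v = min(remaining, key=lambda u: len(remaining[u]))  # min-degree vertex
              independent.append(v)
              removed = remaining[v] | {v}  # delete v together with its neighbourhood
              for u in removed:
                  remaining.pop(u, None)
              for nbrs in remaining.values():
                  nbrs -= removed
          return independent

      # Usage: a 5-cycle has independence number 2
      c5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
      print(greedy_independent_set(c5))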

  10. Hydraulic Conductivity Estimation using Bayesian Model Averaging and Generalized Parameterization

    NASA Astrophysics Data System (ADS)

    Tsai, F. T.; Li, X.

    2006-12-01

    Non-uniqueness of the parameterization scheme is an inherent problem in groundwater inverse modeling due to limited data. To cope with this non-uniqueness, we introduce a Bayesian Model Averaging (BMA) method to integrate a set of selected parameterization methods. The estimation uncertainty in BMA includes the uncertainty in individual parameterization methods as the within-parameterization variance and the uncertainty from using different parameterization methods as the between-parameterization variance. Moreover, the generalized parameterization (GP) method is considered in the geostatistical framework in this study. The GP method aims at increasing the flexibility of parameterization through the combination of a zonation structure and an interpolation method. The use of BMA with GP avoids over-confidence in a single parameterization method. A normalized least-squares estimation (NLSE) is adopted to calculate the posterior probability of each GP. We employ the adjoint state method for the sensitivity analysis of the weighting coefficients in the GP method; the adjoint state method is also applied to the NLSE problem. The proposed methodology is applied to the Alamitos Barrier Project (ABP) in California, where the spatially distributed hydraulic conductivity is estimated. The optimal weighting coefficients embedded in GP are identified through maximum likelihood estimation (MLE), where the misfits between the observed and calculated groundwater heads are minimized. The conditional mean and conditional variance of the estimated hydraulic conductivity distribution using BMA are obtained to assess the estimation uncertainty.
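
    To illustrate the variance decomposition described above, the generic BMA sketch below (with made-up numbers, not ABP data) combines per-parameterization estimates: the BMA mean is the weighted average of the individual means, and the total variance is the sum of the within- and between-parameterization terms.

      def bma_combine(means, variances, weights):
          # means/variances: per-parameterization estimates at one location
          # weights: posterior model probabilities (assumed to sum to 1)
          mean = sum(w * m for w, m in zip(weights, means))
          within = sum(w * v for w, v in zip(weights, variances))
          between = sum(w * (m - mean) ** 2 for w, m in zip(weights, means))
          return mean, within + between

      # Hypothetical log-conductivity estimates from three parameterizations
      print(bma_combine(means=[-4.1, -3.8, -4.4],
                        variances=[0.20, 0.35, 0.15],
                        weights=[0.5, 0.3, 0.2]))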

  11. On simulated annealing phase transitions in phylogeny reconstruction.

    PubMed

    Strobl, Maximilian A R; Barker, Daniel

    2016-08-01

    Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic of simulated annealing applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparatively little attention, whether for phylogeny or for other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way that melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose that this reflects differences in the search landscape and can serve as a measure of problem difficulty and of the suitability of the algorithm's parameters. We discuss applications in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  12. BOOTSTRAPPING THE CORONAL MAGNETIC FIELD WITH STEREO: UNIPOLAR POTENTIAL FIELD MODELING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aschwanden, Markus J.; Sandman, Anne W., E-mail: aschwanden@lmsal.co

    We investigate the recently quantified misalignment of α_mis ≈ 20°-40° between the three-dimensional geometry of stereoscopically triangulated coronal loops observed with STEREO/EUVI (in four active regions (ARs)) and theoretical (potential or nonlinear force-free) magnetic field models extrapolated from photospheric magnetograms. We develop an efficient method of bootstrapping the coronal magnetic field by forward fitting a parameterized potential field model to the STEREO-observed loops. The potential field model consists of a number of unipolar magnetic charges that are parameterized by decomposing a photospheric magnetogram from the Michelson Doppler Imager. The forward-fitting method yields a best-fit magnetic field model with a reduced misalignment of α_PF ≈ 13°-20°. We also evaluate stereoscopic measurement errors and find a contribution of α_SE ≈ 7°-12°, which constrains the residual misalignment to α_NP ≈ 11°-17°, which is likely due to the nonpotentiality of the ARs. The residual misalignment angle α_NP of the potential field due to nonpotentiality is found to correlate with the soft X-ray flux of the AR, which implies a relationship between electric currents and plasma heating.

  13. Formal language constrained path problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context-free language is solvable efficiently in polynomial time; when the mode choice is specified as a regular language, they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth-bounded graphs, they show that (i) the problem of finding a regular-language-constrained simple path between a source and a destination is solvable in polynomial time and (ii) the extension to finding context-free-language-constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.

  14. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    PubMed Central

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive algorithms for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy, but this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them. PMID:26819585

  15. Minimizing the Sum of Completion Times with Resource Dependant Times

    NASA Astrophysics Data System (ADS)

    Yedidsion, Liron; Shabtay, Dvir; Kaspi, Moshe

    2008-10-01

    We extend the classical problem of minimizing the sum of completion times to the case where the processing times are controllable by allocating a nonrenewable resource. The quality of a solution is measured by two different criteria: the first is the sum of completion times, and the second is the total weighted resource consumption. We consider four different problem variations for treating the two criteria. We prove that this problem is NP-hard for three of the four variations, even if all resource consumption weights are equal. However, somewhat surprisingly, the variation of minimizing the integrated objective function is solvable in polynomial time. Although the sum of completion times is arguably the most important scheduling criterion, the complexity of this problem was, until this paper, an open question for three of the four variations. The results of this research have various applications, including efficient battery usage on mobile devices such as laptops, phones and GPS devices, in order to prolong their battery duration.

  16. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    DOE PAGES

    Yaw, Sean; Mumey, Brendan

    2017-10-28

    Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for, and the issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning once started, they run to completion. The associated optimization problem is called the peak demand minimization problem, and has been previously shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, as well as an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.

  17. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhao, Changhong; Zamzam, Ahmed S.

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and the coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.

  18. A hybrid binary particle swarm optimization for large capacitated multi item multi level lot sizing (CMIMLLS) problem

    NASA Astrophysics Data System (ADS)

    Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.

    2016-09-01

    The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with capacity restrictions are considered, the lot sizing problem becomes NP-hard, and many heuristics developed in the past have failed due to problem size, computational complexity and time. The authors have developed a PSO-based technique, an iterative improvement binary particle swarm technique, to address very large capacitated multi-item multi-level lot sizing (CMIMLLS) problems. First, a binary particle swarm optimization (BPSO) algorithm is used to find a solution in reasonable time; an iterative improvement local search mechanism is then employed to improve the solution obtained by BPSO. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time, and the IIBPSO method shows excellent results.

  19. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem.

    PubMed

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive algorithms for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy, but this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome this weakness of the classical BBO algorithm for QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them.

  20. Scheduling Non-Preemptible Jobs to Minimize Peak Demand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yaw, Sean; Mumey, Brendan

    Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for, and the issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where the jobs are non-preemptible, meaning once started, they run to completion. The associated optimization problem is called the peak demand minimization problem, and has been previously shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, as well as an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.

  1. A novel hybrid genetic algorithm to solve the make-to-order sequence-dependent flow-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.

    2014-04-01

    The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, there is no known efficient algorithm for reaching the optimal solution of the problem. To minimize the holding, delay and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate the pool of initial solutions, and uses an improved heuristic, the iterated swap procedure, to improve them. We consider the make-to-order production approach, in which some sequences between jobs are treated as tabu based on a maximum allowable setup cost. The results are compared with some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to accuracy and efficiency of solution.

  2. Million city traveling salesman problem solution by divide and conquer clustering with adaptive resonance neural networks.

    PubMed

    Mulder, Samuel A; Wunsch, Donald C

    2003-01-01

    The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.

  3. Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.

    PubMed

    Ćwik, Michał; Józefczyk, Jerzy

    2018-01-01

    An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times, and the maximum regret is used for the evaluation of uncertainty. Consequently, a minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First, a greedy procedure is used for calculating the criterion's value, as this calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem and compared with previously elaborated heuristic algorithms based on the evolutionary and middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the time of computations. The Wilcoxon paired-rank statistical test confirmed this conclusion.
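
    For reference, the inner evaluation that the regret computation builds on is the makespan of a fixed job permutation in a permutation flow shop, computed by the standard recurrence C(i,j) = max(C(i-1,j), C(i,j-1)) + p(i,j). The sketch below uses illustrative processing times, not instances from the paper.

      def flow_shop_makespan(perm, p):
          # p[j][k]: processing time of job j on machine k; perm: processing order
          m = len(p[0])
          completion = [0] * m  # completion time of the latest job on each machine
          for j in perm:
              for k in range(m):
                  prev = completion[k - 1] if k > 0 else 0
                  completion[k] = max(completion[k], prev) + p[j][k]
          return completion[-1]

      # Hypothetical instance: 3 jobs on 2 machines
      p = [[3, 2], [1, 4], [2, 2]]
      print(flow_shop_makespan([1, 0, 2], p))  # makespan of the order (J1, J0, J2)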

  4. The Computational Complexity of RaceTrack

    NASA Astrophysics Data System (ADS)

    Holzer, Markus; McKenzie, Pierre

    Martin Gardner in the early 1970's described the game of RaceTrack [M. Gardner, Mathematical games - Sim, Chomp and Race Track: new games for the intellect (and not for Lady Luck), Scientific American, 228(1):108-115, Jan. 1973]. Here we study the complexity of deciding whether a RaceTrack player has a winning strategy. We first prove that the complexity of RaceTrack reachability, i.e., whether the finish line can be reached or not, crucially depends on whether the car can touch the edge of the carriageway (racetrack): the non-touching variant is NL-complete while the touching variant is equivalent to the undirected grid graph reachability problem, a problem in L but not known to be L-hard. Then we show that single-player RaceTrack is NL-complete, regardless of whether driving on the track boundary is allowed or not, and that deciding the existence of a winning strategy in Gardner's original two-player game is P-complete. Hence RaceTrack is an example of a game that is interesting to play despite the fact that deciding the existence of a winning strategy is most likely not NP-hard.

  5. A Lifetime Maximization Relay Selection Scheme in Wireless Body Area Networks.

    PubMed

    Zhang, Yu; Zhang, Bing; Zhang, Shi

    2017-06-02

    Network lifetime is one of the most important metrics in Wireless Body Area Networks (WBANs). In this paper, a relay selection scheme is proposed under the topology constraints specified in the IEEE 802.15.6 standard to maximize the lifetime of WBANs, by formulating and solving an optimization problem in which the relay selection of each node acts as the optimization variable. Considering the diversity of the sensor nodes in WBANs, the optimization problem takes not only the energy consumption rate but also the energy difference among sensor nodes into account to improve the network lifetime performance. Since the problem is NP-hard and intractable, a heuristic solution is designed to rapidly address the optimization. The simulation results indicate that the proposed relay selection scheme performs better in network lifetime than existing algorithms, and that the heuristic solution has low time complexity with only a negligible performance gap from the optimal value. Furthermore, we also conduct simulations based on a general WBAN model to comprehensively illustrate the advantages of the proposed algorithm. At the end of the evaluation, we validate the feasibility of our proposed scheme via an implementation discussion.

  6. Ranking Specific Sets of Objects.

    PubMed

    Maly, Jan; Woltran, Stefan

    2017-01-01

    Ranking sets of objects based on an order between the single elements has been thoroughly studied in the literature. In particular, it has been shown that it is in general impossible to find a total ranking - jointly satisfying properties such as dominance and independence - on the whole power set of objects. However, in many applications certain elements from the entire power set might not be required and can be neglected in the ranking process. For instance, certain sets might be ruled out due to hard constraints or might not satisfy some background theory. In this paper, we treat the computational problem of whether an order on a given subset of the power set of elements, satisfying different variants of dominance and independence, can be found, given a ranking on the elements. We show that this problem is tractable for partial rankings and NP-complete for total rankings.

  7. Optimal and heuristic algorithms of planning of low-rise residential buildings

    NASA Astrophysics Data System (ADS)

    Kartak, V. M.; Marchenko, A. A.; Petunin, A. A.; Sesekin, A. N.; Fabarisova, A. I.

    2017-10-01

    The problem of the optimal layout of a low-rise residential building is considered. Each apartment must be no smaller than the corresponding apartment in the proposed list, all requests must be fulfilled, and the excess of the total area over the total area of the apartments in the list must be minimized. Differences in area arise from the discreteness of the distances between bearing walls and a number of other technological limitations. It is shown that this problem is NP-hard. The authors build an integer linear model and conduct a qualitative analysis of it, and they also develop a heuristic algorithm for solving instances of high dimension. A computational experiment was conducted which confirms the efficiency of the proposed approach. Practical recommendations on the use of the proposed algorithms are given.

  8. New scheduling rules for a dynamic flexible flow line problem with sequence-dependent setup times

    NASA Astrophysics Data System (ADS)

    Kia, Hamidreza; Ghodsypour, Seyed Hassan; Davoudpour, Hamid

    2017-09-01

    In the literature, the application of multi-objective dynamic scheduling problems and simple priority rules is widely studied. Although these rules are not efficient enough, due to their simplicity and lack of general insight, composite dispatching rules perform very well because they result from experiments. In this paper, a dynamic flexible flow line problem with sequence-dependent setup times is studied, with the objective of minimizing mean flow time and mean tardiness. A 0-1 mixed integer model of the problem is formulated. Since the problem is NP-hard, four new composite dispatching rules are proposed, derived within a genetic programming framework with properly chosen operators. Furthermore, a discrete-event simulation model is built to examine the performance of the four new heuristic rules against six heuristic rules adapted from the literature. The experimental results make clear that the composite dispatching rules formed by genetic programming perform better at minimizing mean flow time and mean tardiness than the others.

  9. Learning optimal quantum models is NP-hard

    NASA Astrophysics Data System (ADS)

    Stark, Cyril J.

    2018-02-01

    Physical modeling translates measured data into a physical model. Physical modeling is a major objective in physics and is generally regarded as a creative process. How good are computers at solving this task? Here, we show that in the absence of physical heuristics, the inference of optimal quantum models cannot be computed efficiently (unless P=NP ). This result illuminates rigorous limits to the extent to which computers can be used to further our understanding of nature.

  10. Resilient Wireless Sensor Networks Using Topology Control: A Review

    PubMed Central

    Huang, Yuanjiang; Martínez, José-Fernán; Sendra, Juana; López, Lourdes

    2015-01-01

    Wireless sensor networks (WSNs) may be deployed in failure-prone environments, and WSN nodes easily fail due to unreliable wireless connections, malicious attacks and resource-constrained features. Nevertheless, if a WSN can tolerate the loss of up to k − 1 nodes while the remaining nodes stay connected, the network is called k-connected; k is one of the most important indicators of a WSN's self-healing capability. Following a WSN design flow, this paper surveys resilience issues from the topology control and multi-path routing point of view. It provides a discussion of transmission and failure models, which have an important impact on research results. Afterwards, it reviews theoretical results and representative topology control approaches that guarantee WSNs to be k-connected at three different network deployment stages: pre-deployment, post-deployment and re-deployment. Multi-path routing protocols are discussed, and many NP-complete or NP-hard problems regarding topology control are identified. Challenging open issues are discussed at the end. This paper can serve as a guideline for designing resilient WSNs. PMID:26404272

  11. A meta-heuristic method for solving scheduling problem: crow search algorithm

    NASA Astrophysics Data System (ADS)

    Adhi, Antono; Santosa, Budi; Siswanto, Nurhadi

    2018-04-01

    Scheduling is one of the most important processes in industry, in both manufacturing and services. It is the process of selecting resources, such as machines or people, to perform operations on tasks or jobs. The selection of the optimum sequence of jobs from a permutation is an essential issue in scheduling research, since the optimum sequence constitutes the optimum solution of the scheduling problem. The problem is NP-hard once the number of jobs in the sequence exceeds what exact algorithms can process, so obtaining optimum results requires a method capable of solving complex scheduling problems in acceptable time; meta-heuristics are commonly used for this purpose. The recently published Crow Search Algorithm (CSA), an evolutionary meta-heuristic based on the flocking behaviour of crows, is adopted in this research to solve the scheduling problem. The results of CSA are compared with those of other algorithms, and it is found that CSA performs better in terms of solution quality and computation time.
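
    A minimal continuous-domain sketch of the crow search update rule as commonly formulated (each crow follows a randomly chosen crow's memorized best position unless that crow is "aware", in which case the follower moves randomly); the parameter values are illustrative, and applying CSA to scheduling additionally requires decoding positions into job permutations (e.g., by sorting random keys), which is not shown here.

      import random

      def crow_search(f, dim, bounds, n_crows=20, fl=2.0, ap=0.1, iters=500):
          lo, hi = bounds
          x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_crows)]
          mem = [row[:] for row in x]               # each crow's best-known position
          for _ in range(iters):
              for i in range(n_crows):
                  j = random.randrange(n_crows)     # crow i follows crow j
                  if random.random() >= ap:         # j is unaware of being followed
                      new = [xi + random.random() * fl * (mj - xi)
                             for xi, mj in zip(x[i], mem[j])]
                  else:                             # j is aware: i moves randomly
                      new = [random.uniform(lo, hi) for _ in range(dim)]
                  if all(lo <= v <= hi for v in new):
                      x[i] = new
                      if f(new) < f(mem[i]):
                          mem[i] = new
          return min(mem, key=f)

      # Usage: minimize the sphere function in five dimensions
      print(crow_search(lambda v: sum(t * t for t in v), dim=5, bounds=(-10, 10)))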

  12. BetaSCPWeb: side-chain prediction for protein structures using Voronoi diagrams and geometry prioritization.

    PubMed

    Ryu, Joonghyun; Lee, Mokwon; Cha, Jehyun; Laskowski, Roman A; Ryu, Seong Eon; Kim, Deok-Soo

    2016-07-08

    Many applications, such as protein design, homology modeling, flexible docking, etc. require the prediction of a protein's optimal side-chain conformations from just its amino acid sequence and backbone structure. Side-chain prediction (SCP) is an NP-hard energy minimization problem. Here, we present BetaSCPWeb which efficiently computes a conformation close to optimal using a geometry-prioritization method based on the Voronoi diagram of spherical atoms. Its outputs are visual, textual and PDB file format. The web server is free and open to all users at http://voronoi.hanyang.ac.kr/betascpweb with no login requirement. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  13. Fully Dynamic Bin Packing

    NASA Astrophysics Data System (ADS)

    Ivković, Zoran; Lloyd, Errol L.

    Classic bin packing seeks to pack a given set of items of possibly varying sizes into a minimum number of identical sized bins. A number of approximation algorithms have been proposed for this NP-hard problem for both the on-line and off-line cases. In this chapter we discuss fully dynamic bin packing, where items may arrive (Insert) and depart (Delete) dynamically. In accordance with standard practice for fully dynamic algorithms, it is assumed that the packing may be arbitrarily rearranged to accommodate arriving and departing items. The goal is to maintain an approximately optimal solution of provably high quality in a total amount of time comparable to that used by an off-line algorithm delivering a solution of the same quality.
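
    As a point of reference for the on-line setting discussed above, the classic first-fit heuristic below packs each arriving item into the first open bin with room; this is the standard baseline, not the fully dynamic algorithm of the chapter, which additionally handles departures and repacking.

      def first_fit(items, capacity=1.0):
          # Place each arriving item into the first bin that can still hold it
          bins = []
          for size in items:
              for b in bins:
                  if sum(b) + size <= capacity:
                      b.append(size)
                      break
              else:
                  bins.append([size])  # no bin fits: open a new one
          return bins

      print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))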

  14. A Unified View of Global Instability of Compressible Flow over Open Cavities

    DTIC Science & Technology

    2006-03-28

    in terms of number of steps realized by the DNS code per second (S/sec) as the number of processors (np) increases. For this comparison the "new...computations). It may clearly be seen that both solutions performed comparably well at low numbers of processors; however, as np increased, the Myrinet...has subsequently been designed, hard-coded and validated at nu modelling. Design characteristics of the code have been a) high-accuracy, b

  15. Dynamic cellular manufacturing system considering machine failure and workload balance

    NASA Astrophysics Data System (ADS)

    Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad

    2018-02-01

    Machines are a key element in a production system, and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the assignment of operators to the cells. The first objective minimizes the costs associated with the DCMS; the second optimizes labor utilization; and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is first validated with the GAMS software on small-sized problems and then solved by two well-known meta-heuristic methods, non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.

  16. A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem

    NASA Astrophysics Data System (ADS)

    Jolai, Fariborz; Assadipour, Ghazal

    Crew scheduling is one of the important problems of the airline industry. The problem is to assign crew members to a set of flights such that all the flights are covered. In a robust schedule, the assignment should be such that total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives conflict with each other, a multi-objective meta-heuristic called CellDE, a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions and enables them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.
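
    CellDE returns a set of non-dominated solutions; the following small Pareto filter (illustrative only, with made-up objective vectors for cost, delay, and utilization imbalance, all minimized) shows what "non-dominated" means here.

```python
# Minimal Pareto filter for minimization objectives: keep exactly the
# solutions that no other solution dominates.

def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# hypothetical objectives: (total cost, total delay, utilization imbalance)
cands = [(10, 4, 0.2), (8, 5, 0.3), (12, 3, 0.1), (11, 5, 0.4)]
print(pareto_front(cands))   # (11, 5, 0.4) is dominated by (10, 4, 0.2)
```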

  17. A 16-bit Coherent Ising Machine for One-Dimensional Ring and Cubic Graph Problems

    NASA Astrophysics Data System (ADS)

    Takata, Kenta; Marandi, Alireza; Hamerly, Ryan; Haribara, Yoshitaka; Maruo, Daiki; Tamate, Shuhei; Sakaguchi, Hiromasa; Utsunomiya, Shoko; Yamamoto, Yoshihisa

    2016-09-01

    Many tasks in modern life, such as planning efficient travel, image processing and optimizing integrated circuit design, are modeled as complex combinatorial optimization problems with binary variables. Such problems can be mapped to finding a ground state of the Ising Hamiltonian, and various physical systems have therefore been studied to emulate and solve this Ising problem. Recently, networks of mutually injected optical oscillators, called coherent Ising machines, have been developed as promising solvers for the problem, benefiting from programmability, scalability and room-temperature operation. Here, we report a 16-bit coherent Ising machine based on a network of time-division-multiplexed femtosecond degenerate optical parametric oscillators. The system experimentally achieves success rates of more than 99.6% for one-dimensional Ising ring and nondeterministic polynomial-time (NP) hard instances. The experimental and numerical results indicate that gradual pumping of the network, combined with the multiple spectral and temporal modes of the femtosecond pulses, can improve the computational performance of the Ising machine, offering a new path for tackling larger and more complex instances.
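
    As a classical cross-check of the kind of instance the machine samples, the sketch below brute-forces the ground state of a 16-spin one-dimensional Ising ring; the antiferromagnetic coupling choice J_{i,i+1} = -1 is our assumption, not taken from the paper.

```python
# Brute-force ground state of a 16-spin Ising ring:
# H = -sum_(i,j) J_ij * s_i * s_j with s_i in {-1, +1}.
# 2^16 = 65536 states are easily enumerable classically.
from itertools import product

N = 16
J = {(i, (i + 1) % N): -1.0 for i in range(N)}   # antiferromagnetic ring

def energy(s):
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

best = min(product((-1, 1), repeat=N), key=energy)
print(energy(best), best)   # alternating spins, E = -16
```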

  18. Redundancy allocation problem for k-out-of-n systems with a choice of redundancy strategies

    NASA Astrophysics Data System (ADS)

    Aghaei, Mahsa; Zeinal Hamadani, Ali; Abouei Ardakan, Mostafa

    2017-03-01

    Using redundant components is a common method to increase the reliability of a system; deciding how to allocate them is called the redundancy allocation problem (RAP). Some RAP studies have focused on k-out-of-n systems. However, all of these studies assumed predetermined active or standby strategies for each subsystem. In this paper, for the first time, we propose a k-out-of-n system with a choice of redundancy strategies. Therefore, a k-out-of-n series-parallel system is considered in which the redundancy strategy can be chosen for each subsystem. In other words, in the proposed model, the redundancy strategy is considered an additional decision variable, and an exact method based on integer programming is used to obtain the optimal solution of the problem. As the optimization of RAP belongs to the NP-hard class of problems, a modified version of the genetic algorithm (GA) is also developed. The exact method and the proposed GA are implemented on a well-known test problem, and the results demonstrate the efficiency of the new approach compared with previous studies.

  19. Chromosome structures: reduction of certain problems with unequal gene content and gene paralogs to integer linear programming.

    PubMed

    Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin

    2017-12-06

    Chromosome structure is a very limited model of the genome, including information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions are ignored. Although highly incomplete, such a structure can be used in many cases, e.g., to reconstruct phylogeny and evolutionary events, to identify gene synteny, regulatory elements and promoters (considering highly conserved elements), etc. Three problems are considered; all assume unequal gene content and the presence of gene paralogs. The distance problem is to determine the minimum number of operations required to transform one chromosome structure into another, together with the corresponding transformation itself, including the identification of paralogs in the two structures. We use the DCJ model, which is one of the most studied combinatorial rearrangement models. Double-, sesqui-, and single-operations as well as deletion and insertion of a chromosome region are considered in the model; the single operations comprise cut and join. In the reconstruction problem, a phylogenetic tree with chromosome structures in the leaves is given. It is necessary to assign structures to the inner nodes of the tree so as to minimize the sum of distances between the terminal structures of each edge, and to identify the mutual paralogs in a fairly large set of structures. A linear algorithm is known for the distance problem without paralogs, while the presence of paralogs makes it NP-hard. If paralogs are allowed but the insertion and deletion operations are missing (and special constraints are imposed), a reduction of the distance problem to integer linear programming is known. Apparently, the reconstruction problem is NP-hard even in the absence of paralogs. The problem of contigs is to find the optimal arrangement of each given set of contigs, which also includes the mutual identification of paralogs. We proved that these problems can be reduced to integer linear programming formulations, which allows them to be solved with a very special case of the integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were thus reduced to a very special case of integer linear programming, which constitutes a new method for their solution. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept into our model of the reconstruction.

  20. Enhancement in volatile organic compound sensitivity of aged Ag nanoparticle aggregates by plasma exposure

    NASA Astrophysics Data System (ADS)

    Hosomi, Kei; Ozaki, Koichi; Nishiyama, Fumitaka; Takahiro, Katsumi

    2018-01-01

    Silver nanoparticles (Ag NPs) tarnish easily upon exposure to ambient air and eventually lose their ability as a plasmonic sensor via weakened localized surface plasmon resonance (LSPR). We have demonstrated the enhancement in plasmonic sensitivity of tarnished Ag NP aggregates to vapors of volatile organic compounds (VOCs), such as ethanol and butanol, by Ar plasma exposure. The response of Ag NP aggregates to the VOC vapors was examined by measuring the change in optical extinction spectra before and after exposure to the vapors. The sensitivity of Ag NP aggregates decreased gradually when stored in ambient air. The performance of tarnished Ag NPs for ethanol sensing was recovered by exposure to argon (Ar) plasma for 15 s. Reduction of oxidized Ag to the metallic state was recognized, while morphological change was hardly noticeable after the plasma exposure. We conclude, therefore, that a compositional change, rather than a morphological one, on the Ag NP surfaces enhances the sensing ability of tarnished Ag NP aggregates to the VOC vapors.

  1. SLIDER: a generic metaheuristic for the discovery of correlated motifs in protein-protein interaction networks.

    PubMed

    Boyen, Peter; Van Dyck, Dries; Neven, Frank; van Ham, Roeland C H J; van Dijk, Aalt D J

    2011-01-01

    Correlated motif mining (CMM) is the problem of finding overrepresented pairs of patterns, called motifs, in sequences of interacting proteins. Algorithmic solutions for CMM thereby provide a computational method for predicting binding sites for protein interaction. In this paper, we adopt a motif-driven approach where the support of candidate motif pairs is evaluated in the network. We experimentally establish the superiority of the chi-square-based support measure over other support measures. Furthermore, we show that CMM is an NP-hard problem for a large class of support measures (including chi-square) and reformulate the search for correlated motifs as a combinatorial optimization problem. We then present the generic metaheuristic SLIDER, which uses steepest ascent with a neighborhood function based on sliding motifs and employs the chi-square-based support measure. We show that SLIDER outperforms existing motif-driven CMM methods and scales to large protein-protein interaction networks. The SLIDER implementation and the data used in the experiments are available on http://bioinformatics.uhasselt.be.

  2. A new mathematical modeling for pure parsimony haplotyping problem.

    PubMed

    Feizabadi, R; Bagherian, M; Vaziri, H R; Salahi, M

    2016-11-01

    The pure parsimony haplotyping (PPH) problem is important in bioinformatics because rational haplotype inference plays an important role in the analysis of genetic data and the mapping of complex genetic diseases such as Alzheimer's disease, heart disorders, etc. Haplotypes and genotypes are m-length sequences. Although several integer programming models have already been presented for the PPH problem, its NP-hardness renders those models ineffective on real instances, especially instances with many heterozygous sites. In this paper, we assign a corresponding number to each haplotype and genotype and, based on those numbers, set up a mixed integer programming model. Using numbers instead of sequences leads to a model less complex than previous ones, in that it contains neither constraints nor variables corresponding to heterozygous nucleotide sites. Experimental results confirm the efficiency of the new model in producing better solutions than two state-of-the-art haplotyping approaches. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Binary Bees Algorithm - bioinspiration from the foraging mechanism of honeybees to optimize a multiobjective multidimensional assignment problem

    NASA Astrophysics Data System (ADS)

    Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan

    2011-11-01

    The simultaneous mission assignment and home allocation for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and multiple constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search for exploration and exploitation, respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solutions selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate in detail its single-objective optimization, optimization effectiveness (indexed by the S-metric and C-metric) and optimization efficiency (indexed by computational burden and CPU time). The BBA outperformed its competitors in almost all the quantitative indices. Hence, the overall scheme, and particularly the search-history-adapted global search strategy, was validated.
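
    The elites-centred local search explores a Hamming neighbourhood of a binary solution; a minimal sketch of generating that neighbourhood (distance-1 flips of a hypothetical 0/1 assignment vector; constraint handling omitted):

```python
# Enumerate all binary assignments within Hamming distance 1 of an
# elite solution: flip exactly one bit at a time.

def hamming_neighbours(bits):
    for i in range(len(bits)):
        yield bits[:i] + (1 - bits[i],) + bits[i + 1:]

elite = (1, 0, 1, 1, 0)
for nb in hamming_neighbours(elite):
    print(nb)   # five neighbours, one bit flipped each
```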

  4. Quantum proofs can be verified using only single-qubit measurements

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Nagaj, Daniel; Schuch, Norbert

    2016-02-01

    Quantum Merlin Arthur (QMA) is the class of problems which, though potentially hard to solve, have a quantum solution that can be verified efficiently using a quantum computer. It thus forms a natural quantum version of the classical complexity class NP (and its probabilistic variant MA, Merlin-Arthur games), where the verifier has only classical computational resources. In this paper, we study what happens when we restrict the quantum resources of the verifier to the bare minimum: individual measurements on single qubits received as they come, one by one. We find that despite this grave restriction, it is still possible to soundly verify any problem in QMA for the verifier with the minimum quantum resources possible, without using any quantum memory or multiqubit operations. We provide two independent proofs of this fact, based on measurement-based quantum computation and the local Hamiltonian problem. The former construction also applies to QMA1, i.e., QMA with one-sided error.

  5. Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment

    NASA Astrophysics Data System (ADS)

    Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.

    2017-03-01

    Lot streaming in a hybrid flow shop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach for lot streaming based on critical machine consideration for a two-stage hybrid flow shop. The first stage has two identical parallel machines and the second stage has only one machine; the second-stage machine is considered critical for valid reasons, and this kind of problem is known to be NP-hard. A mathematical model is developed for the selected problem. The simulation modelling and analysis were carried out in the Extend V6 software. A heuristic is developed for obtaining an optimal lot streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments. All possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule consistently in all eleven cases. A procedure for identifying the best lot streaming strategy is suggested.

  6. Approximate ground states of the random-field Potts model from graph cuts

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay

    2018-05-01

    While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.
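
    The polynomial binary subroutine alluded to above, the exact random-field Ising ground state via s-t minimum cut, can be sketched with networkx as follows; this is a standard construction under the convention H = -J Σ_⟨ij⟩ s_i s_j - Σ_i h_i s_i, not the authors' Potts embedding itself, and the instance is made up.

```python
# Exact random-field Ising ground state by s-t min cut. With binary
# x_i = (s_i + 1)/2, disagreeing neighbours cost 2J and each field h_i
# penalizes the disfavoured state by 2|h_i| (constants dropped).
import networkx as nx

def rfim_ground_state(edges, fields, J=1.0):
    G = nx.DiGraph()
    G.add_node('s'), G.add_node('t')
    for i, h in fields.items():
        if h > 0:                      # prefers s_i = +1: penalize x_i = 0
            G.add_edge(i, 't', capacity=2 * h)
        elif h < 0:                    # prefers s_i = -1: penalize x_i = 1
            G.add_edge('s', i, capacity=-2 * h)
    for i, j in edges:                 # disagreeing neighbours cost 2J
        G.add_edge(i, j, capacity=2 * J)
        G.add_edge(j, i, capacity=2 * J)
    _, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
    return {i: (+1 if i in sink_side else -1) for i in fields}

# 4-cycle; the strong field on spin 0 drags the whole ring to -1
spins = rfim_ground_state([(0, 1), (1, 2), (2, 3), (3, 0)],
                          {0: -3.0, 1: 0.5, 2: 0.5, 3: 0.5})
print(spins)   # {0: -1, 1: -1, 2: -1, 3: -1}
```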

  7. Meta-RaPS Algorithm for the Aerial Refueling Scheduling Problem

    NASA Technical Reports Server (NTRS)

    Kaplan, Sezgin; Arin, Arif; Rabadi, Ghaith

    2011-01-01

    The Aerial Refueling Scheduling Problem (ARSP) can be defined as determining the refueling completion times for each fighter aircraft (job) on multiple tankers (machines). ARSP assumes that jobs have different release times and due dates, and the total weighted tardiness is used to evaluate a schedule's quality. Therefore, ARSP can be modeled as parallel machine scheduling with release times and due dates to minimize the total weighted tardiness. Since ARSP is NP-hard, it is more appropriate to develop approximate or heuristic algorithms to obtain solutions in reasonable computation times. In this paper, the Meta-RaPS-ATC algorithm is implemented to create high-quality solutions. Meta-RaPS (Meta-heuristic for Randomized Priority Search) is a recent and promising metaheuristic that is applied by introducing randomness to a construction heuristic. The Apparent Tardiness Cost (ATC) rule, which is a good rule for scheduling problems with a tardiness objective, is used to construct initial solutions, which are then improved by an exchange operation. Results are presented for generated instances.
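
    The paper does not restate the ATC index; in its standard form (our assumption), job j with weight w_j, processing time p_j and due date d_j has priority I_j(t) = (w_j/p_j)·exp(-max(d_j - p_j - t, 0)/(k·p̄)) at time t, where p̄ is the average processing time and k a look-ahead parameter, as sketched below with made-up jobs.

```python
# Standard ATC priority index: pick the job with the largest index next.
import math

def atc_index(w, p, d, t, p_bar, k=2.0):
    return (w / p) * math.exp(-max(d - p - t, 0.0) / (k * p_bar))

jobs = [(1.0, 3.0, 10.0), (2.0, 4.0, 12.0), (1.0, 2.0, 6.0)]  # (w, p, d)
p_bar = sum(p for _, p, _ in jobs) / len(jobs)
t = 0.0
best = max(jobs, key=lambda j: atc_index(*j, t, p_bar))
print(best)   # (1.0, 2.0, 6.0): the short, tight-due-date job goes first
```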

  8. TemperSAT: A new efficient fair-sampling random k-SAT solver

    NASA Astrophysics Data System (ADS)

    Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.

    The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.

  9. Approximation algorithm for the problem of partitioning a sequence into clusters

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Mikhailova, L. V.; Khamidullin, S. A.; Khandeev, V. I.

    2017-08-01

    We consider the problem of partitioning a finite sequence of Euclidean points into a given number of clusters (subsequences) using the criterion of the minimal sum (over all clusters) of intracluster sums of squared distances from the elements of the clusters to their centers. It is assumed that the center of one of the desired clusters is at the origin, while the center of each of the other clusters is unknown and determined as the mean value over all elements in this cluster. Additionally, the partition obeys two structural constraints on the indices of sequence elements contained in the clusters with unknown centers: (1) the concatenation of the indices of elements in these clusters is an increasing sequence, and (2) the difference between an index and the preceding one is bounded above and below by prescribed constants. It is shown that this problem is strongly NP-hard. A 2-approximation algorithm is constructed that runs in polynomial time for a fixed number of clusters.

  10. The fast algorithm of spark in compressive sensing

    NASA Astrophysics Data System (ADS)

    Xie, Meihua; Yan, Fengxia

    2017-01-01

    Compressed Sensing (CS) is an advanced theory of signal sampling and reconstruction. In CS theory, the condition under which a signal can be reconstructed is an important theoretical problem, and the spark is a good index for studying it. But the computation of the spark is NP-hard. In this paper, we study the problem of computing the spark. For some special matrices, for example, Gaussian random matrices and 0-1 random matrices, we obtain some conclusions. Furthermore, for a Gaussian random matrix with fewer rows than columns, we prove that its spark equals the number of its rows plus one with probability 1. For a general matrix, two methods are given to compute its spark: direct search and dual-tree search. By simulating 24 Gaussian random matrices and 18 0-1 random matrices, we tested the computation time of these two methods. Numerical results showed that the dual-tree search method is more efficient than direct search, especially for matrices with roughly as many rows as columns.
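
    The direct-search method can be sketched as brute force over column subsets: the spark is the size of the smallest linearly dependent set of columns (illustrative only; the rank tolerance and example matrix are ours).

```python
# Direct-search spark: smallest k such that some k columns are linearly
# dependent. Exponential in general, matching the NP-hardness above.
from itertools import combinations
import numpy as np

def spark(A, tol=1e-10):
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for cols in combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, list(cols)], tol=tol) < k:
                return k
    # no dependent subset of size <= m: any m+1 columns are dependent
    return m + 1 if n > m else np.inf

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])
print(spark(A))   # 3: all three columns are dependent, but no two are
```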

  11. Facility Layout Problems Using Bays: A Survey

    NASA Astrophysics Data System (ADS)

    Davoudpour, Hamid; Jaafari, Amir Ardestani; Farahani, Leila Najafabadi

    2010-06-01

    Layout design is one of the most important activities performed by industrial engineers, and most of these problems are NP-hard. In a basic layout design, each cell is represented by a rectilinear, but not necessarily convex, polygon. The set of fully packed adjacent polygons is known as a block layout (Asef-Vaziri and Laporte 2007). Block layouts are divided into slicing-tree layouts and bay layouts. In a bay layout, departments are located in vertical columns or horizontal rows, called bays. Bay layouts are used in real-world settings, notably in semiconductor facilities and aisle design. There are several reviews of facility layout; however, none of them focuses on bay layout. The literature analysis given here is not limited to specific considerations about bay layout design. We present a state-of-the-art review of bay layout, considering issues such as the objectives used, solution techniques, and integration methods.

  12. Efficient greedy algorithms for economic manpower shift planning

    NASA Astrophysics Data System (ADS)

    Nearchou, A. C.; Giannikos, I. C.; Lagodimos, A. G.

    2015-01-01

    Consideration is given to the economic manpower shift planning (EMSP) problem, an NP-hard capacity planning problem appearing in various industrial settings including the packing stage of production in process industries and maintenance operations. EMSP aims to determine the manpower needed in each available workday shift of a given planning horizon so as to complete a set of independent jobs at minimum cost. Three greedy heuristics are presented for the EMSP solution. These practically constitute adaptations of an existing algorithm for a simplified version of EMSP which had shown excellent performance in terms of solution quality and speed. Experimentation shows that the new algorithms perform very well in comparison to the results obtained by both the CPLEX optimizer and an existing metaheuristic. Statistical analysis is deployed to rank the algorithms in terms of their solution quality and to identify the effects that critical planning factors may have on their relative efficiency.

  13. FPFH-based graph matching for 3D point cloud registration

    NASA Astrophysics Data System (ADS)

    Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua

    2018-04-01

    Correspondence detection is a vital step in point cloud registration and can help to obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm to obtain a final group of correct correspondences. Finally, we present a novel set partitioning method that transforms the NP-hard optimization problem into an O(n^3)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method obtains better results in terms of both accuracy and time cost compared with other point cloud registration methods.
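
    The annealing step itself is generic; a minimal sketch of the acceptance rule on a stand-in one-dimensional objective (the paper's objective function and move set are problem-specific and not reproduced here):

```python
# Generic simulated-annealing loop: accept improving moves always and
# worsening moves with probability exp(-delta/T), under geometric cooling.
import math
import random

def accept(delta, T):
    """Accept a move with energy change delta at temperature T."""
    return delta < 0 or random.random() < math.exp(-delta / T)

random.seed(1)
T, x = 1.0, 0.0
energy = lambda x: (x - 3.0) ** 2      # stand-in objective
for step in range(2000):
    cand = x + random.uniform(-0.5, 0.5)
    if accept(energy(cand) - energy(x), T):
        x = cand
    T *= 0.995                          # geometric cooling schedule
print(round(x, 2))                      # typically lands near the minimizer 3.0
```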

  14. Fast Construction of Near Parsimonious Hybridization Networks for Multiple Phylogenetic Trees.

    PubMed

    Mirzaei, Sajad; Wu, Yufeng

    2016-01-01

    Hybridization networks represent plausible evolutionary histories of species that are affected by reticulate evolutionary processes. An established computational problem on hybridization networks is constructing the most parsimonious hybridization network such that each of the given phylogenetic trees (called gene trees) is "displayed" in the network. There have been several previous approaches, including an exact method and several heuristics, for this NP-hard problem. However, the exact method is only applicable to a limited range of data, and the heuristic methods can be less accurate and sometimes slow. In this paper, we develop a new algorithm for constructing near-parsimonious networks for multiple binary gene trees. This method is more efficient for large numbers of gene trees than previous heuristics, and it produces more parsimonious results on many simulated datasets, as well as a real biological dataset, than a previous method. We also show that our method produces topologically more accurate networks for many datasets.

  15. Single molecule sequencing-guided scaffolding and correction of draft assemblies.

    PubMed

    Zhu, Shenglong; Chen, Danny Z; Emrich, Scott J

    2017-12-06

    Although single molecule sequencing is still improving, the lengths of the generated sequences are an inherent advantage in genome assembly. Prior work that utilizes long reads to conduct genome assembly has mostly focused on correcting sequencing errors and improving the contiguity of de novo assemblies. We propose a disassembling-reassembling approach for both correcting structural errors in the draft assembly and scaffolding a target assembly based on error-corrected single molecule sequences. To achieve this goal, we formulate a maximum alternating path cover problem. We prove that this problem is NP-hard, and solve it by a 2-approximation algorithm. Our experimental results show that our approach can improve the structural correctness of target assemblies at the cost of some contiguity, even with smaller amounts of long reads. In addition, our reassembling process can also serve as a competitive scaffolder relative to well-established assembly benchmarks.

  16. The Complexity of Folding Self-Folding Origami

    NASA Astrophysics Data System (ADS)

    Stern, Menachem; Pinson, Matthew B.; Murugan, Arvind

    2017-10-01

    Why is it difficult to refold a previously folded sheet of paper? We show that even crease patterns with only one designed folding motion inevitably contain an exponential number of "distractor" folding branches accessible from a bifurcation at the flat state. Consequently, refolding a sheet requires finding the ground state in a glassy energy landscape with an exponential number of other attractors of higher energy, much like in models of protein folding (Levinthal's paradox) and other NP-hard satisfiability (SAT) problems. As in these problems, we find that refolding a sheet requires actuation at multiple carefully chosen creases. We show that seeding successful folding in this way can be understood in terms of subpatterns that fold when cut out ("folding islands"). Besides providing guidelines for the placement of active hinges in origami applications, our results point to fundamental limits on the programmability of energy landscapes in sheets.

  17. A bi-objective model for robust yard allocation scheduling for outbound containers

    NASA Astrophysics Data System (ADS)

    Liu, Changchun; Zhang, Canrong; Zheng, Li

    2017-01-01

    This article examines the yard allocation problem for outbound containers, with consideration of uncertainty factors, mainly the arrival and operation times of calling vessels. Based on the time-buffer insertion method, a bi-objective model is constructed to minimize the total operational cost and to maximize robustness against uncertainty. Due to the NP-hardness of the constructed model, a two-stage heuristic is developed to solve the problem. In the first stage, initial solutions are obtained by a greedy algorithm that looks n steps ahead, with the uncertainty factors set to their respective expected values; in the second stage, based on the solutions obtained in the first stage and with consideration of the uncertainty factors, a neighbourhood search heuristic is employed to generate robust solutions that better withstand fluctuations in the uncertainty factors. Finally, extensive numerical experiments are conducted to test the performance of the proposed method.

  18. Typical performance of approximation algorithms for NP-hard problems

    NASA Astrophysics Data System (ADS)

    Takabe, Satoshi; Hukushima, Koji

    2016-11-01

    Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
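
    Of the three algorithms, leaf removal is the simplest to sketch: while a degree-1 vertex exists, put its neighbour into the cover and delete both; a nonempty leftover core is where the heuristic stops being exact (the example graph is ours).

```python
# Leaf-removal heuristic for minimum vertex cover on an undirected graph
# given as an adjacency dict. Returns the partial cover and any core
# (subgraph with no leaves) on which the heuristic gets stuck.

def leaf_removal(adj):
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    cover = set()
    while True:
        leaf = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if leaf is None:
            break
        (u,) = adj[leaf]               # the leaf's unique neighbour
        cover.add(u)
        for w in list(adj[u]):         # delete u (and the leaf with it)
            adj[w].discard(u)
        del adj[u], adj[leaf]
    core = {v for v, ns in adj.items() if ns}
    return cover, core                 # nonempty core => heuristic stuck

# path 1-2-3-4: leaf removal finds the optimal cover {2, 4}
cover, core = leaf_removal({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}})
print(cover, core)                     # {2, 4} set()
```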

  19. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has attracted great attention in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem: it has become a typical multi-constraint, multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average complexity of test. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimization of the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-time hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. At last, a Pareto-optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint, multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can locate fault positions precisely by solving the multi-objective fault diagnosis model, providing a new method for multi-constraint, multi-objective fault diagnosis and reasoning of complex systems.

  20. Connected Component Model for Multi-Object Tracking.

    PubMed

    He, Zhenyu; Li, Xin; You, Xinge; Tao, Dacheng; Tang, Yuan Yan

    2016-08-01

    In multi-object tracking, it is critical to explore the data associations by exploiting the temporal information from a sequence of frames rather than the information from just the adjacent two frames. Since straightforwardly obtaining data associations from multiple frames is an NP-hard multi-dimensional assignment (MDA) problem, most existing methods solve this MDA problem either by developing complicated approximate algorithms or by simplifying MDA to a 2D assignment problem based upon information extracted only from adjacent frames. In this paper, we show that the relation between associations of two observations is an equivalence relation in the data association problem, based on the spatial-temporal constraint that the trajectories of different objects must be disjoint. Therefore, the MDA problem can be equivalently divided into independent subproblems by equivalence partitioning. In contrast to existing works for solving the MDA problem, we develop a connected component model (CCM) by exploiting the constraints of the data association and the equivalence relation on the constraints. Based upon CCM, we can efficiently obtain the global solution of the MDA problem for multi-object tracking by optimizing a sequence of independent data association subproblems. Experiments on challenging public data sets demonstrate that our algorithm outperforms the state-of-the-art approaches.
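
    The equivalence-partitioning idea can be sketched with union-find: link observations that could lie on the same trajectory and solve each connected component independently (the adjacent-frame gating rule below is a simplistic stand-in, not CCM's actual constraint set).

```python
# Partition observations into independent association subproblems by
# computing connected components with union-find (path halving).

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

# observations: (frame, position); link detections in adjacent frames
# that lie within a gating distance of 1.0
obs = [(0, 0.0), (0, 9.0), (1, 0.4), (1, 9.2), (2, 0.7)]
uf = UnionFind(len(obs))
for i, (fi, xi) in enumerate(obs):
    for j, (fj, xj) in enumerate(obs):
        if fj == fi + 1 and abs(xi - xj) < 1.0:
            uf.union(i, j)

groups = {}
for i in range(len(obs)):
    groups.setdefault(uf.find(i), []).append(i)
print(list(groups.values()))   # [[0, 2, 4], [1, 3]]
```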

  1. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models keep growing to represent real production systems more accurately. Since flow shop scheduling is an NP-hard problem, metaheuristics are the most suitable solution methods. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we need to modify PSO to fit the problem. The modification is done using a probability transition matrix mechanism. To handle the multi-objective problem, we use Pareto optimality (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set has a higher probability of finding the optimal solution; moreover, the MPSO solution set is closer to the optimal solution.
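
    For contrast with the paper's probability-transition-matrix mechanism, the classic sigmoid-based binary PSO update (Kennedy and Eberhart) is sketched below as a simpler stand-in: the sigmoid of the velocity gives the probability that a bit is set.

```python
# Classic binary-PSO discretization: velocities stay continuous, and
# sigmoid(velocity) is the probability that the corresponding bit is 1.
import math
import random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def update_bit(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    v = (w * v + c1 * random.random() * (pbest - x)
               + c2 * random.random() * (gbest - x))
    x = 1 if random.random() < sigmoid(v) else 0
    return v, x

random.seed(0)
v, x = 0.0, 0
for _ in range(5):
    v, x = update_bit(v, x, pbest=1, gbest=1)
print(v, x)   # velocity drifts positive, so the bit tends toward 1
```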

  2. Surface chemistry of gold nanoparticles determines the biocorona composition impacting cellular uptake, toxicity and gene expression profiles in human endothelial cells.

    PubMed

    Chandran, Parwathy; Riviere, Jim E; Monteiro-Riviere, Nancy A

    2017-05-01

    This study investigated the role of nanoparticle size and surface chemistry on biocorona composition and its effect on uptake, toxicity and cellular responses in human umbilical vein endothelial cells (HUVEC), employing 40 and 80 nm gold nanoparticles (AuNP) with branched polyethyleneimine (BPEI), lipoic acid (LA) and polyethylene glycol (PEG) coatings. Proteomic analysis identified 59 hard corona proteins among the various AuNP, revealing largely surface chemistry-dependent signature adsorbomes exhibiting human serum albumin (HSA) abundance. Size distribution analysis revealed the relative instability and aggregation inducing potential of bare and corona-bound BPEI-AuNP, over LA- and PEG-AuNP. Circular dichroism analysis showed surface chemistry-dependent conformational changes of proteins binding to AuNP. Time-dependent uptake of bare, plasma corona (PC) and HSA corona-bound AuNP (HSA-AuNP) showed significant reduction in uptake with PC formation. Cell viability studies demonstrated dose-dependent toxicity of BPEI-AuNP. Transcriptional profiling studies revealed 126 genes, from 13 biological pathways, to be differentially regulated by 40 nm bare and PC-bound BPEI-AuNP (PC-BPEI-AuNP). Furthermore, PC formation relieved the toxicity of cationic BPEI-AuNP by modulating expression of genes involved in DNA damage and repair, heat shock response, mitochondrial energy metabolism, oxidative stress and antioxidant response, and ER stress and unfolded protein response cascades, which were aberrantly expressed in bare BPEI-AuNP-treated cells. NP surface chemistry is shown to play the dominant role over size in determining the biocorona composition, which in turn modulates cell uptake, and biological responses, consequently defining the potential safety and efficacy of nanoformulations.

  3. Static electric dipole polarizabilities of tri- and tetravalent U, Np, and Pu ions.

    PubMed

    Parmar, Payal; Peterson, Kirk A; Clark, Aurora E

    2013-11-21

    High-quality static electric dipole polarizabilities have been determined for the ground states of the hard-sphere cations of U, Np, and Pu in the III and IV oxidation states. The polarizabilities have been calculated using the numerical finite field technique in a four-component relativistic framework. Methods including Fock-space coupled cluster (FSCC) and Kramers-restricted configuration interaction (KRCI) have been performed in order to account for electron correlation effects. Comparisons between polarizabilities calculated using Dirac-Hartree-Fock (DHF), FSCC, and KRCI methods have been made using both triple- and quadruple-ζ basis sets for U(4+). In addition to the ground state, this study also reports the polarizability data for the first two excited states of U(3+/4+), Np(3+/4+), and Pu(3+/4+) ions at different levels of theory. The values reported in this work are the most accurate to date calculations for the dipole polarizabilities of the hard-sphere tri- and tetravalent actinide ions and may serve as reference values, aiding in the calculation of various electronic and response properties (for example, intermolecular forces, optical properties, etc.) relevant to the nuclear fuel cycle and material science applications.
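
    The numerical finite-field technique extracts polarizabilities from field-perturbed total energies; in its simplest second-order central-difference form (a standard textbook expression, not necessarily this paper's exact working equation):

```latex
% From E(F) = E(0) - \mu_z F_z - \tfrac{1}{2}\alpha_{zz} F_z^2 - \dots,
% the odd (dipole) term cancels in the central difference:
\alpha_{zz} \approx -\frac{E(+F_z) - 2\,E(0) + E(-F_z)}{F_z^{2}}
```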

  4. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important kind of scheduling problem. To achieve an optimization goal such as the shortest duration, the smallest cost, resource balance and so on, the start and finish of all tasks must be arranged while satisfying the project's timing and resource constraints. In theory, the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems are special cases of RCPSP, such as job shop scheduling, flow shop scheduling and so on. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results. Many scholars have also studied improved genetic algorithms for the RCPSP, enabling it to be solved more efficiently and accurately. However, these studies include no optimization of the main parameters of the genetic algorithm; generally, parameters are chosen empirically, which cannot guarantee that they are optimal. In this paper, we address the blind selection of parameters in the process of solving the RCPSP: we perform a sampling analysis, establish a proxy model, and ultimately solve for the optimal parameters.

  5. A noniterative greedy algorithm for multiframe point correspondence.

    PubMed

    Shafique, Khurram; Shah, Mubarak

    2005-01-01

    This paper presents a framework for finding point correspondences in monocular image sequences over multiple frames. The general problem of multiframe point correspondence is NP-hard for three or more frames. A polynomial time algorithm for a restriction of this problem is presented and is used as the basis of the proposed greedy algorithm for the general problem. The greedy nature of the proposed algorithm allows it to be used in real-time systems for tracking and surveillance, etc. In addition, the proposed algorithm deals with the problems of occlusion, missed detections, and false positives by using a single noniterative greedy optimization scheme and, hence, reduces the complexity of the overall algorithm as compared to most existing approaches where multiple heuristics are used for the same purpose. While most greedy algorithms for point tracking do not allow for entry and exit of the points from the scene, this is not a limitation for the proposed algorithm. Experiments with real and synthetic data over a wide range of scenarios and system parameters are presented to validate the claims about the performance of the proposed algorithm.

  6. Efficient RNA structure comparison algorithms.

    PubMed

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative-addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm based on binary search over this suffix array has been proposed. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of given multiple RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.

  7. Scheduling algorithm for flow shop with two batch-processing machines and arbitrary job sizes

    NASA Astrophysics Data System (ADS)

    Cheng, Bayi; Yang, Shanlin; Hu, Xiaoxuan; Li, Kai

    2014-03-01

    This article considers the problem of scheduling two batch-processing machines in a flow shop where the jobs have arbitrary sizes and the machines have limited capacity. The jobs are processed in batches, and the total size of the jobs in each batch cannot exceed the machine capacity. Once a batch is being processed, no interruption is allowed until all the jobs in it are completed. The problem of minimising makespan is NP-hard in the strong sense. First, we present a mathematical model of the problem as an integer programme. We show the scale of the feasible solution space of the problem and provide optimality properties. Then, we propose a polynomial-time algorithm with running time O(n log n): the jobs are first assigned to feasible batches and the batches are then scheduled on the machines. For the general case, we prove that the proposed algorithm has a performance guarantee of 4. For the special case where the processing times of each job on the two machines satisfy p1j = a·p2j, the performance guarantee is ? for a > 0.

  8. Sequential Test Strategies for Multiple Fault Isolation

    NASA Technical Reports Server (NTRS)

    Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.

    1997-01-01

    In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.

  9. Quantum Optimization of Fully Connected Spin Glasses

    NASA Astrophysics Data System (ADS)

    Venturelli, Davide; Mandrà, Salvatore; Knysh, Sergey; O'Gorman, Bryan; Biswas, Rupak; Smelyanskiy, Vadim

    2015-07-01

    Many NP-hard problems can be seen as the task of finding a ground state of a disordered highly connected Ising spin glass. If solutions are sought by means of quantum annealing, it is often necessary to represent those graphs in the annealer's hardware by means of the graph-minor embedding technique, generating a final Hamiltonian consisting of coupled chains of ferromagnetically bound spins, whose binding energy is a free parameter. In order to investigate the effect of embedding on problems of interest, the fully connected Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two™ annealer using up to 270 qubits interacting on a Chimera-type graph. We present the best embedding prescriptions for encoding the Sherrington-Kirkpatrick problem in the Chimera graph. The results indicate that the optimal choice of embedding parameters could be associated with the emergence of the spin-glass phase of the embedded problem, whose presence was previously uncertain. This optimal parameter setting allows the performance of the quantum annealer to compete with (and potentially outperform, in the absence of analog control errors) optimized simulated annealing algorithms.

  10. New optimization model for routing and spectrum assignment with nodes insecurity

    NASA Astrophysics Data System (ADS)

    Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli

    2017-04-01

    By adopting orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible, variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two concerns to investigate the routing and spectrum assignment problem with guaranteed security in an elastic optical network, and establish a new optimization model that minimizes the maximum index of the used frequency slots and is used to determine an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed. The heuristic algorithm is first used to sort the connection requests, and then the genetic algorithm is designed to look for an optimal routing and spectrum assignment scheme. In the genetic algorithm, tailor-made crossover, mutation and local search operators are designed. Moreover, simulation experiments are conducted with three heuristic strategies, and the experimental results indicate the effectiveness of the proposed model and algorithm framework.

  11. Computing Role Assignments of Proper Interval Graphs in Polynomial Time

    NASA Astrophysics Data System (ADS)

    Heggernes, Pinar; van't Hof, Pim; Paulusma, Daniël

    A homomorphism from a graph G to a graph R is locally surjective if its restriction to the neighborhood of each vertex of G is surjective. Such a homomorphism is also called an R-role assignment of G. Role assignments have applications in distributed computing, social network theory, and topological graph theory. The Role Assignment problem has as input a pair of graphs (G,R) and asks whether G has an R-role assignment. This problem is NP-complete already on input pairs (G,R) where R is a path on three vertices. So far, the only known non-trivial tractable case consists of input pairs (G,R) where G is a tree. We present a polynomial time algorithm that solves Role Assignment on all input pairs (G,R) where G is a proper interval graph. Thus we identify the first graph class other than trees on which the problem is tractable. As a complementary result, we show that the problem is Graph Isomorphism-hard on chordal graphs, a superclass of proper interval graphs and trees.

  12. The Subset Sum game

    PubMed Central

    Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim

    2014-01-01

    In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem. PMID:25844012
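
    The first of the two heuristic strategies analysed is naturally greedy; one natural reading (our sketch, with made-up item weights) is: on each turn, take your largest remaining item that still fits the residual capacity, and pass otherwise.

```python
# Greedy play of a Subset Sum game: each agent, in turn, includes its
# largest remaining item that fits; the game ends when no item of
# either agent fits the residual capacity.

def greedy_pick(my_items, remaining_capacity):
    fitting = [w for w in my_items if w <= remaining_capacity]
    return max(fitting) if fitting else None

def play(items_a, items_b, capacity):
    items = [sorted(items_a), sorted(items_b)]
    gains, turn = [0, 0], 0
    while True:
        if all(greedy_pick(its, capacity) is None for its in items):
            break                       # nothing fits for either agent
        pick = greedy_pick(items[turn], capacity)
        if pick is not None:            # otherwise this agent passes
            items[turn].remove(pick)
            gains[turn] += pick
            capacity -= pick
        turn = 1 - turn
    return gains

print(play([7, 5, 3], [6, 4, 2], capacity=12))   # [7, 4]
```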

  13. Finding long chains in kidney exchange using the traveling salesman problem.

    PubMed

    Anderson, Ross; Ashlagi, Itai; Gamarnik, David; Roth, Alvin E

    2015-01-20

    As of May 2014 there were more than 100,000 patients on the waiting list for a kidney transplant from a deceased donor. Although the preferred treatment is a kidney transplant, every year there are fewer donors than new patients, so the wait for a transplant continues to grow. To address this shortage, kidney paired donation (KPD) programs allow patients with living but biologically incompatible donors to exchange donors through cycles or chains initiated by altruistic (nondirected) donors, thereby increasing the supply of kidneys in the system. In many KPD programs a centralized algorithm determines which exchanges will take place to maximize the total number of transplants performed. This optimization problem has proven challenging both in theory, because it is NP-hard, and in practice, because the algorithms previously used were unable to optimally search over all long chains. We give two new algorithms that use integer programming to optimally solve this problem, one of which is inspired by the techniques used to solve the traveling salesman problem. These algorithms provide the tools needed to find optimal solutions in practice.

  15. Surveillance of a 2D Plane Area with 3D Deployed Cameras

    PubMed Central

    Fu, Yi-Ge; Zhou, Jie; Deng, Lei

    2014-01-01

    As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, and a cost as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under more realistic assumptions: (1) deploy the cameras in 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to obtain maximum visual coverage under additional constraints, such as the cameras' field of view (FOV) and minimum resolution. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regularization term in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353

  16. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
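
    The QUBO target of the mapping is E(x) = xᵀQx over binary x; the toy instance below (an arbitrary upper-triangular Q, not from the paper) shows the form and checks the minimum by brute force.

```python
# Evaluate and brute-force a small QUBO: minimize E(x) = x^T Q x over
# binary vectors x. Practical only for small n; an AQO targets large n.
from itertools import product
import numpy as np

Q = np.array([[-1.0,  2.0,  0.0],
              [ 0.0, -1.0,  2.0],
              [ 0.0,  0.0, -1.0]])     # upper-triangular QUBO matrix

def qubo_energy(x, Q):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

best = min(product((0, 1), repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))      # (1, 0, 1) with energy -2.0
```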

  17. On the Complexity of Duplication-Transfer-Loss Reconciliation with Non-Binary Gene Trees.

    PubMed

    Kordi, Misagh; Bansal, Mukul S

    2017-01-01

    Duplication-Transfer-Loss (DTL) reconciliation has emerged as a powerful technique for studying gene family evolution in the presence of horizontal gene transfer. DTL reconciliation takes as input a gene family phylogeny and the corresponding species phylogeny, and reconciles the two by postulating speciation, gene duplication, horizontal gene transfer, and gene loss events. Efficient algorithms exist for finding optimal DTL reconciliations when the gene tree is binary. However, gene trees are frequently non-binary. With such non-binary gene trees, the reconciliation problem seeks to find a binary resolution of the gene tree that minimizes the reconciliation cost. Given the prevalence of non-binary gene trees, many efficient algorithms have been developed for this problem in the context of the simpler Duplication-Loss (DL) reconciliation model. Yet, no efficient algorithms exist for DTL reconciliation with non-binary gene trees and the complexity of the problem remains unknown. In this work, we resolve this open question by showing that the problem is, in fact, NP-hard. Our reduction applies to both the dated and undated formulations of DTL reconciliation. By resolving this long-standing open problem, this work will spur the development of both exact and heuristic algorithms for this important problem.

  18. Evaluating clustering methods within the Artificial Ecosystem Algorithm and their application to bike redistribution in London.

    PubMed

    Adham, Manal T; Bentley, Peter J

    2016-08-01

    This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve it exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and to adapt to changing problems. In the AEA, a problem is first decomposed into its constituent sub-components; these then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA, which groups stations according to journey flows, and the adaptive AEA, which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles, validating the proposed approach and demonstrating its potential effectiveness. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Sorting by Cuts, Joins, and Whole Chromosome Duplications.

    PubMed

    Zeira, Ron; Shamir, Ron

    2017-02-01

    Genome rearrangement problems have been extensively studied due to their importance in biology. Most studied models assumed a single copy per gene. However, in reality, duplicated genes are common, most notably in cancer. In this study, we take a step toward handling duplicated genes by considering a model that allows the atomic operations of cut, join, and whole chromosome duplication. Given two linear genomes, s with one copy per gene and t with two copies per gene, we give a linear time algorithm for computing a shortest sequence of operations transforming s into t such that all intermediate genomes are linear. We also show that computing an optimal sequence with fewest duplications is NP-hard.

  20. Complexity and approximability of quantified and stochastic constraint satisfaction problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Stearns, R. L.; Marathe, M. V.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of, and the existence of efficient approximation algorithms for, a number of variants of the problems SAT(S) and SAT_c(S), for many different D, C, and S. These problem variants include decision and optimization problems for formulas, quantified formulas, and stochastically-quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S), and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2, and let S be a finite set of finite-arity relations on Σ_k with the following condition on S: all finite-arity relations on Σ_k can be represented as finite existentially-quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤^bw_logn-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete; letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists ε > 0 for which approximating the problem MAX-Q-SAT(S) within ε times optimum is PSPACE-hard; letting k = 2, there exists ε > 0 for which approximating the problem MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4) For all ε > 0, the problems MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S) are PSPACE-hard to approximate within a factor of n^ε times optimum. These results significantly extend earlier results by (i) Papadimitriou [Pa85] on the complexity of stochastic satisfiability and (ii) Condon, Feigenbaum, Lund and Shor [CF+93, CF+94], by identifying natural classes of PSPACE-hard optimization problems with provably PSPACE-hard ε-approximation problems. Moreover, most of our results hold not just for Boolean relations; most previous results were obtained only in the context of Boolean domains. The results also constitute a significant step toward obtaining dichotomy theorems for the problems MAX-S-SAT(S) and MAX-Q-SAT(S), a research area of recent interest [CF+93, CF+94, Cr95, KSW97, LMP99].

  1. Hyperactivity/Inattention Problems Moderate Environmental but Not Genetic Mediation between Negative Parenting and Conduct Problems

    ERIC Educational Resources Information Center

    Fujisawa, Keiko K.; Yamagata, Shinji; Ozaki, Koken; Ando, Juko

    2012-01-01

    This study investigated the association between negative parenting (NP) and conduct problems (CP) in 6-year-old twins, taking into account the severity of hyperactivity/inattention problems (HIAP). Analyses of the data from 1,677 pairs of twins and their parents revealed that the shared environmental covariance between NP and CP was moderated by…

  2. Risk factors for readmission after neonatal cardiac surgery.

    PubMed

    Mackie, Andrew S; Gauvreau, Kimberlee; Newburger, Jane W; Mayer, John E; Erickson, Lars C

    2004-12-01

    Repeat hospitalizations place a significant burden on health care resources. Factors predisposing infants to unplanned hospital readmission after congenital heart surgery are unknown. This is a single-center, case-control study. Cases were rehospitalized or died within 30 days of discharge following an arterial switch operation (ASO) or Norwood procedure (NP) between 1992 and 2002. Controls underwent an ASO or NP between 1992 and 2002, and were neither readmitted nor died within 30 days of discharge. Patients and controls were matched by gender, year of birth, and procedure. Potential risk factors examined included indices of medical status at the time of discharge, determinants of access to health care, and provider characteristics. Forty-eight patients were readmitted; 19 of 498 (3.8%) following an ASO and 29 of 254 (11.4%) after a NP (p < 0.001). Six infants died within 30 days of discharge; 1 after an ASO and 5 after a NP. In multivariate analysis, predictors of readmission or death were: residual hemodynamic problem(s) (odds ratio [OR] 4.10 [1.18, 14.3], p = 0.026); an intensive care unit stay greater than 7 days (OR 5.17 [1.12, 23.9] p = 0.035) (ASO); residual hemodynamic problem(s) (OR 5.84 [1.98, 17.2], p = 0.001); and establishment of full oral intake less than 2 days before discharge (OR 5.83 [1.83, 18.6], p = 0.003) (NP). Combining both groups, living in a low income Zip Code (< 30,000 dollars/annum) was associated with a lower likelihood of readmission (OR 0.25 [0.07, 0.85], p = 0.027). Residual hemodynamic problem(s) predispose to hospital readmission after the ASO and NP. Low socioeconomic status may reduce the likelihood of readmission even when problems arise.

  3. Learning graph matching.

    PubMed

    Caetano, Tibério S; McAuley, Julian J; Cheng, Li; Le, Quoc V; Smola, Alex J

    2009-06-01

    As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this area has been the design of efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.
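
    A toy sketch of the "simple linear assignment with learning" idea: fit weights for a pairwise feature map from labeled matches (here by ordinary least squares, a crude stand-in for the paper's structured learning method), then match with the Hungarian algorithm via scipy.optimize.linear_sum_assignment. All data below are synthetic.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      rng = np.random.default_rng(1)
      n, d = 6, 4
      F1 = rng.normal(size=(n, d))               # node features of graph 1
      F2 = F1 + 0.1 * rng.normal(size=(n, d))    # graph 2: noisy copy of graph 1

      # Pairwise feature vectors phi(i, j) = |f_i - g_j| (elementwise).
      phi = np.abs(F1[:, None, :] - F2[None, :, :])      # shape (n, n, d)

      # Toy training signal: the correct matching is the identity permutation.
      y = np.eye(n).ravel()
      w, *_ = np.linalg.lstsq(phi.reshape(n * n, d), y, rcond=None)

      cost = -(phi @ w)                  # higher compatibility -> lower cost
      rows, cols = linear_sum_assignment(cost)
      print(dict(zip(rows.tolist(), cols.tolist())))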

  4. An improved genetic algorithm for multidimensional optimization of precedence-constrained production planning and scheduling

    NASA Astrophysics Data System (ADS)

    Dao, Son Duy; Abhary, Kazem; Marian, Romeo

    2017-06-01

    Integration of production planning and scheduling is a class of problems commonly found in the manufacturing industry. This class of problems, with precedence constraints, has been previously modeled and optimized by the authors; it requires optimization along several dimensions at once: what to make, how many to make, where to make, and in what order. It is a combinatorial, NP-hard problem, for which no polynomial-time algorithm is known to produce an optimal result on a random graph. In this paper, the further development of a Genetic Algorithm (GA) for this integrated optimization is presented. Because of the dynamic nature of the problem, the size of its solution is variable. To deal with this variability and find an optimal solution to the problem, a GA with new features in chromosome encoding, crossover, mutation, and selection, as well as in algorithm structure, is developed herein. With the proposed structure, the GA is able to "learn" from its experience. Robustness of the proposed GA is demonstrated by a complex numerical example in which its performance is compared with those of three commercial optimization solvers.

  5. High performance genetic algorithm for VLSI circuit partitioning

    NASA Astrophysics Data System (ADS)

    Dinu, Simona

    2016-12-01

    Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equally sized sub-graphs while minimizing the number of edges cut, i.e., the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem and is known to be NP-hard, so it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be extended to incorporate local search operators embedding problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another direction for future research.

  6. IFSM fractal image compression with entropy and sparsity constraints: A sequential quadratic programming approach

    NASA Astrophysics Data System (ADS)

    Kunze, Herb; La Torre, Davide; Lin, Jianyi

    2017-01-01

    We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the L^p distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing the minimization of the l1 norm instead of the l0 norm. In fact, optimization problems involving the l0 norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the l1 norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria: collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.
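
    The scalarization step is easy to illustrate: the conflicting criteria are combined into one objective with trade-off weights and then minimized. The three criteria below are toy stand-ins for collage error, entropy and sparsity (not the paper's formulations), and the weights are arbitrary.

      import numpy as np
      from scipy.optimize import minimize

      target = np.array([1.0, 0.5, 0.0, 0.25])   # stand-in for the target function f

      def collage_error(a):                      # toy reconstruction error
          return np.sum((a - target) ** 2)

      def entropy_term(a):                       # toy entropy-like criterion
          q = np.abs(a) / (np.abs(a).sum() + 1e-12)
          return -np.sum(q * np.log(q + 1e-12))

      def sparsity(a):                           # l1 surrogate for the l0 count
          return np.sum(np.abs(a))

      w1, w2, w3 = 1.0, 0.1, 0.5                 # trade-off weights (arbitrary)
      scalarized = lambda a: (w1 * collage_error(a)
                              + w2 * entropy_term(a) + w3 * sparsity(a))
      res = minimize(scalarized, x0=np.zeros(4), method="Nelder-Mead")
      print(np.round(res.x, 3))

    Sweeping the weights traces out different trade-offs between the criteria, which is the usual way scalarization explores a multi-criteria front.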

  7. Achieving Crossed Strong Barrier Coverage in Wireless Sensor Network.

    PubMed

    Han, Ruisong; Yang, Wei; Zhang, Li

    2018-02-10

    Barrier coverage has been widely used to detect intrusions in wireless sensor networks (WSNs). It can fulfill the monitoring task while extending the lifetime of the network. Though barrier coverage in WSNs has been intensively studied in recent years, previous research failed to consider the problem of intrusion in transversal directions. If an intruder knows the deployment configuration of sensor nodes, then there is a high probability that it may traverse the whole target region from particular directions, without being detected. In this paper, we introduce the concept of crossed barrier coverage that can overcome this defect. We prove that the problem of finding the maximum number of crossed barriers is NP-hard and integer linear programming (ILP) is used to formulate the optimization problem. The branch-and-bound algorithm is adopted to determine the maximum number of crossed barriers. In addition, we also propose a multi-round shortest path algorithm (MSPA) to solve the optimization problem, which works heuristically to guarantee efficiency while maintaining near-optimal solutions. Several conventional algorithms for finding the maximum number of disjoint strong barriers are also modified to solve the crossed barrier problem and for the purpose of comparison. Extensive simulation studies demonstrate the effectiveness of MSPA.

  8. Optimization of topological quantum algorithms using Lattice Surgery is hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon

    The traditional method for computation in the surface code or the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic, and achieves universality without using defects for encoding information. In both braided and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice surgery based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical-qubit requirements, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.

  9. Multiscale Modeling of Plasmon-Exciton Dynamics of Malachite Green Monolayers on Gold Nanoparticles

    NASA Astrophysics Data System (ADS)

    Smith, Holden; Karam, Tony; Haber, Louis; Lopata, Kenneth

    A multi-scale hybrid quantum/classical approach using classical electrodynamics and a collection of discrete two-level quantum systems is used to investigate the coupling dynamics of malachite green monolayers adsorbed to the surface of a spherical gold nanoparticle (NP). This method utilizes finite difference time domain (FDTD) to describe the plasmonic response of the NP and a two-level quantum description for the molecule via the Maxwell/Liouville equation. The molecular parameters are parameterized using CASPT2 for the energies and transition dipole moments, with the dephasing lifetime fit to experiment. This approach is suited to simulating thousands of molecules on the surface of a plasmonic NP. There is good agreement with experimental extinction measurements, predicting the plasmon and molecule depletions. Additionally, this model captures the polariton peaks overlapped with a Fano-type resonance profile observed in the experimental extinction measurements. This technique shows promise for modeling plasmon/molecule interactions in chemical sensing and light harvesting in multi-chromophore systems. This material is based upon work supported by the National Science Foundation under the NSF EPSCoR Cooperative Agreement No. EPS-1003897 and the Louisiana Board of Regents Research Competitiveness Subprogram under Contract Number LEQSF(2014-17)-RD-A-0.

  11. Multi-Scale Modeling and the Eddy-Diffusivity/Mass-Flux (EDMF) Parameterization

    NASA Astrophysics Data System (ADS)

    Teixeira, J.

    2015-12-01

    Turbulence and convection play a fundamental role in many key weather and climate science topics. Unfortunately, current atmospheric models cannot explicitly resolve most turbulent and convective flow. Because of this, turbulence and convection in the atmosphere have to be parameterized - i.e., equations describing the dynamical evolution of the statistical properties of turbulent and convective motions have to be devised. Recently, a variety of models have been developed that attempt to simulate the atmosphere using variable resolution. A key problem, however, is that parameterizations are in general not explicitly aware of the resolution - the scale-awareness problem. In this context, we will present and discuss a specific approach, the Eddy-Diffusivity/Mass-Flux (EDMF) parameterization, which is not only in itself a multi-scale parameterization but is also particularly well suited to deal with the scale-awareness problems that plague current variable-resolution models. It does so by representing small-scale turbulence using a classic Eddy-Diffusivity (ED) method, and the larger-scale (boundary layer and tropospheric-scale) eddies as a variety of plumes using the Mass-Flux (MF) concept.

  12. Model and algorithm for container ship stowage planning based on bin-packing problem

    NASA Astrophysics Data System (ADS)

    Zhang, Wei-Ying; Lin, Yan; Ji, Zhuo-Shang

    2005-09-01

    In the general case, a container ship serves many different ports on each voyage. A stowage plan made at one port must take account of its influence on subsequent ports, so the complexity of the stowage planning problem increases due to its multi-port nature. The problem is NP-hard. In order to reduce the computational complexity, the problem is decomposed into two sub-problems in this paper. First, the container ship stowage problem (CSSP) is regarded as a “packing problem”: ship bays on board the vessel are regarded as bins, the number of slots in each bay is taken as the bin capacity, and containers with similar characteristics (homogeneous container groups) are treated as the items to be packed. At this stage, there are two objective functions: one is to minimize the number of bays occupied by containers and the other is to minimize the number of overstows. Second, the containers assigned to each bay in the first stage are allocated to specific slots; the objectives are to minimize the metacentric height, heel, and overstows. A taboo search heuristic algorithm is used to solve the subproblem. The main focus of this paper is on the first subproblem. A case study confirms the feasibility of the model and algorithm.
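
    The first-stage "bays as bins" view invites standard packing heuristics. Below is a first-fit-decreasing sketch under assumed bay capacities; the paper's overstow objective and taboo search are not reproduced, and the group sizes are made up.

      def first_fit_decreasing(groups, bay_capacity, n_bays):
          """groups: list of (group_id, n_containers); returns per-bay group lists."""
          bays = [{"free": bay_capacity, "groups": []} for _ in range(n_bays)]
          for gid, size in sorted(groups, key=lambda g: -g[1]):
              for bay in bays:
                  if bay["free"] >= size:        # first bay the group fits into
                      bay["free"] -= size
                      bay["groups"].append(gid)
                      break
              else:
                  raise ValueError(f"group {gid} does not fit in any bay")
          return [bay["groups"] for bay in bays]

      print(first_fit_decreasing([("A", 40), ("B", 25), ("C", 20), ("D", 10)],
                                 bay_capacity=50, n_bays=3))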

  13. Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Luo, Yabo; Waden, Yongo P.

    2017-06-01

    The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by a linear method. Current studies on the JSSP therefore concentrate mainly on applying different methods of improving heuristics for optimizing the JSSP. However, obstacles to efficient optimization remain, namely low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, this paper studies an Ant Colony Optimization (ACO) algorithm combined with constraint-handling tactics. The work is subdivided into three parts: (1) analysis of the processing time tolerance-based constraint features of the JSSP, performed with a constraint-satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint-spreading algorithm to improve the performance of the ACO algorithm, yielding a JSSP model based on the improved ACO algorithm; (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the applied technique can be used in optimizing the JSSP.
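
    A minimal ant-colony skeleton for a permutation problem of this kind is sketched below (pheromone on job-position pairs, evaporation, reinforcement of the best solution found); the paper's tolerance-based constraint handling and consistency techniques are not reproduced, and the objective is a toy stand-in for a job-shop cost.

      import numpy as np

      rng = np.random.default_rng(2)
      n_jobs, n_ants, iters, rho, q0 = 8, 10, 50, 0.1, 1.0
      proc = rng.integers(1, 10, n_jobs)        # toy processing times

      def total_completion(order):              # toy objective: sum of completion times
          t, total = 0, 0
          for j in order:
              t += proc[j]
              total += t
          return total

      tau = np.ones((n_jobs, n_jobs))           # pheromone: job j at position p
      best_order, best_cost = None, float("inf")
      for _ in range(iters):
          for _ant in range(n_ants):
              order, remaining = [], list(range(n_jobs))
              for pos in range(n_jobs):
                  p = np.array([tau[j, pos] for j in remaining])
                  j = int(rng.choice(remaining, p=p / p.sum()))
                  order.append(j)
                  remaining.remove(j)
              cost = total_completion(order)
              if cost < best_cost:
                  best_order, best_cost = order, cost
          tau *= (1.0 - rho)                    # pheromone evaporation
          for pos, j in enumerate(best_order):  # reinforce the best order found
              tau[j, pos] += q0 / best_cost
      print(best_order, best_cost)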

  14. An effective hybrid self-adapting differential evolution algorithm for the joint replenishment and location-inventory problem in a three-level supply chain.

    PubMed

    Wang, Lin; Qu, Hui; Chen, Tao; Yan, Fang-Ping

    2013-01-01

    The integration of different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The JR-LIP problem is to determine the appropriate number and locations of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of the DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify the effectiveness of the HSDE, two intelligent algorithms that have proven effective for similar problems, the genetic algorithm (GA) and hybrid DE (HDE), are chosen for comparison. Comparative results on benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially for large-scale problems.

  15. An Effective Hybrid Self-Adapting Differential Evolution Algorithm for the Joint Replenishment and Location-Inventory Problem in a Three-Level Supply Chain

    PubMed Central

    Chen, Tao; Yan, Fang-Ping

    2013-01-01

    The integration of different decisions in the supply chain is a trend, since it can avoid suboptimal decisions. In this paper, we provide an effective intelligent algorithm for a modified joint replenishment and location-inventory problem (JR-LIP). The JR-LIP problem is to determine the appropriate number and locations of distribution centers (DCs), the assignment policy of customers, and the replenishment policy of the DCs such that the overall cost is minimized. However, due to the JR-LIP's difficult mathematical properties, simple and effective solutions for this NP-hard problem have eluded researchers. To find an effective approach for the JR-LIP, a hybrid self-adapting differential evolution algorithm (HSDE) is designed. To verify the effectiveness of the HSDE, two intelligent algorithms that have proven effective for similar problems, the genetic algorithm (GA) and hybrid DE (HDE), are chosen for comparison. Comparative results on benchmark functions and randomly generated JR-LIPs show that HSDE outperforms GA and HDE. Moreover, a sensitivity analysis of cost parameters reveals useful managerial insights. All comparative results show that HSDE is more stable and robust in handling this complex problem, especially for large-scale problems. PMID:24453822

  16. Network-Physics (NP) BEC DIGITAL(#)-VULNERABILITY; ``Q-Computing"=Simple-Arithmetic;Modular-Congruences=SignalXNoise PRODUCTS=Clock-model;BEC-Factorization;RANDOM-# Definition;P=/=NP TRIVIAL Proof!!!

    NASA Astrophysics Data System (ADS)

    Pi, E. I.; Siegel, E.

    2010-03-01

    Siegel[AMS Natl.Mtg.(2002)-Abs.973-60-124] digits logarithmic- law inversion to ONLY BEQS BEC:Quanta/Bosons=#: EMP-like SEVERE VULNERABILITY of ONLY #-networks(VS.ANALOG INvulnerability) via Barabasi NP(VS.dynamics[Not.AMS(5/2009)] critique);(so called)``quantum-computing''(QC) = simple-arithmetic (sansdivision);algorithmiccomplexities:INtractibility/UNdecidabi lity/INefficiency/NONcomputability/HARDNESS(so MIScalled) ``noise''-induced-phase-transition(NIT)ACCELERATION:Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(2002)] How? mea culpa)= ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)]BEC logarithmic-law inversion factorization: Watkins #-theory U statistical- physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation(3 millennia AGO geometry: NO:CC,``CS'';``Feet of Clay!!!'']; Query WHAT?:Definition: (so MIScalled)``complexity''=UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).

  17. The cost-constrained traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.
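
    The knapsack bounding idea can be made concrete: ignoring sequencing and charging each city an optimistic visit cost, the best-value subset within the budget upper-bounds the value of any feasible subtour. A 0/1-knapsack DP sketch with assumed integer costs follows; one common choice for the per-city cost (not shown) is half the sum of the city's two cheapest incident edges.

      def knapsack_upper_bound(values, costs, budget):
          """0/1 knapsack DP over integer costs; returns the best total value."""
          best = [0] * (budget + 1)
          for v, c in zip(values, costs):
              for cap in range(budget, c - 1, -1):  # reverse: use each city once
                  best[cap] = max(best[cap], best[cap - c] + v)
          return best[budget]

      # Four cities with (value, optimistic visit cost) and budget 8 -> bound 17.
      print(knapsack_upper_bound(values=[10, 7, 5, 9], costs=[4, 3, 2, 5], budget=8))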

  18. RNA motif search with data-driven element ordering.

    PubMed

    Rampášek, Ladislav; Jimenez, Randi M; Lupták, Andrej; Vinař, Tomáš; Brejová, Broňa

    2016-05-18

    In this paper, we study the problem of RNA motif search in long genomic sequences. This approach uses a combination of sequence and structure constraints to uncover new distant homologs of known functional RNAs. The problem is NP-hard and is traditionally solved by backtracking algorithms. We have designed a new algorithm for RNA motif search and implemented a new motif search tool RNArobo. The tool enhances the RNAbob descriptor language, allowing insertions in helices, which enables better characterization of ribozymes and aptamers. A typical RNA motif consists of multiple elements and the running time of the algorithm is highly dependent on their ordering. By approaching the element ordering problem in a principled way, we demonstrate more than 100-fold speedup of the search for complex motifs compared to previously published tools. We have developed a new method for RNA motif search that allows for a significant speedup of the search of complex motifs that include pseudoknots. Such speed improvements are crucial at a time when the rate of DNA sequencing outpaces growth in computing. RNArobo is available at http://compbio.fmph.uniba.sk/rnarobo .

  19. A Dynamic Programming Approach for Base Station Sleeping in Cellular Networks

    NASA Astrophysics Data System (ADS)

    Gong, Jie; Zhou, Sheng; Niu, Zhisheng

    The energy consumption of the information and communication technology (ICT) industry, which has become a serious problem, is mostly due to the network infrastructure rather than the mobile terminals. In this paper, we focus on reducing the energy consumption of base stations (BSs) by adjusting their working modes (active or sleep). Specifically, the objective is to minimize the energy consumption while satisfying quality of service (QoS, e.g., blocking probability) requirement and, at the same time, avoiding frequent mode switching to reduce signaling and delay overhead. The problem is modeled as a dynamic programming (DP) problem, which is NP-hard in general. Based on cooperation among neighboring BSs, a low-complexity algorithm is proposed to reduce the size of state space as well as that of action space. Simulations demonstrate that, with the proposed algorithm, the active BS pattern well meets the time variation and the non-uniform spatial distribution of system traffic. Moreover, the tradeoff between the energy saving from BS sleeping and the cost of switching is well balanced by the proposed scheme.

  20. Achieving microaggregation for secure statistical databases using fixed-structure partitioning-based learning automata.

    PubMed

    Fayyoumi, Ebaa; Oommen, B John

    2009-10-01

    We consider the microaggregation problem (MAP), which involves partitioning a set of individual records in a microdata file into a number of mutually exclusive and exhaustive groups. This problem, which seeks the best partition of the microdata file, is known to be NP-hard and has been tackled using many heuristic solutions. In this paper, we present the first reported fixed-structure-stochastic-automata-based solution to this problem. The newly proposed method leads to a lower value of the information loss (IL), obtains a better tradeoff between the IL and the disclosure risk (DR) when compared with state-of-the-art methods, and leads to a superior value of the scoring index, which is a criterion involving a combination of the IL and the DR. The scheme has been implemented, tested, and evaluated for different real-life and simulated data sets. The results clearly demonstrate the applicability of learning automata to the MAP and its ability to yield a solution that obtains the best tradeoff between IL and DR when compared with the state of the art.

  1. Influencing Busy People in a Social Network

    PubMed Central

    Sarkar, Kaushik; Sundaram, Hari

    2016-01-01

    We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions, and resource constraints. Second, we show that multiple behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors, and network resource utilization. Our heuristics produce a 15-51% increase in expected resource utilization over the naïve approach. PMID:27711127
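
    For reference, a sketch of the standard greedy seed selection whose constant-factor guarantee the submodularity argument provides, with Monte Carlo spread estimation under the independent-cascade model. This is the generic single-behavior setting, not the paper's resource-constrained multi-behavior model; the graph, propagation probability, and trial count are toy values.

      import random

      def simulate_spread(graph, seeds, p=0.1, trials=200, rng=random.Random(0)):
          """Average activated-node count over `trials` independent cascades."""
          total = 0
          for _ in range(trials):
              active, frontier = set(seeds), list(seeds)
              while frontier:
                  u = frontier.pop()
                  for v in graph.get(u, ()):
                      if v not in active and rng.random() < p:
                          active.add(v)
                          frontier.append(v)
              total += len(active)
          return total / trials

      def greedy_seeds(graph, k):
          seeds = []
          for _ in range(k):   # add the node with the largest marginal gain
              gain = lambda n: simulate_spread(graph, seeds + [n])
              seeds.append(max((n for n in graph if n not in seeds), key=gain))
          return seeds

      toy = {0: [1, 2], 1: [2, 3], 2: [4], 3: [4, 5], 4: [5], 5: []}
      print(greedy_seeds(toy, k=2))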

  2. Many-to-Many Multicast Routing Schemes under a Fixed Topology

    PubMed Central

    Ding, Wei; Wang, Hongfa; Wei, Xuerui

    2013-01-01

    Many-to-many multicast routing can be extensively applied in computer or communication networks supporting various continuous multimedia applications. The paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with quality-of-service (QoS) optimization is frequently NP-hard. However, we discover that it is a good idea to find a many-to-many multicast tree with QoS optimization under a fixed topology. In this paper, we are concerned with three kinds of QoS optimization objectives for the multicast tree, namely, minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems comes in two versions, centralized and decentralized. This paper uses dynamic programming to devise an exact algorithm for the centralized and decentralized versions of each optimization problem. PMID:23589706

  3. Split diversity in constrained conservation prioritization using integer linear programming.

    PubMed

    Chernomor, Olga; Minh, Bui Quang; Forest, Félix; Klaere, Steffen; Ingram, Travis; Henzinger, Monika; von Haeseler, Arndt

    2015-01-01

    Phylogenetic diversity (PD) is a measure of biodiversity based on the evolutionary history of species. Here, we discuss several optimization problems related to the use of PD, and the more general measure split diversity (SD), in conservation prioritization. Depending on the conservation goal and the information available about species, one can construct optimization routines that incorporate various conservation constraints. We demonstrate how this information can be used to select sets of species for conservation action. Specifically, we discuss the use of species' geographic distributions, the choice of candidates under economic pressure, and the use of predator-prey interactions between the species in a community to define viability constraints. Despite such optimization problems falling into the area of NP-hard problems, it is possible to solve them in a reasonable amount of time using integer programming. We apply integer linear programming to a variety of models for conservation prioritization that incorporate the SD measure. We show results for two example data sets: the Cape region of South Africa and a Caribbean coral reef community. Finally, we provide user-friendly software at http://www.cibiv.at/software/pda.

  4. From Constraints to Resolution Rules Part II : chains, braids, confluence and T&E

    NASA Astrophysics Data System (ADS)

    Berthier, Denis

    In this Part II, we apply the general theory developed in Part I to a detailed analysis of the Constraint Satisfaction Problem (CSP). We show how specific types of resolution rules can be defined. In particular, we introduce the general notions of a chain and a braid. As in Part I, these notions are illustrated in detail with the Sudoku example - a problem known to be NP-complete and which is therefore typical of a broad class of hard problems. For Sudoku, we also show how far one can go in "approximating" a CSP with a resolution theory and we give an empirical statistical analysis of how the various puzzles, corresponding to different sets of entries, can be classified along a natural scale of complexity. For any CSP, we also prove the confluence property of some resolution theories based on braids and we show how it can be used to define different resolution strategies. Finally, we prove that, in any CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no guessing, and we comment on this result in the Sudoku case.

  5. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

    Sparse hyperspectral unmixing assumes that each observed pixel can be expressed by a linear combination of several pure spectra in an a priori library. Sparse unmixing is challenging, since it is usually transformed to an NP-hard l0 norm based optimization problem. Existing methods usually utilize a relaxation of the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem, which contains two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.

  6. Influencing Busy People in a Social Network.

    PubMed

    Sarkar, Kaushik; Sundaram, Hari

    2016-01-01

    We identify influential early adopters in a social network, where individuals are resource constrained, to maximize the spread of multiple, costly behaviors. A solution to this problem is especially important for viral marketing. The problem of maximizing influence in a social network is challenging since it is computationally intractable. We make three contributions. First, we propose a new model of collective behavior that incorporates individual intent, knowledge of neighbors' actions, and resource constraints. Second, we show that multiple behavior influence maximization is NP-hard. Furthermore, we show that the problem is submodular, implying the existence of a greedy solution that approximates the optimal solution to within a constant. However, since the greedy algorithm is expensive for large networks, we propose efficient heuristics to identify the influential individuals, including heuristics to assign behaviors to the different early adopters. We test our approach on synthetic and real-world topologies with excellent results. We evaluate the effectiveness under three metrics: unique number of participants, total number of active behaviors, and network resource utilization. Our heuristics produce a 15-51% increase in expected resource utilization over the naïve approach.

  7. Learning algorithm in restricted Boltzmann machines using Kullback-Leibler importance estimation procedure

    NASA Astrophysics Data System (ADS)

    Yasuda, Muneki; Sakurai, Tetsuharu; Tanaka, Kazuyuki

    Restricted Boltzmann machines (RBMs) are bipartite structured statistical neural networks consisting of two layers, a layer of visible units and a layer of hidden units; within each layer, units do not connect to each other. RBMs have high flexibility and rich structure and are expected to apply to various applications, for example, image and pattern recognition, face detection, and so on. However, most computational models in RBMs are intractable and often belong to the class of NP-hard problems. In this paper, in order to construct a practical learning algorithm for them, we apply the Kullback-Leibler Importance Estimation Procedure (KLIEP) to RBMs, and give a new practical approximate learning scheme for RBMs based on the KLIEP.

  8. Nanoparticles-cell association predicted by protein corona fingerprints

    NASA Astrophysics Data System (ADS)

    Palchetti, S.; Digiacomo, L.; Pozzi, D.; Peruzzi, G.; Micarelli, E.; Mahmoudi, M.; Caracciolo, G.

    2016-06-01

    In a physiological environment (e.g., blood and interstitial fluids) nanoparticles (NPs) will bind proteins shaping a ``protein corona'' layer. The long-lived protein layer tightly bound to the NP surface is referred to as the hard corona (HC) and encodes information that controls NP bioactivity (e.g. cellular association, cellular signaling pathways, biodistribution, and toxicity). Decrypting this complex code has become a priority to predict the NP biological outcomes. Here, we use a library of 16 lipid NPs of varying size (Ø ~ 100-250 nm) and surface chemistry (unmodified and PEGylated) to investigate the relationships between NP physicochemical properties (nanoparticle size, aggregation state and surface charge), protein corona fingerprints (PCFs), and NP-cell association. We found out that none of the NPs' physicochemical properties alone was exclusively able to account for association with human cervical cancer cell line (HeLa). For the entire library of NPs, a total of 436 distinct serum proteins were detected. We developed a predictive-validation modeling that provides a means of assessing the relative significance of the identified corona proteins. Interestingly, a minor fraction of the HC, which consists of only 8 PCFs were identified as main promoters of NP association with HeLa cells. Remarkably, identified PCFs have several receptors with high level of expression on the plasma membrane of HeLa cells.

  9. Impact of nonconductive powder on electrostatic separation for recycling crushed waste printed circuit board.

    PubMed

    Wu, Jiang; Qin, Yufei; Zhou, Quan; Xu, Zhenming

    2009-05-30

    Electrostatic separation is an effective and environmentally friendly method for recycling metals and nonmetals from crushed printed circuit board (PCB) wastes. However, it still confronts some problems brought by nonconductive powder (NP). Firstly, the NP is fine and liable to aggregate, which leads to an increase in middling products and a loss of metals. Secondly, the stability of the separation process is influenced by NP. Finally, some NP accumulates on the surface of the corona and electrostatic electrodes during the process. These problems lead to an inefficient separation. In the present research, the impacts of NP on electrostatic separation are investigated. The experimental results show that the separation is notably influenced when the NP content is more than 10%. With the increase of NP content, the middling products sharply increase from 1.4 g to 4.3 g (an increase of 207.1%), while the conductive products decrease from 24.0 g to 19.1 g (a decrease of 20.4%), and the separation process becomes more unstable.

  10. Amoeba-inspired nanoarchitectonic computing: solving intractable computational problems using nanoscale photoexcitation transfer dynamics.

    PubMed

    Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko

    2013-06-18

    Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.

  11. End-to-End Network QoS via Scheduling of Flexible Resource Reservation Requests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, S.; Katramatos, D.; Yu, D.

    2011-11-14

    Modern data-intensive applications move vast amounts of data between multiple locations around the world. To enable predictable and reliable data transfer, next generation networks allow such applications to reserve network resources for exclusive use. In this paper, we solve an important problem (called SMR3) to accommodate multiple and concurrent network reservation requests between a pair of end-sites. Given the varying availability of bandwidth within the network, our goal is to accommodate as many reservation requests as possible while minimizing the total time needed to complete the data transfers. We first prove that SMR3 is an NP-hard problem. Then we solve it by developing a polynomial-time heuristic, called RRA. The RRA algorithm hinges on an efficient mechanism to accommodate a large number of requests by minimizing bandwidth wastage. Finally, via numerical results, we show that RRA constructs schedules that accommodate a significantly larger number of requests compared to other, seemingly efficient, heuristics.

  12. Algorithms and Complexity Results for Genome Mapping Problems.

    PubMed

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  13. An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.

    PubMed

    Shao, Mingfu; Lin, Yu; Moret, Bernard M E

    2015-05-01

    Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR.

  14. Study on probability distributions for evolution in modified extremal optimization

    NASA Astrophysics Data System (ADS)

    Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian

    2010-05-01

    It is widely believed that the power law is a proper probability distribution for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to some NP-hard problems, e.g., graph partitioning, graph coloring, and spin glasses. In this study, we find, from experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances, that the exponential distributions or hybrids (e.g., power laws with exponential cutoff) popularly used in network science may replace the original power laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods such as simulated annealing and τ-EO. From the perspective of optimization, our results appear to demonstrate that the power law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
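
    The move that these distributions plug into is simple to sketch: rank variables by local fitness and flip the k-th worst with probability drawn from the chosen distribution (power law or exponential). Below it is applied to a toy spin-glass energy; all parameters are illustrative, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(3)
      n, iters, tau, lam = 30, 2000, 1.4, 0.3
      J = rng.normal(size=(n, n))              # random symmetric couplings
      J = (J + J.T) / 2.0
      np.fill_diagonal(J, 0.0)
      spins = rng.choice([-1, 1], n)

      def rank_probs(kind):
          k = np.arange(1, n + 1, dtype=float)
          p = k ** -tau if kind == "power" else np.exp(-lam * k)
          return p / p.sum()

      probs = rank_probs("power")              # or "exponential"
      for _ in range(iters):
          local = spins * (J @ spins)          # local fitness of each spin
          order = np.argsort(local)            # worst (lowest fitness) first
          k = rng.choice(n, p=probs)           # sample a rank from the distribution
          spins[order[k]] *= -1                # unconditionally flip that spin
      print(-spins @ J @ spins / 2.0)          # energy of the final configuration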

  15. CE dual-homing protection in layer 1 VPN

    NASA Astrophysics Data System (ADS)

    Du, Shu; Peng, Yunfeng; Long, Keping

    2008-11-01

    Layer 1 VPN (L1VPN) extends the notion of VPN to the optical domain to provide virtually dedicated circuits like leased lines, so that security is further enhanced. Despite the security gains from channel isolation, VPNs still suffer fragilities resulting from link or node failures. Many survivability designs for wavelength-routed optical networks have been proposed, including various protection and restoration schemes, but the network edge has received little attention. Dual-homing is an effective technique for achieving survivability gains in L1VPNs; there are two dual-homing modes, Active/Standby and Load-Sharing. In this paper, we investigate the problem of PE assignment, which is the key to dual-homing design and is NP-hard. We formulate it as an integer programming problem and propose heuristic solutions. Simulation results show that the proposed solutions work correctly and effectively and that the Load-Sharing mode has higher bandwidth efficiency than the Active/Standby mode.

  16. Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.

    PubMed

    Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing

    2017-04-20

    The problem of finding the number and optimal positions of relay nodes for restoring the network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial-time heuristic algorithm, namely Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on a Minimum Spanning Tree (MST), a Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes; linear programming is then applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals. The performance and complexity of RPSNC are analyzed, and its performance is validated through simulation experiments.

  17. Interactive machine learning for health informatics: when do we need the human-in-the-loop?

    PubMed

    Holzinger, Andreas

    2016-06-01

    Machine learning (ML) is the fastest growing field in computer science, and health informatics is among its greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain, sometimes we are confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet well established, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase.

  18. Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model

    NASA Astrophysics Data System (ADS)

    Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled

    2018-03-01

    The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem in which an operation may be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: assignment and scheduling. In this paper, we propose a hybrid metaheuristics-based clustered holonic multiagent model for solving the FJSP. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search toward promising regions of the search space and to improve the quality of the final NGA population. The efficiency of our approach comes from the flexible selection of promising parts of the search space by the clustering operator after the genetic algorithm process, and from the tabu search intensification technique, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented for four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.

  19. Associating optical measurements of MEO and GEO objects using Population-Based Meta-Heuristic methods

    NASA Astrophysics Data System (ADS)

    Zittersteijn, M.; Vananti, A.; Schildknecht, T.; Dolado Perez, J. C.; Martinot, V.

    2016-11-01

    Currently, several thousand objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT), which quickly becomes an NP-hard combinatorial optimization problem: the effort required to solve it increases exponentially with the number of tracked objects. In an attempt to find an approximate solution of sufficient quality, several Population-Based Meta-Heuristic (PBMH) algorithms are implemented and tested on simulated optical measurements. These first results show that one of the tested algorithms, namely the Elitist Genetic Algorithm (EGA), consistently displays the desired behavior of finding good approximate solutions before reaching the optimum. The results further suggest that the algorithm possesses polynomial time complexity, as the computation times are consistent with a polynomial model. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the association and orbit determination problems simultaneously and is able to efficiently process large data sets with minimal manual intervention.

  20. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) and minimum cost flow (MCF) models. The objective is to find the set of Pareto-optimal solutions that attain the maximum possible flow at minimum cost. The approach also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design instances show the effectiveness of the proposed method.
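
    The priority-based chromosome can be made concrete with a small decoder (illustrative, not the authors' exact encoding): each node carries a priority gene, and a path is grown from the source by always moving to the unvisited neighbour with the highest priority; the GA's crossover and mutation then operate on the priority vector:

      def decode_path(priority, adj, src, dst):
          # Grow a path by repeatedly visiting the highest-priority
          # unvisited neighbour; an infeasible chromosome yields None.
          path, node, visited = [src], src, {src}
          while node != dst:
              cands = [v for v in adj[node] if v not in visited]
              if not cands:
                  return None
              node = max(cands, key=lambda v: priority[v])
              visited.add(node)
              path.append(node)
          return path

      adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
      print(decode_path([0, 5, 2, 9], adj, 0, 3))   # [0, 1, 3]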

  1. Generating effective project scheduling heuristics by abstraction and reconstitution

    NASA Technical Reports Server (NTRS)

    Janakiraman, Bhaskar; Prieditis, Armand

    1992-01-01

    A project scheduling problem consists of a finite set of jobs, each with fixed integer duration, requiring one or more resources such as personnel or equipment, and each subject to a set of precedence relations, which specify allowable job orderings, and a set of mutual exclusion relations, which specify jobs that cannot overlap. No job can be interrupted once started. The objective is to minimize project duration. This objective arises in nearly every large construction project--from software to hardware to buildings. Because such project scheduling problems are NP-hard, they are typically solved by branch-and-bound algorithms. In these algorithms, lower-bound duration estimates (admissible heuristics) are used to improve efficiency. One way to obtain an admissible heuristic is to remove (abstract) all resources and mutual exclusion constraints and then obtain the minimal project duration for the abstracted problem; this minimal duration is the admissible heuristic. Although such abstracted problems can be solved efficiently, they yield inaccurate admissible heuristics precisely because those constraints that are central to solving the original problem are abstracted. This paper describes a method to reconstitute the abstracted constraints back into the solution to the abstracted problem while maintaining efficiency, thereby generating better admissible heuristics. Our results suggest that reconstitution can make good admissible heuristics even better.
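
    The abstraction step can be made concrete: dropping the resource and mutual-exclusion constraints leaves only the precedence relations, so the minimal duration of the relaxed problem is the critical-path length of the precedence DAG. A minimal sketch (names illustrative):

      from functools import lru_cache

      def relaxed_duration(durations, preds):
          # Admissible heuristic: longest path through the precedence DAG,
          # ignoring resources and mutual exclusion.
          @lru_cache(maxsize=None)
          def finish(j):  # earliest finish time of job j
              return durations[j] + max((finish(p) for p in preds[j]), default=0)
          return max(finish(j) for j in durations)

      durations = {'a': 3, 'b': 2, 'c': 4}
      preds = {'a': [], 'b': ['a'], 'c': ['a']}    # b and c both follow a
      print(relaxed_duration(durations, preds))    # 7: the path a -> c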

  2. Colored Traveling Salesman Problem.

    PubMed

    Li, Jun; Zhou, MengChu; Sun, Qirui; Dai, Xianzhong; Yu, Xiaolong

    2015-11-01

    The multiple traveling salesman problem (MTSP) is an important combinatorial optimization problem. It has been widely and successfully applied to practical cases in which multiple traveling individuals (salesmen) share a common workspace (city set). However, it cannot represent application problems where multiple traveling individuals not only have their own exclusive tasks but also share a group of tasks with each other. This work proposes a new MTSP variant called the colored traveling salesman problem (CTSP) for handling such cases. Two types of city groups are defined: groups of exclusive cities of a single color, each for one salesman to visit, and a group of shared cities of multiple colors that all salesmen may visit. Evidence shows that CTSP is NP-hard and that the multidepot MTSP and multiple single traveling salesman problems are special cases of it. We present a genetic algorithm (GA) with dual-chromosome coding for CTSP and analyze the corresponding solution space. The GA is then improved by incorporating greedy, hill-climbing (HC), and simulated annealing (SA) operations to achieve better performance. Experiments reveal the limitations of the exact solution method and compare the performance of the presented GAs. The results suggest that SAGA achieves the best solution quality and that HCGA offers a good tradeoff between solution quality and computing time.
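
    A sketch of the dual-chromosome coding described above (illustrative, not the authors' exact operators): one chromosome is a permutation of the cities, the other assigns a salesman to every city, with exclusive cities pinned to their own color:

      import random

      def random_individual(n_cities, n_salesmen, color):
          # color[c] is the owning salesman of an exclusive city, else None.
          perm = random.sample(range(n_cities), n_cities)
          assign = [color[c] if color[c] is not None
                    else random.randrange(n_salesmen)   # shared city
                    for c in range(n_cities)]
          return perm, assign

      def tours(perm, assign, n_salesmen):
          # Decode: salesman k visits his assigned cities in permutation order.
          return [[c for c in perm if assign[c] == k] for k in range(n_salesmen)]

      color = [0, 0, 1, None, None]   # cities 0,1 exclusive to salesman 0; city 2 to salesman 1
      perm, assign = random_individual(5, 2, color)
      print(tours(perm, assign, 2))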

  3. Inferring species trees from incongruent multi-copy gene trees using the Robinson-Foulds distance

    PubMed Central

    2013-01-01

    Background Constructing species trees from multi-copy gene trees remains a challenging problem in phylogenetics. One difficulty is that the underlying genes can be incongruent due to evolutionary processes such as gene duplication and loss, deep coalescence, or lateral gene transfer. Gene tree estimation errors may further exacerbate the difficulties of species tree estimation. Results We present a new approach for inferring species trees from incongruent multi-copy gene trees that is based on a generalization of the Robinson-Foulds (RF) distance measure to multi-labeled trees (mul-trees). We prove that it is NP-hard to compute the RF distance between two mul-trees; however, it is easy to calculate this distance between a mul-tree and a singly-labeled species tree. Motivated by this, we formulate the RF problem for mul-trees (MulRF) as follows: Given a collection of multi-copy gene trees, find a singly-labeled species tree that minimizes the total RF distance from the input mul-trees. We develop and implement a fast SPR-based heuristic algorithm for the NP-hard MulRF problem. We compare the performance of the MulRF method (available at http://genome.cs.iastate.edu/CBL/MulRF/) with several gene tree parsimony approaches using gene tree simulations that incorporate gene tree error, gene duplications and losses, and/or lateral transfer. The MulRF method produces more accurate species trees than gene tree parsimony approaches. We also demonstrate that the MulRF method infers in minutes a credible plant species tree from a collection of nearly 2,000 gene trees. Conclusions Our new phylogenetic inference method, based on a generalized RF distance, makes it possible to quickly estimate species trees from large genomic data sets. Since the MulRF method, unlike gene tree parsimony, is based on a generic tree distance measure, it is appealing for analyses of genomic data sets, in which many processes such as deep coalescence, recombination, gene duplication and losses as well as phylogenetic error may contribute to gene tree discord. In experiments, the MulRF method estimated species trees accurately and quickly, demonstrating MulRF as an efficient alternative approach for phylogenetic inference from large-scale genomic data sets. PMID:24180377
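
    For intuition, the easy direction above rests on the standard Robinson-Foulds computation. The sketch below (illustrative) computes the RF distance between two singly-labeled rooted trees, given as nested tuples, by comparing their clade sets; the paper's contribution is the generalization of this distance to mul-trees:

      def clades(tree):
          # Collect the leaf set below every internal node of a rooted tree
          # written as nested tuples, e.g. ((('a','b'),'c'),('d','e')).
          out = set()
          def walk(t):
              if isinstance(t, tuple):
                  leaves = frozenset().union(*map(walk, t))
                  out.add(leaves)
                  return leaves
              return frozenset([t])
          walk(tree)
          return out

      def rf_distance(t1, t2):
          # Size of the symmetric difference of the clade sets.
          return len(clades(t1) ^ clades(t2))

      t1 = ((('a', 'b'), 'c'), ('d', 'e'))
      t2 = ((('a', 'c'), 'b'), ('d', 'e'))
      print(rf_distance(t1, t2))   # 2: clade {a,b} vs clade {a,c}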

  4. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    PubMed

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.

  5. Ant Colony Optimization Algorithm for Centralized Dynamic Channel Allocation in Multi-Cell OFDMA Systems

    NASA Astrophysics Data System (ADS)

    Kim, Hyo-Su; Kim, Dong-Hoi

    The dynamic channel allocation (DCA) scheme in multi-cell systems causes a serious inter-cell interference (ICI) problem for some existing calls when channels for new calls are allocated. Such a problem can be addressed by an advanced centralized DCA design that is able to minimize ICI. Thus, in this paper, a centralized DCA is developed for the downlink of multi-cell orthogonal frequency division multiple access (OFDMA) systems with full spectral reuse. In practice, however, as the search space of channel assignment for a centralized DCA scheme in multi-cell systems grows exponentially with the number of required calls, channels, and cells, finding an optimal channel allocation becomes an NP-hard problem that is computationally intractable. In this paper, we propose an ant colony optimization (ACO) based DCA scheme using a low-complexity heuristic ACO algorithm to solve the aforementioned problem. Simulation results demonstrate significant performance improvements compared to the existing schemes in terms of grade of service (GoS) and the forced termination probability of existing calls, without degrading the average system throughput.

  6. Two combinatorial optimization problems for SNP discovery using base-specific cleavage and mass spectrometry.

    PubMed

    Chen, Xin; Wu, Qiong; Sun, Ruimin; Zhang, Louxin

    2012-01-01

    The discovery of single-nucleotide polymorphisms (SNPs) has important implications in a variety of genetic studies on human diseases and biological functions. One valuable approach proposed for SNP discovery is based on base-specific cleavage and mass spectrometry. However, it is still very challenging to achieve the full potential of this SNP discovery approach. In this study, we formulate two new combinatorial optimization problems. While both problems are aimed at reconstructing the sample sequence that would attain the minimum number of SNPs, they search over different candidate sequence spaces. The first problem, denoted SNP-MSP, limits its search to sequences whose in silico predicted mass spectra have all their signals contained in the measured mass spectra. In contrast, the second problem, denoted SNP-MSQ, limits its search to sequences whose in silico predicted mass spectra instead contain all the signals of the measured mass spectra. We present an exact dynamic programming algorithm for solving the SNP-MSP problem and also show that the SNP-MSQ problem is NP-hard by a reduction from a restricted variation of the 3-partition problem. We believe that an efficient solution to either problem could offer a seamless integration of information in four complementary base-specific cleavage reactions, thereby improving the capability of the underlying biotechnology for sensitive and accurate SNP discovery.

  7. A hybrid meta-heuristic algorithm for the vehicle routing problem with stochastic travel times considering the driver's satisfaction

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Alinaghian, Mehdi; Salamat-Bakhsh, Alireza; Norouzi, Narges

    2012-05-01

    The vehicle routing problem is a significant problem that has attracted great attention from researchers in recent years. Its main objectives are to minimize the traveled distance, the total traveling time, the number of vehicles and the cost of transportation. Reducing these quantities lowers the total cost and increases the driver's satisfaction level. On the other hand, this satisfaction, which decreases as service time increases, is an important logistic concern for a company. Stochastic travel times, governed by a probability distribution, cause the service time to vary, an effect ignored in classical routing problems. This paper investigates the problem of increasing service time by using a stochastic time for each tour, such that the total traveling time of the vehicles is limited to a specific bound with a defined probability. Since exact methods for the vehicle routing problem, which belongs to the category of NP-hard problems, are not practical at large scale, a hybrid algorithm based on simulated annealing with genetic operators is proposed to obtain an efficient solution with reasonable computational cost and time. Finally, for some small cases, the results of the proposed algorithm were compared with results obtained by the Lingo 8 software. The obtained results indicate the efficiency of the proposed hybrid simulated annealing algorithm.
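
    A minimal sketch of the simulated-annealing core (illustrative; the paper's hybrid additionally applies genetic operators to a population of routes and enforces the probabilistic limit on total traveling time):

      import math, random

      def tour_len(tour, dist):
          return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

      def sa_route(dist, temp=10.0, cooling=0.995, steps=20000):
          # Classic simulated annealing with a two-city swap move.
          n = len(dist)
          cur = list(range(n)); random.shuffle(cur)
          cur_len = tour_len(cur, dist)
          best, best_len = cur[:], cur_len
          for _ in range(steps):
              i, j = random.sample(range(n), 2)
              cand = cur[:]; cand[i], cand[j] = cand[j], cand[i]
              d = tour_len(cand, dist) - cur_len
              if d < 0 or random.random() < math.exp(-d / temp):
                  cur, cur_len = cand, cur_len + d
                  if cur_len < best_len:
                      best, best_len = cur[:], cur_len
              temp *= cooling
          return best, best_len

      dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
      print(sa_route(dist))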

  8. Querying graphs in protein-protein interactions networks using feedback vertex set.

    PubMed

    Blin, Guillaume; Sikora, Florian; Vialette, Stéphane

    2010-01-01

    Recent techniques rapidly increase the amount of our knowledge on interactions between proteins. The interpretation of this new information depends on our ability to retrieve known substructures in the data, the Protein-Protein Interaction (PPI) networks. From an algorithmic point of view, this is a hard task since it often leads to NP-hard problems. To overcome this difficulty, many authors have provided tools for querying patterns with a restricted topology, i.e., paths or trees, in PPI networks. Such restrictions lead to the development of fixed-parameter tractable (FPT) algorithms, which can be practicable for restricted sizes of queries. Unfortunately, Graph Homomorphism is a W[1]-hard problem, and hence no FPT algorithm can be found when patterns are in the shape of general graphs. However, Dost et al. gave an algorithm (which is not implemented) to query graphs of bounded treewidth in PPI networks (the treewidth of the query being involved in the time complexity). In this paper, we propose another algorithm for querying patterns in the shape of graphs, also based on dynamic programming and the color-coding technique. To transform graph queries into trees without loss of information, we use a feedback vertex set coupled with a node duplication mechanism. Hence, our algorithm is FPT for querying graphs with a bounded feedback vertex set size. It gives an alternative to the treewidth parameter, which can be better or worse for a given query. We provide a Python implementation which allows us to validate our approach on real data. In particular, we retrieve some human queries in the shape of graphs in the fly PPI network.
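
    A minimal sketch of the color-coding technique on which such algorithms build, shown here for the simplest case of deciding whether a graph contains a simple path on k vertices (the paper combines this with a feedback vertex set and node duplication to handle graph-shaped queries); all names are illustrative:

      import random

      def colorful_path_exists(adj, k, colors):
          # DP over states (endpoint, set of colors used) for paths whose
          # vertices all received distinct colors.
          states = {(v, frozenset([colors[v]])) for v in adj}
          for _ in range(k - 1):
              nxt = set()
              for v, used in states:
                  for u in adj[v]:
                      if colors[u] not in used:
                          nxt.add((u, used | {colors[u]}))
              states = nxt
          return bool(states)

      def has_k_path(adj, k, trials=200):
          # A fixed k-vertex path is colorful under a random k-coloring
          # with probability k!/k^k, so repeated trials find it w.h.p.
          for _ in range(trials):
              colors = {v: random.randrange(k) for v in adj}
              if colorful_path_exists(adj, k, colors):
                  return True
          return False

      adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
      print(has_k_path(adj, 4))   # True: the path 0-1-2-3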

  9. Parameterization and prediction of nanoparticle transport in porous media: A reanalysis using artificial neural network

    NASA Astrophysics Data System (ADS)

    Babakhani, Peyman; Bridge, Jonathan; Doong, Ruey-an; Phenrat, Tanapon

    2017-06-01

    The continuing rapid expansion of industrial and consumer processes based on nanoparticles (NP) necessitates a robust model for delineating their fate and transport in groundwater. An ability to reliably specify the full parameter set for prediction of NP transport using continuum models is crucial. In this paper we report the reanalysis of a data set of 493 published column experiment outcomes together with their continuum modeling results. Experimental properties were parameterized into 20 factors which are commonly available. They were then used to predict five key continuum model parameters as well as the effluent concentration via artificial neural network (ANN)-based correlations. The Partial Derivatives (PaD) technique and the Monte Carlo method were used for the analysis of sensitivities and model-produced uncertainties, respectively. The outcomes shed light on several controversial relationships between the parameters; e.g., it was revealed that the trend of K_att with average pore water velocity was positive. The resulting correlations, despite being developed based on a "black-box" technique (ANN), were able to explain the effects of theoretical parameters such as the critical deposition concentration (CDC), even though these parameters were not explicitly considered in the model. Porous media heterogeneity was considered as a parameter for the first time and showed sensitivities higher than those of dispersivity. The model performance was validated against subsets of the experimental data and compared with current models. The robustness of the correlation matrices was not completely satisfactory, since they failed to predict the experimental breakthrough curves (BTCs) at extreme values of ionic strength.

  10. TRStalker: an efficient heuristic for finding fuzzy tandem repeats.

    PubMed

    Pellegrini, Marco; Renda, M Elena; Vecchio, Alessio

    2010-06-15

    Genomes in higher eukaryotic organisms contain a substantial amount of repeated sequences. Tandem Repeats (TRs) constitute a large class of repetitive sequences that are originated via phenomena such as replication slippage and are characterized by close spatial contiguity. They play an important role in several molecular regulatory mechanisms, and also in several diseases (e.g. in the group of trinucleotide repeat disorders). While for TRs with a low or medium level of divergence the current methods are rather effective, the problem of detecting TRs with higher divergence (fuzzy TRs) is still open. The detection of fuzzy TRs is propaedeutic to enriching our view of their role in regulatory mechanisms and diseases. Fuzzy TRs are also important as tools to shed light on the evolutionary history of the genome, where higher divergence correlates with more remote duplication events. We have developed an algorithm (christened TRStalker) with the aim of detecting efficiently TRs that are hard to detect because of their inherent fuzziness, due to high levels of base substitutions, insertions and deletions. To attain this goal, we developed heuristics to solve a Steiner version of the problem for which the fuzziness is measured with respect to a motif string not necessarily present in the input string. This problem is akin to the 'generalized median string' that is known to be an NP-hard problem. Experiments with both synthetic and biological sequences demonstrate that our method performs better than current state of the art for fuzzy TRs and that the fuzzy TRs of the type we detect are indeed present in important biological sequences. TRStalker will be integrated in the web-based TRs Discovery Service (TReaDS) at bioalgo.iit.cnr.it. Supplementary data are available at Bioinformatics online.

  11. In situ frustum indentation of nanoporous copper thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ran; Pathak, Siddhartha; Mook, William M.

    Mechanical properties of thin films are often obtained solely from nanoindentation. At the same time, such measurements are characterized by a substantial amount of uncertainty, especially when mean pressure or hardness is used to infer uniaxial yield stress. In this paper we demonstrate that indentation with a pyramidal flat-tip (frustum) indenter near the free edge of a sample can provide a significantly better estimate of the uniaxial yield strength than the frequently used Berkovich indenter. This is first demonstrated using a numerical model for a material with an isotropic pressure-sensitive yield criterion. Numerical simulations confirm that the indenter geometry provides a clear distinction of the mean pressure at which a material transitions to inelastic behavior. The mean critical pressure is highly dependent on the plastic Poisson ratio ν_p, so that at the 1% offset of normalized indent depth, the critical pressure p_mc normalized to the uniaxial yield strength σ_0 satisfies 1 < p_mc/σ_0 < 1.3 for materials with 0 < ν_p < 0.5. Choosing a frustum over a Berkovich indenter reduces the uncertainty in hardness by a factor of 3. These results are used to interpret frustum indentation experiments on nanoporous (NP) copper with struts of typical diameter 45 nm. The yield strength of NP copper is estimated as 230 MPa < σ_0 < 300 MPa. Edge indentation further allows one to obtain in-plane strain maps near the critical pressure. Finally, comparison of the experimentally obtained in-plane strain maps of NP Cu during deformation with the strain fields for different plastic Poisson ratios suggests that this material has a plastic Poisson ratio of the order of 0.2–0.3. However, existing constitutive models may not adequately capture the post-yield behavior of NP metals.

  12. Evidence for TiO2 nanoparticle transfer in a hard-rock aquifer.

    PubMed

    Cary, Lise; Pauwels, Hélène; Ollivier, Patrick; Picot, Géraldine; Leroy, Philippe; Mougin, Bruno; Braibant, Gilles; Labille, Jérôme

    2015-08-01

    Water flow and TiO2 nanoparticle (NP) transfer in a fractured hard-rock aquifer were studied in a tracer test experiment at a pilot site in Brittany, France. Results from the Br tracer test show that the schist aquifer can be represented by a two-layer medium comprising i) fractures with low longitudinal dispersivity in which water and solute transport is relatively fast, and ii) a network of small fissures with high longitudinal dispersivity in which transport is slower. Although a large amount of NPs was retained within the aquifer, a significant TiO2 concentration was measured in a well 15 m downstream of the NP injection well, clearly confirming the potential for TiO2 NPs to be transported in groundwater. The Ti concentration profile in the downstream well was modelled using a two-layer medium approach. The delay applied in the TiO2 NP simulation relative to the Br concentration profiles in the downstream well indicates that the aggregated TiO2 NPs interacted with the rock. Unlike Br, NPs do not penetrate the entire pore network during transfer, because of electrostatic interactions between NP aggregates and the rock and also because of the aggregate size and the hydrodynamic conditions, especially where the porosity is very low; NPs with a weak negative charge can attach onto the rock surface, and more particularly onto the positively charged iron oxyhydroxides coating the main pathways as a result of natural denitrification. Nevertheless, TiO2 NPs are mobile and transfer within the fracture and fissure media. Any modification of the aquifer's chemical conditions is likely to impact the groundwater pH, the nitrate content and the denitrification process, and thus affect NP aggregation and attachment.

  13. In situ frustum indentation of nanoporous copper thin films

    DOE PAGES

    Liu, Ran; Pathak, Siddhartha; Mook, William M.; ...

    2017-07-24

    Mechanical properties of thin films are often obtained solely from nanoindentation. At the same time, such measurements are characterized by a substantial amount of uncertainty, especially when mean pressure or hardness is used to infer uniaxial yield stress. In this paper we demonstrate that indentation with a pyramidal flat-tip (frustum) indenter near the free edge of a sample can provide a significantly better estimate of the uniaxial yield strength than the frequently used Berkovich indenter. This is first demonstrated using a numerical model for a material with an isotropic pressure-sensitive yield criterion. Numerical simulations confirm that the indenter geometry provides a clear distinction of the mean pressure at which a material transitions to inelastic behavior. The mean critical pressure is highly dependent on the plastic Poisson ratio ν_p, so that at the 1% offset of normalized indent depth, the critical pressure p_mc normalized to the uniaxial yield strength σ_0 satisfies 1 < p_mc/σ_0 < 1.3 for materials with 0 < ν_p < 0.5. Choosing a frustum over a Berkovich indenter reduces the uncertainty in hardness by a factor of 3. These results are used to interpret frustum indentation experiments on nanoporous (NP) copper with struts of typical diameter 45 nm. The yield strength of NP copper is estimated as 230 MPa < σ_0 < 300 MPa. Edge indentation further allows one to obtain in-plane strain maps near the critical pressure. Finally, comparison of the experimentally obtained in-plane strain maps of NP Cu during deformation with the strain fields for different plastic Poisson ratios suggests that this material has a plastic Poisson ratio of the order of 0.2–0.3. However, existing constitutive models may not adequately capture the post-yield behavior of NP metals.

  14. Exact solutions for species tree inference from discordant gene trees.

    PubMed

    Chang, Wen-Chieh; Górecki, Paweł; Eulenstein, Oliver

    2013-10-01

    Phylogenetic analysis has to overcome the grand challenge of inferring accurate species trees from evolutionary histories of gene families (gene trees) that are discordant with the species tree along whose branches they have evolved. Two well-studied approaches to cope with this challenge are to solve either biologically informed gene tree parsimony (GTP) problems under gene duplication, gene loss, and deep coalescence, or the classic RF supertree problem that does not rely on any biological model. Despite the potential of these problems to infer credible species trees, they are NP-hard. Therefore, these problems are addressed by heuristics that typically lack any provable accuracy and precision. We describe fast dynamic programming algorithms that solve the GTP problems and the RF supertree problem exactly, and demonstrate that our algorithms can solve instances with data sets consisting of as many as 22 taxa. Extensions of our algorithms can also report the number of all optimal species trees, as well as the trees themselves. To better assess the quality of the resulting species trees that best fit the given gene trees, we also compute the worst-case species trees, their numbers, and the optimization score for each of the computational problems. Finally, we demonstrate the performance of our exact algorithms using empirical and simulated data sets, and analyze the quality of heuristic solutions for the studied problems by contrasting them with our exact solutions.

  15. Fits of weak annihilation and hard spectator scattering corrections in B_{u,d} → VV decays

    NASA Astrophysics Data System (ADS)

    Chang, Qin; Li, Xiao-Nan; Sun, Jun-Feng; Yang, Yue-Ling

    2016-10-01

    In this paper, the contributions of weak annihilation and hard spectator scattering in B → ρK*, K*K̄*, φK*, ρρ and φφ decays are investigated within the framework of quantum chromodynamics (QCD) factorization. Using the available experimental data, we perform χ² analyses of end-point parameters in four cases based on the topology-dependent and the polarization-dependent parameterization schemes. The fitted results indicate that: (i) in the topology-dependent scheme, the relation (ρ_A^i, φ_A^i) …

  16. Parameterizing Coefficients of a POD-Based Dynamical System

    NASA Technical Reports Server (NTRS)

    Kalb, Virginia L.

    2010-01-01

    A method of parameterizing the coefficients of a dynamical system based on a proper orthogonal decomposition (POD) representing the flow dynamics of a viscous fluid has been introduced. (A brief description of POD is presented in the immediately preceding article.) The present parameterization method is intended to enable construction of the dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers. The need for this or a similar method arises as follows: A procedure that includes direct numerical simulation followed by POD, followed by Galerkin projection to a dynamical system has been proven to enable representation of flow dynamics by a low-dimensional model at the Reynolds number of the simulation. However, a more difficult task is to obtain models that are valid over a range of Reynolds numbers. Extrapolation of low-dimensional models by use of straightforward Reynolds-number-based parameter continuation has proven to be inadequate for successful prediction of flows. A key part of the problem of constructing a dynamical system to accurately represent the temporal evolution of the flow dynamics over a range of Reynolds numbers is the problem of understanding and providing for the variation of the coefficients of the dynamical system with the Reynolds number. Prior methods do not enable capture of temporal dynamics over ranges of Reynolds numbers in low-dimensional models, and are not even satisfactory when large numbers of modes are used. The basic idea of the present method is to solve the problem through a suitable parameterization of the coefficients of the dynamical system. The parameterization computations involve utilization of the transfer of kinetic energy between modes as a function of Reynolds number. The thus-parameterized dynamical system accurately predicts the flow dynamics and is applicable to a range of flow problems in the dynamical regime around the Hopf bifurcation. Parameter-continuation software can be used on the parameterized dynamical system to derive a bifurcation diagram that accurately predicts the temporal flow behavior.

  17. Temporal Constraint Reasoning With Preferences

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Morris, Paul; Morris, Robert; Rossi, Francesca

    2001-01-01

    A number of reasoning problems involving the manipulation of temporal information can naturally be viewed as implicitly inducing an ordering of potential local decisions involving time (specifically, associated with durations or orderings of events) on the basis of preferences. For example, a pair of events might be constrained to occur in a certain order, and, in addition, it might be preferable that the delay between them be as large, or as small, as possible. This paper explores problems in which a set of temporal constraints is specified, where each constraint is associated with preference criteria for making local decisions about the events involved in the constraint, and a reasoner must infer a complete solution to the problem such that, to the extent possible, these local preferences are met in the best way. A constraint framework for reasoning about time is generalized to allow for preferences over event distances and durations, and we study the complexity of solving problems in the resulting formalism. It is shown that while in general such problems are NP-hard, some restrictions on the shape of the preference functions, and on the structure of the preference set, can be enforced to achieve tractability. In these cases, a simple generalization of a single-source shortest path algorithm can be used to compute a globally preferred solution in polynomial time.

  18. A modified generalized extremal optimization algorithm for the quay crane scheduling problem with interference constraints

    NASA Astrophysics Data System (ADS)

    Guo, Peng; Cheng, Wenming; Wang, Yi

    2014-10-01

    The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain the near-optimal solutions to overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment to an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experiment results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.

  19. Discrete bacteria foraging optimization algorithm for graph based problems - a transition from continuous to discrete

    NASA Astrophysics Data System (ADS)

    Sur, Chiranjib; Shukla, Anupam

    2018-03-01

    The Bacteria Foraging Optimisation Algorithm is a collective-behaviour-based meta-heuristic search method driven by the social influence of bacteria co-agents in the search space of the problem. The algorithm faces considerable hindrance in its application to discrete and graph-based problems due to its biased mathematical modelling and dynamic structure. This motivated the introduction of a discrete form, the Discrete Bacteria Foraging Optimisation (DBFO) Algorithm, for discrete problems, which in real life outnumber the continuous-domain problems representable by mathematical and numerical equations. In this work, we simulate a graph-based multi-objective road optimisation problem and discuss the prospects of applying DBFO to other similar optimisation and graph-based problems. The various solution representations that DBFO can handle are also discussed. The implications and dynamics of the parameters used in DBFO are illustrated from the point of view of the problems, combining both exploration and exploitation. The results of DBFO are compared with the Ant Colony Optimisation and Intelligent Water Drops algorithms. An important feature of DBFO is that the bacteria agents do not depend on local heuristic information but estimate new exploration schemes from previous experience and covered-path analysis. This makes the algorithm better at combination generation for graph-based and NP-hard problems.

  20. Unprovable Security of Two-Message Zero Knowledge

    DTIC Science & Technology

    2012-12-19

  1. Short superstrings and the structure of overlapping strings.

    PubMed

    Armen, C; Stein, C

    1995-01-01

    Given a collection of strings S = {s_1, ..., s_n} over an alphabet Σ, a superstring α of S is a string containing each s_i as a substring; that is, for each i, 1 ≤ i ≤ n, α contains a block of |s_i| consecutive characters that match s_i exactly. The shortest superstring problem is the problem of finding a superstring α of minimum length. The shortest superstring problem has applications in both computational biology and data compression. The shortest superstring problem is NP-hard (Gallant et al., 1980); in fact, it was recently shown to be MAX SNP-hard (Blum et al., 1994). Given the importance of the applications, several heuristics and approximation algorithms have been proposed. Constant factor approximation algorithms have been given in Blum et al. (1994) (factor of 3), Teng and Yao (1993) (factor of 2 8/9), Czumaj et al. (1994) (factor of 2 5/6), and Kosaraju et al. (1994) (factor of 2 50/63). Informally, the key to any algorithm for the shortest superstring problem is to identify sets of strings with large amounts of similarity, or overlap. Although the previous algorithms and their analyses have grown increasingly sophisticated, they reveal remarkably little about the structure of strings with large amounts of overlap. In this sense, they are solving a more general problem than the one at hand. In this paper, we study the structure of strings with large amounts of overlap and use our understanding to give an algorithm that finds a superstring whose length is no more than 2 3/4 times that of the optimal superstring. Our algorithm runs in O(|S| + n^3) time, which matches that of previous algorithms. We prove several interesting properties about short periodic strings, allowing us to answer questions of the following form: Given a string with some periodic structure, characterize all the possible periodic strings that can have a large amount of overlap with the first string.
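
    For intuition about how overlap drives these algorithms, the sketch below implements the classic greedy heuristic (repeatedly merge the pair of strings with maximum overlap) rather than the paper's 2 3/4-approximation algorithm; names are illustrative:

      def overlap(a, b):
          # Length of the longest suffix of a that is a prefix of b.
          for k in range(min(len(a), len(b)), 0, -1):
              if a.endswith(b[:k]):
                  return k
          return 0

      def greedy_superstring(strings):
          # Drop strings contained in others, then merge greedily.
          s = [x for x in strings if not any(x != y and x in y for y in strings)]
          while len(s) > 1:
              i, j, k = max(((i, j, overlap(s[i], s[j]))
                             for i in range(len(s))
                             for j in range(len(s)) if i != j),
                            key=lambda t: t[2])
              merged = s[i] + s[j][k:]
              s = [x for idx, x in enumerate(s) if idx not in (i, j)] + [merged]
          return s[0]

      print(greedy_superstring(['abc', 'bcd', 'cde']))   # 'abcde'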

  2. Integration of Externally Carried Weapon Systems with Military Helicopters (L’Integration des Systemes d’Armes Transportes en Charge Externe sur les Helicopteres Militaires)

    DTIC Science & Technology

    1990-04-01

  3. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE PAGES

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg; ...

    2018-05-03

    This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.
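
    The QUBO formulation mentioned above can be written down directly: reward every chosen vertex and penalize every chosen pair that is not an edge, so that for a large enough penalty the minima are exactly the maximum cliques. A minimal sketch (penalty value and names illustrative), brute-forced here on a toy graph in place of the annealer:

      import itertools

      def max_clique_qubo(n, edges, penalty=2.0):
          # Minimize  -sum_i x_i + penalty * sum over non-edges (i,j) of x_i*x_j.
          E = {frozenset(e) for e in edges}
          Q = {(i, i): -1.0 for i in range(n)}          # reward chosen vertices
          for i, j in itertools.combinations(range(n), 2):
              if frozenset((i, j)) not in E:
                  Q[(i, j)] = penalty                   # forbid non-adjacent pairs
          return Q

      def qubo_value(Q, x):
          return sum(c * x[i] * x[j] for (i, j), c in Q.items())

      # Triangle 0-1-2 plus a pendant vertex 3: the optimum is the triangle.
      Q = max_clique_qubo(4, [(0, 1), (1, 2), (0, 2), (2, 3)])
      best = min(itertools.product((0, 1), repeat=4), key=lambda x: qubo_value(Q, x))
      print(best)   # (1, 1, 1, 0)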

  4. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg

    This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.

  5. Does finite-temperature decoding deliver better optima for noisy Hamiltonians?

    NASA Astrophysics Data System (ADS)

    Ochoa, Andrew J.; Nishimura, Kohji; Nishimori, Hidetoshi; Katzgraber, Helmut G.

    The minimization of an Ising spin-glass Hamiltonian is an NP-hard problem. Because many problems across disciplines can be mapped onto this class of Hamiltonian, novel efficient computing techniques are highly sought after. The recent development of quantum annealing machines promises to minimize these difficult problems more efficiently. However, the inherent noise found in these analog devices makes the minimization procedure difficult. While the machine might be working correctly, it might be minimizing a different Hamiltonian due to the inherent noise. This means that, in general, the ground-state configuration that correctly minimizes a noisy Hamiltonian might not minimize the noise-less Hamiltonian. Inspired by rigorous results that the energy of the noise-less ground-state configuration is equal to the expectation value of the energy of the noisy Hamiltonian at the (nonzero) Nishimori temperature [J. Phys. Soc. Jpn. 62, 4013 (1993)], we numerically study the decoding probability of the original noise-less ground state with noisy Hamiltonians in two space dimensions, as well as the D-Wave Inc. Chimera topology. Our results suggest that thermal fluctuations might be beneficial during the optimization process in analog quantum annealing machines.

  6. Minimizing communication cost among distributed controllers in software defined networks

    NASA Astrophysics Data System (ADS)

    Arlimatti, Shivaleela; Elbreiki, Walid; Hassan, Suhaidi; Habbal, Adib; Elshaikh, Mohamed

    2016-08-01

    Software Defined Networking (SDN) is a new paradigm that increases the flexibility of today's networks by promising a programmable network. The fundamental idea behind this new architecture is to simplify network complexity by decoupling the control plane and the data plane of the network devices, and by making the control plane centralized. Recently, controllers have been distributed to solve the problem of a single point of failure and to increase scalability and flexibility during workload distribution. Even though distributed controllers are flexible and scalable enough to accommodate more network switches, the intercommunication cost between distributed controllers is still a challenging issue in the Software Defined Network environment. This paper aims to fill the gap by proposing a new mechanism that minimizes intercommunication cost via graph partitioning, an NP-hard problem. The proposed methodology swaps network elements between controller domains, calculating the communication gain of each swap to minimize communication cost; the swapping of elements minimizes both inter- and intra-domain communication cost. We validate our work with the OMNeT++ simulation environment. Simulation results show that the proposed mechanism reduces the inter-domain communication cost among controllers compared to traditional distributed controllers.
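
    The communication gain behind such a swap can be computed locally, in the style of a Kernighan-Lin move gain (a sketch under assumed data structures, not the paper's exact formulation): moving a switch pays off when its traffic toward the destination domain exceeds its traffic inside the source domain:

      def move_gain(v, src, dst, weight, domain):
          # Reduction in inter-domain traffic if switch v moves from domain
          # src to domain dst: edges to dst become internal, while edges
          # inside src become external.
          internal = sum(w for u, w in weight[v].items() if domain[u] == src)
          external = sum(w for u, w in weight[v].items() if domain[u] == dst)
          return external - internal

      weight = {0: {1: 3, 2: 1}, 1: {0: 3, 2: 1}, 2: {0: 1, 1: 1}}
      domain = {0: 'A', 1: 'B', 2: 'B'}
      print(move_gain(0, 'A', 'B', weight, domain))   # 4: both edges of 0 become internal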

  7. Flexible Job-Shop Scheduling with Dual-Resource Constraints to Minimize Tardiness Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Paksi, A. B. N.; Ma'ruf, A.

    2016-02-01

    In general, both machines and human resources are needed for processing a job on the production floor. However, most classical scheduling problems have ignored the possible constraint caused by the availability of workers and have considered only machines as a limited resource. In addition, along with production technology development, routing flexibility appears as a consequence of high product variety and medium demand for each product. Routing flexibility arises from machines capable of performing more than one machining process. This paper presents a method to address a scheduling problem constrained by both machines and workers, considering routing flexibility. Scheduling in a dual-resource constrained shop is categorized as an NP-hard problem that needs long computational time. A meta-heuristic approach based on a Genetic Algorithm is used due to its practical applicability in industry. The developed Genetic Algorithm uses an indirect chromosome representation and a procedure to transform chromosomes into Gantt charts. Genetic operators, namely selection, elitism, crossover, and mutation, are applied to search for the best fitness value until a steady-state condition is achieved. A case study in a manufacturing SME is used to minimize tardiness as the objective function. The algorithm showed a 25.6% reduction in tardiness, equal to 43.5 hours.

  8. Summarisation of weighted networks

    NASA Astrophysics Data System (ADS)

    Zhou, Fang; Qu, Qiang; Toivonen, Hannu

    2017-09-01

    Networks often contain implicit structure. We introduce novel problems and methods that look for structure in networks by grouping nodes into supernodes and edges into superedges, and then make this structure visible to the user in a smaller generalised network. This task of finding generalisations of nodes and edges is formulated as 'network summarisation'. We propose models and algorithms for networks that have weights on edges, on nodes or on both, and study three new variants of the network summarisation problem. In edge-based weighted network summarisation, the summarised network should preserve edge weights as well as possible. A wider class of settings is considered in path-based weighted network summarisation, where the resulting summarised network should preserve longer-range connectivities between nodes. Node-based weighted network summarisation in turn allows weights on nodes as well, and summarisation aims to preserve more information related to high-weight nodes. We study theoretical properties of these problems and show them to be NP-hard. We propose a range of heuristic generalisation algorithms with different trade-offs between complexity and quality of the result. Comprehensive experiments on real data show that weighted networks can be summarised efficiently with relatively little error.
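
    A minimal sketch of the edge-based variant (illustrative): if each superedge carries the mean weight of the original edges it replaces, a grouping of nodes into supernodes can be scored by the total deviation of the original weights from those means:

      import itertools

      def summarisation_error(groups, weight):
          # groups: list of node lists (supernodes); weight: {(u, v): w} with u < v.
          err = 0.0
          for g1, g2 in itertools.combinations_with_replacement(groups, 2):
              ws = [weight.get((min(u, v), max(u, v)), 0.0)
                    for u in g1 for v in g2 if u < v]
              if ws:
                  mean = sum(ws) / len(ws)      # weight of the superedge
                  err += sum(abs(w - mean) for w in ws)
          return err

      w = {(0, 1): 2.0, (1, 2): 2.0, (0, 3): 9.0}
      print(summarisation_error([[0, 1], [2, 3]], w))   # 12.5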

  9. Ultrafast adiabatic quantum algorithm for the NP-complete exact cover problem

    PubMed Central

    Wang, Hefeng; Wu, Lian-Ao

    2016-01-01

    An adiabatic quantum algorithm may lose quantumness such as quantum coherence entirely in its long runtime, and consequently the expected quantum speedup of the algorithm does not show up. Here we present a general ultrafast adiabatic quantum algorithm. We show that by applying a sequence of fast random or regular signals during evolution, the runtime can be reduced substantially, whereas advantages of the adiabatic algorithm remain intact. We also propose a randomized Trotter formula and show that the driving Hamiltonian and the proposed sequence of fast signals can be implemented simultaneously. We illustrate the algorithm by solving the NP-complete 3-bit exact cover problem (EC3), where NP stands for nondeterministic polynomial time, and put forward an approach to implementing the problem with trapped ions. PMID:26923834

  10. The mechanisms of temporal inference

    NASA Technical Reports Server (NTRS)

    Fox, B. R.; Green, S. R.

    1987-01-01

    The properties of a temporal language are determined by its constituent elements: the temporal objects which it can represent, the attributes of those objects, the relationships between them, the axioms which define the default relationships, and the rules which define the statements that can be formulated. The methods of inference which can be applied to a temporal language are derived in part from a small number of axioms which define the meaning of equality and order and how those relationships can be propagated. More complex inferences involve detailed analysis of the stated relationships. Perhaps the most challenging area of temporal inference is reasoning over disjunctive temporal constraints. Simple forms of disjunction do not sufficiently increase the expressive power of a language while unrestricted use of disjunction makes the analysis NP-hard. In many cases a set of disjunctive constraints can be converted to disjunctive normal form and familiar methods of inference can be applied to the conjunctive sub-expressions. This process itself is NP-hard but it is made more tractable by careful expansion of a tree-structured search space.

  11. An integer programming formulation of the parsimonious loss of heterozygosity problem.

    PubMed

    Catanzaro, Daniele; Labbé, Martine; Halldórsson, Bjarni V

    2013-01-01

    A loss of heterozygosity (LOH) event occurs when, by the laws of Mendelian inheritance, an individual should be heterozygote at a given site but, due to a deletion polymorphism, is not. Deletions play an important role in human disease and their detection could provide fundamental insights for the development of new diagnostics and treatments. In this paper, we investigate the parsimonious loss of heterozygosity problem (PLOHP), i.e., the problem of partitioning suspected polymorphisms from a set of individuals into a minimum number of deletion areas. Specifically, we generalize Halldórsson et al.'s work by providing a more general formulation of the PLOHP and by showing how one can incorporate different recombination rates and prior knowledge about the locations of deletions. Moreover, we show that the PLOHP can be formulated as a specific version of the clique partition problem in a particular class of graphs called undirected catch-point interval graphs and we prove its general NP-hardness. Finally, we provide a state-of-the-art integer programming (IP) formulation and strengthening valid inequalities to exactly solve real instances of the PLOHP containing up to 9,000 individuals and 3,000 SNPs. Our results give perspectives on the mathematics of the PLOHP and suggest new directions on the development of future efficient exact solution approaches.

  12. Performance Analysis of Evolutionary Algorithms for Steiner Tree Problems.

    PubMed

    Lai, Xinsheng; Zhou, Yuren; Xia, Xiaoyun; Zhang, Qingfu

    2017-01-01

    The Steiner tree problem (STP), which is NP-hard, aims to determine some Steiner nodes such that the minimum spanning tree over these Steiner nodes and a given set of special nodes has minimum weight. STP includes several important cases; the Steiner tree problem in graphs (GSTP) is one of them. Many heuristics have been proposed for STP, and some of them have been proved to be performance-guarantee approximation algorithms for this problem. Since evolutionary algorithms (EAs) are general and popular randomized heuristics, it is significant to investigate the performance of EAs for STP. Several empirical investigations have shown that EAs are efficient for STP. However, up to now, there has been no theoretical work on the performance of EAs for STP. In this article, we reveal that the (1+1) EA achieves a 3/2-approximation ratio for STP in a special class of quasi-bipartite graphs in expected runtime [Formula: see text], where [Formula: see text], [Formula: see text], and [Formula: see text] are, respectively, the number of Steiner nodes, the number of special nodes, and the largest weight among all edges in the input graph. We also show that the (1+1) EA is better than two other heuristics on two GSTP instances, and that the (1+1) EA may be inefficient on a constructed GSTP instance.
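
    For readers unfamiliar with the setting, the following Python sketch shows a (1+1) EA for GSTP under common textbook assumptions: an individual is a bit vector over the Steiner nodes, fitness is the MST weight of the selected Steiner nodes plus all special nodes (infinite if disconnected), and mutation flips each bit with probability 1/n. It illustrates the algorithm class analysed, not the paper's proofs.

      import random

      def mst_weight(nodes, w):
          """Prim's algorithm over `nodes`; w[(u, v)] holds symmetric edge
          weights. Returns infinity if the induced graph is disconnected."""
          nodes = list(nodes)
          if not nodes:
              return 0.0
          seen, total = {nodes[0]}, 0.0
          while len(seen) < len(nodes):
              cand = [(w[(u, v)], v) for u in seen for v in nodes
                      if v not in seen and (u, v) in w]
              if not cand:
                  return float("inf")      # heavy penalty: not a Steiner tree
              c, v = min(cand)
              seen.add(v)
              total += c
          return total

      def one_plus_one_ea(steiner, special, w, iters=2000):
          n = len(steiner)
          fit = lambda bits: mst_weight(
              [s for s, b in zip(steiner, bits) if b] + special, w)
          x = [random.random() < 0.5 for _ in range(n)]
          fx = fit(x)
          for _ in range(iters):
              y = [b ^ (random.random() < 1 / n) for b in x]  # flip bits w.p. 1/n
              fy = fit(y)
              if fy <= fx:                 # (1+1) elitism: keep if not worse
                  x, fx = y, fy
          return x, fx

      # Toy instance: Steiner node s shortcuts the special nodes u and v.
      w = {}
      for a, b, c in [("u", "s", 1.0), ("s", "v", 1.0), ("u", "v", 3.0)]:
          w[(a, b)] = w[(b, a)] = c
      print(one_plus_one_ea(["s"], ["u", "v"], w))   # expect ([True], 2.0)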

  13. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    PubMed

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2018-04-01

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with low computational complexity. When the data dimension is high, an approximate, instead of exact, convex hull is allowed to be selected by setting an appropriate termination condition, in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proved for the difference between the support vector machines trained on the selected approximate convex hull vertices and on all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.

  14. Process-driven inference of biological network structure: feasibility, minimality, and multiplicity

    NASA Astrophysics Data System (ADS)

    Zeng, Chen

    2012-02-01

    For a given dynamic process, identifying the putative interaction networks that achieve it is the inference problem. In this talk, we address the computational complexity of the inference problem in the context of Boolean networks under the dominant-inhibition condition. The first result is a proof that the feasibility problem (is there a network that explains the dynamics?) can be solved in polynomial time. Second, while the minimality problem (what is the smallest network that explains the dynamics?) is shown to be NP-hard, a simple polynomial-time heuristic is shown to produce near-minimal solutions, as demonstrated by simulation. Third, the theoretical framework also leads to a fast polynomial-time heuristic to estimate the number of network solutions with reasonable accuracy. We apply these approaches to two simplified Boolean network models for the cell cycle process of budding yeast (Li 2004) and fission yeast (Davidich 2008). Our results demonstrate that each of these networks contains a giant backbone motif spanning all the network nodes that provides the desired main functionality, while the remaining edges in the network form smaller motifs whose role is to confer stability properties rather than provide function. Moreover, we show that the bioprocesses of these two cell cycle models differ considerably from a typically generated process and are intrinsically cascade-like.

  15. BiCluE - Exact and heuristic algorithms for weighted bi-cluster editing of biomedical data

    PubMed Central

    2013-01-01

    Background The explosion of biological data has dramatically reformed today's biology research. The biggest challenge for biologists and bioinformaticians is the integration and analysis of large quantities of data to provide meaningful insights. One major problem is the combined analysis of data of different types. Bi-cluster editing, a special case of clustering that partitions two different types of data simultaneously, can be used in several biomedical scenarios. However, the underlying algorithmic problem is NP-hard. Results Here we contribute BiCluE, a software package designed to solve the weighted bi-cluster editing problem. It implements (1) an exact algorithm based on fixed-parameter tractability and (2) a polynomial-time greedy heuristic based on solving the hardest part, edge deletions, first. We evaluated its performance on artificial graphs. Afterwards, as an example, we applied our implementation to real-world biomedical data, GWAS data in this case. BiCluE generally works on any kind of data that can be modeled as a weighted or unweighted bipartite graph. Conclusions To our knowledge, this is the first software package solving the weighted bi-cluster editing problem. BiCluE as well as the supplementary results are available online at http://biclue.mpi-inf.mpg.de. PMID:24565035

  16. BCDP: Budget Constrained and Delay-Bounded Placement for Hybrid Roadside Units in Vehicular Ad Hoc Networks

    PubMed Central

    Li, Peng; Huang, Chuanhe; Liu, Qin

    2014-01-01

    In vehicular ad hoc networks, roadside unit (RSU) placement has been proposed to improve the overall network performance in many ITS applications. This paper addresses the budget constrained and delay-bounded placement problem (BCDP) for roadside units in vehicular ad hoc networks. There are two types of RSUs: cable-connected RSUs (c-RSUs) and wireless RSUs (w-RSUs). c-RSUs are interconnected through wired lines and form the backbone of VANETs, while w-RSUs connect to other RSUs through wireless communication and serve as an economical extension of the coverage of c-RSUs. The delay-bounded coverage range and deployment cost of these two types are totally different. Given a budget constraint and a delay bound, the problem is to find the optimal candidate sites with the maximal delay-bounded coverage at which to place RSUs, such that a message from any c-RSU in the region can be disseminated to as many vehicles as possible within the given budget constraint and delay bound. We first prove that the BCDP problem is NP-hard. Then we propose several algorithms to solve it. Simulation results show that the proposed heuristic algorithms can significantly improve the coverage range and reduce the total deployment cost, compared with other heuristic methods. PMID:25436656
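
    A budgeted greedy placement heuristic in the spirit of this problem can be sketched in a few lines of Python: repeatedly pick the candidate site with the best ratio of newly covered vehicles to deployment cost while the budget allows. The data layout and the gain-per-cost rule are illustrative assumptions, not the paper's exact algorithms.

      def greedy_placement(sites, budget):
          """sites: {site: (cost, covered_vehicle_set)}, costs positive.
          Pick the best newly-covered-per-cost site while the budget allows."""
          covered, chosen, spent = set(), [], 0.0
          while True:
              best, best_ratio = None, 0.0
              for s, (cost, cov) in sites.items():
                  if s in chosen or spent + cost > budget:
                      continue
                  gain = len(cov - covered)
                  if gain / cost > best_ratio:
                      best, best_ratio = s, gain / cost
              if best is None:
                  return chosen, covered
              cost, cov = sites[best]
              chosen.append(best)
              spent += cost
              covered |= cov

      sites = {"s1": (3.0, {1, 2, 3}), "s2": (1.0, {3, 4}), "s3": (2.0, {5})}
      print(greedy_placement(sites, 4.0))   # -> (['s2', 's1'], {1, 2, 3, 4})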

  17. Solving the Secondary Structure Matching Problem in Cryo-EM De Novo Modeling Using a Constrained K-Shortest Path Graph Algorithm.

    PubMed

    Al Nasr, Kamal; Ranjan, Desh; Zubair, Mohammad; Chen, Lin; He, Jing

    2014-01-01

    Electron cryomicroscopy is becoming a major experimental technique for solving the structures of large molecular assemblies. More and more three-dimensional images have been obtained at medium resolutions between 5 and 10 Å. At this resolution range, major α-helices can be detected as cylindrical sticks and β-sheets can be detected as plate-like regions. A critical question in de novo modeling from cryo-EM images is to determine the match between the detected secondary structures from the image and those on the protein sequence. We formulate this matching problem as a constrained graph problem and present an O(Δ²N²2^N) algorithm for this NP-hard problem. The algorithm incorporates a dynamic programming approach into a constrained K-shortest path algorithm. Our method, DP-TOSS, has been tested using α-proteins with up to 33 helices and α-β proteins with up to five helices and 12 β-strands. The correct match was ranked within the top 35 for 19 of the 20 α-proteins and all nine α-β proteins tested. The results demonstrate that DP-TOSS improves accuracy, time and memory space in deriving the topologies of the secondary structure elements for proteins with a large number of secondary structures and a complex skeleton.

  18. A mixed analog/digital chaotic neuro-computer system for quadratic assignment problems.

    PubMed

    Horio, Yoshihiko; Ikeguchi, Tohru; Aihara, Kazuyuki

    2005-01-01

    We construct a mixed analog/digital chaotic neuro-computer prototype system for quadratic assignment problems (QAPs). The QAP is one of the difficult NP-hard problems and has several real-world applications. Chaotic neural networks have been used to solve combinatorial optimization problems through chaotic search dynamics, which efficiently search for optimal or near-optimal solutions. However, preliminary experiments showed that, although it obtained good feasible solutions, the Hopfield-type chaotic neuro-computer hardware system could not obtain the optimal solution of the QAP. Therefore, in the present study, we improve the system performance by adopting a solution construction method, which constructs a feasible solution using the analog internal state values of the chaotic neurons at each iteration. In order to include the construction method in our hardware, we install a multi-channel analog-to-digital conversion system to observe the internal states of the chaotic neurons. We show experimentally that a great improvement in system performance over the original Hopfield-type chaotic neuro-computer is obtained: we obtain the optimal solution for the size-10 QAP in fewer than 1000 iterations. In addition, we propose a guideline for parameter tuning of the chaotic neuro-computer system based on the observation of the internal states of several chaotic neurons in the network.

  19. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons

    PubMed Central

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply—unlike processors in our current generation of computer hardware—an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems at the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out a more efficient stochastic search for good solutions to the Traveling Salesman Problem than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling. PMID:27065785

  20. An application of traveling salesman problem using the improved genetic algorithm on android google maps

    NASA Astrophysics Data System (ADS)

    Narwadi, Teguh; Subiyanto

    2017-03-01

    The Travelling Salesman Problem (TSP) is one of the best-known NP-hard problems, which means that no exact algorithm is known to solve it in polynomial time. This paper presents a new application of a genetic algorithm combined with a local search technique, developed to solve the TSP. For the local search technique, an iterative hill-climbing method is used. The system is implemented on the Android OS, since Android is now widely used around the world and is a mobile system. It is also integrated with the Google API to obtain the geographical locations of the cities and the distances between them, and to display the route. We performed experiments to test the behavior of the application. To test the effectiveness of the application of the hybrid genetic algorithm (HGA), it is compared with a simple GA on 5 samples from the cities in Central Java, Indonesia, with different numbers of cities. According to the experimental results, in average solution quality the HGA is better than the simple GA in 5 tests out of 5 (100%). The results show that the hybrid genetic algorithm outperforms the genetic algorithm, especially on problems of higher complexity.
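
    A compact Python sketch of such a hybrid scheme, under common assumptions: order crossover, swap mutation, and a 2-opt-style hill climb applied to each offspring, with distances given as a plain matrix instead of Google Maps queries. Parameters and operator choices are illustrative, not the paper's exact configuration.

      import random

      def tour_len(t, d):
          return sum(d[t[i - 1]][t[i]] for i in range(len(t)))

      def ox(p1, p2):
          # Order crossover: copy a slice of p1, fill the rest in p2's order.
          a, b = sorted(random.sample(range(len(p1)), 2))
          mid = p1[a:b]
          rest = [c for c in p2 if c not in mid]
          return rest[:a] + mid + rest[a:]

      def hill_climb(t, d):
          # Iterative 2-opt: keep reversing segments while the tour improves.
          improved = True
          while improved:
              improved = False
              for i in range(1, len(t) - 1):
                  for j in range(i + 1, len(t)):
                      cand = t[:i] + t[i:j][::-1] + t[j:]
                      if tour_len(cand, d) < tour_len(t, d):
                          t, improved = cand, True
          return t

      def hga(d, pop_size=30, gens=100):
          n = len(d)
          pop = [random.sample(range(n), n) for _ in range(pop_size)]
          for _ in range(gens):
              pop.sort(key=lambda t: tour_len(t, d))
              child = ox(pop[0], random.choice(pop[:10]))
              if random.random() < 0.2:                  # swap mutation
                  i, j = random.sample(range(n), 2)
                  child[i], child[j] = child[j], child[i]
              pop[-1] = hill_climb(child, d)             # local search on offspring
          return min(pop, key=lambda t: tour_len(t, d))

      d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
      print(hga(d))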

  1. Dysregulation of major functional genes in frontal cortex by maternal exposure to carbon black nanoparticle is not ameliorated by ascorbic acid pretreatment.

    PubMed

    Onoda, Atsuto; Takeda, Ken; Umezawa, Masakazu

    2018-09-01

    Recent cohort studies have revealed that perinatal exposure to particulate air pollution, including carbon-based nanoparticles, increases the risk of brain disorders. Although developmental neurotoxicity is currently a major issue in the toxicology of nanoparticles, critical information for understanding the mechanisms underlying the developmental neurotoxicity of airway exposure to carbon black nanoparticle (CB-NP) is still lacking. In order to investigate these mechanisms, we comprehensively analyzed fluctuations in the gene expression profile of the frontal cortex of offspring mice exposed maternally to CB-NP, using microarray analysis combined with Gene Ontology information. We also analyzed differences in the enriched functions of genes dysregulated by maternal CB-NP exposure with and without ascorbic acid pretreatment, to refine the specific alterations in gene expression induced by CB-NP. A total of 652 and 775 genes were dysregulated by CB-NP in the frontal cortex of 6- and 12-week-old offspring mice, respectively. Among the genes dysregulated by CB-NP, those related to extracellular matrix structural constituent, cellular response to interferon-beta, muscle organ development, and cysteine-type endopeptidase inhibitor activity were ameliorated by ascorbic acid pretreatment. A large proportion of the dysregulated genes, categorized in hemostasis, growth factor, chemotaxis, cell proliferation, blood vessel, and dopaminergic neurotransmission, were, however, not ameliorated by ascorbic acid pretreatment. The lack of effects of ascorbic acid on the dysregulation of genes following maternal CB-NP exposure suggests that the contribution of oxidative stress to the effects of CB-NP on these biological functions, i.e., cell migration and proliferation, blood vessel maintenance, and the dopaminergic neuron system, may be limited. At the least, ascorbic acid pretreatment is unlikely to protect the brains of offspring from the developmental neurotoxicity of CB-NP. The present study provides insight into the mechanisms underlying developmental neurotoxicity following maternal nanoparticle exposure. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Phylo: A Citizen Science Approach for Improving Multiple Sequence Alignment

    PubMed Central

    Kam, Alfred; Kwak, Daniel; Leung, Clarence; Wu, Chu; Zarour, Eleyine; Sarmenta, Luis; Blanchette, Mathieu; Waldispühl, Jérôme

    2012-01-01

    Background Comparative genomics, or the study of the relationships of genome structure and function across different species, offers a powerful tool for studying evolution, annotating genomes, and understanding the causes of various genetic disorders. However, aligning multiple sequences of DNA, an essential intermediate step for most types of analyses, is a difficult computational task. In parallel, citizen science, an approach that takes advantage of the fact that the human brain is exquisitely tuned to solving specific types of problems, is becoming increasingly popular. There, instances of hard computational problems are dispatched to a crowd of non-expert human game players and solutions are sent back to a central server. Methodology/Principal Findings We introduce Phylo, a human-based computing framework applying “crowd sourcing” techniques to solve the Multiple Sequence Alignment (MSA) problem. The key idea of Phylo is to convert the MSA problem into a casual game that can be played by ordinary web users with minimal prior knowledge of the biological context. We applied this strategy to improve the alignment of the promoters of disease-related genes from up to 44 vertebrate species. Since the launch in November 2010, we have received more than 350,000 solutions submitted by more than 12,000 registered users. Our results show that the submitted solutions contributed to improving the accuracy of up to 70% of the alignment blocks considered. Conclusions/Significance We demonstrate that, combined with classical algorithms, crowd computing techniques can be successfully used to help improve the accuracy of MSA. More importantly, we show that an NP-hard computational problem can be embedded in a casual game that can be easily played by people without significant scientific training. This suggests that citizen science approaches can be used to exploit the billions of “human-brain peta-flops” of computation that are spent every day playing games. Phylo is available at: http://phylo.cs.mcgill.ca. PMID:22412834

  3. Workflows in bioinformatics: meta-analysis and prototype implementation of a workflow generator.

    PubMed

    Garcia Castro, Alexander; Thoraval, Samuel; Garcia, Leyla J; Ragan, Mark A

    2005-04-07

    Computational methods for problem solving need to interleave information access and algorithm execution in a problem-specific workflow. The structures of these workflows are defined by a scaffold of syntactic, semantic and algebraic objects capable of representing them. Despite the proliferation of GUIs (Graphical User Interfaces) in bioinformatics, only some of them provide workflow capabilities; surprisingly, no meta-analysis of workflow operators and components in bioinformatics has been reported. We present a set of syntactic components and algebraic operators capable of representing analytical workflows in bioinformatics. Iteration, recursion, the use of conditional statements, and management of suspend/resume tasks have traditionally been implemented on an ad hoc basis and hard-coded; by having these operators properly defined, it is possible to use and parameterize them as generic re-usable components. To illustrate how these operations can be orchestrated, we present GPIPE, a prototype graphic pipeline generator for PISE that allows the definition of a pipeline, parameterization of its component methods, and storage of metadata in XML formats. This implementation goes beyond the macro capacities currently in PISE. As the entire analysis protocol is defined in XML, a complete bioinformatic experiment (linked sets of methods, parameters and results) can be reproduced or shared among users. GPIPE is available at http://if-web1.imb.uq.edu.au/Pise/5.a/gpipe.html (interactive) and ftp://ftp.pasteur.fr/pub/GenSoft/unix/misc/Pise/ (download). From our meta-analysis we have identified syntactic structures and algebraic operators common to many workflows in bioinformatics. The workflow components and algebraic operators can be assimilated into re-usable software components. GPIPE, a prototype implementation of this framework, provides a GUI builder to facilitate the generation of workflows and the integration of heterogeneous analytical tools.

  4. Parallel evolutionary computation in bioinformatics applications.

    PubMed

    Pinho, Jorge; Sobral, João Luis; Rocha, Miguel

    2013-05-01

    A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation to several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. Mining Top K Spread Sources for a Specific Topic and a Given Node.

    PubMed

    Liu, Weiwei; Deng, Zhi-Hong; Cao, Longbing; Xu, Xiaoran; Liu, He; Gong, Xiuwen

    2015-11-01

    In social networks, nodes (or users) interested in specific topics are often influenced by others. The influence is usually associated with a set of nodes rather than a single one. An interesting but challenging task, for any given topic and node, is to find the set of nodes that represents the source or trigger for the topic, and thus to identify those nodes that have the greatest influence on the given node as the topic spreads. We find that this is an NP-hard problem. This paper proposes an effective framework to deal with it. First, the topic propagation is represented as a Bayesian network. We then construct the propagation model by a variant of the voter model. The probability transition matrix (PTM) algorithm is presented to conduct the probability inference with complexity O(θ³log₂θ), where θ is the number of nodes in the given graph. To evaluate the PTM algorithm, we conduct extensive experiments on real datasets. The experimental results show that the PTM algorithm is both effective and efficient.

  6. Partial branch and bound algorithm for improved data association in multiframe processing

    NASA Astrophysics Data System (ADS)

    Poore, Aubrey B.; Yan, Xin

    1999-07-01

    A central problem in multitarget, multisensor, and multiplatform tracking remains that of data association. Lagrangian relaxation methods have shown themselves to yield near-optimal answers in real time. The necessary improvement in the quality of these solutions warrants a continuing interest in these methods. These problems are NP-hard; the only known methods for solving them optimally are enumerative in nature, with branch-and-bound being the most efficient. Thus, methods short of a full branch-and-bound search are needed for improving solution quality. Methods such as K-best, local search, and randomized search have been proposed to improve the quality of the relaxation solution. Here, a partial branch-and-bound technique along with adequate branching and ordering rules is developed. Lagrangian relaxation is used as a branching method and as a method to calculate the lower bound for subproblems. The results show that the branch-and-bound framework greatly improves the solution quality of the Lagrangian relaxation algorithm and yields better multiple solutions in less time than relaxation alone.

  7. XML Reconstruction View Selection in XML Databases: Complexity Analysis and Approximation Scheme

    NASA Astrophysics Data System (ADS)

    Chebotko, Artem; Fu, Bin

    Query evaluation in an XML database requires reconstructing XML subtrees rooted at nodes found by an XML query. Since XML subtree reconstruction can be expensive, one approach to improve query response time is to use reconstruction views - materialized XML subtrees of an XML document, whose nodes are frequently accessed by XML queries. For this approach to be efficient, the principal requirement is a framework for view selection. In this work, we are the first to formalize and study the problem of XML reconstruction view selection. The input is a tree T, in which every node i has a size c_i and profit p_i, and a size limitation C. The target is to find a subset of subtrees rooted at nodes i_1, ..., i_k, respectively, such that c_{i_1} + ... + c_{i_k} ≤ C, and p_{i_1} + ... + p_{i_k} is maximal. Furthermore, there is no overlap between any two subtrees selected in the solution. We prove that this problem is NP-hard and present a fully polynomial-time approximation scheme (FPTAS) as a solution.
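
    Under the natural reading that selecting the subtree rooted at i contributes the total size and profit of all nodes in that subtree, small instances can be solved exactly by a pseudo-polynomial dynamic program over the preorder sequence. The Python sketch below is only this exact baseline, under that assumption, not the paper's FPTAS.

      def best_views(children, size, profit, C, root=0):
          """Select disjoint subtrees of total size <= C maximising profit.
          children[i] lists node i's children; nodes are hashable ids."""
          order, nxt, sub_c, sub_p = [], {}, {}, {}

          def dfs(v):                      # preorder; record subtree extent
              order.append(v)
              c, p = size[v], profit[v]
              for u in children[v]:
                  cc, pp = dfs(u)
                  c, p = c + cc, p + pp
              nxt[v], sub_c[v], sub_p[v] = len(order), c, p
              return c, p

          dfs(root)
          n = len(order)
          # dp[i][b]: best profit over preorder positions i.. with budget b.
          dp = [[0] * (C + 1) for _ in range(n + 1)]
          for i in range(n - 1, -1, -1):
              v = order[i]
              for b in range(C + 1):
                  dp[i][b] = dp[i + 1][b]                 # don't root a view at v
                  if sub_c[v] <= b:                       # take v's whole subtree,
                      dp[i][b] = max(dp[i][b],            # then jump past it
                                     sub_p[v] + dp[nxt[v]][b - sub_c[v]])
          return dp[0][C]

      children = {0: [1, 2], 1: [3], 2: [], 3: []}
      print(best_views(children, size={0: 5, 1: 2, 2: 2, 3: 1},
                       profit={0: 4, 1: 3, 2: 2, 3: 2}, C=4))   # -> 5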

  8. Advance Resource Provisioning in Bulk Data Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balman, Mehmet

    2012-10-01

    Today's scientific and business applications generate massive data sets that need to be transferred to remote sites for sharing, processing, and long-term storage. Because of increasing data volumes and enhancements in current network technology that provide on-demand high-speed data access between collaborating institutions, data handling and scheduling problems have reached a new scale. In this paper, we present a new data scheduling model with advance resource provisioning, in which data movement operations are defined with earliest start and latest completion times. We analyze the time-dependent resource assignment problem, and propose a new methodology to improve the current systems by allowing researchers and higher-level meta-schedulers to use data placement as a service, so they can plan ahead and submit transfer requests in advance. In general, scheduling with time and resource conflicts is NP-hard. We introduce an efficient algorithm to organize multiple requests on the fly, while satisfying users' time and resource constraints. We successfully tested our algorithm in a simple benchmark simulator that we have developed, and demonstrated its performance with initial test results.

  9. A parsimonious tree-grow method for haplotype inference.

    PubMed

    Li, Zhenping; Zhou, Wenfeng; Zhang, Xiang-Sun; Chen, Luonan

    2005-09-01

    Haplotype information has become increasingly important in analyzing fine-scale molecular genetics data, for applications such as disease gene mapping and drug design. Parsimony haplotyping is one of the haplotyping problems belonging to the NP-hard class. In this paper, we aim to develop a novel algorithm for the haplotype inference problem with the parsimony criterion, based on a parsimonious tree-grow method (PTG). PTG is a heuristic algorithm that can find the minimum number of distinct haplotypes based on the criterion of keeping all genotypes resolved during the tree-grow process. In addition, a block-partitioning method is also proposed to improve the computational efficiency. We show that the proposed approach is not only effective, with a high accuracy, but also very efficient, with computational complexity on the order of O(m²n) time for n single nucleotide polymorphism sites in m individual genotypes. The software is available upon request from the authors (chen@elec.osaka-sandai.ac.jp) or from http://zhangroup.aporc.org/bioinfo/ptg/. Supplementary material is available from http://zhangroup.aporc.org/bioinfo/ptg/bti572supplementary.pdf

  10. A Case Study of Controlling Crossover in a Selection Hyper-heuristic Framework Using the Multidimensional Knapsack Problem.

    PubMed

    Drake, John H; Özcan, Ender; Burke, Edmund K

    2016-01-01

    Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion. However, little work has been undertaken to assess how best to utilise it. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level where no problem-specific information is required. Second, it is controlled at the problem domain level where problem-specific information is used to produce good-quality solutions to use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that allowing crossover to be managed at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain.
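
    A skeletal Python version of the simplest such framework, for the 0-1 knapsack: a bounded list of recent good solutions supplies the second parent whenever the randomly chosen low-level heuristic is a crossover. All heuristics, acceptance rules, and parameters here are illustrative assumptions, not those compared in the paper.

      import random

      def value(x, profit, weight, cap):
          w = sum(wi for wi, b in zip(weight, x) if b)
          return sum(p for p, b in zip(profit, x) if b) if w <= cap else 0

      def flip(x):                 # low-level heuristic 1: flip a random bit
          y = x[:]
          i = random.randrange(len(y))
          y[i] = not y[i]
          return y

      def uniform_xo(x, mate):     # low-level heuristic 2: uniform crossover
          return [a if random.random() < 0.5 else b for a, b in zip(x, mate)]

      def hyper_heuristic(profit, weight, cap, iters=5000, list_size=5):
          cur = [False] * len(profit)
          pool = [cur]             # bounded list of potential crossover mates
          best, best_v = cur, value(cur, profit, weight, cap)
          for _ in range(iters):
              if random.random() < 0.5:
                  cand = flip(cur)
              else:                # crossover: second parent drawn from the list
                  cand = uniform_xo(cur, random.choice(pool))
              if value(cand, profit, weight, cap) >= value(cur, profit, weight, cap):
                  cur = cand       # naive "accept if not worse" move acceptance
                  pool = (pool + [cur])[-list_size:]
              v = value(cur, profit, weight, cap)
              if v > best_v:
                  best, best_v = cur, v
          return best, best_v

      print(hyper_heuristic([10, 5, 8, 7], [3, 2, 4, 3], cap=7))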

  11. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    NASA Astrophysics Data System (ADS)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume that there is a set of non-identical factories or production lines, each with a set of unrelated parallel machines with different processing speeds, feeding a single assembly machine in series. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand, and each product requires several kinds of jobs of different sizes. Besides that, we also consider the multi-objective problem (MOP) of minimizing mean flow time and the number of tardy products simultaneously. This problem is known to be NP-hard and is important in practice, as the two criteria reflect the customer's demand and the manufacturer's perspective. Since this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are implemented in Matlab. Our computational experiments indicate that the four proposed algorithms can be implemented and used to solve moderately-sized instances, giving efficient solutions that are close to optimal in most cases.

  12. Complexity of the Quantum Adiabatic Algorithm

    NASA Astrophysics Data System (ADS)

    Hen, Itay

    2013-03-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms. Here, we discuss several aspects of the quantum adiabatic algorithm: We analyze the efficiency of the algorithm on several ``hard'' (NP) computational problems. Studying the size dependence of the typical minimum energy gap of the Hamiltonians of these problems using quantum Monte Carlo methods, we find that while for most problems the minimum gap decreases exponentially with the size of the problem, indicating that the QAA is not more efficient than existing classical search algorithms, for other problems there is evidence to suggest that the gap may be polynomial near the phase transition. We also discuss applications of the QAA to ``real life'' problems and how they can be implemented on currently available (albeit prototypical) quantum hardware such as ``D-Wave One'', which imposes serious restrictions as to which type of problems may be tested. Finally, we discuss different approaches to find improved implementations of the algorithm, such as local adiabatic evolution, adaptive methods, local search in Hamiltonian space, and others.

  13. A volumetric conformal mapping approach for clustering white matter fibers in the brain

    PubMed Central

    Gupta, Vikash; Prasad, Gautam; Thompson, Paul

    2017-01-01

    The human brain may be considered as a genus-0 shape, topologically equivalent to a sphere. Various methods have been used in the past to transform the brain surface to that of a sphere using the harmonic energy minimization methods used for cortical surface matching. However, very few methods have studied volumetric parameterization of the brain using a spherical embedding. Volumetric parameterization is typically used for complicated geometric problems like shape matching, morphing and isogeometric analysis. Using conformal mapping techniques, we can establish a bijective mapping between the brain and the topologically equivalent sphere. Our hypothesis is that shape analysis problems are simplified when the shape is defined in an intrinsic coordinate system. Our goal is to establish such a coordinate system for the brain. The efficacy of the method is demonstrated with a white matter clustering problem. Initial results show promise for future investigation into this parameterization technique and its application to other problems in computational anatomy, like registration and segmentation. PMID:29177252

  14. On the Soil Roughness Parameterization Problem in Soil Moisture Retrieval of Bare Surfaces from Synthetic Aperture Radar

    PubMed Central

    Verhoest, Niko E.C; Lievens, Hans; Wagner, Wolfgang; Álvarez-Mozos, Jesús; Moran, M. Susan; Mattia, Francesco

    2008-01-01

    Synthetic Aperture Radar has shown its large potential for retrieving soil moisture maps at regional scales. However, since the backscattered signal is determined by several surface characteristics, the retrieval of soil moisture is an ill-posed problem when using single configuration imagery. Unless accurate surface roughness parameter values are available, retrieving soil moisture from radar backscatter usually provides inaccurate estimates. The characterization of soil roughness is not fully understood, and a large range of roughness parameter values can be obtained for the same surface when different measurement methodologies are used. In this paper, a literature review is made that summarizes the problems encountered when parameterizing soil roughness as well as the reported impact of the errors made on the retrieved soil moisture. A number of suggestions were made for resolving issues in roughness parameterization and studying the impact of these roughness problems on the soil moisture retrieval accuracy and scale. PMID:27879932

  15. History of the Army Ground Forces. Study Number 13. Activation and Early Training of ’D’ Division

    DTIC Science & Technology

    1948-06-30

    CONTENTS: Prefatory Note; Preactivation Period, 1; Activation, 10; Basic Training, 11; The NP Test, 17; Unit Training, 24; Platoon... of the hard days ahead. "A leader must be fair; he must be interested in his men; above all, he must know his job. No man wants to follow a stumbler or..." ... said, "Be sure you don't shoot me, Joe." Lieutenant O'Keefe did not have the best control of his squads as they advanced, but he worked hard, dashing...

  16. Reducing vertices in property graphs

    PubMed Central

    Pąk, Karol

    2018-01-01

    Graph databases are constantly growing, and, at the same time, some of their data is the same or similar. Our experience with the management of existing databases, especially the bigger ones, shows that certain vertices are replicated numerous times. Eliminating repetitive or even very similar data speeds up access to database resources. We present a modification of this approach: as before, we group together vertices with identical properties, but then we additionally join together groups of data that are located in distant parts of a graph. The second part of our approach is non-trivial. We show that the search for a partition of a given graph in which each member of the partition contains only pairwise distant vertices is NP-hard. We describe a group of heuristics that attempt to solve this difficult computational problem and then apply them to check the effectiveness of our approach. PMID:29444127

  17. ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms

    NASA Astrophysics Data System (ADS)

    Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François

    2015-10-01

    Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.

  18. Robust Learning of High-dimensional Biological Networks with Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Nägele, Andreas; Dejori, Mathäus; Stetter, Martin

    Structure learning of Bayesian networks applied to gene expression data has become a potentially useful method to estimate interactions between genes. However, the NP-hardness of Bayesian network structure learning renders the reconstruction of the full genetic network with thousands of genes unfeasible. Consequently, the maximal network size is usually restricted dramatically to a small set of genes (corresponding with variables in the Bayesian network). Although this feature reduction step makes structure learning computationally tractable, on the downside, the learned structure might be adversely affected due to the introduction of missing genes. Additionally, gene expression data are usually very sparse with respect to the number of samples, i.e., the number of genes is much greater than the number of different observations. Given these problems, learning robust network features from microarray data is a challenging task. This chapter presents several approaches tackling the robustness issue in order to obtain a more reliable estimation of learned network features.

  19. Sparse Coding and Counting for Robust Visual Tracking

    PubMed Central

    Liu, Risheng; Wang, Jing; Shang, Xiaoke; Wang, Yiyang; Su, Zhixun; Cai, Yu

    2016-01-01

    In this paper, we propose a novel sparse coding and counting method under a Bayesian framework for visual tracking. In contrast to existing methods, the proposed method employs a combination of the L0 and L1 norms to regularize the linear coefficients of an incrementally updated linear basis. The sparsity constraint enables the tracker to handle difficult challenges, such as occlusion or image corruption, effectively. To achieve real-time processing, we propose a fast and efficient numerical algorithm for solving the proposed model. Although it is an NP-hard problem, the proposed accelerated proximal gradient (APG) approach is guaranteed to converge to a solution quickly. Besides, we provide a closed-form solution for the combined L0 and L1 regularized representation to obtain better sparsity. Experimental results on challenging video sequences demonstrate that the proposed method achieves state-of-the-art results in both accuracy and speed. PMID:27992474
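
    The key ingredient can be sketched in Python under common assumptions: the proximal map of the combined penalty l1*||x||_1 + l0*||x||_0 is soft thresholding followed by a hard-threshold test, and plugging it into an accelerated proximal gradient loop minimises 0.5*||y - Dx||^2 plus the penalty. This is an illustrative solver on synthetic data, not the authors' exact tracker.

      import numpy as np

      def prox_l0l1(v, l0, l1):
          # Soft-threshold for the L1 part, then a hard-threshold test: keep
          # the shrunk value only if paying the L0 price beats setting it to 0.
          s = np.sign(v) * np.maximum(np.abs(v) - l1, 0.0)
          keep = 0.5 * (v - s) ** 2 + l1 * np.abs(s) + l0 < 0.5 * v ** 2
          return np.where(keep, s, 0.0)

      def apg(D, y, l0=0.01, l1=0.1, iters=200):
          """Accelerated proximal gradient for
          0.5*||y - D x||^2 + l1*||x||_1 + l0*||x||_0."""
          L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
          x = z = np.zeros(D.shape[1])
          t = 1.0
          for _ in range(iters):
              g = D.T @ (D @ z - y)           # gradient at the extrapolated point
              x_new = prox_l0l1(z - g / L, l0 / L, l1 / L)
              t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
              z = x_new + (t - 1) / t_new * (x_new - x)   # Nesterov extrapolation
              x, t = x_new, t_new
          return x

      rng = np.random.default_rng(0)
      D = rng.standard_normal((20, 50))
      x_true = np.zeros(50)
      x_true[[3, 17]] = 2.0
      print(np.nonzero(apg(D, D @ x_true))[0])   # should be sparse, near {3, 17}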

  20. A faster 1.375-approximation algorithm for sorting by transpositions.

    PubMed

    Cunha, Luís Felipe I; Kowada, Luis Antonio B; Hausen, Rodrigo de A; de Figueiredo, Celina M H

    2015-11-01

    Sorting by Transpositions is an NP-hard problem for which several polynomial-time approximation algorithms have been developed. Hartman and Shamir (2006) developed a 1.5-approximation [Formula: see text] algorithm, whose running time was improved to O(n log n) by Feng and Zhu (2007) with a data structure they defined, the permutation tree. Elias and Hartman (2006) developed a 1.375-approximation O(n²) algorithm, and Firoz et al. (2011) claimed an improvement to the running time, from O(n²) to O(n log n), by using the permutation tree. We provide counter-examples to the correctness of Firoz et al.'s strategy, showing that it is not possible to reach a component by sufficient extensions using the method proposed by them. In addition, we propose a 1.375-approximation algorithm, modifying Elias and Hartman's approach with the use of permutation trees and achieving O(n log n) time.

  1. Constant Communities in Complex Networks

    NASA Astrophysics Data System (ADS)

    Chakraborty, Tanmoy; Srinivasan, Sriram; Ganguly, Niloy; Bhowmick, Sanjukta; Mukherjee, Animesh

    2013-05-01

    Identifying community structure is a fundamental problem in network analysis. Most community detection algorithms are based on optimizing a combinatorial parameter, for example modularity. This optimization is generally NP-hard, so merely changing the vertex order can alter the assignment of vertices to communities. However, there has been little study of how vertex ordering influences the results of community detection algorithms. Here we identify and study the properties of invariant groups of vertices (constant communities) whose assignment to communities is, quite remarkably, not affected by vertex ordering. The percentage of constant communities can vary across different applications, and based on empirical results we propose metrics to evaluate these communities. Using constant communities as a pre-processing step, one can significantly reduce the variation of the results. Finally, we present a case study on a phoneme network and illustrate that constant communities, quite strikingly, form the core functional units of the larger communities.
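
    The underlying observation is easy to reproduce with any order-sensitive community detector. The Python sketch below uses simple label propagation (not the algorithms studied in the paper): it runs the detector under several random vertex orders and keeps the groups of vertices that are co-assigned in every run.

      import random
      from collections import Counter

      def label_propagation(adj, order):
          labels = {v: v for v in adj}
          for _ in range(100):              # cap sweeps; LPA usually converges fast
              changed = False
              for v in order:               # the vertex order matters here
                  if adj[v]:
                      top = Counter(labels[u] for u in adj[v]).most_common(1)[0][0]
                      if labels[v] != top:
                          labels[v], changed = top, True
              if not changed:
                  break
          return labels

      def constant_communities(adj, runs=20):
          # A group of vertices is "constant" iff co-assigned in every run.
          sigs = {v: [] for v in adj}
          for _ in range(runs):
              order = list(adj)
              random.shuffle(order)
              labels = label_propagation(adj, order)
              for v in adj:
                  sigs[v].append(labels[v])
          groups = {}
          for v, sig in sigs.items():
              groups.setdefault(tuple(sig), set()).add(v)
          return list(groups.values())

      # Two triangles joined by one edge: each triangle is a constant community.
      adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
      print(constant_communities(adj))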

  2. Using MOEA with Redistribution and Consensus Branches to Infer Phylogenies.

    PubMed

    Min, Xiaoping; Zhang, Mouzhao; Yuan, Sisi; Ge, Shengxiang; Liu, Xiangrong; Zeng, Xiangxiang; Xia, Ningshao

    2017-12-26

    In recent years, more and more research has focused on using metaheuristics to infer phylogenies, which is an NP-hard problem. Maximum Parsimony and Maximum Likelihood are two effective ways to conduct inference. Based on these methods, which can also be considered the optimality criteria for phylogenies, various kinds of multi-objective metaheuristics have been used to reconstruct phylogenies. However, combining these two time-consuming methods makes such multi-objective metaheuristics slower than single-objective ones. Therefore, we propose a novel multi-objective optimization algorithm, MOEA-RC, that accelerates the process of rebuilding phylogenies using structural information of elites in the current population. We compare MOEA-RC with two representative multi-objective algorithms, MOEA/D and NSGA-II, and with a non-consensus version of MOEA-RC on three real-world datasets. The results show that, within a given number of iterations, MOEA-RC achieves better solutions than the other algorithms.

  3. Constrained Low-Interference Relay Node Deployment for Underwater Acoustic Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Wenping

    An Underwater Acoustic Wireless Sensor Network (UA-WSN) consists of many resource-constrained Underwater Sensor Nodes (USNs), which are deployed to perform collaborative monitoring tasks over a given region. One way to preserve network connectivity while guaranteeing other network QoS is to deploy some Relay Nodes (RNs) in the network; RNs are more powerful than USNs but also more expensive. This paper addresses the Constrained Low-interference Relay Node Deployment (C-LRND) problem for 3-D UA-WSNs, in which RNs are placed at a subset of candidate locations to ensure connectivity between the USNs, under constraints on both the number of RNs deployed and the total incremental interference. We first prove that the problem is NP-hard, then present a general approximation algorithm framework and obtain two polynomial-time O(1)-approximation algorithms.

  4. A novel non-uniform control vector parameterization approach with time grid refinement for flight level tracking optimal control problems.

    PubMed

    Liu, Ping; Li, Guodong; Liu, Xinggao; Xiao, Long; Wang, Yalin; Yang, Chunhua; Gui, Weihua

    2018-02-01

    A high-quality control method is essential for the implementation of an aircraft autopilot system. An optimal control problem model considering the safe aerodynamic envelope is therefore established to improve the control quality of aircraft flight level tracking. A novel non-uniform control vector parameterization (CVP) method with time grid refinement is then proposed for solving the optimal control problem. By introducing Hilbert-Huang transform (HHT) analysis, an efficient time grid refinement approach is presented and an adaptive time grid is automatically obtained. With this refinement, the proposed method needs fewer optimization parameters to achieve better control quality than the uniform-refinement CVP method, while the computational cost is lower. Two well-known flight level altitude tracking problems and one minimum time cost problem are tested as illustrations, with the uniform-refinement control vector parameterization method adopted as the comparative baseline. Numerical results show that the proposed method achieves better performance in terms of optimization accuracy and computational cost; meanwhile, the control quality is efficiently improved. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
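
    For orientation, here is a minimal Python illustration of plain uniform CVP on a toy tracking problem: the control is piecewise constant on a fixed grid, the dynamics are integrated by explicit Euler, and a standard NLP solver tunes the segment values. The dynamics, bounds, and solver choice are assumptions for illustration; the paper's HHT-driven non-uniform grid refinement is not reproduced here.

      import numpy as np
      from scipy.optimize import minimize

      # Toy first-order "altitude" dynamics: h'(t) = -a*h(t) + b*u(t).
      a, b, T, N = 0.5, 1.0, 10.0, 10        # N piecewise-constant segments
      target, dt = 5.0, 0.01

      def tracking_error(u_segments):
          h, t, err = 0.0, 0.0, 0.0
          while t < T:
              u = u_segments[min(int(t / (T / N)), N - 1)]
              h += dt * (-a * h + b * u)     # explicit Euler step
              err += dt * (h - target) ** 2  # integrated squared tracking error
              t += dt
          return err

      res = minimize(tracking_error, np.zeros(N), method="SLSQP",
                     bounds=[(-10.0, 10.0)] * N)
      print(np.round(res.x, 2), tracking_error(res.x))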

  5. Associating optical measurements and estimating orbits of geocentric objects with a Genetic Algorithm: performance limitations.

    NASA Astrophysics Data System (ADS)

    Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent

    2016-07-01

    Currently several thousand objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously and is able to efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem, because for S = 2 the problem is solvable in polynomial time. However, with S = 2 the decision to associate a set of observations is based on a minimum amount of information; in ambiguous situations (e.g. satellite clusters) this will lead to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem. It was shown that the EGA is able to find a good approximate solution with a polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations. This means that the algorithm is restricted to orbits described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm performance.

  6. Parameterized Cross Sections for Pion Production in Proton-Proton Collisions

    NASA Technical Reports Server (NTRS)

    Blattnig, Steve R.; Swaminathan, Sudha R.; Kruger, Adam T.; Ngom, Moussa; Norbury, John W.; Tripathi, R. K.

    2000-01-01

    An accurate knowledge of cross sections for pion production in proton-proton collisions finds wide application in particle physics, astrophysics, cosmic ray physics, and space radiation problems, especially in situations where an incident proton is transported through some medium and knowledge of the output particle spectrum is required when given the input spectrum. In these cases, accurate parameterizations of the cross sections are desired. In this paper much of the experimental data are reviewed and compared with a wide variety of different cross section parameterizations. Therefore, parameterizations of neutral and charged pion cross sections are provided that give a very accurate description of the experimental data. Lorentz invariant differential cross sections, spectral distributions, and total cross section parameterizations are presented.

  7. Collaborative Research: Reducing tropical precipitation biases in CESM — Tests of unified parameterizations with ARM observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larson, Vincent; Gettelman, Andrew; Morrison, Hugh

    In state-of-the-art climate models, each cloud type is treated using its own separate cloud parameterization and its own separate microphysics parameterization. This use of separate schemes for separate cloud regimes is undesirable because it is theoretically unfounded, it hampers interpretation of results, and it leads to the temptation to overtune parameters. In this grant, we are creating a climate model that contains a unified cloud parameterization and a unified microphysics parameterization. This model will be used to address the problems of excessive frequency of drizzle in climate models and excessively early onset of deep convection in the Tropics over land. The resulting model will be compared with ARM observations.

  8. Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel

    PubMed Central

    Sakin, Sayef Azad; Alamri, Atif; Tran, Nguyen H.

    2017-01-01

    Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is NP-hard. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of the networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium is presented. Performance improvements in terms of data rate with a degree of fairness, compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach, are shown through simulation studies. PMID:29215591
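
    The game-theoretic part can be illustrated with a toy best-response iteration in Python: each base station repeatedly picks the transmit power that maximises its own rate-minus-power-cost utility given the interference from the others, and the iteration settles at an approximate Nash equilibrium. The utility, gains, and power grid are illustrative assumptions, not the paper's model.

      import numpy as np

      gain = np.array([[1.0, 0.2, 0.1],     # gain[i, j]: BS j's power seen at BS i
                       [0.2, 1.0, 0.2],
                       [0.1, 0.2, 1.0]])
      noise, price = 0.1, 0.5
      grid = np.linspace(0.0, 4.0, 401)     # discretised power levels

      def utility(i, p_i, p):
          interference = sum(gain[i, j] * p[j] for j in range(len(p)) if j != i)
          sinr = gain[i, i] * p_i / (noise + interference)
          return np.log2(1 + sinr) - price * p_i   # rate minus a power price

      p = np.ones(3)
      for _ in range(50):                   # best-response dynamics
          for i in range(3):
              p[i] = grid[np.argmax([utility(i, q, p) for q in grid])]
      print(p)                              # approximate Nash equilibrium powers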

  9. New mathematical modeling for a location-routing-inventory problem in a multi-period closed-loop supply chain in a car industry

    NASA Astrophysics Data System (ADS)

    Forouzanfar, F.; Tavakkoli-Moghaddam, R.; Bashiri, M.; Baboli, A.; Hadji Molana, S. M.

    2017-11-01

    This paper studies a location-routing-inventory problem in a multi-period closed-loop supply chain with multiple suppliers, producers, distribution centers, customers, collection centers, recovery centers, and recycling centers. In this supply chain, centers have multiple levels; a price increase factor is considered for operational costs at centers; inventory and shortage (including lost sales and backlog) are allowed at production centers; and the arrival times of each plant's vehicles at its dedicated distribution centers, as well as their departures, are considered, in such a way that the sum of system costs and the sum of the maximum time at each level are minimized. The aforementioned problem is formulated as a bi-objective nonlinear integer programming model. Due to the NP-hard nature of the problem, two meta-heuristics, namely the non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO), are used for large-sized instances. In addition, a Taguchi method is used to set the parameters of these algorithms to enhance their performance. To evaluate the efficiency of the proposed algorithms, the results for small-sized problems are compared with the results of the ɛ-constraint method. Finally, four measuring metrics, namely the number of Pareto solutions, mean ideal distance, spacing metric, and quality metric, are used to compare NSGA-II and MOPSO.

  10. Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel.

    PubMed

    Sakin, Sayef Azad; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Alamri, Atif; Tran, Nguyen H; Fortino, Giancarlo

    2017-12-07

    Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is NP-hard. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the existence and uniqueness of the Nash equilibrium is presented. Simulation studies show performance improvements in terms of data rate, with a degree of fairness, over a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach.

  11. Understanding the Kinetics of Protein-Nanoparticle Corona Formation.

    PubMed

    Vilanova, Oriol; Mittag, Judith J; Kelly, Philip M; Milani, Silvia; Dawson, Kenneth A; Rädler, Joachim O; Franzese, Giancarlo

    2016-12-27

    When a pristine nanoparticle (NP) encounters a biological fluid, biomolecules spontaneously form adsorption layers around the NP, called the "protein corona". The corona composition depends on the time-dependent environmental conditions and determines the NP's fate within living organisms. Understanding how the corona evolves is fundamental in nanotoxicology as well as in medical applications. However, studying the process of corona formation is challenging due to the large number of molecules involved and the wide span of relevant time scales, ranging from 100 μs, hard to probe in experiments, to hours, out of reach of all-atom simulations. Here we combine experiments, simulations, and theory to study (i) the corona kinetics (over 10⁻³-10³ s) and (ii) its final composition for silica NPs in a model plasma made of three blood proteins (human serum albumin, transferrin, and fibrinogen). When computer simulations are calibrated by experimental protein-NP binding affinities measured in single-protein solutions, the theoretical model correctly reproduces competitive protein replacement, as proven by independent experiments. When we change the order of administration of the three proteins, we observe a memory effect in the final corona composition that we can explain within our model. Our combined experimental and computational approach is a step toward the development of systematic prediction and control of protein-NP corona composition based on a hierarchy of equilibrium protein binding constants.

  12. Neuropsychological functioning related to specific characteristics of nocturnal enuresis.

    PubMed

    Van Herzeele, C; Dhondt, K; Roels, S P; Raes, A; Groen, L-A; Hoebeke, P; Walle, J Vande

    2015-08-01

    There is a high comorbidity demonstrated in the literature between nocturnal enuresis and several neuropsychological dysfunctions, with special emphasis on attention deficit hyperactivity disorder (ADHD). However, the majority of the psychological studies did not include full non-invasive screening and failed to differentiate between monosymptomatic nocturnal enuresis (MNE) and non-MNE patients. The present study primarily aimed to investigate the association between nocturnal enuresis and (neuro)psychological functioning in a selective homogeneous patient group, namely: children with MNE and associated nocturnal polyuria (NP). Secondly, the study investigated the association between specific characteristics of nocturnal enuresis (maximum voided volume, number of wet nights and number of nights with NP) and ADHD-inattentive symptoms, executive functioning and quality of life. The psychological measurements were multi-informant (parents, children and teachers) and multi-method (questionnaires, clinical interviews and neuropsychological testing). Thirty children aged 6-16 years (mean 10.43 years, SD 3.08) were included. Of them, 80% had at least one psychological, motor or neurological difficulty. The comorbid diagnosis of ADHD, especially the predominantly inattentive presentation, was most common. According to the teachers, a low maximum voided volume (corrected for age) was associated with more attention problems, and a high number of nights with NP was associated with more behaviour-regulation problems. No significant correlations were found between specific characteristics of enuresis and quality of life. Details are demonstrated in Table. The children were recruited from a tertiary referral centre, which resulted in selection bias. Moreover, NP was defined as a urine output exceeding 100% of the expected bladder capacity for age (EBC), and not according to the expert-opinion-based International Children's Continence Society norm of 130% of EBC. The definition for NP of a urine output exceeding 100% of the EBC is more in line with the recent findings of the Aarhus group. For children with MNE and associated NP, a high comorbidity with the predominantly inattentive presentation of ADHD was demonstrated. Children experienced problems with daytime functioning in relation to their wetting problem at night. According to the teachers, a low maximum voided volume was associated with more attention problems, and a high number of nights with NP was associated with more behaviour-regulation problems. Although comorbidity is still the appropriate word to use, the observation favours a more complex pathogenesis of enuresis with a common pathway in the central nervous system, including: neurotransmitters, influencing neuropsychological functioning as well as sleep, circadian rhythm of diuresis and bladder function control. Copyright © 2015 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  13. Active Subspaces of Airfoil Shape Parameterizations

    NASA Astrophysics Data System (ADS)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
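
    The construction behind an active subspace can be sketched directly: form the matrix C = E[∇f ∇f^T] from sampled gradients of the quantity of interest and take its leading eigenvectors. The sketch below assumes gradient samples are already available and uses synthetic data; it is not tied to the PARSEC or CST parameterizations.

      # Estimate an active subspace from gradient samples of f (synthetic data).
      import numpy as np

      def active_subspace(grads, k=2):
          """grads: (N, m) gradient samples; returns top-k directions, eigenvalues."""
          C = grads.T @ grads / grads.shape[0]   # Monte Carlo estimate of E[grad grad^T]
          eigvals, eigvecs = np.linalg.eigh(C)   # ascending eigenvalues
          order = np.argsort(eigvals)[::-1]
          return eigvecs[:, order[:k]], eigvals[order]

      rng = np.random.default_rng(0)
      # two strong directions, eight weak ones -> a 2-D active subspace
      grads = rng.normal(size=(200, 10)) @ np.diag([5.0, 3.0] + [0.1] * 8)
      W, lams = active_subspace(grads, k=2)
      # a gap after the 2nd eigenvalue supports the approximation f(x) ~ g(W.T @ x)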

  14. P ≠NP Millenium-Problem(MP) TRIVIAL Physics Proof Via NATURAL TRUMPS Artificial-``Intelligence'' Via: Euclid Geometry, Plato Forms, Aristotle Square-of-Opposition, Menger Dimension-Theory Connections!!! NO Computational-Complexity(CC)/ANYthing!!!: Geometry!!!

    NASA Astrophysics Data System (ADS)

    Clay, London; Menger, Karl; Rota, Gian-Carlo; Euclid, Alexandria; Siegel, Edward

    P ≠NP MP proof is by computer-''science''/SEANCE(!!!)(CS) computational-''intelligence'' lingo jargonial-obfuscation(JO) NATURAL-Intelligence(NI) DISambiguation! CS P =(?) =NP MEANS (Deterministic)(PC) = (?) =(Non-D)(PC) i.e. D(P) =(?) = N(P). For inclusion(equality) vs. exclusion (inequality) irrelevant (P) simply cancels!!! (Equally any/all other CCs IF both sides identical). Crucial question left: (D) =(?) =(ND), i.e. D =(?) = N. Algorithmics[Sipser[Intro. Thy.Comp.(`97)-p.49Fig.1.15!!!

  15. Modeling the interplay between sea ice formation and the oceanic mixed layer: Limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-02-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.
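
    The generic idea of such a parameterization can be sketched in a few lines: instead of dumping the rejected salt into the top ocean cell, spread it over the mixed layer with a depth-increasing weight. The profile exponent and grid below are illustrative assumptions, not the NEMO-LIM3 formulation.

      # Toy brine rejection parameterization: distribute a salt flux over the
      # mixed layer with weights growing with depth (exponent n is assumed).
      import numpy as np

      def distribute_brine(salt_flux, dz, mld, n=1.0):
          """salt_flux: total rejected salt [kg m-2]; dz: layer thicknesses [m];
          mld: mixed layer depth [m]. Returns per-layer salt input [kg m-2]."""
          z_mid = np.cumsum(dz) - 0.5 * dz             # layer mid-depths
          w = np.where(z_mid <= mld, z_mid ** n, 0.0)  # deeper layers get more salt
          return salt_flux * w / w.sum()

      dz = np.full(20, 5.0)                            # twenty 5 m layers
      print(distribute_brine(1.0, dz, mld=40.0))       # all salt stays above 40 m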

  16. Modelling the interplay between sea ice formation and the oceanic mixed layer: limitations of simple brine rejection parameterizations

    NASA Astrophysics Data System (ADS)

    Barthélemy, Antoine; Fichefet, Thierry; Goosse, Hugues; Madec, Gurvan

    2015-04-01

    The subtle interplay between sea ice formation and ocean vertical mixing is hardly represented in current large-scale models designed for climate studies. Convective mixing caused by the brine release when ice forms is likely to prevail in leads and thin ice areas, while it occurs in models at the much larger horizontal grid cell scale. Subgrid-scale parameterizations have hence been developed to mimic the effects of small-scale convection using a vertical distribution of the salt rejected by sea ice within the mixed layer, instead of releasing it in the top ocean layer. Such a brine rejection parameterization is included in the global ocean-sea ice model NEMO-LIM3. Impacts on the simulated mixed layers and ocean temperature and salinity profiles, along with feedbacks on the sea ice cover, are then investigated in both hemispheres. The changes are overall relatively weak, except for mixed layer depths, which are in general excessively reduced compared to observation-based estimates. While potential model biases prevent a definitive attribution of this vertical mixing underestimation to the brine rejection parameterization, it is unlikely that the latter can be applied in all conditions. In that case, salt rejections do not play any role in mixed layer deepening, which is unrealistic. Applying the parameterization only for low ice-ocean relative velocities improves model results, but introduces additional parameters that are not well constrained by observations.

  17. Impacts of subgrid-scale orography parameterization on simulated atmospheric fields over Korea using a high-resolution atmospheric forecast model

    NASA Astrophysics Data System (ADS)

    Lim, Kyo-Sun Sunny; Lim, Jong-Myoung; Shin, Hyeyum Hailey; Hong, Jinkyu; Ji, Young-Yong; Lee, Wanno

    2018-06-01

    A substantial over-prediction bias at low-to-moderate wind speeds in the Weather Research and Forecasting (WRF) model has been reported in previous studies. Low-level wind fields play an important role in the dispersion of air pollutants, including radionuclides, in a high-resolution WRF framework. By implementing two subgrid-scale orography parameterizations (Jimenez and Dudhia in J Appl Meteorol Climatol 51:300-316, 2012; Mass and Ovens in WRF model physics: problems, solutions and a new paradigm for progress. Preprints, 2010 WRF Users' Workshop, NCAR, Boulder, Colo. http://www.mmm.ucar.edu/wrf/users/workshops/WS2010/presentations/session%204/4-1_WRFworkshop2010Final.pdf, 2010), we compared the performance of the two parameterizations and sought to enhance the forecast skill for low-level wind fields over the central western part of South Korea. Even though both subgrid-scale orography parameterizations significantly alleviated the positive bias in 10-m wind speed, the parameterization of Jimenez and Dudhia showed better forecast skill for wind speed under our modeling configuration. Implementation of the subgrid-scale orography parameterizations in the model did not affect the forecast skill for other meteorological fields, including 10-m wind direction. Our study also highlighted a discrepancy in the definition of the "10-m" wind between model physics parameterizations and observations, which can cause overestimated winds in model simulations. The overestimation was larger in stable conditions than in unstable conditions, indicating that the weak diurnal cycle in the model could be attributed to this representation error.

  18. Microstructural Evaluation of Inductively Sintered Aluminum Matrix Nanocomposites Reinforced with Silicon Carbide and/or Graphene Nanoplatelets for Tribological Applications

    NASA Astrophysics Data System (ADS)

    Islam, Mohammad; Khalid, Yasir; Ahmad, Iftikhar; Almajid, Abdulhakim A.; Achour, Amine; Dunn, Theresa J.; Akram, Aftab; Anwar, Saqib

    2018-04-01

    Silicon carbide (SiC) nanoparticles (NP) and/or graphene nanoplatelets (GNP) were incorporated into the aluminum matrix through colloidal dispersion and mixing of the powders, followed by consolidation using a high-frequency induction heat sintering process. All the nanocomposite samples exhibited high densification (> 96 pct), with a maximum increase in Vickers microhardness of 92 pct relative to that of pure aluminum. The tribological properties of the samples were determined at normal frictional forces of 10 and 50 N. At the relatively low load of 10 N, adhesive wear was found to be the predominant wear mechanism, whereas in the case of a 50 N normal load, there was a significant contribution from abrasive wear, possibly caused by hard SiC NP. From the wear tests, the values of the coefficient of friction (COF) and the normalized wear rate were determined. The improvement in hardness and wear resistance may be attributed to multiple factors, including high relative density, uniform SiC and GNP dispersion in the aluminum matrix, grain refinement through GNP pinning, as well as inhibition of dislocation movement by SiC NP. The nanocomposite sample containing 10 SiC and 0.5 GNP (by wt pct) yielded the maximum wear resistance at a 10 N normal load. Microstructural characterization of the nanocomposite surfaces and wear debris was performed using scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The synergistic effect of the GNP and SiC nanostructures accounts for the superior wear resistance of the aluminum matrix nanocomposites.
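
    The quantities reduced from the wear tests follow standard definitions; a minimal sketch with invented numbers, where the normalized (specific) wear rate is k = V/(F·s) and the coefficient of friction is μ = F_t/F_n:

      # Standard reduction of pin-on-disc wear data (numbers are invented).
      def wear_rate(volume_loss_mm3, normal_load_n, sliding_distance_m):
          return volume_loss_mm3 / (normal_load_n * sliding_distance_m)  # mm^3/(N m)

      def cof(tangential_force_n, normal_load_n):
          return tangential_force_n / normal_load_n

      print(wear_rate(0.8, 10.0, 500.0))  # 1.6e-4 mm^3/(N m)
      print(cof(4.2, 10.0))               # mu = 0.42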

  19. Microstructural Evaluation of Inductively Sintered Aluminum Matrix Nanocomposites Reinforced with Silicon Carbide and/or Graphene Nanoplatelets for Tribological Applications

    NASA Astrophysics Data System (ADS)

    Islam, Mohammad; Khalid, Yasir; Ahmad, Iftikhar; Almajid, Abdulhakim A.; Achour, Amine; Dunn, Theresa J.; Akram, Aftab; Anwar, Saqib

    2018-07-01

    Silicon carbide (SiC) nanoparticles (NP) and/or graphene nanoplatelets (GNP) were incorporated into the aluminum matrix through colloidal dispersion and mixing of the powders, followed by consolidation using a high-frequency induction heat sintering process. All the nanocomposite samples exhibited high densification (> 96 pct), with a maximum increase in Vickers microhardness of 92 pct relative to that of pure aluminum. The tribological properties of the samples were determined at normal frictional forces of 10 and 50 N. At the relatively low load of 10 N, adhesive wear was found to be the predominant wear mechanism, whereas in the case of a 50 N normal load, there was a significant contribution from abrasive wear, possibly caused by hard SiC NP. From the wear tests, the values of the coefficient of friction (COF) and the normalized wear rate were determined. The improvement in hardness and wear resistance may be attributed to multiple factors, including high relative density, uniform SiC and GNP dispersion in the aluminum matrix, grain refinement through GNP pinning, as well as inhibition of dislocation movement by SiC NP. The nanocomposite sample containing 10 SiC and 0.5 GNP (by wt pct) yielded the maximum wear resistance at a 10 N normal load. Microstructural characterization of the nanocomposite surfaces and wear debris was performed using scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The synergistic effect of the GNP and SiC nanostructures accounts for the superior wear resistance of the aluminum matrix nanocomposites.

  20. An Approximation Solution to Refinery Crude Oil Scheduling Problem with Demand Uncertainty Using Joint Constrained Programming

    PubMed Central

    Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun

    2014-01-01

    This paper is devoted to developing an approximation method for scheduling refinery crude oil operations that takes demand uncertainty into consideration. In the stochastic model, the demand uncertainty is modeled by random variables that follow a joint multivariate distribution with a specific correlation structure. Compared to the deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, the joint chance constraints usually exhibit strong nonlinearity and are consequently hard to handle directly. In this paper, an approximation method combining a relax-and-tight technique is used to approximately transform the joint chance constraints into a series of parameterized linear constraints, so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, the nonlinear constraints by many easily handled linear constraints, which leads to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case in which the demand correlation is not considered. PMID:24757433
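
    The paper's relax-and-tight treatment of joint chance constraints is more involved, but the elementary building block, replacing a single chance constraint with a linear one, can be sketched. Assuming a normally distributed demand d_t (an assumption made here purely for illustration), P(x_t ≥ d_t) ≥ 1 − ε is equivalent to the linear constraint x_t ≥ μ + σ·z_{1−ε}:

      # Deterministic linear equivalent of one chance constraint on normal demand.
      from scipy.stats import norm

      def linearized_demand_bound(mu, sigma, eps):
          """Smallest x with P(x >= demand) >= 1 - eps, demand ~ N(mu, sigma^2)."""
          return mu + sigma * norm.ppf(1.0 - eps)

      for mu, sigma in [(100.0, 10.0), (80.0, 15.0)]:   # made-up product demands
          print(f"x >= {linearized_demand_bound(mu, sigma, eps=0.05):.1f}")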

  1. An approximation solution to refinery crude oil scheduling problem with demand uncertainty using joint constrained programming.

    PubMed

    Duan, Qianqian; Yang, Genke; Xu, Guanglin; Pan, Changchun

    2014-01-01

    This paper is devoted to developing an approximation method for scheduling refinery crude oil operations that takes demand uncertainty into consideration. In the stochastic model, the demand uncertainty is modeled by random variables that follow a joint multivariate distribution with a specific correlation structure. Compared to the deterministic models in existing works, the stochastic model can be more practical for optimizing crude oil operations. Using joint chance constraints, the demand uncertainty is treated by specifying a proximity level on the satisfaction of product demands. However, the joint chance constraints usually exhibit strong nonlinearity and are consequently hard to handle directly. In this paper, an approximation method combining a relax-and-tight technique is used to approximately transform the joint chance constraints into a series of parameterized linear constraints, so that the complicated problem can be attacked iteratively. The basic idea behind this approach is to approximate, as much as possible, the nonlinear constraints by many easily handled linear constraints, which leads to a good balance between problem complexity and tractability. Case studies are conducted to demonstrate the proposed methods. Results show that the operation cost can be reduced effectively compared with the case in which the demand correlation is not considered.

  2. A Comparison between the Effects of Aerobic Dance Training on Mini-Trampoline and Hard Wooden Surface on Bone Resorption, Health-Related Physical Fitness, Balance, and Foot Plantar Pressure in Thai Working Women.

    PubMed

    Sukkeaw, Wittawat; Kritpet, Thanomwong; Bunyaratavej, Narong

    2015-09-01

    To compare the effects of aerobic dance training on a mini-trampoline and on a hard wooden surface on bone resorption, health-related physical fitness, balance, and foot plantar pressure in Thai working women. Sixty-three female volunteers aged 35-45 years participated in the study and were divided into 3 groups: A) aerobic dance on a mini-trampoline (21 females), B) aerobic dance on a hard wooden surface (21 females), and C) control group (21 females). All subjects in the aerobic dance groups wore heart rate monitors during exercise. The aerobic dance groups worked out 3 times a week, 40 minutes a day, for 12 weeks. The intensity was set at 60-80% of the maximum heart rate. The control group engaged in routine physical activity. The collected data were bone formation (N-terminal propeptide of procollagen type I: P1NP), bone resorption (telopeptide cross-linked: β-CrossLaps), health-related physical fitness, balance, and foot plantar pressure. The pre- and post-training data were compared and analyzed by paired-samples t-test and one-way analysis of covariance, with significance set at the 0.05 level. After the 12-week training, the biochemical bone markers of both the mini-trampoline and hard-wooden-surface aerobic dance subjects showed decreased bone resorption (β-CrossLaps) but increased bone formation (P1NP). Health-related physical fitness, balance, and foot plantar pressure were not only better than the pre-test results but also significantly different from the control group (p < 0.05). The mini-trampoline group showed significantly better leg muscular strength, balance, and foot plantar pressure than the hard-wooden-surface group (p < 0.05). Aerobic dance on both the mini-trampoline and the hard wooden surface had positive effects on biochemical bone markers. However, aerobic dance on the mini-trampoline produced greater leg muscular strength and balance, as well as lower foot plantar pressure. It is considered an appropriate exercise program for working women.

  3. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems

    PubMed Central

    Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligent optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, whereas TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms to synergistically solve complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where HS aims mainly to explore the unknown regions and TLBO aims to rapidly exploit high-precision solutions in the known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. The experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications. PMID:28403224
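
    The self-adaptive selection strategy can be sketched generically: keep a success rate for each operator and pick operators with probabilities proportional to those rates. The two operators below are simplified stand-ins for the paper's modified HS and TLBO, and the sphere objective is a toy.

      # Self-adaptive choice between an explorative and an exploitative move.
      import numpy as np

      rng = np.random.default_rng(0)

      def sphere(x):                        # toy objective to minimize
          return float(np.sum(x * x))

      def explore(x, pop):                  # HS-like: recombine with a random member
          donor = pop[rng.integers(len(pop))]
          mask = rng.random(x.size) < 0.5
          return np.where(mask, donor, x) + 0.1 * rng.normal(size=x.size)

      def exploit(x, pop):                  # TLBO-like: move toward the current best
          best = pop[np.argmin([sphere(p) for p in pop])]
          return x + rng.random(x.size) * (best - x)

      pop = [rng.normal(size=10) for _ in range(20)]
      ops, success, trials = [explore, exploit], np.ones(2), np.ones(2)
      prob = np.array([0.5, 0.5])
      for _ in range(2000):
          i = rng.integers(len(pop))
          k = rng.choice(2, p=prob)
          cand = ops[k](pop[i], pop)
          trials[k] += 1
          if sphere(cand) < sphere(pop[i]):
              pop[i], success[k] = cand, success[k] + 1
          rates = success / trials
          prob = rates / rates.sum()        # self-adaptive selection probabilities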

  4. Maximizing the nurses' preferences in nurse scheduling problem: mathematical modeling and a meta-heuristic algorithm

    NASA Astrophysics Data System (ADS)

    Jafari, Hamed; Salmasi, Nasser

    2015-09-01

    The nurse scheduling problem (NSP) has received a great amount of attention in recent years. In the NSP, the goal is to assign shifts to the nurses in order to satisfy the hospital's demand during the planning horizon by considering different objective functions. In this research, we focus on maximizing the nurses' preferences for working shifts and weekends off by considering several important factors, such as the hospital's policies, labor laws, governmental regulations, and the status of nurses at the end of the previous planning horizon, in one of the largest hospitals in Iran, i.e., Milad Hospital. Due to the shortage of available nurses, the minimum total number of required nurses is determined first. Then, a mathematical programming model is proposed to solve the problem optimally. Since the proposed research problem is NP-hard, a meta-heuristic algorithm based on simulated annealing (SA) is applied to heuristically solve the problem in a reasonable time. An initial feasible solution generator and several novel neighborhood structures are applied to enhance the performance of the SA algorithm. Inspired by our observations in Milad Hospital, random test problems are generated to evaluate the performance of the SA algorithm. The results of computational experiments indicate that the applied SA algorithm provides solutions with an average percentage gap of 5.49% compared to the upper bounds obtained from the mathematical model. Moreover, the applied SA algorithm provides significantly better solutions in a reasonable time than the schedules provided by the head nurses.
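
    A generic simulated annealing skeleton of the kind applied here is shown below; the encoding, cost, neighborhood move, and cooling schedule are placeholders, not the paper's problem-specific operators.

      # Generic SA loop with geometric cooling and Metropolis acceptance.
      import math, random

      def simulated_annealing(initial, cost, neighbor, t0=100.0, alpha=0.95,
                              steps_per_temp=50, t_min=1e-3):
          current, best = initial, initial
          t = t0
          while t > t_min:
              for _ in range(steps_per_temp):
                  cand = neighbor(current)
                  delta = cost(cand) - cost(current)
                  if delta <= 0 or random.random() < math.exp(-delta / t):
                      current = cand
                      if cost(current) < cost(best):
                          best = current
              t *= alpha                    # geometric cooling schedule
          return best

      # toy usage: minimize (x - 3)^2 over the integers
      best = simulated_annealing(0, cost=lambda x: (x - 3) ** 2,
                                 neighbor=lambda x: x + random.choice((-1, 1)))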

  5. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems.

    PubMed

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligent optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, whereas TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms to synergistically solve complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where HS aims mainly to explore the unknown regions and TLBO aims to rapidly exploit high-precision solutions in the known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. The experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications.

  6. Bi-criteria travelling salesman subtour problem with time threshold

    NASA Astrophysics Data System (ADS)

    Kumar Thenepalle, Jayanth; Singamsetty, Purusotham

    2018-03-01

    This paper deals with the bi-criteria travelling salesman subtour problem with time threshold (BTSSP-T), which comes from the family of the travelling salesman problem (TSP) and is NP-hard in the strong sense. The problem arises in several application domains, mainly in routing and scheduling contexts. Here, the model focuses on two criteria: total travel distance and gains attained. The BTSSP-T aims to determine a subtour that starts and ends at the same city and visits a subset of cities at a minimum travel distance with maximum gains, such that the time spent on the tour does not exceed the predefined time threshold. A zero-one integer programming formulation with all practical constraints is adopted for this model, and it admits a finite set of feasible solutions (one for each tour). Two algorithms, namely the Lexi-Search Algorithm (LSA) and the Tabu Search (TS) algorithm, have been developed to solve the BTSSP-T. The proposed LSA implicitly enumerates the feasible patterns and provides an efficient solution with backtracking, whereas the TS, a metaheuristic, yields good approximate solutions. A numerical example is demonstrated in order to illustrate the search mechanism of the LSA. Numerical experiments are carried out in the MATLAB environment on different benchmark instances available in the TSPLIB domain, as well as on randomly generated test instances. The experimental results show that the proposed LSA works better than the TS algorithm in terms of solution quality and that, computationally, LSA and TS are competitive.

  7. Kinetics of the formation of a protein corona around nanoparticles.

    PubMed

    Zhdanov, Vladimir P; Cho, Nam-Joon

    2016-12-01

    Interaction of metal or oxide nanoparticles (NPs) with biological soft matter is one of the central phenomena in basic and applied biology-oriented nanoscience. Often, this interaction includes adsorption of suspended proteins on the NP surface, resulting in the formation of the protein corona around NPs. Structurally, the corona contains a "hard" monolayer shell directly contacting a NP and a more distant weakly associated "soft" shell. Chemically, the corona is typically composed of a mixture of distinct proteins. The corresponding experimental and theoretical studies have already clarified many aspects of the corona formation. The process is, however, complex, and its understanding is still incomplete. Herein, we present a kinetic mean-field model of the formation of the "hard" corona with emphasis on the role of (i) protein-diffusion limitations and (ii) interplay between competitive adsorption of distinct proteins and irreversible reconfiguration of their native structure. The former factor is demonstrated to be significant only in the very beginning of the corona formation. The latter factor is predicted to be more important. It may determine the composition of the corona on the time scales comparable or longer than a few hours. Copyright © 2016 Elsevier Inc. All rights reserved.
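
    The two-shell picture suggests a simple rate-equation caricature: reversible adsorption of each protein competing for free surface, plus slow irreversible "hardening" of adsorbed proteins. The rate constants and the two-protein setup below are illustrative, not the paper's fitted model.

      # Toy mean-field kinetics of a two-protein "hard" corona (forward Euler).
      import numpy as np

      def corona_kinetics(c, k_on, k_off, k_hard, t_end=3600.0, dt=0.1):
          """c, k_on, k_off, k_hard: per-protein arrays; returns (soft, hard)."""
          soft, hard = np.zeros(len(c)), np.zeros(len(c))
          for _ in range(int(t_end / dt)):
              free = max(0.0, 1.0 - soft.sum() - hard.sum())  # free surface fraction
              d_soft = k_on * c * free - (k_off + k_hard) * soft
              soft, hard = soft + dt * d_soft, hard + dt * k_hard * soft
          return soft, hard

      # protein 1 adsorbs fast but weakly; protein 2 slowly but hardens more
      soft, hard = corona_kinetics(c=np.array([1.0, 0.5]),
                                   k_on=np.array([1.0, 0.2]),
                                   k_off=np.array([0.5, 0.05]),
                                   k_hard=np.array([0.001, 0.01]))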

  8. A Systems Analysis View of the Vietnam War: 1965-1972. Volume 2. Forces and Manpower

    DTIC Science & Technology

    1975-02-18

    since it will form the basis for much comment and discussion about how well the war is going and how hard the VC are being pushed. The study was...infiltrators into the South rapidly. Such a move would be a departure from past infiltration patterns and would signal no change in Hanoi's hard line...[garbled OCR residue of a personnel-totals table, dated circa August 1, 1968, omitted]

  9. The median problems on linear multichromosomal genomes: graph representation and fast exact solutions.

    PubMed

    Xu, Andrew Wei

    2010-09-01

    In genome rearrangement, given a set of genomes G and a distance measure d, the median problem asks for another genome q that minimizes the total distance ∑_{g∈G} d(q, g). This is a key problem in genome rearrangement based phylogenetic analysis. Although this problem is known to be NP-hard, we have shown in a previous article, on circular genomes and under the DCJ distance measure, that a family of patterns in the given genomes (represented by adequate subgraphs) allows us to rapidly find exact solutions to the median problem in a decomposition approach. In this article, we extend this result to the case of linear multichromosomal genomes, in order to solve more interesting problems on eukaryotic nuclear genomes. A multi-way capping problem in the linear multichromosomal case imposes an extra computational challenge on top of the difficulty of the circular case; this difficulty was underestimated in our previous study and is addressed in this article. We represent the median problem by the capped multiple breakpoint graph, extend the adequate subgraphs into the capped adequate subgraphs, and prove optimality-preserving decomposition theorems, which give us the tools to solve the median problem and the multi-way capping optimization problem together. We also develop an exact algorithm, ASMedian-linear, which iteratively detects instances of (capped) adequate subgraphs and decomposes problems into subproblems. Tested on simulated data, ASMedian-linear can rapidly solve most problems with up to several thousand genes, and it can also provide optimal or near-optimal solutions to the median problem under the reversal/HP distance measures. ASMedian-linear is available at http://sites.google.com/site/andrewweixu.

  10. Verifying the Presence of Low Levels of Neptunium in a Uranium Matrix with Electron Energy-Loss Spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buck, Edgar C.; Douglas, Matthew; Wittman, Richard S.

    2010-01-01

    This paper examines the problems associated with the analysis of low levels of neptunium (Np) in a uranium (U) matrix with electron energy-loss spectroscopy (EELS) on the transmission electron microscope (TEM). The detection of Np in a matrix of uranium (U) can be impeded by the occurrence of a plural scattering event from U (U-M5 + U-O4,5) that results in severe overlap on the Np-M5 edge at 3665 eV. Low levels (1600-6300 ppm) of Np can be detected in U solids by confirming that the energy gap between the Np-M5 and Np-M4 edges is at 184 eV and showing that the M4/M5 ratio for the Np is smaller than that for U. The Richardson-Lucy deconvolution method was applied to energy-loss spectral images and was shown to increase the signal-to-noise ratio. This method also improves the limits of detection for Np in a U matrix.

  11. NXT1, a Novel Influenza A NP Binding Protein, Promotes the Nuclear Export of NP via a CRM1-Dependent Pathway.

    PubMed

    Chutiwitoonchai, Nopporn; Aida, Yoko

    2016-07-28

    Influenza remains a serious worldwide public health problem. After infection, viral genomic RNA is replicated in the nucleus and packed into viral ribonucleoprotein, which will then be exported to the cytoplasm via a cellular chromosome region maintenance 1 (CRM1)-dependent pathway for further assembly and budding. However, the nuclear export mechanism of influenza virus remains controversial. Here, we identify cellular nuclear transport factor 2 (NTF2)-like export protein 1 (NXT1) as a novel binding partner of nucleoprotein (NP) that stimulates NP-mediated nuclear export via the CRM1-dependent pathway. NXT1-knockdown cells exhibit decreased viral replication kinetics and nuclear accumulated viral RNA and NP. By contrast, NXT1 overexpression promotes nuclear export of NP in a CRM1-dependent manner. Pull-down assays suggest the formation of an NXT1, NP, and CRM1 complex, and demonstrate that NXT1 binds to the C-terminal region of NP. These findings reveal a distinct mechanism for nuclear export of the influenza virus and identify the NXT1/NP interaction as a potential target for antiviral drug development.

  12. Triangular Alignment (TAME). A Tensor-based Approach for Higher-order Network Alignment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammadi, Shahin; Gleich, David F.; Kolda, Tamara G.

    2015-11-01

    Network alignment is an important tool with extensive applications in comparative interactomics. Traditional approaches aim to simultaneously maximize the number of conserved edges and the underlying similarity of aligned entities. We propose a novel formulation of the network alignment problem that extends topological similarity to higher-order structures and provide a new objective function that maximizes the number of aligned substructures. This objective function corresponds to an integer programming problem, which is NP-hard. Consequently, we approximate this objective function as a surrogate function whose maximization results in a tensor eigenvalue problem. Based on this formulation, we present an algorithm called Triangular AlignMEnt (TAME), which attempts to maximize the number of aligned triangles across networks. We focus on alignment of triangles because of their enrichment in complex networks; however, our formulation and resulting algorithms can be applied to general motifs. Using a case study on the NAPABench dataset, we show that TAME is capable of producing alignments with up to 99% accuracy in terms of aligned nodes. We further evaluate our method by aligning yeast and human interactomes. Our results indicate that TAME outperforms the state-of-the-art alignment methods in terms of both biological and topological quality of the alignments.
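
    The tensor eigenvalue problem that the surrogate leads to can be attacked with a (shifted) higher-order power iteration. The sketch below runs the standard SS-HOPM update on a small random symmetric 3-mode tensor; it illustrates the numerical core only, not TAME's triangle tensors or matching step.

      # Shifted symmetric higher-order power method for T x x = lambda x.
      import numpy as np

      def sshopm(T, shift=1.0, iters=200, tol=1e-10):
          n = T.shape[0]
          x = np.ones(n) / np.sqrt(n)
          for _ in range(iters):
              y = np.einsum('ijk,j,k->i', T, x, x) + shift * x   # tensor apply
              y /= np.linalg.norm(y)
              if np.linalg.norm(y - x) < tol:
                  break
              x = y
          lam = np.einsum('ijk,i,j,k->', T, x, x, x)
          return x, lam

      rng = np.random.default_rng(1)
      A = rng.random((4, 4, 4))
      T = sum(A.transpose(p) for p in                    # symmetrize the tensor
              [(0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1), (1, 2, 0), (2, 0, 1)]) / 6
      x, lam = sshopm(T)   # dominant tensor eigenpair of the symmetrized T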

  13. A New Path-Constrained Rendezvous Planning Approach for Large-Scale Event-Driven Wireless Sensor Networks

    PubMed Central

    Zhang, Gongxuan; Wang, Yongli; Wang, Tianshu

    2018-01-01

    We study the problem of employing a mobile sink in large-scale Event-Driven Wireless Sensor Networks (EWSNs) for the purpose of harvesting data from sensor nodes. Generally, this employment mitigates the main weakness of WSNs, namely energy consumption in battery-driven sensor nodes. The main motivation of our work is to address challenges related to a network's topology by adopting a mobile sink that moves along a predefined trajectory in the environment. Since, in this fashion, it is not possible to gather data from sensor nodes individually, we adopt the approach of designating some of the sensor nodes as Rendezvous Points (RPs) in the network. We argue that RP planning in this case is a trade-off between minimizing the number of RPs and decreasing the number of hops for a sensor node that needs to transmit data to its related RP, which leads to minimizing the average energy consumption in the network. We address the problem by formulating the challenges and expectations as a Mixed Integer Linear Program (MILP). Then, after proving the NP-hardness of the problem, we propose three effective and distributed heuristics for RP planning, identifying sojourn locations, and constructing routing trees. Finally, experimental results prove the effectiveness of our approach. PMID:29734718

  14. Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems

    PubMed Central

    Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo

    2015-01-01

    Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus of study is rate adaptive (RA) based resource allocation with proportional fairness constraints. Since resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the resource allocation optimization problem in OFDMA systems is separated into two steps. In the first, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that the sum of the power and the reciprocal of the channel-to-noise ratio is equal across each user's subchannels. Extensive simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016
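
    The SPA principle quoted above, power plus reciprocal channel-to-noise ratio equal across a user's subchannels, is classic water-filling. A minimal sketch, with made-up inputs, finds the common water level μ by bisection:

      # Per-user water-filling: choose mu so that sum(max(mu - 1/CNR, 0)) = P.
      import numpy as np

      def water_fill(cnr, p_total, iters=60):
          inv = 1.0 / np.asarray(cnr, dtype=float)   # per-subchannel "floor" 1/CNR
          lo, hi = inv.min(), inv.max() + p_total
          for _ in range(iters):                     # bisection on the level mu
              mu = 0.5 * (lo + hi)
              used = np.maximum(mu - inv, 0.0).sum()
              lo, hi = (mu, hi) if used < p_total else (lo, mu)
          return np.maximum(mu - inv, 0.0)

      p = water_fill(cnr=[4.0, 2.0, 0.5], p_total=1.0)
      # every active subchannel k now satisfies p[k] + 1/cnr[k] ~= mu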

  15. Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny

    NASA Astrophysics Data System (ADS)

    Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell

    Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice is fast heuristic methods, which are empirically known to work very well in general but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.

  16. A New Path-Constrained Rendezvous Planning Approach for Large-Scale Event-Driven Wireless Sensor Networks.

    PubMed

    Vajdi, Ahmadreza; Zhang, Gongxuan; Zhou, Junlong; Wei, Tongquan; Wang, Yongli; Wang, Tianshu

    2018-05-04

    We study the problem of employing a mobile sink in large-scale Event-Driven Wireless Sensor Networks (EWSNs) for the purpose of harvesting data from sensor nodes. Generally, this employment mitigates the main weakness of WSNs, namely energy consumption in battery-driven sensor nodes. The main motivation of our work is to address challenges related to a network's topology by adopting a mobile sink that moves along a predefined trajectory in the environment. Since, in this fashion, it is not possible to gather data from sensor nodes individually, we adopt the approach of designating some of the sensor nodes as Rendezvous Points (RPs) in the network. We argue that RP planning in this case is a trade-off between minimizing the number of RPs and decreasing the number of hops for a sensor node that needs to transmit data to its related RP, which leads to minimizing the average energy consumption in the network. We address the problem by formulating the challenges and expectations as a Mixed Integer Linear Program (MILP). Then, after proving the NP-hardness of the problem, we propose three effective and distributed heuristics for RP planning, identifying sojourn locations, and constructing routing trees. Finally, experimental results prove the effectiveness of our approach.

  17. Experiment design for nonparametric models based on minimizing Bayes Risk: application to voriconazole.

    PubMed

    Bayard, David S; Neely, Michael

    2017-04-01

    An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a NP model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the NP model. Specifically, the problem of identifying an individual from a NP prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient's behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (multiple-model optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications.
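
    The classification view admits a compact Monte Carlo illustration: for a candidate sampling time, estimate the Bayes risk of misclassifying which support point of the discrete prior generated a noisy observation, and pick the time with the smallest risk. The one-compartment response model, noise level, and prior below are all invented; this sketches the idea behind MMopt, not its actual algorithm.

      # Monte Carlo Bayes risk of a one-sample design under a discrete NP prior.
      import numpy as np

      rng = np.random.default_rng(0)
      supports = np.array([0.5, 1.0, 2.0])   # support points (elimination rates)
      weights = np.array([0.3, 0.4, 0.3])    # prior weights
      sigma = 0.05                           # additive Gaussian assay noise

      def response(k, t, dose=1.0):
          return dose * np.exp(-k * t)       # toy one-compartment model

      def bayes_risk(t, n_mc=5000):
          errors = 0
          for _ in range(n_mc):
              i = rng.choice(len(supports), p=weights)          # true subject
              y = response(supports[i], t) + sigma * rng.normal()
              # Bayes classifier: maximize prior-weighted Gaussian likelihood
              lik = weights * np.exp(-(y - response(supports, t)) ** 2
                                     / (2 * sigma ** 2))
              errors += int(np.argmax(lik) != i)
          return errors / n_mc

      best_t = min(np.linspace(0.2, 5.0, 25), key=bayes_risk)   # MMopt-style pick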

  18. Land-Atmosphere Interactions: Successes, Problems and Prospects

    NASA Technical Reports Server (NTRS)

    Sud, Y. C.; Mocko, D. M.

    1999-01-01

    After two decades of active research, a much better understanding of the broader role of biospheric processes on the local climate has emerged. A surface-albedo increase, particularly in desert border regions of the subtropics (as well as the deforested tropical regions), leads to a net surface energy deficit, which in turn leads to a relative sinking and reduced rainfall. On the other hand, studies of the influence of altered ratios of evapotranspiration and sensible fluxes, in situations where the net solar income is unchanged, show that evapotranspiration is a more desirable flux for increased precipitation and vitality of the biosphere. Besides providing water vapor and convective available potential energy (CAPE) to the lower troposphere, evapotranspiration helps in building larger CAPE before "turning on" the moist convection. Larger CAPE in the lower troposphere enables convection to reach into the deeper atmosphere, thereby heating the upper troposphere; indeed, moist convection is also accompanied by the evaporation of falling precipitation, which cools and moistens the lower atmosphere. Convective, as opposed to stratiform, precipitation reduces the fractional cloud cover; it also allows more solar radiation to reach the surface, thereby invigorating surface fluxes. These, together with moist convection and associated downdrafts, help to maintain the characteristic upper temperature limit(s) of the moist-land as well as oceanic regions. Despite the above understanding, several important problems continue to hinder the accurate simulation of realistic land-atmosphere interaction in a numerical model (both GCMs and meso-scale models). Among the unsolved problems are the parameterization of sub-grid scale land processes, which include small-scale variability of soil moisture, snow cover and snow physics, the biodiversity of the biosphere, orography, local drainage characteristics under natural conditions, and surface flow over natural terrain. A well-known non-linear response of surface fluxes to these variations makes the problem of parameterizing land-atmosphere interaction processes hard to address and simulate, particularly in a GCM. In our presentation, we will discuss how orographic, snow-cover, and water-table interactions can be included in a Simple Biosphere Model such as SiB/SSiB. Figure 1 shows how, in the Russian region, spring snowmelt affects the soil moisture profile. The corresponding Figure 2 shows how interaction with the water table decreases the natural evapotranspiration in the Sahel region simulation. While these simulations need better validation against data, they reveal that surface processes are sensitive to these parameterizations. With these developments, we continue to advance our understanding of the interaction of land with the atmosphere aloft, but the intrinsic variability of the newer parameters, e.g., hydraulic properties of the soil, diminishes the positive influence of these advances on improved climate simulation with GCMs.

  19. Dense Subgraphs with Restrictions and Applications to Gene Annotation Graphs

    NASA Astrophysics Data System (ADS)

    Saha, Barna; Hoch, Allison; Khuller, Samir; Raschid, Louiqa; Zhang, Xiao-Ning

    In this paper, we focus on finding complex annotation patterns representing novel and interesting hypotheses from gene annotation data. We define a generalization of the densest subgraph problem by adding an additional distance restriction (defined by a separate metric) to the nodes of the subgraph. We show that while this generalization makes the problem NP-hard for arbitrary metrics, when the metric comes from the distance metric of a tree, or an interval graph, the problem can be solved optimally in polynomial time. We also show that the densest subgraph problem with a specified subset of vertices that have to be included in the solution can be solved optimally in polynomial time. In addition, we consider other extensions when not just one solution needs to be found, but we wish to list all subgraphs of almost maximum density as well. We apply this method to a dataset of genes and their annotations obtained from The Arabidopsis Information Resource (TAIR). A user evaluation confirms that the patterns found in the distance restricted densest subgraph for a dataset of photomorphogenesis genes are indeed validated in the literature; a control dataset validates that these are not random patterns. Interestingly, the complex annotation patterns potentially lead to new and as yet unknown hypotheses. We perform experiments to determine the properties of the dense subgraphs, as we vary parameters, including the number of genes and the distance.

  20. An evolutionary strategy based on partial imitation for solving optimization problems

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto

    2016-12-01

    In this work we introduce an evolutionary strategy to solve combinatorial optimization tasks, i.e. problems characterized by a discrete search space. In particular, we focus on the Traveling Salesman Problem (TSP), a famous problem whose search space grows exponentially with the number of cities, making it NP-hard. The solutions of the TSP can be codified by arrays of cities and can be evaluated by a fitness, computed according to a cost function (e.g. the length of a path). Our method is based on the evolution of an agent population by means of an imitative mechanism that we call 'partial imitation'. In particular, agents receive a random solution and then, interacting among themselves, may imitate the solutions of agents with a higher fitness. Since the imitation mechanism is only partial, agents copy only one entry (randomly chosen) of another array (i.e. solution). In doing so, the population converges towards a shared solution, behaving like a spin system undergoing a cooling process, i.e. driven towards an ordered phase. We highlight that the adopted 'partial imitation' mechanism allows the population to generate solutions over time, before reaching the final equilibrium. Results of numerical simulations show that our method is able to find, in a finite time, both optimal and suboptimal solutions, depending on the size of the considered search space.
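
    The update rule is easy to state in code. One subtlety the abstract glosses over: copying a single city into a tour would break the permutation, so the sketch below repairs it with a swap; that repair, and all the parameters, are assumptions of this illustration.

      # 'Partial imitation' on random TSP instances: copy one tour position
      # from a fitter agent, swapping to keep the tour a valid permutation.
      import random

      def tour_length(tour, dist):
          return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
                     for i in range(len(tour)))

      def partial_imitate(me, other):
          j = random.randrange(len(me))
          c = other[j]
          new = me[:]
          new[new.index(c)], new[j] = new[j], c   # swap city c into position j
          return new

      n = 20
      dist = [[random.random() for _ in range(n)] for _ in range(n)]
      pop = [random.sample(range(n), n) for _ in range(30)]
      for _ in range(5000):
          a, b = random.sample(range(len(pop)), 2)
          if tour_length(pop[b], dist) < tour_length(pop[a], dist):
              pop[a] = partial_imitate(pop[a], pop[b])   # imitate the fitter agent
      best = min(pop, key=lambda t: tour_length(t, dist))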

  1. Sparse Regression as a Sparse Eigenvalue Problem

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback; Gruber, Amit; Weiss, Yair; Avidan, Shai

    2008-01-01

    We extend the l0-norm "subspectral" algorithms for sparse-LDA [5] and sparse-PCA [6] to general quadratic costs such as MSE in linear (kernel) regression. The resulting "Sparse Least Squares" (SLS) problem is also NP-hard, by way of its equivalence to a rank-1 sparse eigenvalue problem (e.g., binary sparse-LDA [7]). Specifically, for a general quadratic cost we use a highly-efficient technique for direct eigenvalue computation using partitioned matrix inverses which leads to dramatic ×10³ speed-ups over standard eigenvalue decomposition. This increased efficiency mitigates the O(n⁴) scaling behaviour that up to now has limited the previous algorithms' utility for high-dimensional learning problems. Moreover, the new computation prioritizes the role of the less-myopic backward elimination stage, which becomes more efficient than forward selection. Similarly, branch-and-bound search for Exact Sparse Least Squares (ESLS) also benefits from partitioned matrix inverse techniques. Our Greedy Sparse Least Squares (GSLS) algorithm generalizes Natarajan's algorithm [9], also known as Order-Recursive Matching Pursuit (ORMP). Specifically, the forward half of GSLS is exactly equivalent to ORMP but more efficient. By including the backward pass, which only doubles the computation, we can achieve lower MSE than ORMP. Experimental comparisons to the state-of-the-art LARS algorithm [3] show forward-GSLS is faster, more accurate, and more flexible in terms of the choice of regularization.
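
    The forward half of GSLS can be sketched as plain greedy forward selection: repeatedly add the column that most reduces the least-squares residual. The sketch recomputes a full least-squares solve per candidate for clarity; the paper's partitioned-matrix-inverse updates, which make this efficient, and the backward pass are omitted.

      # Greedy forward selection for sparse least squares (ORMP-like, naive).
      import numpy as np

      def greedy_sls_forward(X, y, k):
          support = []
          for _ in range(k):
              best_j, best_err = None, np.inf
              for j in range(X.shape[1]):
                  if j in support:
                      continue
                  beta, *_ = np.linalg.lstsq(X[:, support + [j]], y, rcond=None)
                  err = np.linalg.norm(y - X[:, support + [j]] @ beta)
                  if err < best_err:
                      best_j, best_err = j, err
              support.append(best_j)
          beta, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
          return support, beta

      rng = np.random.default_rng(0)
      X = rng.normal(size=(50, 20))
      y = X[:, [3, 7]] @ np.array([2.0, -1.0]) + 0.01 * rng.normal(size=50)
      support, beta = greedy_sls_forward(X, y, k=2)   # recovers columns 3 and 7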

  2. MEvoLib v1.0: the first molecular evolution library for Python.

    PubMed

    Álvarez-Jarreta, Jorge; Ruiz-Pesini, Eduardo

    2016-10-28

    Molecular evolution studies involve many different hard computational problems that are solved, in most cases, with heuristic algorithms providing a nearly optimal solution. Hence, diverse software tools exist for the different stages involved in a molecular evolution workflow. We present MEvoLib, the first molecular evolution library for Python, providing a framework to work with different tools and methods involved in the common tasks of molecular evolution workflows. In contrast with already existing bioinformatics libraries, MEvoLib is focused on the stages involved in molecular evolution studies, enclosing the set of tools with a common purpose in a single high-level interface with fast access to their frequent parameterizations. The gene clustering from partial or complete sequences has been improved with a new method that integrates accessible external information (e.g. GenBank's features data). Moreover, MEvoLib adjusts the fetching process from NCBI databases to optimize the download bandwidth usage. In addition, it has been implemented using parallelization techniques to cope even with large-scale scenarios. MEvoLib is the first library for Python designed to facilitate molecular evolution research for both expert and novice users. Its unique interface for each common task comprises several tools with their most used parameterizations. It also includes a method that takes advantage of biological knowledge to improve the gene partition of sequence datasets. Additionally, its implementation incorporates parallelization techniques to reduce computational costs when handling very large input datasets.

  3. Interaction study of rice stripe virus proteins reveals a region of the nucleocapsid protein (NP) required for NP self-interaction and nuclear localization.

    PubMed

    Lian, Sen; Cho, Won Kyong; Jo, Yeonhwa; Kim, Sang-Min; Kim, Kook-Hyung

    2014-04-01

    Rice stripe virus (RSV), which belongs to the genus Tenuivirus, is an emerging virus problem. The RSV genome is composed of four single-strand RNAs (RNA1-RNA4) and encodes seven proteins. We investigated interactions between six of the RSV proteins by yeast two-hybrid (Y2H) assay in vitro and by bimolecular fluorescence complementation (BiFC) in planta. Y2H identified self-interaction of the nucleocapsid protein (NP) and NS3, while BiFC revealed self-interaction of NP, NS3, and NCP. To identify the region(s) and/or crucial amino acid (aa) residues required for NP self-interaction, we generated various truncated and aa substitution mutants. The Y2H assay showed that the N-terminal region of NP (aa 1-56) is necessary for NP self-interaction. Further analysis with substitution mutants demonstrated that additional aa residues located at 42-47 affected their interaction with full-length NP. These results indicate that the N-terminal region (aa 1-36 and 42-47) is required for NP self-interaction. BiFC and co-localization studies showed that the region required for NP self-interaction is also required for NP localization to the nucleus. Overall, our results indicate that the N-terminal region (aa 1-47) of the NP is important for NP self-interaction and that six aa residues (42-47) are essential for both NP self-interaction and nuclear localization. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Quantum speedup in solving the maximal-clique problem

    NASA Astrophysics Data System (ADS)

    Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang

    2018-03-01

    The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to, respectively, O(√(2^n)) and O(n^2). With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
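
    For contrast with the quoted quadratic quantum speedup, a classical exhaustive search over vertex subsets, which is what drives the O(2^n)-type scaling of the classical counterparts; the adjacency-set representation is an assumption.

    ```python
    from itertools import combinations

    def is_clique(adj, nodes):
        """True if every pair of nodes is connected (adj: node -> set of neighbours)."""
        return all(v in adj[u] for u, v in combinations(nodes, 2))

    def max_clique_bruteforce(adj):
        """Scan subsets from largest to smallest; the first clique found is
        maximum. Worst case visits all 2^n subsets."""
        vertices = list(adj)
        for r in range(len(vertices), 0, -1):
            for subset in combinations(vertices, r):
                if is_clique(adj, subset):
                    return subset
        return ()
    ```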

  5. A study of South Korean casino employees and gambling problems.

    PubMed

    Lee, Tae Kyung; Labrie, Richard A; Rhee, Hak Seung; Shaffer, Howard J

    2008-05-01

    Casino employees are exposed to disproportionately high levels of gambling, drinking and smoking compared to other occupations. Because of their occupation, they have the opportunity to detect and prevent pathological gambling (PG). To identify differences in the mental health status and social attitudes towards PG among casino workers in South Korea depending upon whether they report any gambling problems. Data were collected from 388 full-time casino employees. This data provided information about the prevalence of gambling problems, alcohol and tobacco use and depression. Employees were grouped according to their scores on the Korean version of South Oaks Gambling Screen (SOGS), and those employees who gambled without experiencing any gambling problems (Group NP: SOGS = 0) and those who reported any gambling problems (Group P: SOGS > 0) were compared. An exploratory factor analyses identified the domains of casino employee social attitudes towards gambling. Employees who reported gambling problems (Group P) reported a higher prevalence of smoking, alcohol problems and depression (P < 0.01) compared to employees who did not report gambling problems (Group NP). The primary employee social attitude towards gambling was identified by the factor of 'Disease concept/social awareness'. Group NP reported more positive attitudes in this domain than Group P (P < 0.01). Employees who reported any gambling problems reported a less positive attitude towards developing the public health system to be responsive to gambling problems. These findings indicate a need to develop health education programmes that focus more specifically on casino employees with gambling problems.

  6. Sign of the Singlet (np)-Scattering Length, Neutron Radiative Capture by the Proton and Problem of the Virtual Level of the (np) System

    NASA Astrophysics Data System (ADS)

    Lyuboshitz, V. L.; Lyuboshitz, V. V.

    2011-05-01

    It is shown that, taking into account the process of neutron radiative capture by the proton and the negative sign of the singlet (np)-scattering length (a_s = −f_s(0) < 0), the singlet (np)-scattering amplitude f_s has a pole at a complex energy Ẽ_s whose real part is negative (Re Ẽ_s < 0) and whose imaginary part is positive (Im Ẽ_s > 0). This means that a singlet state of the (np) system which would decay into the deuteron in the ground state and a γ quantum (a "singlet deuteron") does not exist, and that the pole Ẽ_s corresponds to a virtual rather than a true quasistationary level.
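
    As a schematic illustration of the virtual-level argument (our notation and the zero-range approximation are assumptions, not the authors' derivation), the low-energy singlet amplitude and its pole can be sketched as:

    ```latex
    f_s(k) \simeq \frac{1}{-1/a_s - ik}, \qquad a_s < 0
    \quad\Longrightarrow\quad
    k_{\mathrm{pole}} = \frac{i}{a_s} = -\frac{i}{|a_s|}, \qquad
    E_s = \frac{\hbar^2 k_{\mathrm{pole}}^2}{2\mu} = -\frac{\hbar^2}{2\mu a_s^2} < 0,
    ```

    i.e. a virtual level at negative real energy; switching on the radiative-capture channel then moves the pole off the real axis, consistent with the Re Ẽ_s < 0 and Im Ẽ_s > 0 quoted above.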

  7. Validation of Cross Sections with Criticality Experiment and Reaction Rates: the Neptunium Case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Berthier, B.; Le Naour, C.; Stéphan, C.; Paradela, C.; Tarrío, D.; Duran, I.

    2014-04-01

    The 237Np neutron-induced fission cross section has been recently measured in a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we considered a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by uranium highly enriched in 235U so as to approach criticality with fast neutrons. The multiplication factor k_eff of the calculation is in better agreement with the experiment when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section by the n_TOF data. We also explored the hypothesis of deficiencies of the inelastic cross section in 235U which has been invoked by some authors to explain the deviation of 750 pcm. The large modification needed to reduce the deviation seems to be incompatible with existing inelastic cross section measurements. Also we show that the ν̄ of 237Np can hardly be incriminated because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.

  8. Criticality experiments and benchmarks for cross section evaluation: the neptunium case

    NASA Astrophysics Data System (ADS)

    Leong, L. S.; Tassan-Got, L.; Audouin, L.; Paradela, C.; Wilson, J. N.; Tarrio, D.; Berthier, B.; Duran, I.; Le Naour, C.; Stéphan, C.

    2013-03-01

    The 237Np neutron-induced fission cross section has been recently measured in a large energy range (from eV to GeV) at the n_TOF facility at CERN. When compared to previous measurements the n_TOF fission cross section appears to be higher by 5-7% beyond the fission threshold. To check the relevance of the n_TOF data, we consider a criticality experiment performed at Los Alamos with a 6 kg sphere of 237Np, surrounded by uranium enriched in 235U so as to approach criticality with fast neutrons. The multiplication factor k_eff of the calculation is in better agreement with the experiment (the deviation of 750 pcm is reduced to 250 pcm) when we replace the ENDF/B-VII.0 evaluation of the 237Np fission cross section by the n_TOF data. We also explore the hypothesis of deficiencies of the inelastic cross section in 235U, which has been invoked by some authors to explain the deviation of 750 pcm. The large distortion of the inelastic cross section that would be required is incompatible with existing measurements. Also we show that the ν̄ of 237Np can hardly be incriminated because of the high accuracy of the existing data. Fission rate ratios or averaged fission cross sections measured in several fast neutron fields seem to give contradictory results on the validation of the 237Np cross section but at least one of the benchmark experiments, where the active deposits have been well calibrated for the number of atoms, favors the n_TOF data set. These outcomes support the hypothesis of a higher fission cross section of 237Np.

  9. Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines

    NASA Astrophysics Data System (ADS)

    Traversa, Fabio L.; Di Ventra, Massimiliano

    2017-02-01

    We introduce a class of digital machines, which we name Digital Memcomputing Machines (DMMs), able to solve a wide range of problems, including Non-deterministic Polynomial (NP) ones, with polynomial resources (in time, space, and energy). An abstract DMM with this power must satisfy a set of compatible mathematical constraints underlying its practical realization. We prove this by making a connection with dynamical systems theory. This leads us to a set of physical constraints for poly-resource resolvability. Once the mathematical requirements have been assessed, we propose a practical scheme to solve the above class of problems based on the novel concept of self-organizing logic gates and circuits (SOLCs). These are logic gates and circuits able to accept input signals from any terminal, without distinction between conventional input and output terminals. They can solve Boolean problems by self-organizing into their solution. They can be fabricated either with circuit elements with memory (such as memristors) or with standard MOS technology. Using tools of functional analysis, we prove mathematically the following constraints for poly-resource resolvability: (i) SOLCs possess a global attractor; (ii) their only equilibrium points are the solutions of the problems to solve; (iii) the system converges exponentially fast to the solutions; (iv) the equilibrium convergence rate scales at most polynomially with input size. We also provide arguments that periodic orbits and strange attractors cannot coexist with equilibria. As examples, we show how to solve the prime factorization and the search version of the NP-complete subset-sum problem. Since DMMs map integers into integers, they are robust against noise and hence scalable. We finally discuss the implications of the DMM realization through SOLCs for the NP = P question related to constraints of poly-resource resolvability.

  10. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.
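
    For readers unfamiliar with the internal operators being instrumented, a minimal sketch of the pheromone-trail update shared by the ACO variants studied above; the component encoding and parameter values are illustrative assumptions, not taken from the paper.

    ```python
    def update_pheromone(tau, solutions, costs, rho=0.1, q=1.0):
        """Standard ACO trail update: evaporate all trails, then reinforce
        the components used by each solution in proportion to its quality.
        tau: dict mapping a decision component (e.g. a pipe-diameter choice)
        to its trail level; solutions: lists of components; costs: objective values."""
        for comp in tau:
            tau[comp] *= (1.0 - rho)           # evaporation: forget old search bias
        for sol, cost in zip(solutions, costs):
            for comp in sol:
                tau[comp] += q / cost          # better (cheaper) solutions deposit more
        return tau
    ```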

  11. Improved hybrid optimization algorithm for 3D protein structure prediction.

    PubMed

    Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang

    2014-07-01

    A new improved hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), genetic algorithm (GA), and tabu search (TS), together with several improvement strategies: a stochastic disturbance factor is added to the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced by a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through this combination of strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a multi-extremum, multi-parameter global optimization problem; this is the theoretical principle behind the hybrid optimization algorithm proposed in this paper. The algorithm combines local and global search, overcoming the shortcomings of any single algorithm while exploiting the advantages of each. The method is validated on the standard benchmark sequences in current use, both Fibonacci sequences and real protein sequences. Experiments show that the proposed method outperforms the single algorithms in the accuracy of the computed protein sequence energy values, proving it an effective way to predict the structure of proteins.
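
    A schematic of how two of the three metaheuristics could be chained per generation in the spirit of PGATS; the tabu-search polish is omitted for brevity, and the move rules below are placeholders for the paper's disturbance-enhanced PSO and random-linear GA, not the authors' exact operators.

    ```python
    import random

    def pgats_like(energy, init, pop_size=20, steps=200):
        """Skeleton of a PSO -> GA hybrid on real-valued conformation vectors
        (as in the off-lattice model). 'energy' scores a vector; 'init'
        creates a random one."""
        pop = [init() for _ in range(pop_size)]
        best = min(pop, key=energy)
        for _ in range(steps):
            # PSO-like move toward the global best, plus stochastic disturbance
            pop = [[x + random.uniform(0, 1) * (b - x) + random.gauss(0, 0.01)
                    for x, b in zip(ind, best)] for ind in pop]
            # GA-like random linear crossover of a random pair
            a, c = random.sample(pop, 2)
            lam = random.random()
            pop.append([lam * x + (1 - lam) * y for x, y in zip(a, c)])
            pop.sort(key=energy)
            pop = pop[:pop_size]               # survivor selection
            best = min(pop + [best], key=energy)
        return best
    ```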

  12. A methodological proposal for the development of an HPC-based antenna array scheduler

    NASA Astrophysics Data System (ADS)

    Bonvallet, Roberto; Hoffstadt, Arturo; Herrera, Diego; López, Daniela; Gregorio, Rodrigo; Almuna, Manuel; Hiriart, Rafael; Solar, Mauricio

    2010-07-01

    As new astronomy projects choose interferometry to improve angular resolution and to minimize costs, preparing and optimizing schedules for an antenna array becomes an increasingly critical task. This problem shares similarities with the job-shop problem, which is known to be an NP-hard problem, making a complete approach infeasible. In the case of ALMA, 18000 projects per season are expected, and the best schedule must be found on the order of minutes. The problem imposes severe difficulties: the large domain of observation projects to be taken into account; a complex objective function composed of several abstract, environmental, and hardware constraints; the number of restrictions imposed; and the dynamic nature of the problem, as weather is an ever-changing variable. A solution can benefit from the use of High-Performance Computing, not only for the final implementation to be deployed but also for the development process. Our research group proposes the use of both metaheuristic search and statistical learning algorithms in order to create schedules in a reasonable time. How these techniques will be applied is yet to be determined as part of the ongoing research. Several algorithms need to be implemented, tested and evaluated by the team. This work presents the methodology proposed to lead the development of the scheduler. The basic functionality is encapsulated into software components implemented on parallel architectures. These components expose a domain-level interface to the researchers, enabling them to develop early prototypes for evaluating and comparing their proposed techniques.

  13. Learning oncogenetic networks by reducing to mixed integer linear programming.

    PubMed

    Shahrabi Farahani, Hossein; Lagergren, Jens

    2013-01-01

    Cancer can be a result of accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred and the progression pathways is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks, that are tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning the Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a Non-deterministic Polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.

  14. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
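
    To convey the flavour of the dynamic programming, a sketch of the width-1 special case: maximum weighted independent set on a rooted tree. The INDDGO software handles general tree decompositions, which this sketch does not attempt.

    ```python
    def mwis_tree(children, weight, root):
        """DP over a rooted tree: for each node keep the best weight of an
        independent set in its subtree with the node excluded / included."""
        excl, incl = {}, {}
        def solve(v):
            excl[v], incl[v] = 0.0, weight[v]
            for c in children.get(v, []):
                solve(c)
                excl[v] += max(excl[c], incl[c])  # child free to be in or out
                incl[v] += excl[c]                # children of a chosen node stay out
        solve(root)
        return max(excl[root], incl[root])

    # Example: path 0-1-2 with weights 2, 3, 2 -> optimum picks nodes 0 and 2.
    print(mwis_tree({0: [1], 1: [2]}, {0: 2.0, 1: 3.0, 2: 2.0}, root=0))  # 4.0
    ```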

  15. Sparse signals recovered by non-convex penalty in quasi-linear systems.

    PubMed

    Cui, Angang; Li, Haiyang; Wen, Meng; Peng, Jigen

    2018-01-01

    The goal of compressed sensing is to reconstruct a sparse signal from a few linear measurements, far fewer than the dimension of the ambient space of the signal. However, many real-life applications in physics and biomedical sciences carry strongly nonlinear structures, and the linear model is no longer suitable. Compared with compressed sensing in the linear setting, this nonlinear compressed sensing is much more difficult; it is in fact an NP-hard combinatorial problem, because of the discrete and discontinuous nature of the [Formula: see text]-norm and the nonlinearity. To facilitate sparse signal recovery, we assume in this paper that the nonlinear models have a smooth quasi-linear nature, and study a non-convex fraction function [Formula: see text] in this quasi-linear compressed sensing. We propose an iterative fraction thresholding algorithm to solve the regularization problem [Formula: see text] for all [Formula: see text]. With the change of the parameter [Formula: see text], our algorithm obtains promising results, which is one of its advantages compared with some state-of-the-art algorithms. Numerical experiments show that our method performs much better than some state-of-the-art methods.
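
    A generic iterative-thresholding skeleton of the kind the abstract describes: a gradient step on the data-fit term followed by a thresholding operator. The paper's fraction-function threshold sits behind the '[Formula: see text]' placeholders, so a soft threshold stands in here purely as an assumption.

    ```python
    import numpy as np

    def iterative_thresholding(A, y, lam=0.1, step=None, iters=200):
        """x_{t+1} = T(x_t - step * A^T (A x_t - y)); T is soft-thresholding here."""
        if step is None:
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the spectral norm
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            g = x - step * A.T @ (A @ x - y)         # gradient step on the data term
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # threshold
        return x
    ```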

  16. A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling.

    PubMed

    Li, Bin-Bin; Wang, Ling

    2007-06-01

    This paper proposes a hybrid quantum-inspired genetic algorithm (HQGA) for the multiobjective flow shop scheduling problem (FSSP), which is a typical NP-hard combinatorial optimization problem with strong engineering backgrounds. On the one hand, a quantum-inspired GA (QGA) based on Q-bit representation is applied for exploration in the discrete 0-1 hyperspace by using the updating operator of quantum gate and genetic operators of Q-bit. Moreover, random-key representation is used to convert the Q-bit representation to job permutation for evaluating the objective values of the schedule solution. On the other hand, permutation-based GA (PGA) is applied for both performing exploration in permutation-based scheduling space and stressing exploitation for good schedule solutions. To evaluate solutions in multiobjective sense, a randomly weighted linear-sum function is used in QGA, and a nondominated sorting technique including classification of Pareto fronts and fitness assignment is applied in PGA with regard to both proximity and diversity of solutions. To maintain the diversity of the population, two trimming techniques for population are proposed. The proposed HQGA is tested based on some multiobjective FSSPs. Simulation results and comparisons based on several performance metrics demonstrate the effectiveness of the proposed HQGA.
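
    A minimal sketch of the Q-bit representation and rotation-gate update that the QGA half of HQGA relies on; the collapse rule and angle schedule follow common QGA practice and are assumptions here, as is the random-key decoding to job permutations (not shown).

    ```python
    import math, random

    def observe(qbits):
        """Collapse each (alpha, beta) amplitude pair to a bit: P(1) = beta^2."""
        return [1 if random.random() < b * b else 0 for _, b in qbits]

    def rotate(qbits, bits, best_bits, dtheta=0.02 * math.pi):
        """Quantum-gate update: rotate each Q-bit toward the matching bit
        of the best solution found so far."""
        out = []
        for (a, b), x, xb in zip(qbits, bits, best_bits):
            sign = 0 if x == xb else (1 if xb == 1 else -1)
            t = math.atan2(b, a) + sign * dtheta
            out.append((math.cos(t), math.sin(t)))
        return out

    # Initial superposition: alpha = beta = 1/sqrt(2) for every gene.
    qbits = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * 16
    ```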

  17. Analysis of travelling salesman problem

    NASA Astrophysics Data System (ADS)

    Ahmed, Belal; Singh Chouhan, Shivank; Biswas, Subham; Gayathri, P.; Santhi, H.

    2017-11-01

    The multiple Traveling Salesman Problem (mTSP) is a generalization of the TSP in which more than one salesman can be used in the solution. The constraint in the optimization task is that every salesman returns to the starting point at the end of his trip, travelling to a distinct set of cities in between, and, with the exception of the first city (the common starting point), every city is visited by exactly one salesman. The idea is to search for the shortest route, that is, the least total distance required for each salesman to travel from the starting location to the individual cities and back to the location from where he started. It is an intricate NP-hard problem with various applications, mostly in the fields of planning and routing. Since the computation time needed to solve this problem grows exponentially as the number of cities increases, meta-heuristic optimization algorithms, such as Genetic Algorithms (GAs), need to be explored. The goal of this paper is to survey the various algorithms used in the literature to solve the mTSP.

  18. Social Milieu Oriented Routing: A New Dimension to Enhance Network Security in WSNs.

    PubMed

    Liu, Lianggui; Chen, Li; Jia, Huiling

    2016-02-19

    In large-scale wireless sensor networks (WSNs), in order to enhance network security, it is crucial for a trustor node to perform social milieu oriented routing to a target trustee node so as to carry out trust evaluation. This challenging social milieu oriented routing with more than one end-to-end Quality of Trust (QoT) constraint has been proved NP-complete. Heuristic algorithms with polynomial and pseudo-polynomial-time complexities are often used to deal with this challenging problem. However, existing solutions cannot guarantee the efficiency of searching; that is, they can hardly avoid settling on merely locally optimal solutions during the search process. Quantum annealing (QA) uses delocalization and tunneling to avoid falling into local minima without sacrificing execution time, which has proved a promising approach to many optimization problems in the recent literature. In this paper, for the first time, with the help of a novel approach, namely configuration path-integral Monte Carlo (CPIMC) simulations, a QA-based optimal social trust path (QA_OSTP) selection algorithm is applied to the extraction of the optimal social trust path in large-scale WSNs. Extensive experiments have been conducted, and the experiment results demonstrate that QA_OSTP outperforms its heuristic opponents.

  19. Quantum Matching Theory (with new complexity-theoretic, combinatorial and topological insights on the nature of the quantum entanglement)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gurvits, L.

    2002-01-01

    Classical matching theory can be defined in terms of matrices with nonnegative entries. The notion of positive operator, central in Quantum Theory, is a natural generalization of matrices with nonnegative entries. Based on this point of view, we introduce a definition of perfect Quantum (operator) matching. We show that the new notion inherits many 'classical' properties, but not all of them. This new notion goes somewhat beyond matroids. For separable bipartite quantum states this new notion coincides with the full rank property of the intersection of two corresponding geometric matroids. In the classical situation, permanents are naturally associated with perfect matchings. We introduce an analog of permanents for positive operators, called the Quantum Permanent, and show how this generalization of the permanent is related to Quantum Entanglement. Among many other things, Quantum Permanents provide new rational inequalities necessary for the separability of bipartite quantum states. Using Quantum Permanents, we give a deterministic poly-time algorithm to solve the Hidden Matroids Intersection Problem and indicate some 'classical' complexity difficulties associated with Quantum Entanglement. Finally, we prove that the weak membership problem for the convex set of separable bipartite density matrices is NP-HARD.
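
    For orientation, the classical permanent that the Quantum Permanent generalizes can be computed, in exponential time, by Ryser's inclusion-exclusion formula; a compact sketch (not the paper's algorithm):

    ```python
    from itertools import combinations

    def permanent_ryser(M):
        """per(M) = (-1)^n * sum over nonempty column subsets S of
        (-1)^{|S|} * prod_i (sum_{j in S} M[i][j])   (Ryser's formula)."""
        n = len(M)
        total = 0.0
        for r in range(1, n + 1):
            for S in combinations(range(n), r):
                prod = 1.0
                for i in range(n):
                    prod *= sum(M[i][j] for j in S)
                total += (-1) ** r * prod
        return (-1) ** n * total

    # Sanity check: per([[a, b], [c, d]]) = a*d + b*c.
    print(permanent_ryser([[1, 2], [3, 4]]))  # 10
    ```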

  20. Endocytic mechanisms and osteoinductive profile of hydroxyapatite nanoparticles in human umbilical cord Wharton’s jelly-derived mesenchymal stem cells

    PubMed Central

    Zhang, Juan; Wang, Chen

    2018-01-01

    Background As a potentially bioactive material, the widespread application of nanosized hydroxyapatite (nano-HAP) in the field of bone regeneration has increased the risk of human exposure. However, our understanding of the interaction between nano-HAP and the stem cells implicated in bone repair remains incomplete. Methods Here, we characterized the adhesion and cellular internalization of HAP nanoparticles (HANPs) of different sizes (20 nm, np20, and 80 nm, np80) and highlighted the pathway involved in their uptake, using human umbilical cord Wharton’s jelly-derived mesenchymal stem cells (hWJ-MSCs). In addition, the effects of HANPs on cell viability, apoptosis response, osteogenic differentiation, and the underlying mechanisms were explored. Results It was shown that both types of HANPs readily adhered to the cellular membrane and were transported into the cells, in contrast to micro-sized HAP particles (m-HAP; 12 μm). Interestingly, the endocytic routes of np20 and np80 differed, although they exhibited similar kinetics of adhesion and uptake. Our study revealed involvement of clathrin- and caveolin-mediated endocytosis as well as macropinocytosis in np20 uptake. However, for np80, clathrin-mediated endocytosis and some as-yet-unidentified uptake routes play central roles in their internalization. HANPs displayed a higher preference to accumulate in the cytoplasm compared to m-HAP, and HANPs were not detected in the nucleolus. Exposure to np20 for 24 h caused a decrease in cell viability, while cells completely recovered at an exposure time of 72 h. Furthermore, HANPs did not influence apoptosis or necrosis of hWJ-MSCs. Strikingly, HANPs enhanced mRNA levels of osteoblast-related genes and stimulated calcium mineral deposition, and this directly correlated with activation of the c-Jun N-terminal kinase and p38 pathways. Conclusion Our data provide additional insight into the interactions of HANPs with MSCs and suggest their application potential in hard tissue regeneration. PMID:29559775

  1. Study on the aerobic biodegradability and degradation kinetics of 3-NP; 2,4-DNP and 2,6-DNP.

    PubMed

    She, Zonglian; Xie, Tian; Zhu, Yingjie; Li, Leilei; Tang, Gaifeng; Huang, Jian

    2012-11-30

    Four biodegradability tests (BOD5/COD ratio, production of carbon dioxide, relative oxygen uptake rate and relative enzymatic activity) were used to determine the aerobic biodegradability of 3-nitrophenol (3-NP), 2,4-dinitrophenol (2,4-DNP) and 2,6-dinitrophenol (2,6-DNP). Furthermore, the biodegradation kinetics of the compounds were investigated in sequencing batch reactors, both in the presence of glucose (co-substrate) and with nitrophenol as the sole carbon source. Among the three tested compounds, 3-NP showed the best biodegradability while 2,6-DNP was the most difficult to biodegrade. The Haldane equation was applied to the kinetic test data of the nitrophenols. The kinetic constants are as follows: the maximum specific degradation rate (K_max), the saturation constant (K_S) and the inhibition constant (K_I) were in the ranges of 0.005-2.98 mg (mg SS d)^-1, 1.5-51.9 mg L^-1 and 1.8-95.8 mg L^-1, respectively. The presence of glucose enhanced the degradation of the nitrophenols at low glucose concentrations. The degradation of 3-NP was found to be accelerated with increasing glucose concentration from 0 to 660 mg L^-1. At high (1320-2000 mg L^-1) glucose concentrations, the degradation rate of 3-NP was reduced and the K_max of 3-NP was even lower than the value obtained in the absence of glucose, suggesting that high concentrations of co-substrate could inhibit 3-NP biodegradation. At a 2,4-DNP concentration of 30 mg L^-1, the K_max of 2,4-DNP with glucose as co-substrate was about 30 times the value with 2,4-DNP as sole substrate. 2,6-DNP exhibited high toxicity when degraded as the sole carbon source, and its kinetic data could hardly be obtained. Copyright © 2012 Elsevier B.V. All rights reserved.
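
    The Haldane (substrate-inhibition) equation referred to above, as a small helper with a usage example; the constants below are placeholders chosen inside the reported ranges, not fitted values from the study.

    ```python
    def haldane_rate(S, K_max, K_S, K_I):
        """Specific degradation rate q(S) = K_max * S / (K_S + S + S^2 / K_I).
        At low S this behaves like Monod kinetics; at high S the S^2/K_I term
        captures substrate inhibition, so q falls off again."""
        return K_max * S / (K_S + S + S * S / K_I)

    # Placeholder constants within the reported ranges:
    # K_max in 0.005-2.98 mg (mg SS d)^-1, K_S in 1.5-51.9 mg/L, K_I in 1.8-95.8 mg/L.
    print(haldane_rate(S=20.0, K_max=1.0, K_S=10.0, K_I=50.0))
    ```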

  2. The effect of the protein corona on the interaction between nanoparticles and lipid bilayers.

    PubMed

    Di Silvio, Desirè; Maccarini, Marco; Parker, Roger; Mackie, Alan; Fragneto, Giovanna; Baldelli Bombelli, Francesca

    2017-10-15

    It is known that nanoparticles (NPs) in a biological fluid are immediately coated by a protein corona (PC), composed of a hard (strongly bound) layer and a soft (loosely associated) layer, which represents the real nano-interface interacting with the cellular membrane in vivo. In this regard, supported lipid bilayers (SLBs) have extensively been used as relevant model systems for elucidating the interaction between biomembranes and NPs. Herein we show how the presence of a PC on the NP surface changes the interaction between NPs and lipid bilayers, with particular attention to the effects induced by the NPs on the bilayer structure. In the present work we combined Quartz Crystal Microbalance with Dissipation Monitoring (QCM-D) and Neutron Reflectometry (NR) experimental techniques to elucidate how the NP-membrane interaction is modulated by the presence of proteins in the environment and their effect on the lipid bilayer. Our study showed that the NP-membrane interaction is significantly affected by the presence of proteins, and in particular we observed an important role of the soft corona in this phenomenon. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Computation of the phase response curve: a direct numerical approach.

    PubMed

    Govaerts, W; Sautois, B

    2006-04-01

    Neurons are often modeled by dynamical systems--parameterized systems of differential equations. A typical behavioral pattern of neurons is periodic spiking; this corresponds to the presence of stable limit cycles in the dynamical systems model. The phase resetting and phase response curves (PRCs) describe the reaction of the spiking neuron to an input pulse at each point of the cycle. We develop a new method for computing these curves as a by-product of the solution of the boundary value problem for the stable limit cycle. The method is mathematically equivalent to the adjoint method, but our implementation is computationally much faster and more robust than any existing method. In fact, it can compute PRCs even where the limit cycle can hardly be found by time integration, for example, because it is close to another stable limit cycle. In addition, we obtain the discretized phase response curve in a form that is ideally suited for most applications. We present several examples and provide the implementation in a freely available Matlab code.

  4. Optimal stocking of species by diameter class for even-aged mid-to-late rotation Appalachian hardwoods

    Treesearch

    Joseph B. Roise; Joosang Chung; Chris B. LeDoux

    1988-01-01

    Nonlinear programming (NP) is applied to the problem of finding optimal thinning and harvest regimes simultaneously with species mix and diameter class distribution. Optimal results for given cases are reported. Results of the NP optimization are compared with prescriptions developed by Appalachian hardwood silviculturists.

  5. Polynomial-Time Algorithms for Building a Consensus MUL-Tree

    PubMed Central

    Cui, Yun; Jansson, Jesper

    2012-01-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host–parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists. PMID:22963134

  6. Intelligent fuzzy approach for fast fractal image compression

    NASA Astrophysics Data System (ADS)

    Nodehi, Ali; Sulong, Ghazali; Al-Rodhaan, Mznah; Al-Dhelaan, Abdullah; Rehman, Amjad; Saba, Tanzila

    2014-12-01

    Fractal image compression (FIC) is recognized as an NP-hard problem, and it suffers from a high number of mean square error (MSE) computations. In this paper, a two-phase algorithm is proposed to reduce the MSE computation of FIC. In the first phase, range and domain blocks are arranged based on an edge property. In the second, the imperialist competitive algorithm (ICA) is applied according to the classified blocks. For maintaining the quality of the retrieved image and accelerating the algorithm, we divided the solutions into two groups: developed countries and undeveloped countries. Simulations were carried out to evaluate the performance of the developed approach. The promising results thus achieved exhibit performance better than genetic algorithm (GA)-based and Full-search algorithms in terms of decreasing the number of MSE computations. The proposed algorithm reduced the number of MSE computations, running 463 times faster than the Full-search algorithm, while the quality of the retrieved image did not change considerably.
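
    The repeated kernel that both phases aim to avoid computing needlessly is, in essence, the following range-domain comparison; the least-squares contrast/brightness fit is standard fractal-coding practice, assumed here rather than taken from the paper.

    ```python
    import numpy as np

    def block_mse(range_block, domain_block):
        """Best affine match s*D + o for range block R, and its MSE.
        This is the inner computation FIC repeats for many block pairs."""
        R = range_block.ravel().astype(float)
        D = domain_block.ravel().astype(float)
        var = np.var(D)
        s = np.cov(D, R, bias=True)[0, 1] / var if var > 0 else 0.0  # contrast
        o = R.mean() - s * D.mean()                                  # brightness
        return float(np.mean((R - (s * D + o)) ** 2)), s, o
    ```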

  7. Capacity planning of a wide-sense nonblocking generalized survivable network

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2006-06-01

    Generalized survivable networks (GSNs) have two interesting properties that are essential attributes for future backbone networks--full survivability against link failures and support for dynamic traffic demands. GSNs incorporate the nonblocking network concept into the survivable network models. Given a set of nodes and a topology that is at least two-edge connected, a certain minimum capacity is required for each edge to form a GSN. The edge capacity is bounded because each node has an input-output capacity limit that serves as a constraint for any allowable traffic demand matrix. The GSN capacity planning problem is NP-hard. We first give a rigorous mathematical framework; then we offer two different solution approaches. The two-phase approach is fast, but the joint optimization approach yields a better bound. We carried out numerical computations for eight networks with different topologies and found that the cost of a GSN is only a fraction (from 52% to 89%) more than that of a static survivable network.

  8. Polynomial-time algorithms for building a consensus MUL-tree.

    PubMed

    Cui, Yun; Jansson, Jesper; Sung, Wing-Kin

    2012-09-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host-parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists.

  9. Distance-Based Phylogenetic Methods Around a Polytomy.

    PubMed

    Davidson, Ruth; Sullivant, Seth

    2014-01-01

    Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.

  10. AI techniques for a space application scheduling problem

    NASA Technical Reports Server (NTRS)

    Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.

    1991-01-01

    Scheduling is a very complex optimization problem which can be categorized as NP-complete. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. Some of the factors which make space application scheduling problems difficult are examined, and a fairly new AI-based technique called tabu search is presented as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on-board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the earth observing system (Eos).
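
    A generic tabu-search skeleton of the kind described; the neighbourhood move, scoring function, and tenure below are placeholders, not details of the SOLSTICE scheduler.

    ```python
    from collections import deque

    def tabu_search(start, neighbours, score, iters=1000, tenure=20):
        """Keep a short memory of recently visited states to escape local
        optima: always move to the best non-tabu neighbour, even if worse.
        'neighbours(s)' yields candidate states; 'score' is maximized."""
        current = best = start
        tabu = deque(maxlen=tenure)          # FIFO memory of recent states
        for _ in range(iters):
            candidates = [n for n in neighbours(current) if n not in tabu]
            if not candidates:
                break
            current = max(candidates, key=score)
            tabu.append(current)
            if score(current) > score(best):
                best = current
        return best
    ```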

  11. Solving a bi-objective mathematical model for location-routing problem with time windows in multi-echelon reverse logistics using metaheuristic procedure

    NASA Astrophysics Data System (ADS)

    Ghezavati, V. R.; Beigi, M.

    2016-12-01

    During the last decade, stringent pressures from environmental and social requirements have spurred interest in designing reverse logistics (RL) networks. The success of a logistics system may depend on the decisions about facility locations and vehicle routings. The location-routing problem (LRP) simultaneously locates the facilities and designs the travel routes for vehicles among established facilities and existing demand points. In this paper, the location-routing problem with time windows (LRPTW) and a homogeneous fleet is considered, together with the design of a multi-echelon, capacitated reverse logistics network, as may arise in many real-life situations in logistics management. Our proposed RL network consists of hybrid collection/inspection centers, recovery centers and disposal centers. Here, we present a new bi-objective mathematical programming (BOMP) formulation for the LRPTW in reverse logistics. Since this type of problem is NP-hard, the non-dominated sorting genetic algorithm II (NSGA-II) is proposed to obtain the Pareto frontier for the given problem. Several numerical examples are presented to illustrate the effectiveness of the proposed model and algorithm. The present work is also an effort to effectively implement the ɛ-constraint method in GAMS software for producing the Pareto-optimal solutions in a BOMP. The results of the proposed algorithm have been compared with the ɛ-constraint method. The computational results show that the ɛ-constraint method is able to solve small-size instances to optimality within reasonable computing times, and for medium-to-large-sized problems, the proposed NSGA-II works better than the ɛ-constraint method.
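
    A hedged sketch of the Pareto machinery underlying NSGA-II as used above: the dominance test and a naive first-front extraction (NSGA-II proper adds fast non-dominated sorting, crowding distance, and the genetic operators).

    ```python
    def dominates(a, b):
        """a dominates b if it is no worse in every objective and strictly
        better in at least one (minimization of all objectives assumed)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def first_front(objectives):
        """Indices of non-dominated solutions, i.e. the Pareto frontier."""
        return [i for i, a in enumerate(objectives)
                if not any(dominates(b, a) for j, b in enumerate(objectives) if j != i)]

    # Two objectives (e.g. cost, tardiness): points 0 and 2 are non-dominated.
    print(first_front([(1.0, 5.0), (2.0, 6.0), (3.0, 1.0)]))  # [0, 2]
    ```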

  12. Standardization of 237Np by the CIEMAT/NIST LSC tracer method

    PubMed

    Gunther

    2000-03-01

    The standardization of 237Np presents some difficulties: several groups of alpha, beta and gamma radiation, chemical problems with the daughter nuclide 233Pa, an incomplete radioactive equilibrium after sample preparation, high conversion of some gamma transitions. To solve the chemical problems, a sample composition involving the Ultima Gold AB scintillator and a high concentration of HCl is used. Standardization by the CIEMAT/NIST method and by pulse shape discrimination is described. The results agree within 0.1% with those obtained by two other methods.

  13. Genetic algorithms for protein threading.

    PubMed

    Yadgari, J; Amir, A; Unger, R

    1998-01-01

    Despite many years of effort, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major advance reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods are dependent on such constraints to make their calculations feasible, but the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof is hard to provide as yet, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.

  14. On Bipartite Graphs Trees and Their Partial Vertex Covers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caskurlu, Bugra; Mkrtchyan, Vahan; Parekh, Ojas D.

    2015-03-01

    Graphs can be used to model risk management in various systems. Particularly, Caskurlu et al. in [7] have considered a system which has threats, vulnerabilities and assets, and which essentially represents a tripartite graph. The goal in this model is to reduce the risk in the system below a predefined risk threshold level. One can either restrict the permissions of the users or encapsulate the system assets. These two strategies correspond to deleting a minimum number of elements corresponding to vulnerabilities and assets, such that the flow between threats and assets is reduced below the predefined threshold level. It can be shown that the main goal in this risk management system can be formulated as a Partial Vertex Cover problem on bipartite graphs. It is well known that the Vertex Cover problem is in P on bipartite graphs; however, the computational complexity of the Partial Vertex Cover problem on bipartite graphs has remained open. In this paper, we establish that the Partial Vertex Cover problem is NP-hard on bipartite graphs, which was also recently independently demonstrated [N. Apollonio and B. Simeone, Discrete Appl. Math., 165 (2014), pp. 37–48; G. Joret and A. Vetta, preprint, arXiv:1211.4853v1 [cs.DS], 2012]. We then identify interesting special cases of bipartite graphs for which the Partial Vertex Cover problem, the closely related Budgeted Maximum Coverage problem, and their weighted extensions can be solved in polynomial time. We also present an 8/9-approximation algorithm for the Budgeted Maximum Coverage problem in the class of bipartite graphs. We show that this matches and resolves the integrality gap of the natural LP relaxation of the problem and improves upon a recent 4/5-approximation.
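
    To make the problem statement concrete, a brute-force Partial Vertex Cover decision procedure; it is exponential in the budget k and only illustrates the definition, in line with the hardness result above.

    ```python
    from itertools import combinations

    def partial_vertex_cover(vertices, edges, k, t):
        """Decide: is there a set of at most k vertices covering >= t edges?
        edges is a list of (u, v) pairs."""
        for r in range(k + 1):
            for S in combinations(vertices, r):
                chosen = set(S)
                covered = sum(1 for u, v in edges if u in chosen or v in chosen)
                if covered >= t:
                    return True
        return False

    # Tiny bipartite example: one vertex ('a') covers 2 of the 3 edges.
    print(partial_vertex_cover(['a', 'b', 'x', 'y'],
                               [('a', 'x'), ('a', 'y'), ('b', 'y')], k=1, t=2))
    ```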

  15. Defining the phenotype of young healthy nucleus pulposus cells: recommendations of the Spine Research Interest Group at the 2014 annual ORS meeting.

    PubMed

    Risbud, Makarand V; Schoepflin, Zachary R; Mwale, Fackson; Kandel, Rita A; Grad, Sibylle; Iatridis, James C; Sakai, Daisuke; Hoyland, Judith A

    2015-03-01

    Low back pain is a major physical and socioeconomic problem. Degeneration of the intervertebral disc and especially that of nucleus pulposus (NP) has been linked to low back pain. In spite of much research focusing on the NP, consensus among the research community is lacking in defining the NP cell phenotype. A consensus agreement will allow easier distinguishing of NP cells from annulus fibrosus (AF) cells and endplate chondrocytes, a better gauge of therapeutic success, and a better guidance of tissue-engineering-based regenerative strategies that attempt to replace lost NP tissue. Most importantly, a clear definition will further the understanding of physiology and function of NP cells, ultimately driving development of novel cell-based therapeutic modalities. The Spine Research Interest Group at the 2014 Annual ORS Meeting in New Orleans convened with the task of compiling a working definition of the NP cell phenotype with hope that a consensus statement will propel disc research forward into the future. Based on evaluation of recent studies describing characteristic NP markers and their physiologic relevance, we make the recommendation of the following healthy NP phenotypic markers: stabilized expression of HIF-1α, GLUT-1, aggrecan/collagen II ratio >20, Shh, Brachyury, KRT18/19, CA12, and CD24. © 2014 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  16. The Impact of Varied Discrimination Parameters on Mixed-Format Item Response Theory Model Selection

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Chang, Wanchen; Dodd, Barbara G.

    2013-01-01

    Whittaker, Chang, and Dodd compared the performance of model selection criteria when selecting among mixed-format IRT models and found that the criteria did not perform adequately when selecting the more parameterized models. It was suggested by M. S. Johnson that the problems when selecting the more parameterized models may be because of the low…

  17. The Crystal Structure and RNA-Binding of an Orthomyxovirus Nucleoprotein

    PubMed Central

    Zheng, Wenjie; Olson, John; Vakharia, Vikram; Tao, Yizhi Jane

    2013-01-01

    Genome packaging for viruses with segmented genomes is often a complex problem. This is particularly true for influenza viruses and other orthomyxoviruses, whose genome consists of multiple negative-sense RNAs encapsidated as ribonucleoprotein (RNP) complexes. To better understand the structural features of orthomyxovirus RNPs that allow them to be packaged, we determined the crystal structure of the nucleoprotein (NP) of a fish orthomyxovirus, the infectious salmon anemia virus (ISAV) (genus Isavirus). As the major protein component of the RNPs, ISAV-NP possesses a bi-lobular structure similar to the influenza virus NP. Because both RNA-free and RNA-bound ISAV NP forms stable dimers in solution, we were able to measure the NP RNA binding affinity as well as the stoichiometry using recombinant proteins and synthetic oligos. Our RNA binding analysis revealed that each ISAV-NP binds ∼12 nts of RNA, shorter than the 24–28 nts originally estimated for the influenza A virus NP based on population average. The 12-nt stoichiometry was further confirmed by results from electron microscopy and dynamic light scattering. Considering that RNPs of ISAV and the influenza viruses have similar morphologies and dimensions, our findings suggest that NP-free RNA may exist on orthomyxovirus RNPs, and selective RNP packaging may be accomplished through direct RNA-RNA interactions. PMID:24068932

  18. An empirical test of a diffusion model: predicting clouded apollo movements in a novel environment.

    PubMed

    Ovaskainen, Otso; Luoto, Miska; Ikonen, Iiro; Rekola, Hanna; Meyke, Evgeniy; Kuussaari, Mikko

    2008-05-01

    Functional connectivity is a fundamental concept in conservation biology because it sets the level of migration and gene flow among local populations. However, functional connectivity is difficult to measure, largely because it is hard to acquire and analyze movement data from heterogeneous landscapes. Here we apply a Bayesian state-space framework to parameterize a diffusion-based movement model using capture-recapture data on the endangered clouded apollo butterfly. We test whether the model is able to disentangle the inherent movement behavior of the species from landscape structure and sampling artifacts, which is a necessity if the model is to be used to examine how movements depend on landscape structure. We show that this is the case by demonstrating that the model, parameterized with data from a reference landscape, correctly predicts movements in a structurally different landscape. In particular, the model helps to explain why a movement corridor that was constructed as a management measure failed to increase movement among local populations. We illustrate how the parameterized model can be used to derive biologically relevant measures of functional connectivity, thus linking movement data with models of spatial population dynamics.

  19. Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Masson, Michael E. J.; Stege, Ulrike

    2012-01-01

    Recent studies on a computationally hard visual optimization problem, the Traveling Salesperson Problem (TSP), indicate that humans are capable of finding close to optimal solutions in near-linear time. The current study is a preliminary step in investigating human performance on another hard problem, the Minimum Vertex Cover Problem, in which…

  20. Hill-Climbing search and diversification within an evolutionary approach to protein structure prediction.

    PubMed

    Chira, Camelia; Horvath, Dragos; Dumitrescu, D

    2011-07-30

    Proteins are complex structures made of amino acids having a fundamental role in the correct functioning of living cells. The structure of a protein is the result of the protein folding process. However, the general principles that govern the folding of natural proteins into a native structure are unknown. The problem of predicting a protein structure of minimum energy starting from the unfolded amino acid sequence is a highly complex and important task in molecular and computational biology. Protein structure prediction has important applications in fields such as drug design and disease prediction. The protein structure prediction problem is NP-hard even in simplified lattice protein models. An evolutionary model based on hill-climbing genetic operators is proposed for protein structure prediction in the hydrophobic-polar (HP) model. Problem-specific search operators are implemented and applied using a steepest-ascent hill-climbing approach. Furthermore, the proposed model enforces an explicit diversification stage during the evolution in order to avoid local optima. The main features of the resulting evolutionary algorithm - the hill-climbing mechanism and the diversification strategy - are evaluated in a set of numerical experiments for the protein structure prediction problem to assess their impact on the efficiency of the search process. Furthermore, the emerging consolidated model is compared to relevant algorithms from the literature on a set of difficult bidimensional instances from lattice protein models. The results obtained by the proposed algorithm are promising and competitive with those of related methods.
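
    A sketch of the objective such a search minimizes in the 2D HP lattice model, assuming the standard convention that energy is the negative count of non-consecutive H-H contacts; the hill-climbing operators themselves are not shown.

    ```python
    def hp_energy(sequence, coords):
        """sequence: string of 'H'/'P'; coords: list of (x, y) lattice points.
        Returns -#(topological H-H contacts); self-intersecting conformations
        get +inf so a hill climber rejects them."""
        if len(set(coords)) != len(coords):
            return float("inf")
        pos = {c: i for i, c in enumerate(coords)}
        contacts = 0
        for i, (x, y) in enumerate(coords):
            if sequence[i] != 'H':
                continue
            for nb in ((x + 1, y), (x, y + 1)):   # check right/up: each pair once
                j = pos.get(nb)
                if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                    contacts += 1
        return -contacts

    # 'U'-shaped fold bringing the two end H's into contact: energy -1.
    print(hp_energy("HPPH", [(0, 0), (0, 1), (1, 1), (1, 0)]))
    ```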

  1. Effective hybrid teaching-learning-based optimization algorithm for balancing two-sided assembly lines with multiple constraints

    NASA Astrophysics Data System (ADS)

    Tang, Qiuhua; Li, Zixiang; Zhang, Liping; Floudas, C. A.; Cao, Xiaojun

    2015-09-01

    Due to the NP-hardness of the two-sided assembly line balancing (TALB) problem, multiple constraints existing in real applications are less studied, especially when one task is involved in several constraints. In this paper, an effective hybrid algorithm is proposed to address the TALB problem with multiple constraints (TALB-MC). Considering the discrete attribute of TALB-MC and the continuous attribute of the standard teaching-learning-based optimization (TLBO) algorithm, the random-keys method is employed for the task permutation representation, in order to bridge the gap between them. Subsequently, a special mechanism for handling multiple constraints is developed. In this mechanism, the direction constraint of each task is ensured by a direction check and adjustment. The zoning constraints and the synchronism constraints are satisfied by teasing out the hidden correlations among constraints. The positional constraint is allowed to be violated to some extent in decoding and is punished in the cost function. Finally, with the TLBO seeking the global optimum, variable neighborhood search (VNS) is further hybridized to extend the local search space. The experimental results show that the proposed hybrid algorithm outperforms the late acceptance hill-climbing algorithm (LAHC) for TALB-MC in most cases, especially for large-size problems with multiple constraints, and demonstrates a good balance between exploration and exploitation. This research proposes an effective and efficient algorithm for solving the TALB-MC problem by hybridizing the TLBO and VNS.
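
    A minimal sketch of the continuous 'teacher phase' at the core of TLBO, which the random-keys encoding makes applicable to task permutations (each row of keys is decoded by sorting, not shown); the greedy acceptance step and the learner phase are omitted, and parameter choices are assumptions.

    ```python
    import numpy as np

    def teaching_phase(population, fitness):
        """One TLBO teacher step on a (n_learners, n_keys) array: move every
        learner toward the best solution and away from the class mean,
        X_new = X + r * (Teacher - TF * Mean), with teaching factor TF in {1, 2}."""
        rng = np.random.default_rng()
        teacher = population[np.argmin(fitness)]      # minimization assumed
        mean = population.mean(axis=0)
        tf = rng.integers(1, 3)                       # TF = 1 or 2
        r = rng.random(population.shape)
        return population + r * (teacher - tf * mean)
    ```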

  2. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-07-17

    The explosion of biological data has largely influenced the focus of today’s biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. Such data are modelled as bi-partite graphs, where nodes correspond to the different data points, for instance mutations and diseases, and weighted edges relate to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm that is based on fixed-parameter tractability. We evaluated its performance on artificial graphs first. Afterwards, as an example application, we applied our Java implementation to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved genotype-to-phenotype associations. We believe that our results will serve as guidelines for further wet lab investigations. In general, our software can be applied to any kind of data that can be modelled as a bi-partite graph. To our knowledge, it is the fastest exact method for the weighted bi-cluster editing problem.
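
    The paper's solver handles the weighted problem; as a hedged illustration of why the unweighted variant is fixed-parameter tractable in the number of edits k, note that a bi-partite graph is a disjoint union of bi-cliques exactly when it has no induced path on four vertices, so one can find such a path and branch on the four possible edits. The sketch below is a generic search tree, not the authors' algorithm.

    ```python
    def find_induced_p4(adj):
        """Return an induced 4-vertex path (a, b, c, d), or None.
        adj maps vertex -> set of neighbours; the graph is assumed bi-partite,
        so a-c and b-d are non-adjacent automatically and only a-d is checked."""
        for b in adj:
            for c in adj[b]:
                for a in adj[b] - {c}:
                    for d in adj[c] - {b, a}:
                        if d not in adj[a]:
                            return a, b, c, d
        return None

    def edit(adj, u, v, add):
        """Copy the graph and add or delete the edge {u, v}."""
        new = {x: set(s) for x, s in adj.items()}
        if add:
            new[u].add(v); new[v].add(u)
        else:
            new[u].discard(v); new[v].discard(u)
        return new

    def bicluster_editing(adj, k):
        """True iff at most k edge edits yield disjoint bi-cliques."""
        p4 = find_induced_p4(adj)
        if p4 is None:
            return True
        if k == 0:
            return False
        a, b, c, d = p4
        # Branch: delete one of the three path edges, or add the missing chord.
        return any(bicluster_editing(g, k - 1) for g in (
            edit(adj, a, b, False), edit(adj, b, c, False),
            edit(adj, c, d, False), edit(adj, a, d, True)))

    path4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
    print(bicluster_editing(path4, 0), bicluster_editing(path4, 1))  # False True
    ```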

  3. Integrated simultaneous analysis of different biomedical data types with exact weighted bi-cluster editing.

    PubMed

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2012-06-01

    The explosion of biological data has largely influenced the focus of today's biology research. Integrating and analysing large quantities of data to provide meaningful insights has become the main challenge for biologists and bioinformaticians. One major problem is the combined analysis of data of different types, such as phenotypes and genotypes. Such data are modelled as bi-partite graphs, where nodes correspond to the different data points, for instance mutations and diseases, and weighted edges relate to associations between them. Bi-clustering is a special case of clustering designed for partitioning two different types of data simultaneously. We present a bi-clustering approach that solves the NP-hard weighted bi-cluster editing problem by transforming a given bi-partite graph into a disjoint union of bi-cliques. Here we contribute an exact algorithm that is based on fixed-parameter tractability. We evaluated its performance on artificial graphs first. Afterwards, as an example application, we applied our Java implementation to genome-wide association study (GWAS) data, aiming to discover new, previously unobserved genotype-to-phenotype associations. We believe that our results will serve as guidelines for further wet lab investigations. In general, our software can be applied to any kind of data that can be modelled as a bi-partite graph. To our knowledge, it is the fastest exact method for the weighted bi-cluster editing problem.

  4. Non-Evolutionary Algorithms for Scheduling Dependent Tasks in Distributed Heterogeneous Computing Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayne F. Boyer; Gurdeep S. Hura

    2005-09-01

    The problem of obtaining an optimal matching and scheduling of interdependent tasks in distributed heterogeneous computing (DHC) environments is well known to be NP-hard. In a DHC system, task execution time depends on the machine to which a task is assigned, and task precedence constraints are represented by a directed acyclic graph. Recent research in evolutionary techniques has shown that genetic algorithms usually obtain more efficient schedules than other known algorithms. We propose a non-evolutionary random scheduling (RS) algorithm for efficient matching and scheduling of interdependent tasks in a DHC system. RS is a succession of randomized task orderings and a heuristic mapping from task order to schedule. Randomized task ordering is effectively a topological sort where the outcome may be any possible task order for which the task precedence constraints are maintained. A detailed comparison to existing evolutionary techniques (GA and PSGA) shows the proposed algorithm is less complex than evolutionary techniques, computes schedules in less time, requires less memory, and has fewer tuning parameters. Simulation results show that the average schedules produced by RS are approximately as efficient as PSGA schedules for all cases studied and clearly more efficient than PSGA for certain cases. The standard formulation for the scheduling problem addressed in this paper is Rm|prec|Cmax.
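
    The randomized-task-ordering step described above can be realized as Kahn's topological sort with a random choice among the ready tasks; every precedence-respecting order then has a nonzero probability of being produced. This is a sketch under an assumed input format, not the authors' code.

    ```python
    import random

    def random_topological_order(deps, rng=random):
        """deps maps each task to the set of tasks that must precede it."""
        remaining = {t: set(d) for t, d in deps.items()}
        order = []
        while remaining:
            ready = [t for t, d in remaining.items() if not d]
            if not ready:
                raise ValueError("dependency cycle detected")
            task = rng.choice(ready)   # the only randomized step
            order.append(task)
            del remaining[task]
            for d in remaining.values():
                d.discard(task)
        return order

    deps = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
    print(random_topological_order(deps))  # e.g. ['a', 'c', 'b', 'd']
    ```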

  5. Influence maximization based on partial network structure information: A comparative analysis on seed selection heuristics

    NASA Astrophysics Data System (ADS)

    Erkol, Şirag; Yücel, Gönenç

    In this study, the problem of seed selection is investigated. This problem is mainly treated as an optimization problem, which has been proved NP-hard. There are several heuristic approaches in the literature, most of which use algorithmic heuristics. These approaches mainly focus on the trade-off between computational complexity and accuracy. Although the accuracy of algorithmic heuristics is high, they also have high computational complexity. Furthermore, the literature generally assumes that complete information on the structure and features of a network is available, which is often not the case. For this study, a simulation model is constructed that is capable of creating networks, performing seed selection heuristics, and simulating diffusion models. Novel metric-based seed selection heuristics that rely only on partial information are proposed and tested using the simulation model. These heuristics use local information available from nodes in the synthetically created networks. The performances of the heuristics are comparatively analyzed on three different network types. The results clearly show that the performance of a heuristic depends on the structure of a network; a heuristic should therefore be selected after investigating the properties of the network at hand. More importantly, the partial-information approach provided promising results: in certain cases, selection heuristics that rely only on partial network information perform very close to similar heuristics that require complete network data.
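
    The paper's metric-based heuristics are not reproduced here; the sketch below is only a generic stand-in showing the flavor of partial-information seeding: probe a random fraction of nodes, observe just their neighbourhoods, and seed the k highest observed degrees. The probe scheme and parameter names are assumptions.

    ```python
    import random

    def partial_info_seeds(neighbors, k, probe_fraction=0.2, rng=random):
        """Pick k seeds using only the adjacency lists of a random probe set."""
        nodes = list(neighbors)
        probed = rng.sample(nodes, max(1, int(probe_fraction * len(nodes))))
        observed_degree = {v: len(neighbors[v]) for v in probed}
        ranked = sorted(observed_degree, key=observed_degree.get, reverse=True)
        return ranked[:k]

    net = {0: [1, 2, 3], 1: [0], 2: [0, 3], 3: [0, 2], 4: [5], 5: [4]}
    print(partial_info_seeds(net, k=2))
    ```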

  6. Diagnosing the pathophysiologic mechanisms of nocturnal polyuria.

    PubMed

    Goessaert, An-Sofie; Krott, Louise; Hoebeke, Piet; Vande Walle, Johan; Everaert, Karel

    2015-02-01

    Diagnosis of nocturnal polyuria (NP) is based on a bladder diary. Addition of a renal function profile (RFP) for analysis of concentrating and solute-conserving capacity allows differentiation of NP pathophysiology and could facilitate individualized treatment. To map circadian rhythms of water and solute diuresis, we compared participants with and without NP. This prospective observational study was carried out in Ghent University Hospital between 2011 and 2013. Participants with and without NP completed a 72-h bladder diary. RFP, free water clearance (FWC), and creatinine, solute, sodium, and urea clearance were measured for all participants. The study participants were divided into those with (n=77) and those without (n=35) NP. The mean age was 57 yr (SD 16 yr) and 41% of the participants were female. Compared to participants without NP, the NP group exhibited a higher diuresis rate throughout the night (p=0.015); higher FWC (p=0.013) and lower osmolality (p=0.030) at the start of the night; and persistently higher sodium clearance during the night (p<0.001). The pathophysiologic mechanism of NP was identified as water diuresis alone in 22%, sodium diuresis alone in 19%, and a combination of water and sodium diuresis in 47% of the NP group. RFP measurement in first-line NP screening to discriminate between water and solute diuresis as pathophysiologic mechanisms complements the bladder diary and could facilitate optimal individualized treatment of patients with NP. We evaluated eight urine samples collected over 24 h to detect the underlying problem in NP. We found that NP can be attributed to water diuresis, sodium diuresis, or a combination of both. This urinalysis can be used to adapt treatment according to the underlying mechanism in patients with bothersome consequences of NP, such as nocturia and urinary incontinence. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  7. Balancing accuracy, efficiency, and flexibility in a radiative transfer parameterization for dynamical models

    NASA Astrophysics Data System (ADS)

    Pincus, R.; Mlawer, E. J.

    2017-12-01

    Radiation is a key process in numerical models of the atmosphere. The problem is well understood, and the parameterization of radiation has seen relatively few conceptual advances in the past 15 years. It is nonetheless often the single most expensive component of all physical parameterizations, despite being computed less frequently than other terms. This combination of cost and maturity suggests value in a single radiation parameterization that could be shared across models; devoting effort to a single parameterization might allow fine-tuning for efficiency. The challenge lies in coupling this parameterization to many disparate representations of clouds and aerosols. This talk will describe RRTMGP, a new radiation parameterization that seeks to balance efficiency and flexibility. This balance is struck by isolating computational tasks in "kernels" that expose as much fine-grained parallelism as possible. These have simple interfaces and are interoperable across programming languages so that they might be replaced by alternative implementations in domain-specific languages. Coupling to the host model makes use of object-oriented features of Fortran 2003, minimizing branching within the kernels and the amount of data that must be transferred. We will show accuracy and efficiency results for a globally representative set of atmospheric profiles using a relatively high-resolution spectral discretization.

  8. An outer approximation method for the road network design problem

    PubMed Central

    2018-01-01

    Best investment in road infrastructure, known as the network design problem, is perceived as a fundamental benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as the bilevel Discrete Network Design Problem (DNDP), which is of NP-hard computational complexity. We tackle this complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation, as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint. This results in a mixed-integer nonlinear programming (MINLP) problem, which is then solved using the Outer Approximation (OA) algorithm. (ii) We further relax the multi-commodity UE-TAP to a single-commodity MILP problem; that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method proves highly efficient for solving the DNDP for the large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (as the termination criterion), global optimum solutions are quickly reached in most cases; otherwise, good solutions (close to global optimum solutions) are found in early iterations. Comparative analysis on the networks of Gao and Sioux-Falls shows that for such a non-exact method, global optimum solutions are found in fewer iterations than in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well. PMID:29590111

  9. An outer approximation method for the road network design problem.

    PubMed

    Asadi Bagloee, Saeed; Sarvi, Majid

    2018-01-01

    Best investment in road infrastructure, known as the network design problem, is perceived as a fundamental benchmark problem in transportation. Given a set of candidate road projects with associated costs, finding the best subset with respect to a limited budget is known as the bilevel Discrete Network Design Problem (DNDP), which is of NP-hard computational complexity. We tackle this complexity with a hybrid exact-heuristic methodology based on a two-stage relaxation, as follows: (i) the bilevel feature is relaxed to a single-level problem by taking the network performance function of the upper level into the user equilibrium traffic assignment problem (UE-TAP) in the lower level as a constraint. This results in a mixed-integer nonlinear programming (MINLP) problem, which is then solved using the Outer Approximation (OA) algorithm. (ii) We further relax the multi-commodity UE-TAP to a single-commodity MILP problem; that is, the multiple OD pairs are aggregated to a single OD pair. This methodology has two main advantages: (i) the method proves highly efficient for solving the DNDP for the large-sized network of Winnipeg, Canada. The results suggest that within a limited number of iterations (as the termination criterion), global optimum solutions are quickly reached in most cases; otherwise, good solutions (close to global optimum solutions) are found in early iterations. Comparative analysis on the networks of Gao and Sioux-Falls shows that for such a non-exact method, global optimum solutions are found in fewer iterations than in some analytically exact algorithms in the literature. (ii) Integration of the objective function among the constraints provides a commensurate capability to tackle the multi-objective (or multi-criteria) DNDP as well.

  10. Must "Hard Problems" Be Hard?

    ERIC Educational Resources Information Center

    Kolata, Gina

    1985-01-01

    To determine how hard it is for computers to solve problems, researchers have classified groups of problems (polynomial hierarchy) according to how much time they seem to require for their solutions. A difficult and complex proof is offered which shows that a combinatorial approach (using Boolean circuits) may resolve the problem. (JN)

  11. The Computational Complexity of the Kakuro Puzzle, Revisited

    NASA Astrophysics Data System (ADS)

    Ruepp, Oliver; Holzer, Markus

    We present a new proof of NP-completeness for the problem of solving instances of the Japanese pencil puzzle Kakuro (also known as Cross-Sum). While the NP-completeness of Kakuro puzzles has been shown before [T. Seta. The complexity of CROSS SUM. IPSJ SIG Notes, AL-84:51-58, 2002], there are still two interesting aspects to our proof: we show NP-completeness for a new variant of Kakuro that has not been investigated before, thus improving the aforementioned result. Moreover, some parts of the proof have been generated automatically, using an interesting technique involving SAT solvers.

  12. Quantum annealing correction with minor embedding

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Albash, Tameem; Paz-Silva, Gerardo; Hen, Itay; Lidar, Daniel A.

    2015-10-01

    Quantum annealing provides a promising route for the development of quantum optimization devices, but the usefulness of such devices will be limited in part by the range of implementable problems as dictated by hardware constraints. To overcome constraints imposed by restricted connectivity between qubits, a larger set of interactions can be approximated using minor embedding techniques whereby several physical qubits are used to represent a single logical qubit. However, minor embedding introduces new types of errors due to its approximate nature. We introduce and study quantum annealing correction schemes designed to improve the performance of quantum annealers in conjunction with minor embedding, thus leading to a hybrid scheme defined over an encoded graph. We argue that this scheme can be efficiently decoded using an energy minimization technique provided the density of errors does not exceed the per-site percolation threshold of the encoded graph. We test the hybrid scheme using a D-Wave Two processor on problems for which the encoded graph is a two-level grid and the Ising model is known to be NP-hard. The problems we consider are frustrated Ising model problem instances with "planted" (a priori known) solutions. Applied in conjunction with optimized energy penalties and decoding techniques, we find that this approach enables the quantum annealer to solve minor embedded instances with significantly higher success probability than it would without error correction. Our work demonstrates that quantum annealing correction can and should be used to improve the robustness of quantum annealing not only for natively embeddable problems but also when minor embedding is used to extend the connectivity of physical devices.

  13. A learning approach to the bandwidth multicolouring problem

    NASA Astrophysics Data System (ADS)

    Akbari Torkestani, Javad

    2016-05-01

    In this article, we consider the bandwidth multicolouring problem (BMCP), a generalisation of the vertex colouring problem in which a set of colours is assigned to each vertex such that the difference between the colours assigned to a vertex and to its neighbours is no less than a predefined threshold. It is shown that the proposed method can be applied to solve the bandwidth colouring problem (BCP) as well. BMCP is known to be NP-hard in graph theory, and so a large number of approximation solutions, as well as exact algorithms, have been proposed to solve it. In this article, two learning automata-based approximation algorithms are proposed for estimating a near-optimal solution to the BMCP. We show, for the first proposed algorithm, that by choosing a proper learning rate the algorithm finds the optimal solution with a probability close enough to unity. Moreover, we compute the worst-case time complexity of the first algorithm for finding a 1/(1-ɛ)-optimal solution to the given problem. The main advantage of this method is that a trade-off between the running time of the algorithm and the colour set size (colouring optimality) can be made by a proper choice of the learning rate. Finally, it is shown that the running time of the proposed algorithm is independent of the graph size, and so it is a scalable algorithm for large graphs. The second proposed algorithm is compared with some well-known colouring algorithms, and the results show the efficiency of the proposed algorithm in terms of colour set size and running time.
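
    For context, a variable-structure learning automaton keeps a probability vector over actions and reinforces rewarded actions; the linear reward-inaction update below is the textbook scheme (illustrative only; the paper's automata choose colours with a problem-specific reward signal).

    ```python
    import random

    def lri_update(p, chosen, rewarded, a=0.05):
        """Linear reward-inaction: on reward, shift probability mass toward
        the chosen action; on penalty, leave the vector unchanged."""
        if rewarded:
            p = [pi + a * (1 - pi) if i == chosen else pi * (1 - a)
                 for i, pi in enumerate(p)]
        return p

    # Action 2 is rewarded most often; its probability climbs toward 1.
    p = [1 / 3] * 3
    for _ in range(500):
        action = random.choices(range(3), weights=p)[0]
        rewarded = random.random() < (0.8 if action == 2 else 0.2)
        p = lri_update(p, action, rewarded)
    print([round(x, 3) for x in p])
    ```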

  14. Characterization of the probabilistic traveling salesman problem.

    PubMed

    Bowler, Neill E; Fink, Thomas M A; Ball, Robin C

    2003-09-01

    We show that stochastic annealing can be successfully applied to gain new results on the probabilistic traveling salesman problem. The probabilistic "traveling salesman" must decide on an a priori order in which to visit n cities (randomly distributed over a unit square) before learning that some cities can be omitted. We find the optimized average length of the pruned tour follows E(L_pruned) = sqrt(np) (0.872 - 0.105p) f(np), where p is the probability of a city needing to be visited, and f(np) → 1 as np → ∞. The average length of the a priori tour (before omitting any cities) is found to follow E(L_a priori) = sqrt(n/p) β(p), where β(p) = 1/[1.25 - 0.82 ln(p)] is measured for 0.05 ≤ p ≤ 0.6. Scaling arguments and indirect measurements suggest that β(p) tends towards a constant for p < 0.03. Our stochastic annealing algorithm is based on limited sampling of the pruned tour lengths, exploiting the sampling error to provide the analog of thermal fluctuations in simulated (thermal) annealing. The method has general application to the optimization of functions whose cost to evaluate rises with the precision required.
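
    The fitted scaling laws quoted above can be evaluated directly; the helper below merely restates them in code (with f(np) taken as 1, which the authors report only in the limit np → ∞, so the pruned-tour value is a large-np approximation).

    ```python
    import math

    def expected_pruned_length(n, p):
        """E(L_pruned) ~ sqrt(n*p) * (0.872 - 0.105*p), taking f(np) ~ 1."""
        return math.sqrt(n * p) * (0.872 - 0.105 * p)

    def expected_a_priori_length(n, p):
        """E(L_a_priori) = sqrt(n/p) * beta(p), beta(p) = 1/(1.25 - 0.82 ln p),
        measured for 0.05 <= p <= 0.6."""
        return math.sqrt(n / p) / (1.25 - 0.82 * math.log(p))

    print(expected_pruned_length(1000, 0.3), expected_a_priori_length(1000, 0.3))
    ```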

  15. Exploiting Quantum Resonance to Solve Combinatorial Problems

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Fijany, Amir

    2006-01-01

    Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.

  16. String-like collective atomic motion in the melting and freezing of nanoparticles.

    PubMed

    Zhang, Hao; Kalvapalle, Pranav; Douglas, Jack F

    2011-12-08

    The melting of a solid represents a transition between a solid state in which atoms are localized about fixed average crystal lattice positions to a fluid state that is characterized by relative atomic disorder and particle mobility so that the atoms wander around the material as a whole, impelled by the random thermal impulses of surrounding atoms. Despite the fundamental nature and practical importance of this particle delocalization transition, there is still no fundamental theory of melting and instead one often relies on the semi-phenomenological Lindemann-Gilvarry criterion to estimate roughly the melting point as an instability of the crystal lattice. Even the earliest simulations of melting in hexagonally packed hard discs by Alder and Wainwright indicated the active role of nonlocal collective atomic motions in the melting process, and here we utilize molecular dynamics (MD) simulation to determine whether the collective particle motion observed in melting has a similar geometrical form as those in recent studies of nanoparticle (NP) interfacial dynamics and the molecular dynamics of metastable glass-forming liquids. We indeed find string-like collective atomic motion in NP melting that is remarkably similar in form to the collective interfacial motions in NPs at equilibrium and to the collective motions found in the molecular dynamics of glass-forming liquids. We also find that the spatial localization and extent of string-like motion in the course of NP melting and freezing evolves with time in distinct ways. Specifically, the collective atomic motion propagates from the NP surface and from within the NP in melting and freezing, respectively, and the average string length varies smoothly with time during melting. In contrast, the string-like cooperative motion peaks in an intermediate stage of the freezing process, reflecting a general asymmetry in the dynamics of NP superheating and supercooling. © 2011 American Chemical Society

  17. Distributed mixed-integer fuzzy hierarchical programming for municipal solid waste management. Part I: System identification and methodology development.

    PubMed

    Cheng, Guanhui; Huang, Guohe; Dong, Cong; Xu, Ye; Chen, Xiujuan; Chen, Jiapei

    2017-03-01

    Due to the heterogeneities, hierarchy, discreteness, and interactions present in municipal solid waste management (MSWM) systems such as Beijing, China, a series of socio-economic and eco-environmental problems may emerge or worsen and result in irredeemable damage in the coming decades. Meanwhile, existing studies, especially those focusing on MSWM in Beijing, can hardly reflect these complexities in system simulations or provide reliable decision support for management practices. Thus, a framework of distributed mixed-integer fuzzy hierarchical programming (DMIFHP) is developed in this study for MSWM under these complexities. Beijing is selected as a representative case. The Beijing MSWM system is comprehensively analyzed in many aspects, such as socio-economic conditions, natural conditions, spatial heterogeneities, treatment facilities, and system complexities, building a solid foundation for system simulation and optimization. Correspondingly, the MSWM system in Beijing is discretized into 235 grids to reflect spatial heterogeneity. A DMIFHP model, which is a nonlinear programming problem, is constructed to parameterize the Beijing MSWM system. To solve it rigorously, a solution algorithm is proposed based on the coupling of fuzzy programming and mixed-integer linear programming. Innovations and advantages of the DMIFHP framework are discussed. The optimal MSWM schemes and mechanism revelations are discussed in a companion paper due to length limitations.

  18. Summary of Cumulus Parameterization Workshop

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Starr, David OC.; Hou, Arthur; Newman, Paul; Sud, Yogesh

    2002-01-01

    A workshop on cumulus parameterization took place at the NASA Goddard Space Flight Center from December 3-5, 2001. The major objectives of this workshop were (1) to review the problem of representation of moist processes in large-scale models (mesoscale models, Numerical Weather Prediction models and Atmospheric General Circulation Models), (2) to review the state-of-the-art in cumulus parameterization schemes, and (3) to discuss the need for future research and applications. There were a total of 31 presentations and about 100 participants from the United States, Japan, the United Kingdom, France and South Korea. The specific presentations and discussions during the workshop are summarized in this paper.

  19. Model resin composites incorporating ZnO-NP: activity against S. mutans and physicochemical properties characterization

    PubMed Central

    Brandão, Natasha Lamego; Portela, Maristela Barbosa; Maia, Luciane Cople; Antônio, Andréa; Silva, Vanessa Loureiro Moreira e

    2018-01-01

    Although resin composites are widely used in clinical practice, the development of recurrent caries at the composite-tooth interface remains one of the principal shortcomings to be overcome in this field. Objectives To evaluate the activity against S. mutans biofilm of model resin composites incorporating different concentrations of ZnO nanoparticles (ZnO-NP) and to characterize their physicochemical properties. Materials and Methods Different concentrations of ZnO-NP (wt.%): E1=0, E2=0.5, E3=1, E4=2, E5=5 and E6=10 were incorporated into a model resin composite consisting of Bis-GMA-TEGDMA and barium borosilicate particles. The activity against S. mutans biofilm was evaluated by metabolic activity and lactic acid production. The following physicochemical properties were characterized: degree of conversion (DC%), flexural strength (FS), elastic modulus (EM), hardness (KHN), water sorption (Wsp), water solubility (Wsl) and translucency (TP). Results E3, E4, E5 and E6 decreased the biofilm metabolic activity, and E5 and E6 decreased the lactic acid production (p<0.05). E6 presented the lowest DC% (p<0.05). No significant difference in FS and EM was found for any resin composite (p>0.05). E5 and E6 presented the lowest values of KHN (p<0.05). E6 presented a higher Wsp than E1 (p<0.05) and the highest Wsl (p<0.05). The translucency significantly decreased as the ZnO-NP concentration increased (p<0.05). Conclusions The incorporation of 2-5 wt.% of ZnO-NP could endow resin composites with antibacterial activity without jeopardizing their physicochemical properties. PMID:29742262

  20. Model resin composites incorporating ZnO-NP: activity against S. mutans and physicochemical properties characterization.

    PubMed

    Brandão, Natasha Lamego; Portela, Maristela Barbosa; Maia, Luciane Cople; Antônio, Andréa; Silva, Vanessa Loureiro Moreira E; Silva, Eduardo Moreira da

    2018-01-01

    Although resin composites are widely used in clinical practice, the development of recurrent caries at the composite-tooth interface remains one of the principal shortcomings to be overcome in this field. Objectives To evaluate the activity against S. mutans biofilm of model resin composites incorporating different concentrations of ZnO nanoparticles (ZnO-NP) and to characterize their physicochemical properties. Materials and Methods Different concentrations of ZnO-NP (wt.%): E1=0, E2=0.5, E3=1, E4=2, E5=5 and E6=10 were incorporated into a model resin composite consisting of Bis-GMA-TEGDMA and barium borosilicate particles. The activity against S. mutans biofilm was evaluated by metabolic activity and lactic acid production. The following physicochemical properties were characterized: degree of conversion (DC%), flexural strength (FS), elastic modulus (EM), hardness (KHN), water sorption (Wsp), water solubility (Wsl) and translucency (TP). Results E3, E4, E5 and E6 decreased the biofilm metabolic activity, and E5 and E6 decreased the lactic acid production (p<0.05). E6 presented the lowest DC% (p<0.05). No significant difference in FS and EM was found for any resin composite (p>0.05). E5 and E6 presented the lowest values of KHN (p<0.05). E6 presented a higher Wsp than E1 (p<0.05) and the highest Wsl (p<0.05). The translucency significantly decreased as the ZnO-NP concentration increased (p<0.05). Conclusions The incorporation of 2-5 wt.% of ZnO-NP could endow resin composites with antibacterial activity without jeopardizing their physicochemical properties.

  1. Mathematical Problem-Solving Styles in the Education of Deaf and Hard-of-Hearing Individuals

    ERIC Educational Resources Information Center

    Erickson, Elizabeth E. A.

    2012-01-01

    This study explored the mathematical problem-solving styles of middle school and high school deaf and hard-of-hearing students and the mathematical problem-solving styles of the mathematics teachers of middle school and high school deaf and hard-of-hearing students. The research involved 45 deaf and hard-of-hearing students and 19 teachers from a…

  2. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parameterized process rate terms. Instead, these uncertainties are constrained by observations using a Markov chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has the flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
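
    BOSS itself is not specified in this abstract; the sketch below is only a minimal random-walk Metropolis loop of the kind used inside such Bayesian frameworks, here constraining an assumed power-law process rate a*q**b against synthetic observations. The rate form, priors, and noise level are all assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def process_rate(q, a, b):
        """Assumed power-law form for a parameterized process rate."""
        return a * q**b

    # Synthetic observations from a "true" parameter setting plus noise.
    q_obs = np.linspace(0.1, 1.0, 20)
    y_obs = process_rate(q_obs, a=2.0, b=1.5) + rng.normal(0, 0.05, q_obs.size)

    def log_posterior(theta, sigma=0.05):
        a, b = theta
        if a <= 0:                       # flat prior with positivity constraint
            return -np.inf
        resid = y_obs - process_rate(q_obs, a, b)
        return -0.5 * np.sum(resid**2) / sigma**2

    theta = np.array([1.0, 1.0])         # random-walk Metropolis over (a, b)
    logp = log_posterior(theta)
    samples = []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.05, 2)
        logp_prop = log_posterior(prop)
        if np.log(rng.random()) < logp_prop - logp:
            theta, logp = prop, logp_prop
        samples.append(theta.copy())
    print(np.mean(samples[1000:], axis=0))  # posterior mean near (2.0, 1.5)
    ```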

  3. The zonally averaged transport characteristics of the atmosphere as determined by a general circulation model

    NASA Technical Reports Server (NTRS)

    Plumb, R. A.

    1985-01-01

    Two-dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three-dimensional general circulation models, while avoiding some of the gross simplifications of one-dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.

  4. Brain Surface Conformal Parameterization Using Riemann Surface Structure

    PubMed Central

    Wang, Yalin; Lui, Lok Ming; Gu, Xianfeng; Hayashi, Kiralee M.; Chan, Tony F.; Toga, Arthur W.; Thompson, Paul M.; Yau, Shing-Tung

    2011-01-01

    In medical imaging, parameterized 3-D surface models are useful for anatomical modeling and visualization, statistical comparisons of anatomy, and surface-based registration and signal processing. Here we introduce a parameterization method based on Riemann surface structure, which uses a special curvilinear net structure (conformal net) to partition the surface into a set of patches that can each be conformally mapped to a parallelogram. The resulting surface subdivision and the parameterizations of the components are intrinsic and stable (their solutions tend to be smooth functions and the boundary conditions of the Dirichlet problem can be enforced). Conformal parameterization also helps transform partial differential equations (PDEs) that may be defined on 3-D brain surface manifolds to modified PDEs on a two-dimensional parameter domain. Since the Jacobian matrix of a conformal parameterization is diagonal, the modified PDE on the parameter domain is readily solved. To illustrate our techniques, we computed parameterizations for several types of anatomical surfaces in 3-D magnetic resonance imaging scans of the brain, including the cerebral cortex, hippocampi, and lateral ventricles. For surfaces that are topologically homeomorphic to each other and have similar geometrical structures, we show that the parameterization results are consistent and the subdivided surfaces can be matched to each other. Finally, we present an automatic sulcal landmark location algorithm by solving PDEs on cortical surfaces. The landmark detection results are used as constraints for building conformal maps between surfaces that also match explicitly defined landmarks. PMID:17679336

  5. Parameterizations for ensemble Kalman inversion

    NASA Astrophysics Data System (ADS)

    Chada, Neil K.; Iglesias, Marco A.; Roininen, Lassi; Stuart, Andrew M.

    2018-05-01

    The use of ensemble methods to solve inverse problems is attractive because it is a derivative-free methodology which is also well-adapted to parallelization. In its basic iterative form the method produces an ensemble of solutions which lie in the linear span of the initial ensemble. Choice of the parameterization of the unknown field is thus a key component of the success of the method. We demonstrate how both geometric ideas and hierarchical ideas can be used to design effective parameterizations for a number of applied inverse problems arising in electrical impedance tomography, groundwater flow and source inversion. In particular we show how geometric ideas, including the level set method, can be used to reconstruct piecewise continuous fields, and we show how hierarchical methods can be used to learn key parameters in continuous fields, such as length-scales, resulting in improved reconstructions. Geometric and hierarchical ideas are combined in the level set method to find piecewise constant reconstructions with interfaces of unknown topology.
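
    One iteration of the basic (non-hierarchical, non-level-set) ensemble Kalman inversion update is compact enough to sketch; the following uses the standard perturbed-observation form under an assumed forward map G and noise covariance Gamma, and is not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def eki_step(U, G, y, Gamma):
        """U: (J, d) parameter ensemble; G: forward map R^d -> R^m;
        y: (m,) data; Gamma: (m, m) observation noise covariance."""
        W = np.array([G(u) for u in U])          # forward evaluations (J, m)
        du = U - U.mean(axis=0)                  # parameter anomalies
        dw = W - W.mean(axis=0)                  # output anomalies
        J = U.shape[0]
        Cuw = du.T @ dw / (J - 1)                # cross-covariance (d, m)
        Cww = dw.T @ dw / (J - 1)                # output covariance (m, m)
        K = Cuw @ np.linalg.inv(Cww + Gamma)     # Kalman-type gain (d, m)
        Y = y + rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
        return U + (Y - W) @ K.T                 # perturbed-observation update

    # Toy linear inverse problem: recover u from y = A u + noise.
    A = rng.normal(size=(5, 3))
    u_true = np.array([1.0, -2.0, 0.5])
    Gamma = 0.01 * np.eye(5)
    y = A @ u_true + rng.multivariate_normal(np.zeros(5), Gamma)
    U = rng.normal(size=(50, 3))                 # initial ensemble
    for _ in range(10):
        U = eki_step(U, lambda u: A @ u, y, Gamma)
    print(U.mean(axis=0))                        # approaches u_true
    ```

    Note that, as the abstract states, the iterates stay in the linear span of the initial ensemble, which is why the choice of parameterization for the unknown field matters.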

  6. Development of spent fuel reprocessing process based on selective sulfurization: Study on the Pu, Np and Am sulfurization

    NASA Astrophysics Data System (ADS)

    Kirishima, Akira; Amano, Yuuki; Nihei, Toshifumi; Mitsugashira, Toshiaki; Sato, Nobuaki

    2010-03-01

    For the recovery of fissile materials from spent nuclear fuel, we have proposed a novel reprocessing process based on selective sulfurization of fission products (FPs). The key concept of this process is the utilization of a unique chemical property of carbon disulfide (CS2): it works as a reductant for U3O8 but as a sulfurizing agent for minor actinides and lanthanides. Sulfurized FPs and minor actinides (MA) are highly soluble in dilute nitric acid, whereas UO2 and PuO2 are hardly soluble; therefore, FPs and MA can be removed from the uranium and plutonium matrix by selective dissolution. As a feasibility study of this new concept, the sulfurization behaviours of U, Pu, Np, Am and Eu are investigated in this paper by thermodynamic calculations, phase analysis of chemical analogue elements, and tracer experiments.

  7. Grouper: A Compact, Streamable Triangle Mesh Data Structure.

    PubMed

    Luffel, Mark; Gurung, Topraj; Lindstrom, Peter; Rossignac, Jarek

    2013-05-08

    We present Grouper: an all-in-one compact file format, random-access data structure, and streamable representation for large triangle meshes. Similarly to the recently published SQuad representation, Grouper represents the geometry and connectivity of a mesh by grouping vertices and triangles into fixed-size records, most of which store two adjacent triangles and a shared vertex. Unlike SQuad, however, Grouper interleaves geometry with connectivity and uses a new connectivity representation to ensure that vertices and triangles can be stored in a coherent order that enables memory-efficient sequential stream processing. We present a linear-time construction algorithm that allows streaming out Grouper meshes using a small memory footprint while preserving the initial ordering of vertices. As part of this construction, we show how the problem of assigning vertices and triangles to groups reduces to a well-known NP-hard optimization problem, and present a simple yet effective heuristic solution that performs well in practice. Our array-based Grouper representation also doubles as a triangle mesh data structure that allows direct access to vertices and triangles. Storing only about two integer references per triangle, Grouper answers both incidence and adjacency queries in amortized constant time. Our compact representation enables data-parallel processing on multicore computers, instant partitioning and fast transmission for distributed processing, as well as efficient out-of-core access.

  8. A discrete mechanics approach to dislocation dynamics in BCC crystals

    NASA Astrophysics Data System (ADS)

    Ramasubramaniam, A.; Ariza, M. P.; Ortiz, M.

    2007-03-01

    A discrete mechanics approach to modeling the dynamics of dislocations in BCC single crystals is presented. Ideas are borrowed from discrete differential calculus and algebraic topology and suitably adapted to crystal lattices. In particular, the extension of a crystal lattice to a CW complex allows for convenient manipulation of forms and fields defined over the crystal. Dislocations are treated within the theory as energy-minimizing structures that lead to locally lattice-invariant but globally incompatible eigendeformations. The discrete nature of the theory eliminates the need for regularization of the core singularity and inherently allows for dislocation reactions and complicated topological transitions. The quantization of slip to integer multiples of the Burgers vector leads to a large integer optimization problem. A novel approach to solving this NP-hard problem, based on considerations of metastability, is proposed. A numerical example that applies the method to study the emanation of dislocation loops from a point source of dilatation in a large BCC crystal is presented. The structure and energetics of BCC screw dislocation cores, as obtained via the present formulation, are also considered and shown to be in good agreement with available atomistic studies. The method thus provides a realistic avenue for mesoscale simulations of dislocation-based crystal plasticity with fully atomistic resolution.

  9. Layout compliance for triple patterning lithography: an iterative approach

    NASA Astrophysics Data System (ADS)

    Yu, Bei; Garreton, Gilda; Pan, David Z.

    2014-10-01

    As the semiconductor process scales down further, the industry encounters many lithography-related issues. At the 14nm logic node and beyond, triple patterning lithography (TPL) is one of the most promising techniques for the Metal1 layer and possibly the Via0 layer. Layout decomposition, one of the most challenging problems in TPL, has recently received increasing attention from both industry and academia. Ideally, the decomposer should point out locations in the layout that are not triple-patterning-decomposable and therefore require manual intervention by designers. A traditional decomposition flow is an iterative process in which each iteration consists of an automatic layout decomposition step and a manual layout modification task. However, due to the NP-hardness of triple patterning layout decomposition, automatic full-chip layout decomposition requires long computation times, and design closure issues therefore linger in the traditional flow. Challenged by this issue, we present a novel incremental layout decomposition framework to facilitate accelerated iterative decomposition. In the first iteration, our decomposer not only points out all conflicts but also provides suggestions to fix them. After the layout modification, instead of solving the full-chip problem from scratch, our decomposer can provide a quick solution for a selected portion of the layout. We believe this framework is efficient in terms of performance and is designer-friendly.
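
    The NP-hardness mentioned above stems from the fact that TPL layout decomposition reduces to 3-colouring of the conflict graph (features as nodes, edges between features too close to share a mask). A minimal backtracking sketch of that core decision, far simpler than the paper's incremental framework, is:

    ```python
    def three_color(conflict, colors=3):
        """Backtracking mask assignment for a TPL conflict graph.
        conflict maps each feature to the set of features it conflicts with.
        Returns a feature -> mask dict, or None if no 3-colouring exists."""
        nodes, assignment = list(conflict), {}

        def assign(i):
            if i == len(nodes):
                return True
            v = nodes[i]
            used = {assignment[u] for u in conflict[v] if u in assignment}
            for mask in range(colors):
                if mask not in used:
                    assignment[v] = mask
                    if assign(i + 1):
                        return True
                    del assignment[v]
            return False

        return assignment if assign(0) else None

    # K4 is not 3-colourable: the decomposer reports a conflict to fix manually.
    k4 = {v: {u for u in range(4) if u != v} for v in range(4)}
    print(three_color(k4))  # None -> manual layout modification needed
    ```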

  10. MANGO: a new approach to multiple sequence alignment.

    PubMed

    Zhang, Zefeng; Lin, Hao; Li, Ming

    2007-01-01

    Multiple sequence alignment is a classical and challenging task for biological sequence analysis. The problem is NP-hard, and full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art multiple sequence alignment programs suffer from the 'once a gap, always a gap' phenomenon. Is there a radically new way to do multiple sequence alignment? This paper introduces a novel and orthogonal multiple sequence alignment method, using multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds are provably significantly more sensitive than consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks, showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0 and Kalign 2.0.
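
    A spaced seed is a binary pattern in which only the '1' positions must match, which is what makes optimized spaced seeds more sensitive than consecutive k-mers. The sketch below (a generic illustration with an arbitrary seed, not MANGO's optimized seed set) lists all seed hits between two sequences:

    ```python
    def spaced_seed_hits(s1, s2, seed="1101011"):
        """Yield (i, j) offsets where s1 and s2 agree on every '1' of the seed."""
        care = [k for k, c in enumerate(seed) if c == "1"]
        span = len(seed)
        for i in range(len(s1) - span + 1):
            for j in range(len(s2) - span + 1):
                if all(s1[i + k] == s2[j + k] for k in care):
                    yield i, j

    print(list(spaced_seed_hits("ACGTACGTAC", "TTACGTAGGT")))
    ```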

  11. Applying graph partitioning methods in measurement-based dynamic load balancing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatele, Abhinav; Fourestier, Sebastien; Menon, Harshitha

    Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a programming model based on migratable objects, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores, and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance than the METIS partitioners in several cases, both in terms of application execution time and the number of objects migrated.

  12. Energy-landscape paving for prediction of face-centered-cubic hydrophobic-hydrophilic lattice model proteins

    NASA Astrophysics Data System (ADS)

    Liu, Jingfa; Song, Beibei; Liu, Zhaoxia; Huang, Weibo; Sun, Yuanyuan; Liu, Wenjie

    2013-11-01

    Protein structure prediction (PSP) is a classical NP-hard problem in computational biology. The energy-landscape paving (ELP) method is a heuristic global optimization method that has been successfully applied to many optimization problems with complex energy landscapes in continuous spaces. By introducing a new update mechanism for the histogram function in ELP and incorporating the generation of initial conformations based on a greedy strategy and a neighborhood search strategy based on pull moves, an improved energy-landscape paving (ELP+) method is obtained. Twelve general benchmark instances are first tested on both two-dimensional and three-dimensional (3D) face-centered-cubic (FCC) hydrophobic-hydrophilic (HP) lattice models. The lowest energies found by ELP+ are as good as or better than those of other methods in the literature for all instances. Then, five sets of larger-scale instances, denoted by S, R, F90, F180, and CASP target instances, are tested on the 3D FCC HP lattice model. The proposed algorithm finds lower energies than the five other methods in the literature. Not unexpectedly, this is particularly pronounced for the longer sequences considered. Computational results show that ELP+ is an effective method for PSP on the FCC HP lattice model.
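
    The core of energy-landscape paving is a Metropolis step on a penalized energy f(x) = E(x) + w*H(E(x)), where the histogram H counts prior visits to an energy level and so pushes the walk out of already-explored basins. A minimal generic sketch follows, with a binned histogram and a toy objective; the paper's ELP+ histogram update differs and is not reproduced here.

    ```python
    import math, random
    from collections import defaultdict

    def elp_minimize(energy, neighbor, x0, steps=20000, T=1.0, w=0.5, bin_=0.5):
        """Generic energy-landscape paving: Metropolis on E(x) + w * H(E(x))."""
        hist = defaultdict(int)
        level = lambda e: round(e / bin_)      # histogram over binned energies
        x = best = x0
        for _ in range(steps):
            y = neighbor(x)
            hist[level(energy(y))] += 1        # pave: visited levels get heavier
            fx = energy(x) + w * hist[level(energy(x))]
            fy = energy(y) + w * hist[level(energy(y))]
            if fy <= fx or random.random() < math.exp((fx - fy) / T):
                x = y
            if energy(x) < energy(best):
                best = x
        return best

    f = lambda x: (x * x - 4) ** 2 + x         # rugged-ish toy objective
    step = lambda x: x + random.uniform(-0.5, 0.5)
    print(elp_minimize(f, step, x0=5.0))       # near the global minimum x ~ -2
    ```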

  13. A Bee Evolutionary Guiding Nondominated Sorting Genetic Algorithm II for Multiobjective Flexible Job-Shop Scheduling.

    PubMed

    Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua

    2017-01-01

    The flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, each of which changes with the iteration times. Furthermore, numerical simulations are carried out based on published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some well-known existing algorithms.

  14. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    PubMed

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Representing image patches over a learned dictionary has the advantage of being adaptive to local image structures and can thus sparsify images better than fixed transforms (e.g. wavelets and total variation). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates the representation coefficients, the orthogonal dictionary, and the missing k-space data. Moreover, both the sparsity level and the sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, reflecting the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method while simultaneously improving reconstruction accuracy.
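
    With an orthogonality constraint both subproblems become closed-form: sparse coding reduces to keeping the largest analysis coefficients of D^T X, and the dictionary update is an orthogonal Procrustes problem solved by an SVD. The sketch below shows one such alternation on random stand-in patch data (a generic formulation of orthogonal dictionary learning, not the paper's exact reconstruction algorithm, which also updates the missing k-space data):

    ```python
    import numpy as np

    def orthogonal_dictionary_step(X, D, sparsity):
        """One alternation for X ~ D C with D (n, n) orthogonal, C sparse."""
        A = D.T @ X                              # analysis coefficients
        C = np.zeros_like(A)
        idx = np.argsort(-np.abs(A), axis=0)[:sparsity]  # top entries per patch
        cols = np.arange(A.shape[1])
        C[idx, cols] = A[idx, cols]              # hard-thresholded sparse code
        U, _, Vt = np.linalg.svd(X @ C.T)        # Procrustes: min ||X - DC||_F
        return U @ Vt, C                         # s.t. D^T D = I  =>  D = U V^T

    rng = np.random.default_rng(0)
    X = rng.normal(size=(16, 200))               # 200 vectorized image patches
    D = np.linalg.qr(rng.normal(size=(16, 16)))[0]   # random orthogonal init
    for _ in range(20):
        D, C = orthogonal_dictionary_step(X, D, sparsity=4)
    print(np.linalg.norm(X - D @ C) / np.linalg.norm(X))  # residual shrinks
    ```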

  15. A Bee Evolutionary Guiding Nondominated Sorting Genetic Algorithm II for Multiobjective Flexible Job-Shop Scheduling

    PubMed Central

    Deng, Qianwang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua

    2017-01-01

    The flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism during the optimizing process. In the first stage, the NSGA-II algorithm with T iteration times is used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. In order to enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage. More specifically, its population consists of three parts, each of which changes with the iteration times. Furthermore, numerical simulations are carried out based on published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing the experimental results with those of some well-known existing algorithms. PMID:28458687

  16. Divergent Mechanisms for Passive Pneumococcal Resistance to β-Lactam Antibiotics in the Presence of Haemophilus influenzae

    PubMed Central

    Weimer, Kristin E.D.; Juneau, Richard A.; Murrah, Kyle A.; Pang, Bing; Armbruster, Chelsie E.; Richardson, Stephen H.

    2011-01-01

    Background. Otitis media, for which antibiotic treatment failure is increasingly common, is a leading pediatric public health problem. Methods. In vitro and in vivo studies using the chinchilla model of otitis media were performed using a β-lactamase-producing strain of nontypeable Haemophilus influenzae (NTHi 86-028NP) and an isogenic mutant deficient in β-lactamase production (NTHi 86-028NP bla) to define the roles of biofilm formation and β-lactamase production in antibiotic resistance. Coinfection studies were done with Streptococcus pneumoniae to determine if NTHi provides passive protection by means of β-lactamase production, biofilm formation, or both. Results. NTHi 86-028NP bla was resistant to amoxicillin killing in biofilm studies in vitro; however, it was cleared by amoxicillin treatment in vivo, whereas NTHi 86-028NP was unaffected in either system. NTHi 86-028NP protected pneumococcus in vivo in both the effusion fluid and bullar homogenate. NTHi 86-028NP bla and pneumococcus were both recovered from the surface-associated bacteria of amoxicillin-treated animals; only NTHi 86-028NP bla was recovered from effusion. Conclusions. Based on these studies, we conclude that NTHi provides passive protection for S. pneumoniae in vivo through 2 distinct mechanisms: production of β-lactamase and formation of biofilm communities. PMID:21220774

  17. Divergent mechanisms for passive pneumococcal resistance to β-lactam antibiotics in the presence of Haemophilus influenzae.

    PubMed

    Weimer, Kristin E D; Juneau, Richard A; Murrah, Kyle A; Pang, Bing; Armbruster, Chelsie E; Richardson, Stephen H; Swords, W Edward

    2011-02-15

    Otitis media, for which antibiotic treatment failure is increasingly common, is a leading pediatric public health problem. In vitro and in vivo studies using the chinchilla model of otitis media were performed using a β-lactamase-producing strain of nontypeable Haemophilus influenzae (NTHi 86-028NP) and an isogenic mutant deficient in β-lactamase production (NTHi 86-028NP bla) to define the roles of biofilm formation and β-lactamase production in antibiotic resistance. Coinfection studies were done with Streptococcus pneumoniae to determine if NTHi provides passive protection by means of β-lactamase production, biofilm formation, or both. NTHi 86-028NP bla was resistant to amoxicillin killing in biofilm studies in vitro; however, it was cleared by amoxicillin treatment in vivo, whereas NTHi 86-028NP was unaffected in either system. NTHi 86-028NP protected pneumococcus in vivo in both the effusion fluid and bullar homogenate. NTHi 86-028NP bla and pneumococcus were both recovered from the surface-associated bacteria of amoxicillin-treated animals; only NTHi 86-028NP bla was recovered from effusion. Based on these studies, we conclude that NTHi provides passive protection for S. pneumoniae in vivo through 2 distinct mechanisms: production of β-lactamase and formation of biofilm communities.

  18. The Role of the Goal in Solving Hard Computational Problems: Do People Really Optimize?

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Stege, Ulrike; Masson, Michael E. J.

    2018-01-01

    The role that the mental, or internal, representation plays when people are solving hard computational problems has largely been overlooked to date, despite the reality that this internal representation drives problem solving. In this work we investigate how performance on versions of two hard computational problems differs based on what internal…

  19. A Fast and Scalable Algorithm for Calculating the Achievable Capacity of a Wireless Mesh Network

    DTIC Science & Technology

    2016-05-09

    an optimal wireless transmission schedule for a predetermined set of links without the addition of routing is NP-hard [5]. We effectively bypass the... wireless communications have used omni-directional antennas, where a user's transmission interferes with other users in all directions. Different...interference from some particular transmission. Hence, δ = Δ(Ḡ_c) = max_{(i,j)∈E} |F_ij|.

  20. Linking search space structure, run-time dynamics, and problem difficulty: a step toward demystifying tabu search.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitley, L. Darrell; Howe, Adele E.; Watson, Jean-Paul

    2004-09-01

    Tabu search is one of the most effective heuristics for locating high-quality solutions to a diverse array of NP-hard combinatorial optimization problems. Despite the widespread success of tabu search, researchers have a poor understanding of many key theoretical aspects of this algorithm, including models of the high-level run-time dynamics and identification of those search space features that influence problem difficulty. We consider these questions in the context of the job-shop scheduling problem (JSP), a domain where tabu search algorithms have been shown to be remarkably effective. Previously, we demonstrated that the mean distance between random local optima and the nearest optimal solution is highly correlated with problem difficulty for a well-known tabu search algorithm for the JSP introduced by Taillard. In this paper, we discuss various shortcomings of this measure and develop a new model of problem difficulty that corrects these deficiencies. We show that Taillard's algorithm can be modeled with high fidelity as a simple variant of a straightforward random walk. The random walk model accounts for nearly all of the variability in the cost required to locate both optimal and sub-optimal solutions to random JSPs, and provides an explanation for differences in the difficulty of random versus structured JSPs. Finally, we discuss and empirically substantiate two novel predictions regarding tabu search algorithm behavior. First, the method for constructing the initial solution is highly unlikely to impact the performance of tabu search. Second, tabu tenure should be selected to be as small as possible while simultaneously avoiding search stagnation; values larger than necessary lead to significant degradations in performance.
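
    For readers unfamiliar with the algorithm under study, a textbook tabu search skeleton makes the tenure parameter concrete (this is a generic sketch, not Taillard's JSP-specific algorithm): moves stay forbidden for `tenure` iterations unless they beat the best solution found, and the paper's last prediction says this tenure should be kept as small as stagnation allows.

    ```python
    from collections import deque

    def tabu_search(cost, neighbors, x0, tenure=7, iterations=1000):
        """Generic tabu search; neighbors(x) yields (move_id, candidate) pairs."""
        x = best = x0
        tabu = deque(maxlen=tenure)          # moves forbidden for `tenure` steps
        for _ in range(iterations):
            candidates = [(cost(y), m, y) for m, y in neighbors(x)
                          if m not in tabu or cost(y) < cost(best)]  # aspiration
            if not candidates:
                break
            c, m, y = min(candidates)        # best admissible, even if uphill
            tabu.append(m)
            x = y
            if c < cost(best):
                best = y
        return best

    # Toy usage: walk over the integers toward the minimum of a bumpy function.
    f = lambda x: (x - 17) ** 2 + 3 * (x % 5)
    nbrs = lambda x: [(("step", d), x + d) for d in (-3, -1, 1, 3)]
    print(tabu_search(f, nbrs, x0=0))
    ```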
