Expected Fitness Gains of Randomized Search Heuristics for the Traveling Salesperson Problem.
Nallaperuma, Samadhi; Neumann, Frank; Sudholt, Dirk
2017-01-01
Randomized search heuristics are frequently applied to NP-hard combinatorial optimization problems. The runtime analysis of randomized search heuristics has contributed tremendously to our theoretical understanding. Recently, randomized search heuristics have been examined regarding the progress they achieve within a fixed time budget. We follow this approach and present a fixed-budget analysis for an NP-hard combinatorial optimization problem. We consider the well-known Traveling Salesperson Problem (TSP) and analyze the fitness increase that randomized search heuristics are able to achieve within a given fixed time budget. In particular, we analyze Manhattan and Euclidean TSP instances and the Randomized Local Search (RLS), (1+1) EA and (1+[Formula: see text]) EA algorithms for the TSP in a smoothed complexity setting, and derive lower bounds on the expected fitness gain for a specified number of generations.
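As a concrete illustration of the kind of algorithm analyzed above, the following is a minimal sketch of RLS on a Euclidean TSP instance, using a random 2-opt (segment reversal) move and a fixed budget of fitness evaluations. The function names and parameters are ours, not the authors'; this is not the analyzed algorithm variant, only a rough sketch under those assumptions.

```python
import math
import random

def tour_length(points, tour):
    """Total Euclidean length of a closed tour (list of point indices)."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def rls_2opt(points, budget, seed=0):
    """Randomized Local Search with a random 2-opt move, run for a
    fixed budget of fitness evaluations; non-worsening moves accepted."""
    rng = random.Random(seed)
    tour = list(range(len(points)))
    rng.shuffle(tour)
    best = tour_length(points, tour)
    for _ in range(budget):
        i, j = sorted(rng.sample(range(len(points)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        cand_len = tour_length(points, cand)
        if cand_len <= best:
            tour, best = cand, cand_len
    return tour, best
```

On a tiny square instance this converges to the optimal perimeter tour well within a few hundred evaluations.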
NASA Astrophysics Data System (ADS)
Zheng, Genrang; Lin, ZhengChun
The problem of winner determination in combinatorial auctions is a hotspot in electronic business and an NP-hard problem. A Hybrid Artificial Fish Swarm Algorithm (HAFSA), which combines a First Suite Heuristic Algorithm (FSHA) with the Artificial Fish Swarm Algorithm (AFSA), is proposed to solve the problem, building on the theory of AFSA. Experimental results show that HAFSA is a fast and efficient algorithm for the winner determination problem. Compared with the Ant Colony Optimization algorithm, it performs well and has broad application prospects.
Genetic algorithms for the vehicle routing problem
NASA Astrophysics Data System (ADS)
Volna, Eva
2016-06-01
The Vehicle Routing Problem (VRP) is one of the most challenging combinatorial optimization tasks. The problem consists in designing the optimal set of routes for a fleet of vehicles in order to serve a given set of customers. Evolutionary algorithms are general iterative algorithms for combinatorial optimization that have been found to be very effective and robust in solving numerous problems from a wide range of application domains. The VRP is known to be NP-hard; hence many heuristic procedures for its solution have been suggested. For such problems it is often desirable to obtain approximate solutions, provided they can be found quickly and are sufficiently accurate for the purpose. In this paper we present an experimental study that demonstrates the suitability of genetic algorithms for the vehicle routing problem.
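The abstract does not specify the authors' exact genetic operators; as a hedged sketch of the operators typically used for routing problems, order crossover (OX) and swap mutation on a customer permutation look like this:

```python
import random

def order_crossover(p1, p2, rng):
    """Order crossover (OX): copy a slice from parent 1, fill the rest
    in the order the remaining customers appear in parent 2."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = [g for g in p2 if g not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def swap_mutation(route, rng, rate=0.2):
    """With probability `rate`, swap two random customers."""
    route = route[:]
    if rng.random() < rate:
        i, j = rng.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]
    return route
```

Both operators preserve permutation validity, which is the key requirement for route encodings.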
Statistical physics of hard combinatorial optimization: Vertex cover problem
NASA Astrophysics Data System (ADS)
Zhao, Jin-Hua; Zhou, Hai-Jun
2014-07-01
Typical-case computational complexity is a research topic at the boundary of computer science, applied mathematics, and statistical physics. In the last twenty years, the replica-symmetry-breaking mean field theory of spin glasses and the associated message-passing algorithms have greatly deepened our understanding of typical-case computational complexity. In this paper, we use the vertex cover problem, a basic nondeterministic-polynomial (NP)-complete combinatorial optimization problem with wide applications, as an example to introduce the statistical physics methods and algorithms. We do not go into the technical details but emphasize mainly the intuitive physical meanings of the message-passing equations. An unfamiliar reader should be able to understand, to a large extent, the physics behind the mean field approaches and to adapt the mean field methods to solving other optimization problems.
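The message-passing machinery itself is involved; as a much simpler point of reference (not the paper's method), the classical maximal-matching 2-approximation for vertex cover can be sketched as:

```python
def matching_vertex_cover(edges):
    """Classical 2-approximation: repeatedly pick an uncovered edge and
    add both endpoints to the cover (a maximal-matching argument bounds
    the result by twice the optimum)."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On a path 0-1-2-3 this returns all four vertices, within the factor-2 bound of the optimum {1, 2}.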
A comparison of approaches for finding minimum identifying codes on graphs
NASA Astrophysics Data System (ADS)
Horan, Victoria; Adachi, Steve; Bak, Stanley
2016-05-01
In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard, and their computational complexity makes this research approach difficult using a standard brute-force approach on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored: a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and satisfiability modulo theories (SMT) with corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
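Independently of the three solution approaches discussed, the underlying object can be stated directly in code. The following brute-force sketch (ours, feasible only for tiny graphs) verifies and finds a minimum identifying code using closed neighbourhoods:

```python
from itertools import combinations

def is_identifying_code(n, adj, code):
    """An identifying code gives every vertex a nonempty, unique
    signature N[v] ∩ code, where N[v] is the closed neighbourhood."""
    sigs = []
    for v in range(n):
        sig = frozenset(code & (adj[v] | {v}))
        if not sig:
            return False
        sigs.append(sig)
    return len(set(sigs)) == n

def minimum_identifying_code(n, adj):
    """Brute force by increasing size (exponential; tiny graphs only)."""
    for k in range(1, n + 1):
        for cand in combinations(range(n), k):
            if is_identifying_code(n, adj, set(cand)):
                return set(cand)
    return None
```

On the 4-cycle, for example, no set of size 2 works, and a size-3 code exists.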
Lexicographic goal programming and assessment tools for a combinatorial production problem.
DOT National Transportation Integrated Search
2008-01-01
NP-complete combinatorial problems often necessitate the use of near-optimal solution techniques, including heuristics and metaheuristics. The addition of multiple optimization criteria can further complicate comparison of these solution technique...
Nash Social Welfare in Multiagent Resource Allocation
NASA Astrophysics Data System (ADS)
Ramezani, Sara; Endriss, Ulle
We study different aspects of the multiagent resource allocation problem when the objective is to find an allocation that maximizes Nash social welfare, the product of the utilities of the individual agents. The Nash solution is an important welfare criterion that combines efficiency and fairness considerations. We show that the problem of finding an optimal outcome is NP-hard for a number of different languages for representing agent preferences; we establish new results regarding convergence to Nash-optimal outcomes in a distributed negotiation framework; and we design and test algorithms similar to those applied in combinatorial auctions for computing such an outcome directly.
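As a toy illustration of the objective studied above (not the authors' algorithms), Nash social welfare over all allocations of indivisible goods can be maximized by brute force for tiny instances; utilities are assumed additive in this sketch:

```python
from itertools import product
from math import prod

def max_nash_welfare(utilities):
    """Brute-force search over all allocations of indivisible goods.
    utilities[i][g] is agent i's value for good g; the Nash social
    welfare of an allocation is the product of the agents' utilities."""
    n_agents, n_goods = len(utilities), len(utilities[0])
    best, best_alloc = -1, None
    for alloc in product(range(n_agents), repeat=n_goods):
        totals = [0] * n_agents
        for g, i in enumerate(alloc):
            totals[i] += utilities[i][g]
        w = prod(totals)
        if w > best:
            best, best_alloc = w, alloc
    return best, best_alloc
```

The product objective immediately rules out allocations that leave any agent with zero utility, which is the fairness aspect mentioned in the abstract.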
NASA Astrophysics Data System (ADS)
Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo
2006-03-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth-best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data.
Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.
Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P
2017-01-01
The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem that assigns a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving it. This work proposes a metaheuristic technique called Directed Bee Colony Optimization Algorithm, using the Modified Nelder-Mead Method, for solving the NRP. To solve the NRP, the authors use a multiobjective mathematical programming model and propose a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is successfully used to solve the multiobjective optimization of scheduling problems. MODBCO is an integration of deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.
Number Partitioning via Quantum Adiabatic Computation
NASA Technical Reports Server (NTRS)
Smelyanskiy, Vadim N.; Toussaint, Udo
2002-01-01
We study both analytically and numerically the complexity of the adiabatic quantum evolution algorithm applied to random instances of combinatorial optimization problems. We use as an example the NP-complete set partition problem and obtain an asymptotic expression for the minimal gap separating the ground and excited states of the system during the execution of the algorithm. We show that for computationally hard problem instances the size of the minimal gap scales exponentially with the problem size. This result is in qualitative agreement with direct numerical simulation of the algorithm for small instances of the set partition problem. We describe the statistical properties of the optimization problem that are responsible for the exponential behavior of the algorithm.
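For comparison with the quantum approach, a standard classical baseline for two-way number partitioning is the largest-differencing (Karmarkar-Karp) heuristic; this sketch is ours and is not part of the paper:

```python
import heapq

def karmarkar_karp(numbers):
    """Largest-differencing heuristic: repeatedly replace the two
    largest numbers by their difference (committing them to opposite
    sides). Returns the final discrepancy |sum(A) - sum(B)|."""
    heap = [-x for x in numbers]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0
```

The heuristic is fast but not exact: on [8, 7, 6, 5, 4] it yields discrepancy 2 even though a perfect split (8+7 vs 6+5+4) exists.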
Solving the Container Stowage Problem (CSP) using Particle Swarm Optimization (PSO)
NASA Astrophysics Data System (ADS)
Matsaini; Santosa, Budi
2018-04-01
The Container Stowage Problem (CSP) is the problem of arranging containers on ships subject to rules on total weight, weight per stack, destination, equilibrium, and the placement of containers on the vessel. The container stowage problem is a combinatorial problem that is hard to solve by enumeration; it is NP-hard. Therefore, metaheuristics are preferred for finding a solution. The objective is to minimize the amount of shifting so that the unloading time is minimized. Particle Swarm Optimization (PSO) is proposed to solve the problem. The implementation of PSO is combined with several rules: stack position changes, stack changes based on destination, and stack changes based on the weight class of the stacks (light, medium, and heavy). The proposed method was applied to five different cases, and the results were compared to Bee Swarm Optimization (BSO) and a heuristic method. PSO achieved a mean gap of 0.87% and a time gap of 60 seconds, while BSO achieved a mean gap of 2.98% and 459.6 seconds relative to the heuristic.
Martín H., José Antonio
2013-01-01
Many practical problems in almost all scientific and technological disciplines have been classified as computationally hard (NP-hard or even NP-complete). In life sciences, combinatorial optimization problems frequently arise in molecular biology, e.g., genome sequencing, global alignment of multiple genomes, identifying siblings, or discovery of dysregulated pathways. In almost all of these problems there is the need to prove a hypothesis that a certain property of an object is present if and only if the object adopts some particular admissible structure (an NP-certificate) or absent (no admissible structure). However, none of the standard approaches can discard the hypothesis when no solution can be found, since none can provide a proof that there is no admissible structure. This article presents an algorithm that introduces a novel type of solution method to "efficiently" solve the graph 3-coloring problem, an NP-complete problem. The proposed method provides certificates (proofs) in both cases, present or absent, so it is possible to accept or reject the hypothesis on the basis of a rigorous proof. It provides exact solutions and is polynomial-time (i.e., efficient), though parametric. The only requirement is sufficient computational power, which is controlled by a parameter. Nevertheless, it is proved here that the probability of requiring a large value of the parameter to obtain a solution for a random graph decreases exponentially, making almost all problem instances tractable. Thorough experimental analyses were performed. The algorithm was tested on random graphs, planar graphs and 4-regular planar graphs. The obtained experimental results are in accordance with the theoretical expectations. PMID:23349711
Exploiting Quantum Resonance to Solve Combinatorial Problems
NASA Technical Reports Server (NTRS)
Zak, Michail; Fijany, Amir
2006-01-01
Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.
Multiple-variable neighbourhood search for the single-machine total weighted tardiness problem
NASA Astrophysics Data System (ADS)
Chung, Tsui-Ping; Fu, Qunjie; Liao, Ching-Jong; Liu, Yi-Ting
2017-07-01
The single-machine total weighted tardiness (SMTWT) problem is a typical discrete combinatorial optimization problem in the scheduling literature. This problem has been proved to be NP-hard and thus provides a challenging area for metaheuristics, especially the variable neighbourhood search algorithm. In this article, a multiple-variable neighbourhood search (m-VNS) algorithm with multiple neighbourhood structures is proposed to solve the problem. Special mechanisms named matching and strengthening operations are employed in the algorithm, which has an auto-revising local search procedure to explore the solution space beyond local optimality. Two aspects, searching direction and searching depth, are considered, and neighbourhood structures are systematically exchanged. Experimental results show that the proposed m-VNS algorithm outperforms all the compared algorithms in solving the SMTWT problem.
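The SMTWT objective and a bare-bones adjacent-swap descent (a far simpler relative of the m-VNS algorithm, included only to fix notation; the names are ours) can be sketched as:

```python
def total_weighted_tardiness(seq, p, w, d):
    """Total weighted tardiness of a job sequence on one machine:
    sum over jobs of w_j * max(0, C_j - d_j), C_j = completion time."""
    t, cost = 0, 0
    for j in seq:
        t += p[j]
        cost += w[j] * max(0, t - d[j])
    return cost

def swap_descent(seq, p, w, d):
    """First-improvement descent over adjacent-swap neighbours."""
    seq = list(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(seq) - 1):
            cand = seq[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if total_weighted_tardiness(cand, p, w, d) < \
               total_weighted_tardiness(seq, p, w, d):
                seq, improved = cand, True
    return seq
```

A real VNS would escape the local optima this descent gets stuck in by switching among several neighbourhood structures.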
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
An Integrated Method Based on PSO and EDA for the Max-Cut Problem.
Lin, Geng; Guan, Jian
2016-01-01
The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithms (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and the estimation of distribution algorithm. To enhance the performance of PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best-performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality.
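As a minimal point of reference for the fast local search mentioned above (this sketch is ours; PSO-EDA itself is more elaborate), a one-flip local search on the cut value looks like:

```python
def cut_value(edges, side):
    """Weight of the cut: edges whose endpoints lie on opposite sides.
    side[v] is 0 or 1; edges are (u, v, weight) triples."""
    return sum(w for u, v, w in edges if side[u] != side[v])

def one_flip_local_search(n, edges, side):
    """Flip single vertices while any flip strictly increases the cut."""
    side = list(side)
    best = cut_value(edges, side)
    improved = True
    while improved:
        improved = False
        for v in range(n):
            side[v] ^= 1               # try flipping vertex v
            val = cut_value(edges, side)
            if val > best:
                best, improved = val, True
            else:
                side[v] ^= 1           # undo the flip
    return side, best
```

On a unit-weight triangle this reaches the optimal cut of value 2 from the all-zero start.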
Solving optimization problems by the public goods game
NASA Astrophysics Data System (ADS)
Javarone, Marco Alberto
2017-09-01
We introduce a method based on the Public Goods Game for solving optimization tasks. In particular, we focus on the Traveling Salesman Problem, an NP-hard problem whose search space grows exponentially with the number of cities. The proposed method considers a population whose agents are each provided with a random solution to the given problem. The agents interact by playing the Public Goods Game, using the fitness of their solutions as the currency of the game. Notably, agents with better solutions provide higher contributions, while those with worse ones tend to imitate the solutions of richer agents to increase their fitness. Numerical simulations show that the proposed method is able to compute exact solutions, as well as suboptimal ones, in the considered search spaces. As a result, beyond proposing a new heuristic for combinatorial optimization problems, our work aims to highlight the potential of evolutionary game theory beyond its current horizons.
Pourhassan, Mojgan; Neumann, Frank
2018-06-22
The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.
The checkpoint ordering problem
Hungerländer, P.
2017-01-01
We suggest a new variant of a row layout problem: find an ordering of n departments with given lengths such that the total weighted sum of their distances to a given checkpoint is minimized. The Checkpoint Ordering Problem (COP) is of both theoretical and practical interest. It has several applications and is conceptually related to some well-studied combinatorial optimization problems, namely the Single-Row Facility Layout Problem, the Linear Ordering Problem and a variant of parallel machine scheduling. In this paper we study the complexity of the COP and its special cases. The general version of the COP with an arbitrary but fixed number of checkpoints is NP-hard in the weak sense. We propose both a dynamic programming algorithm and an integer linear programming approach for the COP. Our computational experiments indicate that the COP is hard to solve in practice. While the run time of the dynamic programming algorithm strongly depends on the lengths of the departments, the integer linear programming approach is able to solve instances with up to 25 departments to optimality. PMID:29170574
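The COP objective is easy to state in code; the following brute-force sketch (ours, feasible only for tiny n, and assuming distances are measured from department centres) evaluates the weighted distances to a single checkpoint:

```python
from itertools import permutations

def checkpoint_cost(order, lengths, weights, checkpoint):
    """Weighted sum of centre-to-checkpoint distances for departments
    laid out left to right in `order`, each with a given length."""
    pos, cost = 0.0, 0.0
    for dep in order:
        centre = pos + lengths[dep] / 2.0
        cost += weights[dep] * abs(centre - checkpoint)
        pos += lengths[dep]
    return cost

def best_order(lengths, weights, checkpoint):
    """Exhaustive search over all n! orderings (tiny n only)."""
    return min(permutations(range(len(lengths))),
               key=lambda o: checkpoint_cost(o, lengths, weights, checkpoint))
```

The factorial blow-up of this enumeration is exactly why the paper develops dynamic programming and integer linear programming approaches instead.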
Solving NP-Hard Problems with Physarum-Based Ant Colony System.
Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li
2017-01-01
NP-hard problems exist in many real-world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions to those NP-hard problems, but the performance of ACO algorithms is significantly reduced by premature convergence and weak robustness. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as the traveling salesman problem (TSP) and the 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs this unique feature and accelerates the positive feedback process in ACS, which contributes to quick convergence to the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, its convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
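To fix the ACS ingredients that the Physarum strategy modifies, here is a generic sketch (ours, not the optimized variant) of probabilistic tour construction and a pheromone evaporation/deposit step:

```python
import random

def construct_tour(n, dist, tau, alpha=1.0, beta=2.0, rng=None):
    """Build one ant's TSP tour: choose the next city with probability
    proportional to tau[i][j]**alpha * (1/dist[i][j])**beta."""
    rng = rng or random.Random()
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        i = tour[-1]
        cand = list(unvisited)
        weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                   for j in cand]
        tour.append(rng.choices(cand, weights=weights)[0])
        unvisited.discard(tour[-1])
    return tour

def deposit(tau, tour, dist, rho=0.1):
    """Evaporate all pheromone, then reinforce the tour's edges in
    proportion to tour quality (shorter tour => more pheromone)."""
    n = len(tau)
    length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1 - rho)
    for k in range(n):
        i, j = tour[k], tour[(k + 1) % n]
        tau[i][j] += 1.0 / length
        tau[j][i] += 1.0 / length
    return tau
```

The Physarum-based strategy replaces this plain deposit step with one informed by the tube-network evolution model.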
Global Optimal Trajectory in Chaos and NP-Hardness
NASA Astrophysics Data System (ADS)
Latorre, Vittorio; Gao, David Yang
This paper presents an unconventional theory and method for solving general nonlinear dynamical systems. Instead of the direct iterative methods, the discretized nonlinear system is first formulated as a global optimization problem via the least squares method. A newly developed canonical duality theory shows that this nonconvex minimization problem can be solved deterministically in polynomial time if a global optimality condition is satisfied. The so-called pseudo-chaos produced by linear iterative methods is mainly due to the intrinsic accumulation of numerical error. Otherwise, the global optimization problem could be NP-hard and the nonlinear system can be truly chaotic. A conjecture is proposed, which reveals the connection between chaos in nonlinear dynamics and NP-hardness in computer science. The methodology and the conjecture are verified by applications to the well-known logistic equation, a forced memristive circuit and the Lorenz system. Computational results show that the canonical duality theory can be used to identify chaotic systems and to obtain realistic global optimal solutions in nonlinear dynamical systems. The method and results presented in this paper should bring some new insights into nonlinear dynamical systems and NP-hardness in computational complexity theory.
Statistical Mechanics of Combinatorial Auctions
NASA Astrophysics Data System (ADS)
Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo
2006-09-01
Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.
A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems
NASA Astrophysics Data System (ADS)
Thammano, Arit; Teekeng, Wannaporn
2015-05-01
The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. The proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) a mutation operation with a tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
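The paper's selection operator is fuzzy; as a baseline for comparison, classic roulette-wheel selection for a minimisation objective such as makespan can be sketched as follows (the fuzzy variant would reshape these slice sizes via membership functions; the shift constant is our assumption):

```python
import random

def roulette_select(population, fitnesses, rng):
    """Roulette-wheel selection for minimisation: selection probability
    is proportional to inverted, shifted fitness, so lower-cost
    individuals get bigger slices of the wheel."""
    worst = max(fitnesses)
    weights = [worst - f + 1 for f in fitnesses]
    return rng.choices(population, weights=weights)[0]
```

With fitnesses 1 and 100, the better individual receives roughly 99% of the wheel.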
Constant Communities in Complex Networks
NASA Astrophysics Data System (ADS)
Chakraborty, Tanmoy; Srinivasan, Sriram; Ganguly, Niloy; Bhowmick, Sanjukta; Mukherjee, Animesh
2013-05-01
Identifying community structure is a fundamental problem in network analysis. Most community detection algorithms are based on optimizing a combinatorial parameter, for example modularity. This optimization is generally NP-hard, so merely changing the vertex order can alter the assignment of vertices to communities. However, there has been little study of how vertex ordering influences the results of community detection algorithms. Here we identify and study the properties of invariant groups of vertices (constant communities) whose assignments to communities are, quite remarkably, not affected by vertex ordering. The percentage of constant communities can vary across different applications, and based on empirical results we propose metrics to evaluate these communities. Using constant communities as a pre-processing step, one can significantly reduce the variation of the results. Finally, we present a case study on a phoneme network and illustrate that constant communities, quite strikingly, form the core functional units of the larger communities.
NASA Astrophysics Data System (ADS)
Doerr, Timothy P.; Alves, Gelio; Yu, Yi-Kuo
2005-08-01
Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time using the transfer matrix technique or, equivalently, the dynamic programming approach. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to that of the solvable class. After keeping many high-ranking solutions using the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth-best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data. For directed paths in random media, the scaling function depends on the particular realization of randomness; in the mass spectrometry case, the scaling function is spectrum-specific.
Using Ant Colony Optimization for Routing in VLSI Chips
NASA Astrophysics Data System (ADS)
Arora, Tamanna; Moses, Melanie
2009-04-01
Rapid advances in VLSI technology have increased the number of transistors that fit on a single chip to about two billion. A frequent problem in the design of such high-performance, high-density VLSI layouts is that of routing the wires that connect such large numbers of components. Most wire-routing problems are computationally hard. The quality of any routing algorithm is judged by the extent to which it satisfies routing constraints and design objectives. Some of the broader design objectives include minimizing total routed wire length and minimizing total capacitance induced in the chip, both of which serve to minimize the power consumed by the chip. Ant Colony Optimization (ACO) algorithms provide a multi-agent framework for combinatorial optimization by combining memory, stochastic decisions and strategies of collective and distributed learning by ant-like agents. This paper applies ACO to the NP-hard problem of finding optimal routes for interconnect routing on VLSI chips. The constraints on interconnect routing are used by the ants as heuristics that guide their search process. We found that ACO algorithms were able to successfully incorporate multiple constraints and route interconnects on a suite of benchmark chips. On average, the algorithm routed with a total wire length 5.5% less than that of other established routing algorithms.
Causal gene identification using combinatorial V-structure search.
Cai, Ruichu; Zhang, Zhenjie; Hao, Zhifeng
2013-07-01
With the advances of biomedical techniques in the last decade, the costs of human genomic sequencing and genomic activity monitoring are coming down rapidly. To support the huge genome-based business in the near future, researchers are eager to find killer applications based on human genome information. Causal gene identification is one of the most promising applications, which may help potential patients to estimate the risk of certain genetic diseases and locate the target gene for further genetic therapy. Unfortunately, existing pattern recognition techniques, such as Bayesian networks, cannot be directly applied to find the accurate causal relationship between genes and diseases. This is mainly due to the insufficient number of samples and the extremely high dimensionality of the gene space. In this paper, we present the first practical solution to causal gene identification, utilizing a new combinatorial formulation over V-Structures commonly used in conventional Bayesian networks, by exploring the combinations of significant V-Structures. We prove the NP-hardness of the combinatorial search problem under general settings of the significance measure on the V-Structures, and present a greedy algorithm to find sub-optimal results. Extensive experiments show that our proposal is both scalable and effective, particularly with interesting findings on the causal genes over real human genome data.
NASA Astrophysics Data System (ADS)
Kasiviswanathan, Shiva Prasad; Pan, Feng
In the matrix interdiction problem, a real-valued matrix and an integer k are given. The objective is to remove a set of k matrix columns that minimizes, in the residual matrix, the sum of the row values, where the value of a row is defined to be the largest entry in that row. This combinatorial problem is closely related to the bipartite network interdiction problem, which can be applied to minimize the probability that an adversary can successfully smuggle weapons. After introducing the matrix interdiction problem, we study the computational complexity of this problem. We show that the matrix interdiction problem is NP-hard and that there exists a constant γ such that it is even NP-hard to approximate this problem within an n^γ additive factor. We also present an algorithm for this problem that achieves an (n - k) multiplicative approximation ratio.
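As an illustration of the objective just defined, here is a minimal brute-force sketch. The names `residual_value` and `interdict_brute_force` are our own; exhaustive search is only viable for tiny matrices, and the point is the objective, not the paper's approximation algorithm:

```python
from itertools import combinations

def residual_value(matrix, removed):
    """Sum of row maxima after deleting the given set of column indices."""
    kept = [j for j in range(len(matrix[0])) if j not in removed]
    return sum(max(row[j] for j in kept) for row in matrix)

def interdict_brute_force(matrix, k):
    """Try every set of k columns; return the one minimizing the residual value."""
    n_cols = len(matrix[0])
    return min(combinations(range(n_cols), k),
               key=lambda cols: residual_value(matrix, set(cols)))
```

For the matrix [[3, 1, 2], [1, 5, 2]] and k = 1, removing column 1 kills the large entry 5 and leaves a residual value of 5, which the exhaustive search finds.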
NASA Astrophysics Data System (ADS)
Dao, Son Duy; Abhary, Kazem; Marian, Romeo
2017-06-01
Integration of production planning and scheduling is a class of problems commonly found in manufacturing industry. This class of problems, associated with precedence constraints, has been previously modeled and optimized by the authors; it requires a multidimensional optimization that simultaneously decides what to make, how many to make, where to make and in what order to make. It is a combinatorial, NP-hard problem, for which no polynomial-time algorithm is known to produce an optimal result on a random graph. In this paper, the further development of a Genetic Algorithm (GA) for this integrated optimization is presented. Because of the dynamic nature of the problem, the size of its solution is variable. To deal with this variability and find an optimal solution to the problem, a GA with new features in chromosome encoding, crossover, mutation, selection as well as algorithm structure is developed herein. With this structure, the proposed GA is able to "learn" from its experience. Robustness of the proposed GA is demonstrated by a complex numerical example in which its performance is compared with those of three commercial optimization solvers.
Aghamohammadi, Hossein; Saadi Mesgari, Mohammad; Molaei, Damoon; Aghamohammadi, Hasan
2013-01-01
Location-allocation is a combinatorial optimization problem and is NP-hard. Therefore, the solution of such a problem should be shifted from exact to heuristic or metaheuristic methods due to the complexity of the problem. Locating medical centers and allocating the injuries of an earthquake to them is highly important in earthquake disaster management, so developing a proper method will reduce the time of the relief operation and will consequently decrease the number of fatalities. This paper presents the development of a heuristic method based on two nested genetic algorithms to optimize this location-allocation problem by using the abilities of Geographic Information Systems (GIS). In the proposed method, the outer genetic algorithm is applied to the location part of the problem and the inner genetic algorithm is used to optimize the resource allocation. The final outcome of the implemented method includes the spatial locations of the new required medical centers. The method also calculates how many of the injuries at each demand point should be taken to each of the existing and new medical centers. The results of the proposed method showed the high performance of the designed structure in solving a capacitated location-allocation problem that may arise in a disaster situation when injured people have to be taken to medical centers in a reasonable time.
Parameterized Algorithmics for Finding Exact Solutions of NP-Hard Biological Problems.
Hüffner, Falk; Komusiewicz, Christian; Niedermeier, Rolf; Wernicke, Sebastian
2017-01-01
Fixed-parameter algorithms are designed to efficiently find optimal solutions to some computationally hard (NP-hard) problems by identifying and exploiting "small" problem-specific parameters. We survey practical techniques to develop such algorithms. Each technique is introduced and supported by case studies of applications to biological problems, with additional pointers to experimental results.
Solving Set Cover with Pairs Problem using Quantum Annealing
NASA Astrophysics Data System (ADS)
Cao, Yudong; Jiang, Shuxian; Perouli, Debbie; Kais, Sabre
2016-09-01
Here we consider using quantum annealing to solve Set Cover with Pairs (SCP), an NP-hard combinatorial optimization problem that plays an important role in networking, computational biology, and biochemistry. We show an explicit construction of Ising Hamiltonians whose ground states encode the solution of SCP instances. We numerically simulate the time-dependent Schrödinger equation in order to test the performance of quantum annealing for random instances and compare with that of simulated annealing. We also discuss explicit embedding strategies for realizing our Hamiltonian construction on the D-wave type restricted Ising Hamiltonian based on Chimera graphs. Our embedding on the Chimera graph preserves the structure of the original SCP instance; in particular, the embedding for general complete bipartite graphs and logical disjunctions may be of broader use than the specific problem we deal with.
Ant colony optimization for solving university facility layout problem
NASA Astrophysics Data System (ADS)
Mohd Jani, Nurul Hafiza; Mohd Radzi, Nor Haizan; Ngadiman, Mohd Salihin
2013-04-01
Quadratic Assignment Problems (QAP) are classified as NP-hard. The QAP has been used to model many problems in several areas such as operational research, combinatorial data analysis, and parallel and distributed computing, as well as optimization problems such as graph partitioning and the Traveling Salesman Problem (TSP). In the literature, researchers use exact algorithms, heuristic algorithms and metaheuristic approaches to solve the QAP. The QAP is widely applied to facility layout problems (FLP). In this paper we use the QAP to model a university facility layout problem. There are 8 facilities that need to be assigned to 8 locations. Hence we have modeled a QAP problem with n ≤ 10 and developed an Ant Colony Optimization (ACO) algorithm to solve the university facility layout problem. The objective is to assign n facilities to n locations such that the minimum product of flows and distances is obtained. Flow is the movement from one facility to another, whereas distance is the distance between the location of one facility and the locations of the other facilities. The objective of the QAP is to obtain the minimum total walking (flow) of lecturers from one destination to another (distance).
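The QAP objective described above, minimizing the sum of flow times distance over an assignment of facilities to locations, can be sketched as follows. The brute-force search is illustrative of the objective only (feasible in the paper's n ≤ 10 regime), not the paper's ACO method:

```python
from itertools import permutations

def qap_cost(flow, dist, assign):
    """Total flow x distance when facility i is placed at location assign[i]."""
    n = len(flow)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n))

def qap_brute_force(flow, dist):
    """Exact search over all n! assignments; only viable for small n."""
    n = len(flow)
    return min(permutations(range(n)), key=lambda p: qap_cost(flow, dist, p))
```

For an 8-facility instance the same `qap_cost` would serve as the fitness evaluated by each ant's constructed assignment.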
Chen, Xin; Wu, Qiong; Sun, Ruimin; Zhang, Louxin
2012-01-01
The discovery of single-nucleotide polymorphisms (SNPs) has important implications in a variety of genetic studies on human diseases and biological functions. One valuable approach proposed for SNP discovery is based on base-specific cleavage and mass spectrometry. However, it is still very challenging to achieve the full potential of this SNP discovery approach. In this study, we formulate two new combinatorial optimization problems. While both problems are aimed at reconstructing the sample sequence that would attain the minimum number of SNPs, they search over different candidate sequence spaces. The first problem, denoted as SNP-MSP, limits its search to sequences whose in silico predicted mass spectra have all their signals contained in the measured mass spectra. In contrast, the second problem, denoted as SNP-MSQ, limits its search to sequences whose in silico predicted mass spectra instead contain all the signals of the measured mass spectra. We present an exact dynamic programming algorithm for solving the SNP-MSP problem and also show that the SNP-MSQ problem is NP-hard by a reduction from a restricted variation of the 3-partition problem. We believe that an efficient solution to either problem above could offer a seamless integration of information in four complementary base-specific cleavage reactions, thereby improving the capability of the underlying biotechnology for sensitive and accurate SNP discovery.
Guturu, Parthasarathy; Dantu, Ram
2008-06-01
Many graph- and set-theoretic problems, because of their tremendous application potential and theoretical appeal, have been well investigated by researchers in complexity theory and were found to be NP-hard. Since the combinatorial complexity of these problems does not permit exhaustive searches for optimal solutions, only near-optimal solutions can be explored using either various problem-specific heuristic strategies or metaheuristic global-optimization methods, such as simulated annealing, genetic algorithms, etc. In this paper, we propose a unified evolutionary algorithm (EA) for the problems of maximum clique finding, maximum independent set, minimum vertex cover, subgraph and double subgraph isomorphism, set packing, set partitioning, and set cover. In the proposed approach, we first map these problems onto the maximum clique-finding problem (MCP), which is later solved using an evolutionary strategy. The proposed impatient EA with probabilistic tabu search (IEA-PTS) for the MCP integrates the best features of earlier successful approaches with a number of new heuristics that we developed to yield a performance that advances the state of the art in EAs for the exploration of the maximum cliques in a graph. Results of experimentation with the 37 DIMACS benchmark graphs and comparative analyses with six state-of-the-art algorithms, including two from the smaller EA community and four from the larger metaheuristics community, indicate that the IEA-PTS outperforms the EAs with respect to a Pareto-lexicographic ranking criterion and offers competitive performance on some graph instances when individually compared to the other heuristic algorithms. It has also successfully set a new benchmark on one graph instance. On another benchmark suite called Benchmarks with Hidden Optimal Solutions, IEA-PTS ranks second, after a very recent algorithm called COVER, among its peers that have experimented with this suite.
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
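The QUBO target form of the mapping can be illustrated with a tiny classical evaluator. This sketch only shows the objective x^T Q x over binary vectors and an exhaustive ground-state search for verification; it does not reproduce the paper's QCMDO-to-QUBO construction:

```python
from itertools import product

def qubo_energy(Q, x):
    """Objective x^T Q x for a binary vector x and a square matrix Q."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def qubo_brute_force(Q):
    """Enumerate all 2^n binary vectors; classical ground truth for tiny QUBOs."""
    n = len(Q)
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))
```

On hardware, the same Q would instead be mapped (via a minor embedding) onto the annealer's coupling graph; the brute-force enumerator here is only a correctness check for small instances.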
Parameter meta-optimization of metaheuristics for solving a specific NP-hard facility location problem
NASA Astrophysics Data System (ADS)
Skakov, E. S.; Malysh, V. N.
2018-03-01
The aim of this work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics for solving the NP-hard facility location problem. A system analysis of the tuning process of optimization algorithm parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. An evolutionary metaheuristic has been chosen to perform the task of meta-optimization. Thus, the approach proposed in this work can be called “meta-metaheuristic”. A computational experiment proving the effectiveness of the procedure for tuning the control parameters of metaheuristics has been performed.
A 16-bit Coherent Ising Machine for One-Dimensional Ring and Cubic Graph Problems
NASA Astrophysics Data System (ADS)
Takata, Kenta; Marandi, Alireza; Hamerly, Ryan; Haribara, Yoshitaka; Maruo, Daiki; Tamate, Shuhei; Sakaguchi, Hiromasa; Utsunomiya, Shoko; Yamamoto, Yoshihisa
2016-09-01
Many tasks in our modern life, such as planning efficient travel, image processing and optimizing integrated circuit design, are modeled as complex combinatorial optimization problems with binary variables. Such problems can be mapped to finding a ground state of the Ising Hamiltonian, and thus various physical systems have been studied to emulate and solve this Ising problem. Recently, networks of mutually injected optical oscillators, called coherent Ising machines, have been developed as promising solvers for the problem, benefiting from programmability, scalability and room-temperature operation. Here, we report a 16-bit coherent Ising machine based on a network of time-division-multiplexed femtosecond degenerate optical parametric oscillators. The system experimentally achieves success rates of more than 99.6% for one-dimensional Ising ring and nondeterministic polynomial-time (NP) hard instances. The experimental and numerical results indicate that gradual pumping of the network combined with multiple spectral and temporal modes of the femtosecond pulses can improve the computational performance of the Ising machine, offering a new path for tackling larger and more complex instances.
Genetic algorithm parameters tuning for resource-constrained project scheduling problem
NASA Astrophysics Data System (ADS)
Tian, Xingke; Yuan, Shengrui
2018-04-01
The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a certain optimization goal such as the shortest duration, the smallest cost or resource balance, it is required to arrange the start and finish of all tasks while satisfying the project's timing constraints and resource constraints. In theory, the problem is NP-hard, and models of it abound. Many combinatorial optimization problems are special cases of RCPSP, such as job shop scheduling and flow shop scheduling. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results. Many scholars have also studied improved genetic algorithms for the RCPSP, enabling it to be solved more efficiently and accurately. However, these studies do not optimize the selection of the main parameters of the genetic algorithm. Generally, an empirical method is used, but it cannot ensure optimal parameters. In this paper, we address this blind selection of parameters in the process of solving the RCPSP. We performed a sampling analysis, established a surrogate (proxy) model, and ultimately solved for the optimal parameters.
A mixed analog/digital chaotic neuro-computer system for quadratic assignment problems.
Horio, Yoshihiko; Ikeguchi, Tohru; Aihara, Kazuyuki
2005-01-01
We construct a mixed analog/digital chaotic neuro-computer prototype system for quadratic assignment problems (QAPs). The QAP is one of the difficult NP-hard problems, and includes several real-world applications. Chaotic neural networks have been used to solve combinatorial optimization problems through chaotic search dynamics, which efficiently searches optimal or near optimal solutions. However, preliminary experiments have shown that, although it obtained good feasible solutions, the Hopfield-type chaotic neuro-computer hardware system could not obtain the optimal solution of the QAP. Therefore, in the present study, we improve the system performance by adopting a solution construction method, which constructs a feasible solution using the analog internal state values of the chaotic neurons at each iteration. In order to include the construction method into our hardware, we install a multi-channel analog-to-digital conversion system to observe the internal states of the chaotic neurons. We show experimentally that a great improvement in the system performance over the original Hopfield-type chaotic neuro-computer is obtained. That is, we obtain the optimal solution for the size-10 QAP in less than 1000 iterations. In addition, we propose a guideline for parameter tuning of the chaotic neuro-computer system according to the observation of the internal states of several chaotic neurons in the network.
GALAXY: A new hybrid MOEA for the optimal design of Water Distribution Systems
NASA Astrophysics Data System (ADS)
Wang, Q.; Savić, D. A.; Kapelan, Z.
2017-03-01
A new hybrid optimizer, called genetically adaptive leaping algorithm for approximation and diversity (GALAXY), is proposed for dealing with the discrete, combinatorial, multiobjective design of Water Distribution Systems (WDSs), which is NP-hard and computationally intensive. The merit of GALAXY is its ability to alleviate to a great extent the parameterization issue and the high computational overhead. It follows the generational framework of Multiobjective Evolutionary Algorithms (MOEAs) and includes six search operators and several important strategies. These operators are selected based on their leaping ability in the objective space from the global and local search perspectives. These strategies steer the optimization and balance the exploration and exploitation aspects simultaneously. A highlighted feature of GALAXY lies in the fact that it eliminates the majority of parameters, thus being robust and easy to use. The comparative studies between GALAXY and three representative MOEAs on five benchmark WDS design problems confirm its competitiveness. GALAXY can identify better converged and distributed boundary solutions efficiently and consistently, indicating a much more balanced capability between global and local search. Moreover, its advantages over other MOEAs become more substantial as the complexity of the design problem increases.
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin
Many real-world combinatorial optimization problems from industrial engineering and operations research are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been increasing interest in imitating living beings to solve such hard combinatorial optimization problems. Simulating the natural evolutionary process of human beings results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this paper, we provide a comprehensive survey of the current state of the art in the use of EAs in manufacturing and logistics systems. To demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.
MGA trajectory planning with an ACO-inspired algorithm
NASA Astrophysics Data System (ADS)
Ceriotti, Matteo; Vasile, Massimiliano
2010-11-01
Given a set of celestial bodies, the problem of finding an optimal sequence of swing-bys, deep space manoeuvres (DSM) and transfer arcs connecting the elements of the set is combinatorial in nature. The number of possible paths grows exponentially with the number of celestial bodies. Therefore, the design of an optimal multiple gravity assist (MGA) trajectory is an NP-hard mixed combinatorial-continuous problem. Its automated solution would greatly improve the design of future space missions, allowing the assessment of a large number of alternative mission options in a short time. This work proposes to formulate the complete automated design of a multiple gravity assist trajectory as an autonomous planning and scheduling problem. The resulting scheduled plan will provide the optimal planetary sequence and a good estimation of the set of associated optimal trajectories. The trajectory model consists of a sequence of celestial bodies connected by two-dimensional transfer arcs containing one DSM. For each transfer arc, the position of the planet and the spacecraft, at the time of arrival, are matched by varying the pericentre of the preceding swing-by, or the magnitude of the launch excess velocity, for the first arc. For each departure date, this model generates a full tree of possible transfers from the departure to the destination planet. Each leaf of the tree represents a planetary encounter and a possible way to reach that planet. An algorithm inspired by ant colony optimization (ACO) is devised to explore the space of possible plans. The ants explore the tree from departure to destination adding one node at a time: every time an ant is at a node, a probability function is used to select a feasible direction. This approach to automatic trajectory planning is applied to the design of optimal transfers to Saturn and among the Galilean moons of Jupiter. Solutions are compared to those found through more traditional genetic-algorithm techniques.
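The pheromone-guided selection step described above, where an ant at a node picks a feasible direction via a probability function, can be sketched with the standard ACO transition rule. The function `choose_next` and the `alpha`/`beta` weighting are illustrative assumptions, not the authors' exact rule:

```python
import random

def choose_next(pheromone, heuristic, feasible, alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel selection: pick a feasible node with probability
    proportional to pheromone^alpha * heuristic^beta (standard ACO rule)."""
    weights = [(pheromone[j] ** alpha) * (heuristic[j] ** beta) for j in feasible]
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for j, w in zip(feasible, weights):
        acc += w
        if r <= acc:
            return j
    return feasible[-1]  # numerical safety fallback
```

In the trajectory-planning setting, `heuristic` would encode problem knowledge such as transfer feasibility or cost, while `pheromone` accumulates on branches of the transfer tree that previous ants found promising.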
Automatically Generated Algorithms for the Vertex Coloring Problem
Contreras Bolton, Carlos; Gatica, Gustavo; Parada, Víctor
2013-01-01
The vertex coloring problem is a classical problem in combinatorial optimization that consists of assigning a color to each vertex of a graph such that no adjacent vertices share the same color, minimizing the number of colors used. Despite the various practical applications that exist for this problem, its NP-hardness still represents a computational challenge. Some of the best computational results obtained for this problem are consequences of hybridizing the various known heuristics. Automatically exploring the space constituted by combinations of these techniques to find the most adequate combination has received less attention. In this paper, we propose exploring the heuristics space for the vertex coloring problem using evolutionary algorithms. We automatically generate three new algorithms by combining elementary heuristics. To evaluate the new algorithms, a computational experiment was performed that allowed comparing them numerically with existing heuristics. The obtained algorithms present an average 29.97% relative error, while four other heuristics selected from the literature present a 59.73% error, considering 29 of the more difficult instances in the DIMACS benchmark. PMID:23516506
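One of the elementary heuristics that such generated algorithms typically combine is classical greedy (first-fit) coloring, sketched below. The `order` parameter, the vertex visiting sequence, is precisely where different elementary heuristics (e.g., largest-degree-first) would differ:

```python
def greedy_coloring(adj, order=None):
    """Assign each vertex the smallest color not used by an already-colored
    neighbor. adj is an adjacency list; returns a vertex -> color dict."""
    order = order if order is not None else range(len(adj))
    colors = {}
    for v in order:
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors
```

Greedy coloring gives a valid coloring in any order, but the number of colors used depends heavily on the order, which is why ordering heuristics are worth searching over.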
Boyen, Peter; Van Dyck, Dries; Neven, Frank; van Ham, Roeland C H J; van Dijk, Aalt D J
2011-01-01
Correlated motif mining (CMM) is the problem of finding overrepresented pairs of patterns, called motifs, in sequences of interacting proteins. Algorithmic solutions for CMM thereby provide a computational method for predicting binding sites for protein interaction. In this paper, we adopt a motif-driven approach where the support of candidate motif pairs is evaluated in the network. We experimentally establish the superiority of the Chi-square-based support measure over other support measures. Furthermore, we show that CMM is an NP-hard problem for a large class of support measures (including Chi-square) and reformulate the search for correlated motifs as a combinatorial optimization problem. We then present the generic metaheuristic Slider, which uses steepest ascent with a neighborhood function based on sliding motifs and employs the Chi-square-based support measure. We show that Slider outperforms existing motif-driven CMM methods and scales to large protein-protein interaction networks. The Slider implementation and the data used in the experiments are available on http://bioinformatics.uhasselt.be.
Approximability of the d-dimensional Euclidean capacitated vehicle routing problem
NASA Astrophysics Data System (ADS)
Khachay, Michael; Dubinin, Roman
2016-10-01
The Capacitated Vehicle Routing Problem (CVRP) is a well-known intractable combinatorial optimization problem, which remains NP-hard even in the Euclidean plane. Since the introduction of this problem in the middle of the 20th century, many researchers have been involved in the study of its approximability. Most of the results obtained in this field are based on the well-known Iterated Tour Partition heuristic proposed by M. Haimovich and A. Rinnooy Kan in their celebrated paper, where they constructed the first Polynomial Time Approximation Scheme (PTAS) for the single-depot CVRP in ℝ2. For decades, this result was extended by many authors to numerous useful modifications of the problem taking into account multiple depots, pick-up and delivery options, time window restrictions, etc. But, to the best of our knowledge, almost none of these results go beyond the Euclidean plane. In this paper, we try to bridge this gap and propose an EPTAS for the Euclidean CVRP for any fixed dimension.
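The Iterated Tour Partition idea, splitting a single giant TSP tour into capacity-respecting segments each served from the depot, can be sketched as below. This fixed-start version omits the optimization over all rotations of the split point that the actual heuristic performs, so it is an upper bound on what ITP would return:

```python
import math

def dist(a, b):
    """Euclidean distance between two points in the plane."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def iterated_tour_partition(depot, tour, capacity):
    """Split a giant tour into consecutive segments of <= capacity customers;
    each segment becomes one depot-anchored route. Returns (routes, length)."""
    routes = [tour[i:i + capacity] for i in range(0, len(tour), capacity)]
    total = 0.0
    for r in routes:
        legs = [depot] + list(r) + [depot]
        total += sum(dist(legs[i], legs[i + 1]) for i in range(len(legs) - 1))
    return routes, total
```

The analysis of Haimovich and Rinnooy Kan bounds the cost of the best such partition in terms of the giant-tour length and the customers' distances to the depot.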
Combining local search with co-evolution in a remarkably simple way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.
2000-05-01
The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene-pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
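Extremal optimization as described, always replacing the extremely undesirable element of a single solution, can be sketched on a small Ising system. This is the basic parameter-free variant (τ-EO instead ranks elements and picks probabilistically); a symmetric coupling matrix J is assumed, and this is not the authors' exact implementation:

```python
import random

def extremal_optimization(J, steps=500, seed=0):
    """Basic EO on an Ising energy E = -sum_{i<j} J[i][j] s_i s_j:
    always flip the least-fit spin, keeping the best state seen."""
    rng = random.Random(seed)
    n = len(J)
    s = [rng.choice((-1, 1)) for _ in range(n)]

    def energy(state):
        return -sum(J[i][j] * state[i] * state[j]
                    for i in range(n) for j in range(i + 1, n))

    def local_fitness(i, state):
        # High when spin i agrees with its local field.
        return state[i] * sum(J[i][j] * state[j] for j in range(n) if j != i)

    best, best_e = list(s), energy(s)
    for _ in range(steps):
        worst = min(range(n), key=lambda i: local_fitness(i, s))
        s[worst] = -s[worst]  # unconditional move: large fluctuations allowed
        e = energy(s)
        if e < best_e:
            best, best_e = list(s), e
    return best, best_e
```

Because the worst element is flipped unconditionally, the dynamics never freeze in a local optimum; the best configuration encountered is recorded on the side, which is the avalanche-driven exploration the abstract describes.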
Towards a theory of automated elliptic mesh generation
NASA Technical Reports Server (NTRS)
Cordova, J. Q.
1992-01-01
The theory of elliptic mesh generation is reviewed and the fundamental problem of constructing computational space is discussed. It is argued that the construction of computational space is an NP-Complete problem and therefore requires a nonstandard approach for its solution. This leads to the development of graph-theoretic, combinatorial optimization and integer programming algorithms. Methods for the construction of two dimensional computational space are presented.
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equally sized sub-graphs while minimizing the number of edges cut, i.e. the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem. It has been shown to be NP-hard, and thus it is important to design an efficient heuristic algorithm to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is modeled using a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. The results of simulations show that the proposed algorithm outperforms the standard sequential genetic algorithm both in terms of solution quality and convergence speed. As a direction for future study, this research can be extended to incorporate local search operators that include problem-specific knowledge. In addition, the adaptive configuration of mutation and crossover rates is another direction for future research.
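A typical GA fitness for min-cut balanced partitioning combines the cut size with an imbalance penalty. The sketch below (two-way case) is illustrative only; the weight `alpha` is our assumption, not the paper's setting:

```python
def cut_size(edges, side):
    """Number of edges crossing the two-block partition; side[v] is 0 or 1."""
    return sum(1 for u, v in edges if side[u] != side[v])

def balance_penalty(side):
    """Absolute size difference between the two blocks."""
    ones = sum(side)
    return abs(len(side) - 2 * ones)

def fitness(edges, side, alpha=1.0):
    """Cut size plus a weighted imbalance penalty, to be minimized by the GA."""
    return cut_size(edges, side) + alpha * balance_penalty(side)
```

A chromosome is then simply the `side` bit-vector, and crossover/mutation operate on it directly, with `fitness` driving selection in each island.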
NASA Astrophysics Data System (ADS)
Kunze, Herb; La Torre, Davide; Lin, Jianyi
2017-01-01
We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the Lp distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing the minimization of the 1-norm instead of the 0-norm. In fact, optimization problems involving the 0-norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the 1-norm and propose a Sequential Quadratic Programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria: collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.
Optical solver of combinatorial problems: nanotechnological approach.
Cohen, Eyal; Dolev, Shlomi; Frenkel, Sergey; Kryzhanovsky, Boris; Palagushkin, Alexandr; Rosenblit, Michael; Zakharov, Victor
2013-09-01
We present an optical computing system to solve NP-hard problems. As nano-optical computing is a promising avenue for the next generation of computers performing parallel computations, we investigate the application of submicron, or even subwavelength, computing device designs. The system utilizes a setup of exponential-sized masks with exponential space complexity, produced in polynomial-time preprocessing. The masks are later used to solve the problem in polynomial time. The size of the masks is reduced to nanoscale density. Simulations were done to choose a proper design, and actual implementations show the feasibility of such a system.
Monkey search algorithm for ECE components partitioning
NASA Astrophysics Data System (ADS)
Kuliev, Elmar; Kureichik, Vladimir; Kureichik, Vladimir, Jr.
2018-05-01
The paper considers one of the important design problems: the partitioning of electronic computer equipment (ECE) components (blocks). It belongs to the NP-hard class of problems and has a combinatorial and logic nature. In the paper, the partitioning problem is formulated as the partition of a graph into parts. To solve the given problem, the authors suggest using a bioinspired approach based on a monkey search algorithm. Based on the developed software, computational experiments were carried out that show the algorithm's efficiency, as well as its recommended settings for obtaining more effective solutions in comparison with a genetic algorithm.
On the complexity and approximability of some Euclidean optimal summing problems
NASA Astrophysics Data System (ADS)
Eremeev, A. V.; Kel'manov, A. V.; Pyatkin, A. V.
2016-10-01
The complexity status of several well-known discrete optimization problems with the direction of optimization switching from maximum to minimum is analyzed. The task is to find a subset of a finite set of Euclidean points (vectors). In these problems, the objective functions depend either only on the norm of the sum of the elements from the subset or on this norm and the cardinality of the subset. It is proved that, if the dimension of the space is a part of the input, then all these problems are strongly NP-hard. Additionally, it is shown that, if the space dimension is fixed, then all the problems are NP-hard even for dimension 2 (on a plane) and there are no approximation algorithms with a guaranteed accuracy bound for them unless P = NP. It is shown that, if the coordinates of the input points are integer, then all the problems can be solved in pseudopolynomial time in the case of a fixed space dimension.
Learning optimal quantum models is NP-hard
NASA Astrophysics Data System (ADS)
Stark, Cyril J.
2018-02-01
Physical modeling translates measured data into a physical model. Physical modeling is a major objective in physics and is generally regarded as a creative process. How good are computers at solving this task? Here, we show that in the absence of physical heuristics, the inference of optimal quantum models cannot be computed efficiently (unless P = NP). This result illuminates rigorous limits to the extent to which computers can be used to further our understanding of nature.
New Hardness Results for Diophantine Approximation
NASA Astrophysics Data System (ADS)
Eisenbrand, Friedrich; Rothvoß, Thomas
We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ε and a denominator bound N ∈ ℕ_+. One has to decide whether there exists an integer, called the denominator, Q with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ε. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ_+ such that the distances of Q·α_i to the nearest integer are bounded by ε is hard to approximate within a factor 2^n unless P = NP.
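The decision version above is easy to state directly in code. The brute-force search below makes the definitions concrete; it is exponential in the bit size of N, which is consistent with the hardness results. The function name and the float-based distance test are illustrative choices.

```python
def best_denominator(alpha, eps, n_max):
    """Smallest denominator Q in [1, n_max] such that every Q*alpha_i lies
    within eps of an integer, or None if no such Q exists. Runtime is
    O(n_max * len(alpha)): exponential in the bit size of n_max."""
    for q in range(1, n_max + 1):
        # distance of q*a to its nearest integer
        if all(abs(q * a - round(q * a)) <= eps for a in alpha):
            return q
    return None
```

For example, for α = (1/4, 1/2) and a tiny ε, the smallest common denominator is 4.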
Fabrication of natural pumice/hydroxyapatite composite for biomedical engineering.
Komur, Baran; Lohse, Tim; Can, Hatice Merve; Khalilova, Gulnar; Geçimli, Zeynep Nur; Aydoğdu, Mehmet Onur; Kalkandelen, Cevriye; Stan, George E; Sahin, Yesim Muge; Sengil, Ahmed Zeki; Suleymanoglu, Mediha; Kuruca, Serap Erdem; Oktar, Faik Nuzhet; Salman, Serdar; Ekren, Nazmi; Ficai, Anton; Gunduz, Oguzhan
2016-07-07
We evaluated the bovine hydroxyapatite (BHA) structure. BHA powder was admixed with 5 and 10 wt% natural pumice (NP). Compression strength, Vickers microhardness, Fourier transform infrared spectroscopy, scanning electron microscopy (SEM) and X-ray diffraction studies were performed on the final NP-BHA composite products. Cell proliferation was investigated by MTT assay and SEM. Furthermore, the antimicrobial activity of NP-BHA samples was interrogated. Varying the sintering temperature (for 5 wt% NP composites) between 1000 and 1300 °C reveals about a 700% increase in microhardness (~100 and 775 HV, respectively). Composites prepared at 1300 °C with 5 wt% NP content demonstrate the greatest compression strength (87 MPa), significantly better than those with 10 wt% NP and those without any NP (below 60 MPa). The results suggested the optimal parameters for the preparation of NP-BHA composites with increased mechanical properties and biocompatibility. Changes in microhardness and compression strength can be tailored by tuning the NP concentration and sintering temperature. NP-BHA composites have demonstrated a remarkable potential for biomedical engineering applications such as bone grafts and implants.
AI techniques for a space application scheduling problem
NASA Technical Reports Server (NTRS)
Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.
1991-01-01
Scheduling is a very complex optimization problem which can be categorized as an NP-complete problem. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. This paper examines some of the factors which make space application scheduling problems difficult and presents a fairly new AI-based technique called tabu search as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on-board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (EOS).
Coelho, V N; Coelho, I M; Souza, M J F; Oliveira, T A; Cota, L P; Haddad, M N; Mladenovic, N; Silva, R C P; Guimarães, F G
2016-01-01
This article presents an Evolution Strategy (ES)-based algorithm designed to self-adapt its mutation operators, guiding the search into the solution space using a Self-Adaptive Reduced Variable Neighborhood Search procedure. In view of the specific local search operators for each individual, the proposed population-based approach also fits into the context of Memetic Algorithms. The proposed variant uses the Greedy Randomized Adaptive Search Procedure with different greedy parameters for generating its initial population, providing an interesting exploration-exploitation balance. To validate the proposal, this framework is applied to solve three different NP-hard combinatorial optimization problems: an Open-Pit-Mining Operational Planning Problem with dynamic allocation of trucks, an Unrelated Parallel Machine Scheduling Problem with Setup Times, and the calibration of a hybrid fuzzy model for Short-Term Load Forecasting. Computational results point out the convergence of the proposed model and highlight its ability to combine the application of move operations from distinct neighborhood structures during the optimization. The results gathered and reported in this article represent collective evidence of the performance of the method on challenging combinatorial optimization problems from different application domains. The proposed evolution strategy demonstrates an ability to adapt the strength of the mutation disturbance during the generations of its evolution process. The effectiveness of the proposal motivates the application of this novel evolutionary framework to other combinatorial optimization problems.
An evolutionary strategy based on partial imitation for solving optimization problems
NASA Astrophysics Data System (ADS)
Javarone, Marco Alberto
2016-12-01
In this work we introduce an evolutionary strategy to solve combinatorial optimization tasks, i.e. problems characterized by a discrete search space. In particular, we focus on the Traveling Salesman Problem (TSP), a famous NP-hard problem whose search space grows exponentially with the number of cities. Solutions of the TSP can be encoded as arrays of cities and evaluated by a fitness computed according to a cost function (e.g. the length of a path). Our method is based on the evolution of an agent population by means of an imitative mechanism we call 'partial imitation'. In particular, agents receive a random solution and then, interacting among themselves, may imitate the solutions of agents with a higher fitness. Since the imitation mechanism is only partial, agents copy only one entry (randomly chosen) of another array (i.e. solution). In doing so, the population converges towards a shared solution, behaving like a spin system undergoing a cooling process, i.e. driven towards an ordered phase. We highlight that the adopted 'partial imitation' mechanism allows the population to generate new solutions over time, before reaching the final equilibrium. Results of numerical simulations show that our method is able to find, in a finite time, both optimal and suboptimal solutions, depending on the size of the considered search space.
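Copying a single entry of another tour must be repaired to keep a valid permutation; one plausible reading is a swap-based imitation, sketched below. The instance, population size, and step count are illustrative assumptions, not the paper's setup.

```python
import random

def tour_length(tour, dist):
    """Closed-tour length under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def partial_imitate(agent, model):
    """Copy one randomly chosen entry of the fitter agent's tour, swapping
    the displaced city so the result stays a valid permutation."""
    t, k = agent[:], random.randrange(len(agent))
    j = t.index(model[k])
    t[k], t[j] = t[j], t[k]
    return t

def evolve(dist, agents=20, steps=2000):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(agents)]
    for _ in range(steps):
        a, b = random.sample(range(agents), 2)
        if tour_length(pop[b], dist) < tour_length(pop[a], dist):
            pop[a] = partial_imitate(pop[a], pop[b])  # a imitates fitter b
        else:
            pop[b] = partial_imitate(pop[b], pop[a])
    return min(pop, key=lambda t: tour_length(t, dist))
```

Because only the loser of each pairwise comparison changes, the best tour in the population never degrades, mirroring the cooling-like convergence described in the abstract.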
Finding Minimal Addition Chains with a Particle Swarm Optimization Algorithm
NASA Astrophysics Data System (ADS)
León-Javier, Alejandro; Cruz-Cortés, Nareli; Moreno-Armendáriz, Marco A.; Orantes-Jiménez, Sandra
Addition chains of minimal length are the basic building block for the optimal computation of finite field exponentiations, with very important applications in the areas of error-correcting codes and cryptography. However, obtaining the shortest addition chain for a given exponent is an NP-hard problem. In this work we propose the adaptation of a Particle Swarm Optimization algorithm to deal with this problem. Our proposal is tested on several exponents whose addition chains are considered hard to find. We obtained very promising results.
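The chain property itself is easy to state in code. The checker below, together with the classical binary (square-and-multiply) method as a non-optimal baseline, is an illustrative sketch; the PSO search itself is not reproduced here.

```python
def is_addition_chain(chain, target):
    """An addition chain starts at 1, ends at the target, and each later
    element is the sum of two (not necessarily distinct) earlier ones."""
    if chain[0] != 1 or chain[-1] != target:
        return False
    return all(any(chain[i] == a + b
                   for a in chain[:i] for b in chain[:i])
               for i in range(1, len(chain)))

def binary_chain(e):
    """Addition chain from the binary (square-and-multiply) method --
    generally longer than the optimal chains a heuristic search targets."""
    chain = [1]
    for bit in bin(e)[3:]:
        chain.append(chain[-1] * 2)      # squaring step
        if bit == '1':
            chain.append(chain[-1] + 1)  # multiply step
    return chain
```

For e = 15 the binary method yields the 6-step chain 1, 2, 3, 6, 7, 14, 15, while a 5-step chain such as 1, 2, 3, 6, 12, 15 also exists, illustrating the gap the optimization tries to close.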
Fuel management optimization using genetic algorithms and code independence
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1994-12-31
Fuel management optimization is a hard problem for traditional optimization techniques. Loading pattern optimization is a large combinatorial problem without analytical derivative information. Therefore, methods designed for continuous functions, such as linear programming, do not always work well. Genetic algorithms (GAs) address these problems and, therefore, appear ideal for fuel management optimization. They do not require derivative information and work well with combinatorial functions. GAs are a stochastic method based on concepts from biological genetics. They take a group of candidate solutions, called the population, and use selection, crossover, and mutation operators to create the next generation of better solutions. The selection operator is a "survival-of-the-fittest" operation that chooses the solutions for the next generation. The crossover operator is analogous to biological mating, where children inherit a mixture of traits from their parents, and the mutation operator makes small random changes to the solutions.
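The selection, crossover, and mutation operators described above can be sketched generically. Since a reactor-physics fitness is not available here, the standard OneMax toy fitness stands in, and all parameters are illustrative.

```python
import random

def one_max(bits):
    """Stand-in fitness; a real application would evaluate core physics."""
    return sum(bits)

def tournament(pop, fit, k=3):
    """Selection: the fittest of k randomly drawn individuals survives."""
    return max(random.sample(pop, k), key=fit)

def crossover(a, b):
    """One-point crossover: child inherits a prefix of a, suffix of b."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [b ^ (random.random() < rate) for b in bits]

def ga(n=30, pop_size=40, gens=60, fit=one_max):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop = [mutate(crossover(tournament(pop, fit), tournament(pop, fit)))
               for _ in range(pop_size)]
    return max(pop, key=fit)
```

The same loop applies to loading patterns once the bitstring encoding and fitness are replaced by problem-specific ones.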
QAPgrid: A Two Level QAP-Based Approach for Large-Scale Data Analysis and Visualization
Inostroza-Ponta, Mario; Berretta, Regina; Moscato, Pablo
2011-01-01
Background The visualization of large volumes of data is a computationally challenging task that often promises rewarding new insights. There is great potential in the application of new algorithms and models from combinatorial optimisation. Datasets often contain “hidden regularities” and a combined identification and visualization method should reveal these structures and present them in a way that helps analysis. While several methodologies exist, including those that use non-linear optimization algorithms, severe limitations exist even when working with only a few hundred objects. Methodology/Principal Findings We present a new data visualization approach (QAPgrid) that reveals patterns of similarities and differences in large datasets of objects for which a similarity measure can be computed. Objects are assigned to positions on an underlying square grid in a two-dimensional space. We use the Quadratic Assignment Problem (QAP) as a mathematical model to provide an objective function for assignment of objects to positions on the grid. We employ a Memetic Algorithm (a powerful metaheuristic) to tackle the large instances of this NP-hard combinatorial optimization problem, and we show its performance on the visualization of real data sets. Conclusions/Significance Overall, the results show that the QAPgrid algorithm is able to produce a layout that represents the relationships between objects in the data set. Furthermore, it also represents the relationships between clusters that are fed into the algorithm. We apply QAPgrid to the 84 Indo-European languages instance, producing a near-optimal layout. Next, we produce a layout of 470 world universities with an observed high degree of correlation with the score used by the Academic Ranking of World Universities compiled by Shanghai Jiao Tong University, without the need of an ad hoc weighting of attributes.
Finally, our Gene Ontology-based study on Saccharomyces cerevisiae fully demonstrates the scalability and precision of our method as a novel alternative tool for functional genomics. PMID:21267077
Exploiting Bounded Signal Flow for Graph Orientation Based on Cause-Effect Pairs
NASA Astrophysics Data System (ADS)
Dorn, Britta; Hüffner, Falk; Krüger, Dominikus; Niedermeier, Rolf; Uhlmann, Johannes
We consider the following problem: Given an undirected network and a set of sender-receiver pairs, direct all edges such that the maximum number of "signal flows" defined by the pairs can be routed respecting edge directions. This problem has applications in communication networks and in understanding protein interaction based cell regulation mechanisms. Since this problem is NP-hard, research so far concentrated on polynomial-time approximation algorithms and tractable special cases. We take the viewpoint of parameterized algorithmics and examine several parameters related to the maximum signal flow over vertices or edges. We provide several fixed-parameter tractability results, and in one case a sharp complexity dichotomy between a linear-time solvable case and a slightly more general NP-hard case. We examine the value of these parameters for several real-world network instances. For many relevant cases, the NP-hard problem can be solved to optimality. In this way, parameterized analysis yields both deeper insight into the computational complexity and practical solving strategies.
Optimization of lattice surgery is NP-hard
NASA Astrophysics Data System (ADS)
Herr, Daniel; Nori, Franco; Devitt, Simon J.
2017-09-01
The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.
Adham, Manal T; Bentley, Peter J
2016-08-01
This paper proposes and evaluates a solution to the truck redistribution problem prominent in London's Santander Cycle scheme. Due to the complexity of this NP-hard combinatorial optimisation problem, no efficient optimisation techniques are known to solve the problem exactly. This motivates our use of the heuristic Artificial Ecosystem Algorithm (AEA) to find good solutions in a reasonable amount of time. The AEA is designed to take advantage of highly distributed computer architectures and adapt to changing problems. In the AEA a problem is first decomposed into its relative sub-components; they then evolve solution building blocks that fit together to form a single optimal solution. Three variants of the AEA centred on evaluating clustering methods are presented: the baseline AEA, the community-based AEA which groups stations according to journey flows, and the Adaptive AEA which actively modifies clusters to cater for changes in demand. We applied these AEA variants to the redistribution problem prominent in bike share schemes (BSS). The AEA variants are empirically evaluated using historical data from Santander Cycles to validate the proposed approach and prove its potential effectiveness.
A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling.
Li, Bin-Bin; Wang, Ling
2007-06-01
This paper proposes a hybrid quantum-inspired genetic algorithm (HQGA) for the multiobjective flow shop scheduling problem (FSSP), which is a typical NP-hard combinatorial optimization problem with a strong engineering background. On the one hand, a quantum-inspired GA (QGA) based on Q-bit representation is applied for exploration in the discrete 0-1 hyperspace by using the updating operator of the quantum gate and genetic operators of the Q-bit. Moreover, random-key representation is used to convert the Q-bit representation to a job permutation for evaluating the objective values of the schedule solution. On the other hand, a permutation-based GA (PGA) is applied both for performing exploration in the permutation-based scheduling space and for stressing exploitation of good schedule solutions. To evaluate solutions in the multiobjective sense, a randomly weighted linear-sum function is used in the QGA, and a nondominated sorting technique including classification of Pareto fronts and fitness assignment is applied in the PGA with regard to both proximity and diversity of solutions. To maintain the diversity of the population, two trimming techniques for the population are proposed. The proposed HQGA is tested on some multiobjective FSSPs. Simulation results and comparisons based on several performance metrics demonstrate the effectiveness of the proposed HQGA.
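Three of the ingredients named above admit compact sketches: Q-bit amplitudes that collapse probabilistically, a rotation gate that shifts amplitude toward better bit values, and random-key decoding of bits to a job permutation. The grouping of bits into keys below is an assumption for illustration, not the paper's exact encoding.

```python
import math, random

def observe(qbits):
    """Collapse each Q-bit (alpha, beta) to 0/1 with P(1) = beta**2."""
    return [1 if random.random() < b * b else 0 for _, b in qbits]

def rotate(qbit, dtheta):
    """Quantum rotation gate: shift amplitude by angle dtheta."""
    a, b = qbit
    c, s = math.cos(dtheta), math.sin(dtheta)
    return (c * a - s * b, s * a + c * b)

def bits_to_permutation(bits, group=4):
    """Random-key style decoding: read each group of bits as an integer key
    and sort job indices by key (ties broken by index)."""
    keys = [int(''.join(map(str, bits[i:i + group])), 2)
            for i in range(0, len(bits), group)]
    return sorted(range(len(keys)), key=lambda j: (keys[j], j))
```

Rotating (1, 0) by π/2 yields (0, 1), i.e. a Q-bit driven from "certainly 0" to "certainly 1"; the decoder turns any observed bitstring into a valid job order.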
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, H.B. III; Rosenkrantz, D.J.; Stearns, R.E.
We study both the complexity and approximability of various graph and combinatorial problems specified using two-dimensional narrow periodic specifications (see [CM93, HW92, KMW67, KO91, Or84b, Wa93]). The following two general kinds of results are presented. (1) We prove that a number of natural graph and combinatorial problems are NEXPTIME- or EXPSPACE-complete when instances are so specified. (2) In contrast, we prove that the optimization versions of several of these NEXPTIME- or EXPSPACE-complete problems have polynomial time approximation algorithms with constant performance guarantees. Moreover, some of these problems even have polynomial time approximation schemes. We also sketch how our NEXPTIME-hardness results can be used to prove analogous NEXPTIME-hardness results for problems specified using other kinds of succinct specification languages. Our results provide the first natural problems for which there is a proven exponential (and possibly doubly exponential) gap between the complexities of finding exact and approximate solutions.
Robust quantum optimizer with full connectivity.
Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P
2017-04-01
Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.
OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. BOETTCHER; A. PERCUS
2000-08-01
We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to genetic algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Such phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
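A minimal τ-EO sketch on graph-coloring conflicts, used here as an illustrative stand-in problem: each vertex gets a local fitness, vertices are ranked from least to most fit, and a power-law over ranks picks which one to replace at random. The value of τ, the step count, and the instance are assumptions.

```python
import random

def local_fit(adj, colors, v):
    """Local (un)fitness: number of v's edges currently in conflict."""
    return sum(colors[u] == colors[v] for u in adj[v])

def total_conflicts(adj, colors):
    return sum(local_fit(adj, colors, v) for v in range(len(adj))) // 2

def extremal_opt(adj, k=2, tau=1.4, steps=3000):
    """tau-EO: replace a power-law-rank-selected unfit vertex's color at
    random each step (even uphill), tracking the best state seen."""
    n = len(adj)
    colors = [random.randrange(k) for _ in range(n)]
    best, best_cost = colors[:], total_conflicts(adj, colors)
    for _ in range(steps):
        order = sorted(range(n), key=lambda v: -local_fit(adj, colors, v))
        u = 1.0 - random.random()                   # u in (0, 1]
        r = min(n, int(u ** (-1.0 / (tau - 1.0))))  # P(rank r) ~ r**-tau
        v = order[r - 1]
        colors[v] = random.choice([c for c in range(k) if c != colors[v]])
        cost = total_conflicts(adj, colors)
        if cost < best_cost:
            best, best_cost = colors[:], cost
    return best, best_cost
```

Unlike simulated annealing there is no acceptance test and no temperature; the single parameter τ controls how greedily the least-fit elements are targeted, and the resulting "avalanches" carry the search across local optima.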
Solforosi, Laura; Mancini, Nicasio; Canducci, Filippo; Clementi, Nicola; Sautto, Giuseppe Andrea; Diotti, Roberta Antonia; Clementi, Massimo; Burioni, Roberto
2012-07-01
A novel phagemid vector, named pCM, was optimized for the cloning and display of antibody fragment (Fab) libraries on the surface of filamentous phage. This vector contains two long DNA "stuffer" fragments for easier differentiation of the correctly cut forms of the vector. Moreover, in pCM the fragment at the heavy-chain cloning site contains an acid phosphatase-encoding gene allowing an easy distinction of the Escherichia coli cells containing the unmodified form of the phagemid versus the heavy-chain fragment coding cDNA. In pCM transcription of heavy-chain Fd/gene III and light chain is driven by a single lacZ promoter. The light chain is directed to the periplasm by the ompA signal peptide, whereas the heavy-chain Fd/coat protein III is trafficked by the pelB signal peptide. The phagemid pCM was used to generate a human combinatorial phage display antibody library that allowed the selection of a monoclonal Fab fragment antibody directed against the nucleoprotein (NP) of Influenza A virus.
Minimizing distortion and internal forces in truss structures by simulated annealing
NASA Technical Reports Server (NTRS)
Kincaid, Rex K.
1989-01-01
Inaccuracies in the length of members and the diameters of joints of large truss reflector backup structures may produce unacceptable levels of surface distortion and member forces. However, if the member lengths and joint diameters can be measured accurately, it is possible to configure the members and joints so that the root-mean-square (rms) surface error and/or rms member forces are minimized. Following Greene and Haftka (1989), it is assumed that the force vector f is linearly proportional to the member length errors e_M of dimension NMEMB (the number of members) and joint errors e_J of dimension NJOINT (the number of joints), and that the best-fit displacement vector d is a linear function of f. Let NNODES denote the number of positions on the surface of the truss where error influences are measured. The solution of the problem is discussed. To classify this problem, it was compared to a similar combinatorial optimization problem. In particular, when only the member length errors are considered, minimizing d²_rms is equivalent to the quadratic assignment problem. The quadratic assignment problem is a well-known NP-complete problem in the operations research literature. Hence minimizing d²_rms is also an NP-complete problem. The focus of the research is the development of a simulated annealing algorithm to reduce d²_rms. The plausibility of this technique lies in its recent success on a variety of NP-complete combinatorial optimization problems, including the quadratic assignment problem. A physical analogy for simulated annealing is the way liquids freeze and crystallize. All computational experiments were done on a MicroVAX. The two-interchange heuristic is very fast but produces widely varying results. The two- and three-interchange heuristic provides less variability in the final objective function values but runs much more slowly.
Simulated annealing produced the best objective function values for every starting configuration and was faster than the two and three interchange heuristic.
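A generic simulated annealing loop with pairwise-interchange moves, applied to a tiny quadratic assignment instance; the data, cooling schedule, and parameters are illustrative, not the truss model itself.

```python
import math, random

def qap_cost(perm, flow, dist):
    """Quadratic assignment cost: sum of flow[i][j] * dist[p(i)][p(j)]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def simulated_annealing(flow, dist, t0=10.0, cooling=0.995, steps=4000):
    """SA with two-interchange (swap) moves; worse moves are accepted with
    probability exp(-delta / T), and T decays geometrically."""
    n = len(flow)
    perm = list(range(n))
    random.shuffle(perm)
    cost = qap_cost(perm, flow, dist)
    best, best_cost, t = perm[:], cost, t0
    for _ in range(steps):
        i, j = random.sample(range(n), 2)
        cand = perm[:]
        cand[i], cand[j] = cand[j], cand[i]
        c = qap_cost(cand, flow, dist)
        if c < cost or random.random() < math.exp((cost - c) / t):
            perm, cost = cand, c
            if cost < best_cost:
                best, best_cost = perm[:], cost
        t *= cooling
    return best, best_cost
```

The acceptance of uphill swaps at high temperature is what lets the method escape the local optima that trap the plain interchange heuristics.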
Landscape Encodings Enhance Optimization
Klemm, Konstantin; Mehta, Anita; Stadler, Peter F.
2012-01-01
Hard combinatorial optimization problems deal with the search for the minimum cost solutions (ground states) of discrete systems under strong constraints. A transformation of state variables may enhance computational tractability. It has been argued that these state encodings are to be chosen invertible to retain the original size of the state space. Here we show how redundant non-invertible encodings enhance optimization by enriching the density of low-energy states. In addition, smooth landscapes may be established on encoded state spaces to guide local search dynamics towards the ground state. PMID:22496860
Graphical models for optimal power flow
Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; ...
2016-09-13
Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for "smart grid" applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.
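The core dynamic-programming step, min-sum message passing over a tree with discretized nodal values, can be sketched in scalar form. The interval arithmetic, bound propagation, and power-flow physics are omitted; the cost functions and value grid below are illustrative assumptions.

```python
def tree_dp(children, node_cost, edge_cost, values, root=0):
    """Min-sum DP over a tree: msg[x] is the cheapest cost of a vertex's
    subtree given the vertex takes discretized value x; the root's minimum
    is the optimal total cost."""
    def solve(v):
        msg = [node_cost(v, x) for x in values]
        for c in children.get(v, []):
            sub = solve(c)  # child's subtree costs for each child value
            for xi, x in enumerate(values):
                msg[xi] += min(sub[yi] + edge_cost(v, c, x, y)
                               for yi, y in enumerate(values))
        return msg
    return min(solve(root))
```

On a 3-node chain with quadratic node costs toward targets (0, 2, 1) and absolute-difference edge costs over the grid {0, 1, 2}, the optimum assigns (0, 1, 1) for a total cost of 2, matching brute-force enumeration.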
An Effective Mechanism for Virtual Machine Placement using Aco in IAAS Cloud
NASA Astrophysics Data System (ADS)
Shenbaga Moorthy, Rajalakshmi; Fareentaj, U.; Divya, T. K.
2017-08-01
Cloud computing provides an effective way to dynamically provision numerous resources to meet customer demands. A major challenge for cloud providers is designing efficient mechanisms for optimal virtual machine placement (OVMP). Such mechanisms enable cloud providers to utilize their available resources effectively and obtain higher profits. In order to provide appropriate resources to clients, an optimal virtual machine placement algorithm is proposed. Virtual machine placement is an NP-hard problem, which can be tackled with heuristic algorithms. In this paper, Ant Colony Optimization (ACO)-based virtual machine placement is proposed. The proposed system focuses on minimizing the cost of each plan for hosting virtual machines in a multiple-cloud-provider environment; the response time of each cloud provider is monitored periodically so as to minimize delays in providing resources to users. The performance of the proposed algorithm is compared with a greedy mechanism. The proposed algorithm is simulated in the Eclipse IDE. The results clearly show that the proposed algorithm minimizes cost, response time and the number of migrations.
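As a hedged illustration of the general idea (not the paper's implementation), a minimal ant colony optimization for assigning VMs to providers by cost might look as follows; the cost matrix, parameters, and pheromone update rule are all illustrative assumptions:

```python
import random

def aco_place(cost, n_ants=20, n_iter=50, evap=0.5, seed=1):
    """Minimal ant colony optimization for VM placement: cost[v][p] is
    the cost of hosting VM v on provider p. Ants build assignments
    biased by pheromone and by 1/cost; the best-so-far assignment
    reinforces the pheromone trails each iteration."""
    rng = random.Random(seed)
    n_vms, n_prov = len(cost), len(cost[0])
    tau = [[1.0] * n_prov for _ in range(n_vms)]   # pheromone
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            assign = []
            for v in range(n_vms):
                w = [tau[v][p] / cost[v][p] for p in range(n_prov)]
                assign.append(rng.choices(range(n_prov), weights=w)[0])
            c = sum(cost[v][assign[v]] for v in range(n_vms))
            if c < best_cost:
                best, best_cost = assign, c
        for v in range(n_vms):                      # evaporate + deposit
            tau[v] = [t * evap for t in tau[v]]
            tau[v][best[v]] += 1.0 / best_cost
    return best, best_cost

cost = [[4, 2, 9], [3, 7, 1], [5, 5, 2]]           # 3 VMs, 3 providers
assignment, total = aco_place(cost)
print(assignment, total)
```

A real system would fold monitored response times and migration counts into the cost, as the abstract indicates.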
NASA Astrophysics Data System (ADS)
Kumar, Ravi; Singh, Surya Prakash
2017-11-01
The dynamic cellular facility layout problem (DCFLP) is a well-known NP-hard problem. It has been estimated that an efficient DCFLP design reduces the manufacturing cost of products by maintaining the minimum material flow among all machines in all cells, as material flow contributes around 10-30% of the total product cost. However, being NP-hard, solving the DCFLP optimally is very difficult in reasonable time. Therefore, this article proposes a novel similarity-score-based two-phase heuristic approach to solve the DCFLP, considering multiple products manufactured over multiple time periods in the manufacturing layout. In the first phase of the proposed heuristic, machine-cell clusters are created based on similarity scores between machines. These are provided as input to the second phase, which minimizes inter/intracell material handling costs and rearrangement costs over the entire planning period. The solution methodology of the proposed approach is demonstrated. To show the efficiency of the two-phase heuristic approach, 21 instances are generated and solved using the optimization software package LINGO. The results show that the proposed approach can solve the DCFLP optimally in reasonable time.
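The paper's exact similarity score is not given here; as an illustrative sketch of the first phase, one could take the Jaccard similarity between the product sets each machine processes and merge cells greedily. The threshold and single-linkage rule below are assumptions:

```python
def jaccard(a, b):
    """Similarity score between two machines = overlap of the
    product sets they process."""
    return len(a & b) / len(a | b)

def cluster_machines(routes, threshold=0.4):
    """Greedy phase-1 clustering: start with one cell per machine and
    repeatedly merge the most similar pair of cells while their
    (single-linkage) similarity exceeds the threshold.
    routes: {machine: set(products)}."""
    cells = [{m} for m in routes]
    while True:
        best = None
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                s = max(jaccard(routes[a], routes[b])
                        for a in cells[i] for b in cells[j])
                if s > threshold and (best is None or s > best[0]):
                    best = (s, i, j)
        if best is None:
            return cells
        _, i, j = best
        cells[i] |= cells.pop(j)

routes = {"M1": {"P1", "P2"}, "M2": {"P1", "P2", "P3"},
          "M3": {"P4"}, "M4": {"P4", "P5"}}
print(cluster_machines(routes))  # two cells: {M1, M2} and {M3, M4}
```

The resulting machine-cell clusters would then feed the second phase, which optimizes handling and rearrangement costs over the planning horizon.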
Cascaded Optimization for a Persistent Data Ferrying Unmanned Aircraft
NASA Astrophysics Data System (ADS)
Carfang, Anthony
This dissertation develops and assesses a cascaded method for designing optimal periodic trajectories and link schedules for an unmanned aircraft to ferry data between stationary ground nodes. This results in a fast solution method without the need to artificially constrain system dynamics. Focusing on a fundamental ferrying problem that involves one source and one destination, but includes complex vehicle and Radio-Frequency (RF) dynamics, a cascaded structure to the system dynamics is uncovered. This structure is exploited by reformulating the nonlinear optimization problem into one that reduces the independent control to the vehicle's motion, while the link scheduling control is folded into the objective function and implemented as an optimal policy that depends on candidate motion control. This formulation is proven to maintain optimality while reducing computation time in comparison to traditional ferry optimization methods. The discrete link scheduling problem takes the form of a combinatorial optimization problem that is known to be NP-Hard. A derived necessary condition for optimality guides the development of several heuristic algorithms, specifically the Most-Data-First Algorithm and the Knapsack Adaptation. These heuristics are extended to larger ferrying scenarios, and assessed analytically and through Monte Carlo simulation, showing better throughput performance in the same order of magnitude of computation time in comparison to other common link scheduling policies. The cascaded optimization method is implemented with a novel embedded software system on a small, unmanned aircraft to validate the simulation results with field experiments. To address the sensitivity of results on trajectory tracking performance, a system that combines motion and link control with waypoint-based navigation is developed and assessed through field experiments. 
The data ferrying algorithms are further extended by incorporating a Gaussian process to opportunistically learn the RF environment. By continuously improving RF models, the cascaded planner can continually improve the ferrying system's overall performance.
Analysis of tasks for dynamic man/machine load balancing in advanced helicopters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jorgensen, C.C.
1987-10-01
This report considers task allocation requirements imposed by advanced helicopter designs incorporating mixes of human pilots and intelligent machines. Specifically, it develops an analogy between load balancing on distributed non-homogeneous multiprocessors and human team functions. A taxonomy is presented that can be used to identify task combinations likely to cause overload for dynamic scheduling and process allocation mechanisms. Design criteria are given for function decomposition, separation of control from data, and communication handling for dynamic tasks. Possible effects of NP-complete scheduling problems are noted and a class of combinatorial optimization methods is examined.
Robust quantum optimizer with full connectivity
Nigg, Simon E.; Lörch, Niels; Tiwari, Rakesh P.
2017-01-01
Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation. PMID:28435880
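For context, the number partitioning problem mentioned above also admits a classical heuristic, the Karmarkar-Karp largest-differencing method. The sketch below is a standard textbook version, unrelated to the quantum architecture itself:

```python
import heapq

def karmarkar_karp(nums):
    """Largest-differencing heuristic for number partitioning:
    repeatedly commit the two largest numbers to opposite sets,
    pushing back their difference. Returns the final set difference
    (0 means a perfect partition was found)."""
    heap = [-n for n in nums]        # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0

print(karmarkar_karp([6, 5, 4, 3]))  # 0: e.g. {6, 3} vs {5, 4}
```

The heuristic runs in O(n log n) but does not always find the optimum, which is why the fully connected problem remains a natural benchmark for quantum optimizers.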
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitley, L. Darrell; Howe, Adele E.; Watson, Jean-Paul
2004-09-01
Tabu search is one of the most effective heuristics for locating high-quality solutions to a diverse array of NP-hard combinatorial optimization problems. Despite the widespread success of tabu search, researchers have a poor understanding of many key theoretical aspects of this algorithm, including models of the high-level run-time dynamics and identification of those search space features that influence problem difficulty. We consider these questions in the context of the job-shop scheduling problem (JSP), a domain where tabu search algorithms have been shown to be remarkably effective. Previously, we demonstrated that the mean distance between random local optima and the nearest optimal solution is highly correlated with problem difficulty for a well-known tabu search algorithm for the JSP introduced by Taillard. In this paper, we discuss various shortcomings of this measure and develop a new model of problem difficulty that corrects these deficiencies. We show that Taillard's algorithm can be modeled with high fidelity as a simple variant of a straightforward random walk. The random walk model accounts for nearly all of the variability in the cost required to locate both optimal and sub-optimal solutions to random JSPs, and provides an explanation for differences in the difficulty of random versus structured JSPs. Finally, we discuss and empirically substantiate two novel predictions regarding tabu search algorithm behavior. First, the method for constructing the initial solution is highly unlikely to impact the performance of tabu search. Second, tabu tenure should be selected to be as small as possible while simultaneously avoiding search stagnation; values larger than necessary lead to significant degradations in performance.
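A generic tabu search skeleton (swap neighborhood over permutations, fixed tenure, aspiration on the best-so-far) illustrates the tenure mechanism discussed above. The toy objective is an assumption for demonstration; this is not Taillard's JSP algorithm:

```python
import random

def tabu_search(n, cost, tenure=5, iters=200, seed=0):
    """Generic tabu search over permutations with a swap neighborhood.
    A recently applied swap (i, j) stays 'tabu' for `tenure` iterations
    unless it improves on the best solution found (aspiration)."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    best, best_cost = perm[:], cost(perm)
    tabu = {}                      # (i, j) -> iteration when it expires
    for it in range(iters):
        candidates = []
        for i in range(n):
            for j in range(i + 1, n):
                perm[i], perm[j] = perm[j], perm[i]
                c = cost(perm)
                perm[i], perm[j] = perm[j], perm[i]
                if tabu.get((i, j), -1) < it or c < best_cost:
                    candidates.append((c, i, j))
        c, i, j = min(candidates)  # best admissible move (may be uphill)
        perm[i], perm[j] = perm[j], perm[i]
        tabu[(i, j)] = it + tenure
        if c < best_cost:
            best, best_cost = perm[:], c
    return best, best_cost

# Toy objective: total displacement, minimized (cost 0) by the identity.
displacement = lambda p: sum(abs(v - k) for k, v in enumerate(p))
best, c = tabu_search(8, displacement, tenure=5)
print(best, c)
```

The tenure parameter is exactly the knob the abstract's second prediction concerns: large enough to avoid stagnation, no larger.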
Parallel-Batch Scheduling and Transportation Coordination with Waiting Time Constraint
Gong, Hua; Chen, Daheng; Xu, Ke
2014-01-01
This paper addresses a parallel-batch scheduling problem that incorporates transportation of raw materials or semifinished products before processing, with a waiting time constraint. The orders located at different suppliers are transported by vehicles to a manufacturing facility for further processing. One vehicle can load only one order in one shipment. Each order arriving at the facility must be processed within a limited waiting time. The orders are processed in batches on a parallel-batch machine, where a batch contains several orders and the processing time of the batch is the largest processing time of the orders in it. The goal is to find a schedule that minimizes the sum of the total flow time and the production cost. We prove that the general problem is NP-hard in the strong sense. We also demonstrate that the problem with equal processing times on the machine is NP-hard. Furthermore, a pseudopolynomial-time dynamic programming algorithm is provided to establish its ordinary NP-hardness. An optimal polynomial-time algorithm is presented to solve a special case with equal processing times and equal transportation times for each order. PMID:24883385
Optimization of Coil Element Configurations for a Matrix Gradient Coil.
Kroboth, Stefan; Layton, Kelvin J; Jia, Feng; Littin, Sebastian; Yu, Huijun; Hennig, Jurgen; Zaitsev, Maxim
2018-01-01
Recently, matrix gradient coils (also termed multi-coils or multi-coil arrays) were introduced for imaging and B0 shimming with 24, 48, and even 84 coil elements. However, in imaging applications, providing one amplifier per coil element is not always feasible due to high cost and technical complexity. In this simulation study, we show that an 84-channel matrix gradient coil (head insert for brain imaging) is able to create a wide variety of field shapes even if the number of amplifiers is reduced. An optimization algorithm was implemented that obtains groups of coil elements, such that a desired target field can be created by driving each group with one amplifier. This limits the number of amplifiers to the number of coil element groups. Simulated annealing is used due to the NP-hard combinatorial nature of the given problem. Spherical harmonics up to full third order within a sphere of 20-cm diameter in the center of the coil were investigated as target fields. We show that the median normalized least-squares error for all target fields is below approximately 5% for 12 or more amplifiers. At the same time, the dissipated power stays within reasonable limits. With a relatively small set of amplifiers, switches can be used to sequentially generate spherical harmonics up to third order. The costs associated with a matrix gradient coil can thus be lowered, which increases the practical utility of matrix gradient coils.
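A much-simplified analogue of the grouping problem can be sketched with simulated annealing: elements assigned to a group share one amplifier current, the per-group currents are fit by least squares, and SA searches over the group labels. The field maps and target below are random toy data, not coil physics, and the cooling schedule is an assumption:

```python
import numpy as np

def group_fit_error(fields, groups, target):
    """Least-squares error of reproducing `target` when all coil
    elements sharing a group label are driven by one amplifier current.
    fields: (n_points, n_elements) array of per-element field maps."""
    labels = sorted(set(groups))
    # Summing field maps within a group gives that group's basis function.
    basis = np.stack([fields[:, [g == lab for g in groups]].sum(axis=1)
                      for lab in labels], axis=1)
    coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return float(np.linalg.norm(basis @ coef - target) ** 2)

def anneal_groups(fields, target, n_groups, iters=3000, seed=0):
    """Simulated annealing over element-to-group assignments."""
    rng = np.random.default_rng(seed)
    n_el = fields.shape[1]
    groups = [int(g) for g in rng.integers(0, n_groups, n_el)]
    err = group_fit_error(fields, groups, target)
    best_groups, best_err = groups[:], err
    for it in range(iters):
        temp = max(1e-9, 1.0 * (1 - it / iters))        # linear cooling
        cand = groups[:]
        cand[int(rng.integers(n_el))] = int(rng.integers(n_groups))
        cand_err = group_fit_error(fields, cand, target)
        # accept improvements always, worsenings with Boltzmann probability
        if cand_err < err or rng.random() < np.exp((err - cand_err) / temp):
            groups, err = cand, cand_err
            if err < best_err:
                best_groups, best_err = groups[:], err
    return best_groups, best_err

rng = np.random.default_rng(42)
fields = rng.normal(size=(16, 8))       # 8 elements, 16 field samples
target = fields[:, :4].sum(axis=1)      # exactly realizable with 2 groups
groups, err = anneal_groups(fields, target, n_groups=2)
print(groups, round(err, 6))
```

In the study itself, the targets are spherical harmonic fields and the objective additionally respects dissipated power; this sketch only shows the group-then-fit structure.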
NASA Astrophysics Data System (ADS)
Zittersteijn, M.; Vananti, A.; Schildknecht, T.; Dolado Perez, J. C.; Martinot, V.
2016-11-01
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). The MTT problem quickly becomes an NP-hard combinatorial optimization problem. This means that the effort required to solve the MTT problem increases exponentially with the number of tracked objects. In an attempt to find an approximate solution of sufficient quality, several Population-Based Meta-Heuristic (PBMH) algorithms are implemented and tested on simulated optical measurements. These first results show that one of the tested algorithms, namely the Elitist Genetic Algorithm (EGA), consistently displays the desired behavior of finding good approximate solutions before reaching the optimum. The results further suggest that the algorithm possesses a polynomial time complexity, as the computation times are consistent with a polynomial model. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the association and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.
Colored Traveling Salesman Problem.
Li, Jun; Zhou, MengChu; Sun, Qirui; Dai, Xianzhong; Yu, Xiaolong
2015-11-01
The multiple traveling salesman problem (MTSP) is an important combinatorial optimization problem. It has been widely and successfully applied to practical cases in which multiple traveling individuals (salesmen) share a common workspace (city set). However, it cannot represent application problems where multiple traveling individuals not only have their own exclusive tasks but also share a group of tasks with each other. This work proposes a new MTSP called the colored traveling salesman problem (CTSP) for handling such cases. Two types of city groups are defined, i.e., each group of exclusive cities of a single color for one salesman to visit, and a group of shared cities of multiple colors allowing all salesmen to visit. Evidence shows that CTSP is NP-hard, and that a multidepot MTSP and multiple single traveling salesman problems are its special cases. We present a genetic algorithm (GA) with dual-chromosome coding for CTSP and analyze the corresponding solution space. Then, the GA is improved by incorporating greedy, hill-climbing (HC), and simulated annealing (SA) operations to achieve better performance. Experiments reveal the limitation of the exact solution method and compare the performance of the presented GAs. The results suggest that SAGA achieves the best solution quality, while HCGA is the choice for a good tradeoff between solution quality and computing time.
Fast optimization algorithms and the cosmological constant
NASA Astrophysics Data System (ADS)
Bao, Ning; Bousso, Raphael; Jordan, Stephen; Lackey, Brad
2017-11-01
Denef and Douglas have observed that in certain landscape models the problem of finding small values of the cosmological constant is a large instance of a problem that is hard for the complexity class NP (Nondeterministic Polynomial-time). The number of elementary operations (quantum gates) needed to solve this problem by brute force search exceeds the estimated computational capacity of the observable Universe. Here we describe a way out of this puzzling circumstance: despite being NP-hard, the problem of finding a small cosmological constant can be attacked by more sophisticated algorithms whose performance vastly exceeds brute force search. In fact, in some parameter regimes the average-case complexity is polynomial. We demonstrate this by explicitly finding a cosmological constant of order 10^-120 in a randomly generated 10^9-dimensional Arkani-Hamed-Dimopoulos-Kachru landscape.
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
NASA Astrophysics Data System (ADS)
Iswari, T.; Asih, A. M. S.
2018-04-01
In a logistics system, transportation connects every element of the supply chain, but it can also produce the greatest cost. Therefore, it is important to keep transportation costs as low as possible. Reducing transportation cost can be done in several ways; one of them is to optimize the routing of vehicles, which leads to the Vehicle Routing Problem (VRP). The most common type of VRP is the Capacitated Vehicle Routing Problem (CVRP). In CVRP, each vehicle has its own capacity, and the total demand served by a vehicle must not exceed its capacity. CVRP belongs to the class of NP-hard problems, which makes exact algorithms highly time-consuming as problem sizes increase. Thus, for large-scale problem instances, as typically found in industrial applications, finding an optimal solution is not practicable. Therefore, this paper uses two metaheuristic approaches to solve CVRP: Genetic Algorithm and Particle Swarm Optimization. The paper compares the results of both algorithms and examines the performance of each. The results show that both algorithms perform well in solving CVRP but still leave room for improvement. From algorithm testing and a numerical example, the Genetic Algorithm yields a better solution than Particle Swarm Optimization in total distance travelled.
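A minimal genetic algorithm for CVRP with a giant-tour chromosome and greedy capacity split can illustrate the approach; the operators, parameters, and tiny instance below are illustrative assumptions, not the paper's configuration:

```python
import math
import random

def route_cost(tour, demand, cap, dist, depot=0):
    """Split a giant tour into capacity-feasible routes greedily and
    sum the travel distances (each route starts/ends at the depot)."""
    total, load, prev = 0.0, 0, depot
    for c in tour:
        if load + demand[c] > cap:        # start a new route
            total += dist[prev][depot]
            load, prev = 0, depot
        total += dist[prev][c]
        load += demand[c]
        prev = c
    return total + dist[prev][depot]

def ga_cvrp(customers, demand, cap, dist, pop=30, gens=150, seed=3):
    """GA over giant-tour permutations: elitism, one-point order
    crossover, occasional swap mutation."""
    rng = random.Random(seed)
    def fit(t): return route_cost(t, demand, cap, dist)
    popu = [rng.sample(customers, len(customers)) for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fit)
        nxt = popu[:5]                    # elitism
        while len(nxt) < pop:
            a, b = rng.sample(popu[:15], 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + [c for c in b if c not in a[:cut]]
            if rng.random() < 0.2:        # swap mutation
                i, j = rng.sample(range(len(child)), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        popu = nxt
    best = min(popu, key=fit)
    return best, fit(best)

pts = [(0, 0), (0, 2), (0, 4), (2, 0), (4, 0)]   # depot + 4 customers
dist = [[math.dist(p, q) for q in pts] for p in pts]
demand = [0, 1, 1, 1, 1]
best, c = ga_cvrp([1, 2, 3, 4], demand, cap=2, dist=dist)
print(best, round(c, 2))   # optimal total distance is 16.0 here
```

A PSO variant would keep the same decoder but move continuous position vectors instead of recombining permutations, which is the comparison the paper performs.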
A quantum annealing approach for fault detection and diagnosis of graph-based systems
NASA Astrophysics Data System (ADS)
Perdomo-Ortiz, A.; Fluegemann, J.; Narasimhan, S.; Biswas, R.; Smelyanskiy, V. N.
2015-02-01
Diagnosing the minimal set of faults capable of explaining a set of given observations, e.g., from sensor readouts, is a hard combinatorial optimization problem usually tackled with artificial intelligence techniques. We present the mapping of this combinatorial problem to quadratic unconstrained binary optimization (QUBO), and the experimental results of instances embedded onto a quantum annealing device with 509 quantum bits. Besides being the first time a quantum approach has been proposed for problems in the advanced diagnostics community, to the best of our knowledge this work is also the first research utilizing the route Problem → QUBO → Direct embedding into quantum hardware, where we are able to implement and tackle problem instances with sizes that go beyond previously reported toy-model proof-of-principle quantum annealing implementations; this is a significant leap in the solution of problems via direct-embedding adiabatic quantum optimization. We discuss some of the programmability challenges in the current generation of the quantum device as well as a few possible ways to extend this work to more complex arbitrary network graphs.
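A toy example of the Problem → QUBO step, solved here by brute force rather than on quantum hardware. The weights below are illustrative, not the paper's mapping:

```python
from itertools import product

def solve_qubo(Q):
    """Brute-force minimizer of x^T Q x over binary vectors x.
    Q is an upper-triangular dict {(i, j): weight}; only viable for
    small instances -- quantum annealers target the large ones."""
    n = 1 + max(max(i, j) for i, j in Q)
    best_x, best_e = None, float("inf")
    for x in product([0, 1], repeat=n):
        e = sum(w * x[i] * x[j] for (i, j), w in Q.items())
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy diagnosis: either fault x0 or x1 explains the observed symptom.
# Linear terms reward a fault that explains the symptom net of its unit
# cost; the quadratic penalty discourages assuming both faults at once,
# so the minimal fault set wins.
Q = {(0, 0): -2, (1, 1): -2, (0, 1): 3}
x, e = solve_qubo(Q)
print(x, e)   # (0, 1) -2: a single fault suffices
```

On hardware, the same Q would be embedded onto the annealer's qubit graph, which is the direct-embedding step the abstract highlights.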
An effective PSO-based memetic algorithm for flow shop scheduling.
Liu, Bo; Wang, Ling; Jin, Yi-Hui
2007-02-01
This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective to minimize the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. 
Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed.
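The ranked-order-value rule mentioned above can be stated compactly: rank the particle's continuous position components and read off a job permutation, with the job holding the smallest value scheduled first (one common convention; tie-breaking by job index is an assumption):

```python
def ranked_order_value(position):
    """Ranked-order-value (ROV) decoding: convert a particle's
    continuous position vector into a job permutation by ranking its
    components -- the job with the smallest value is scheduled first.
    Ties break by job index."""
    return sorted(range(len(position)), key=lambda j: position[j])

print(ranked_order_value([0.8, -1.2, 0.3, 2.5]))  # [1, 2, 0, 3]
```

This decoder is what lets the continuous PSO dynamics drive a discrete permutation problem like the PFSSP.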
NASA Astrophysics Data System (ADS)
Hsiao, Ming-Chih; Su, Ling-Huey
2018-02-01
This research addresses the problem of scheduling hybrid machine types, in which one type is a two-machine flowshop and the other is a single machine. A job is processed either on the two-machine flowshop or on the single machine. The objective is to determine a production schedule for all jobs so as to minimize the makespan. The problem is NP-hard, since the two-parallel-machines problem was proved to be NP-hard. Simulated annealing (SA) algorithms are developed to solve the problem. A mixed integer program (MIP) is developed and used to evaluate the performance of the two SAs. Computational experiments demonstrate the efficiency of the simulated annealing algorithms; their solution quality is also reported.
NASA Astrophysics Data System (ADS)
Taratula, Olena; Schumann, Canan; Duong, Tony; Taylor, Karmin L.; Taratula, Oleh
2015-02-01
Multifunctional theranostic platforms capable of concurrent near-infrared (NIR) fluorescence imaging and phototherapies are strongly desired for cancer diagnosis and treatment. However, the integration of separate imaging and therapeutic components into nanocarriers results in complex theranostic systems with limited translational potential. A single agent-based theranostic nanoplatform, therefore, was developed for concurrent NIR fluorescence imaging and combinatorial phototherapy with dual photodynamic (PDT) and photothermal (PTT) therapeutic mechanisms. The transformation of a substituted silicon naphthalocyanine (SiNc) into a biocompatible nanoplatform (SiNc-NP) was achieved by SiNc encapsulation into the hydrophobic interior of a generation 5 polypropylenimine dendrimer following surface modification with polyethylene glycol. Encapsulation provides aqueous solubility to SiNc and preserves its NIR fluorescence, PDT and PTT properties. Moreover, an impressive photostability of the dendrimer-encapsulated SiNc has been detected. Under NIR irradiation (785 nm, 1.3 W cm^-2), SiNc-NP manifested robust heat generation capability (ΔT = 40 °C) and efficiently produced reactive oxygen species, essential for PTT and PDT, respectively, without releasing SiNc from the nanoplatform. By varying the laser power density from 0.3 W cm^-2 to 1.3 W cm^-2, the therapeutic mechanism of SiNc-NP could be switched from PDT to combinatorial PDT-PTT treatment. In vitro and in vivo studies confirmed that phototherapy mediated by SiNc can efficiently destroy chemotherapy-resistant ovarian cancer cells. Remarkably, solid tumors treated with a single dose of SiNc-NP combined with NIR irradiation were completely eradicated without cancer recurrence.
Finally, the efficiency of SiNc-NP as an NIR imaging agent was confirmed by recording the strong fluorescence signal in the tumor, which was not photobleached during the phototherapeutic procedure. Electronic supplementary information (ESI) available: Fig. S1-S5: size distribution of SiNc-NP measured by dynamic light scattering (Fig. S1); absorption spectra of free SiNc 2 in THF before and after irradiation with the 785 nm laser diode for 30 min (Fig. S2); in vitro cytotoxicity of free DOX against A2780/AD human ovarian cancer cells (Fig. S3); the release profiles of SiNc from SiNc-NP under various conditions (Fig. S4); body weight curves of the mice with or without treatment (Fig. S5). See DOI: 10.1039/c4nr06050d
A Model and Algorithms For a Software Evolution Control System
1993-12-01
dynamic scheduling approaches can be found in [67]. Task scheduling can also be characterized as preemptive and nonpreemptive. A task is preemptive ... is NP-hard for both the preemptive and nonpreemptive cases [67] [84]. Scheduling nonpreemptive tasks with arbitrary ready times is NP-hard in both multiprocessor and
Arbitrary norm support vector machines.
Huang, Kaizhu; Zheng, Danian; King, Irwin; Lyu, Michael R
2009-02-01
Support vector machines (SVM) are state-of-the-art classifiers. Typically L2-norm or L1-norm is adopted as a regularization term in SVMs, while other norm-based SVMs, for example, the L0-norm SVM or even the L(infinity)-norm SVM, are rarely seen in the literature. The major reason is that the L0-norm describes a discontinuous and nonconvex term, leading to a combinatorially NP-hard optimization problem. In this letter, motivated by Bayesian learning, we propose a novel framework that can implement arbitrary norm-based SVMs in polynomial time. One significant feature of this framework is that only a sequence of sequential minimal optimization problems needs to be solved, thus making it practical in many real applications. The proposed framework is important in the sense that Bayesian priors can be efficiently plugged into most learning methods without knowing the explicit form. Hence, this builds a connection between Bayesian learning and the kernel machines. We derive the theoretical framework, demonstrate how our approach works on the L0-norm SVM as a typical example, and perform a series of experiments to validate its advantages. Experimental results on nine benchmark data sets are very encouraging. The implemented L0-norm is competitive with or even better than the standard L2-norm SVM in terms of accuracy but with a reduced number of support vectors, about 9.46% of the number on average. When compared with another sparse model, the relevance vector machine, our proposed algorithm also demonstrates better sparse properties with a training speed over seven times faster.
Computing quantum discord is NP-complete
NASA Astrophysics Data System (ADS)
Huang, Yichen
2014-03-01
We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable.
On Making a Distinguished Vertex Minimum Degree by Vertex Deletion
NASA Astrophysics Data System (ADS)
Betzler, Nadja; Bredereck, Robert; Niedermeier, Rolf; Uhlmann, Johannes
For directed and undirected graphs, we study the problem to make a distinguished vertex the unique minimum-(in)degree vertex through deletion of a minimum number of vertices. The corresponding NP-hard optimization problems are motivated by applications concerning control in elections and social network analysis. Continuing previous work for the directed case, we show that the problem is W[2]-hard when parameterized by the graph's feedback arc set number, whereas it becomes fixed-parameter tractable when combining the parameters "feedback vertex set number" and "number of vertices to delete". For the so far unstudied undirected case, we show that the problem is NP-hard and W[1]-hard when parameterized by the "number of vertices to delete". On the positive side, we show fixed-parameter tractability for several parameterizations measuring tree-likeness, including a vertex-linear problem kernel with respect to the parameter "feedback edge set number". On the contrary, we show a non-existence result concerning polynomial-size problem kernels for the combined parameter "vertex cover number and number of vertices to delete", implying corresponding nonexistence results when replacing vertex cover number by treewidth or feedback vertex set number.
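For small instances, the undirected decision problem can be checked by brute force over deletion sets, consistent with its NP-hardness in general. The example graph below is illustrative:

```python
from itertools import combinations

def can_make_unique_min_degree(adj, d, k):
    """Can deleting at most k vertices (never d itself) make the
    distinguished vertex d the unique minimum-degree vertex?
    adj: {vertex: set(neighbours)} of an undirected graph.
    Brute force over all deletion sets -- exponential in general."""
    others = [v for v in adj if v != d]
    for size in range(k + 1):
        for dele in combinations(others, size):
            keep = set(adj) - set(dele)
            deg = {v: len(adj[v] & keep) for v in keep}
            if all(deg[v] > deg[d] for v in keep if v != d):
                return True
    return False

# d = 0 has degree 1, but leaf 5 ties with it; deleting 5 makes 0
# the unique minimum-degree vertex.
adj = {0: {1}, 1: {0, 2, 3, 4}, 2: {1, 3, 4, 5},
       3: {1, 2, 4}, 4: {1, 2, 3}, 5: {2}}
print(can_make_unique_min_degree(adj, 0, 0),
      can_make_unique_min_degree(adj, 0, 1))  # False True
```

The parameterized results above say, roughly, when this exponential search can be replaced by one whose exponential part depends only on a structural parameter such as the feedback edge set number.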
Enabling Next-Generation Multicore Platforms in Embedded Applications
2014-04-01
mapping to sets 129-256) to the second page in memory, color 2 (sets 257-384) to the third page, and so on. Then, after the 32nd page, all 2^12 sets ... the Real-Time Nested Locking Protocol (RNLP) [56], a recently developed multiprocessor real-time locking protocol that optimally supports the ... In general, the problems of optimally assigning tasks to processors and colors to tasks are both NP-hard in the general case.
Aono, Masashi; Naruse, Makoto; Kim, Song-Ju; Wakabayashi, Masamitsu; Hori, Hirokazu; Ohtsu, Motoichi; Hara, Masahiko
2013-06-18
Biologically inspired computing devices and architectures are expected to overcome the limitations of conventional technologies in terms of solving computationally demanding problems, adapting to complex environments, reducing energy consumption, and so on. We previously demonstrated that a primitive single-celled amoeba (a plasmodial slime mold), which exhibits complex spatiotemporal oscillatory dynamics and sophisticated computing capabilities, can be used to search for a solution to a very hard combinatorial optimization problem. We successfully extracted the essential spatiotemporal dynamics by which the amoeba solves the problem. This amoeba-inspired computing paradigm can be implemented by various physical systems that exhibit suitable spatiotemporal dynamics resembling the amoeba's problem-solving process. In this Article, we demonstrate that photoexcitation transfer phenomena in certain quantum nanostructures mediated by optical near-field interactions generate the amoebalike spatiotemporal dynamics and can be used to solve the satisfiability problem (SAT), which is the problem of judging whether a given logical proposition (a Boolean formula) is self-consistent. SAT is related to diverse application problems in artificial intelligence, information security, and bioinformatics and is a crucially important nondeterministic polynomial time (NP)-complete problem, which is believed to become intractable for conventional digital computers when the problem size increases. We show that our amoeba-inspired computing paradigm dramatically outperforms a conventional stochastic search method. These results indicate the potential for developing highly versatile nanoarchitectonic computers that realize powerful solution searching with low energy consumption.
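SAT asks whether some assignment of truth values makes every clause of a Boolean formula true, and exhaustive search shows why it becomes intractable: the assignment space doubles with every added variable. The sketch below is a minimal brute-force checker using DIMACS-style integer literals; it is purely illustrative and unrelated to the amoeba-inspired dynamics described above.

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Exhaustively search all 2^n assignments for a CNF formula.

    `clauses` is a list of clauses; each clause is a list of nonzero
    integers, where k means variable k is true and -k means it is false
    (DIMACS-style literals).
    """
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment  # satisfying assignment found
    return None  # formula is unsatisfiable

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(brute_force_sat(clauses, 3))  # (False, True, False)
```

The exponential loop bound is exactly the NP-complete blow-up the abstract refers to; practical solvers replace it with pruned search.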
Evaluation of Recoverable-Robust Timetables on Tree Networks
NASA Astrophysics Data System (ADS)
D'Angelo, Gianlorenzo; di Stefano, Gabriele; Navarra, Alfredo
In the context of scheduling and timetabling, we study a challenging combinatorial problem which is interesting from both a practical and a theoretical point of view. The motivation behind it is to cope with scheduled activities which might be subject to unavoidable disturbances, such as delays, occurring during the operational phase. The idea is to preventively plan some extra time for the scheduled activities in order to be "prepared" if a delay occurs, and to absorb it without the necessity of re-scheduling the activities from scratch. This realizes the concept of designing so called robust timetables. During the planning phase, one has to consider recovery features that might be applied at runtime if delays occur. Such recovery capabilities are given as input along with the possible delays that must be considered. The objective is the minimization of the overall needed time. The quality of a robust timetable is measured by the price of robustness, i.e. the ratio between the cost of the robust timetable and that of a non-robust optimal timetable. The considered problem is known to be NP-hard. We propose a pseudo-polynomial time algorithm and apply it on random networks and real case scenarios provided by Italian railways. We evaluate the effect of robustness on the scheduling of the activities and provide the price of robustness with respect to different scenarios. We experimentally show the practical effectiveness and efficiency of the proposed algorithm.
Applying Graph Theory to Problems in Air Traffic Management
NASA Technical Reports Server (NTRS)
Farrahi, Amir Hossein; Goldberg, Alan; Bagasol, Leonard Neil; Jung, Jaewoo
2017-01-01
Graph theory is used to investigate three different problems arising in air traffic management. First, using a polynomial reduction from a graph partitioning problem, it is shown that both the airspace sectorization problem and its incremental counterpart, the sector combination problem, are NP-hard, in general, under several simple workload models. Second, using a polynomial-time reduction from maximum independent set in graphs, it is shown that for any fixed ε, the problem of finding a solution to the minimum delay scheduling problem in traffic flow management that is guaranteed to be within n^(1-ε) of optimal, where n is the number of aircraft in the problem instance, is NP-hard. Finally, a problem arising in precision arrival scheduling is formulated and solved using graph reachability. These results demonstrate that graph theory provides a powerful framework for modeling, reasoning about, and devising algorithmic solutions to diverse problems arising in air traffic management.
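The graph-reachability formulation mentioned for precision arrival scheduling reduces, at its core, to breadth-first search. A generic sketch over a hypothetical toy graph (node names and edges invented for illustration, not taken from the paper):

```python
from collections import deque

def reachable(adj, source):
    """Breadth-first search: return the set of nodes reachable from `source`."""
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# hypothetical toy graph
adj = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "E": ["A"]}
print(sorted(reachable(adj, "A")))  # ['A', 'B', 'C', 'D']
```

Reachability runs in time linear in the graph size, which is why recasting a scheduling question this way yields an efficient solution.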
Nanoparticle hardness controls the internalization pathway for drug delivery
NASA Astrophysics Data System (ADS)
Li, Ye; Zhang, Xianren; Cao, Dapeng
2015-01-01
Nanoparticle (NP)-based drug delivery systems offer fundamental advantages over current therapeutic agents that commonly display a longer circulation time, lower toxicity, specific targeted release, and greater bioavailability. For successful NP-based drug delivery it is essential that the drug-carrying nanocarriers can be internalized by the target cells and transported to specific sites, and the inefficient internalization of nanocarriers is often one of the major sources for drug resistance. In this work, we use the dissipative particle dynamics simulation to investigate the effect of NP hardness on their internalization efficiency. Three simplified models of NP platforms for drug delivery, including polymeric NP, liposome and solid NP, are designed here to represent increasing nanocarrier hardness. Simulation results indicate that NP hardness controls the internalization pathway for drug delivery. Rigid NPs can enter the cell by a pathway of endocytosis, whereas for soft NPs the endocytosis process can be inhibited or frustrated due to wrapping-induced shape deformation and non-uniform ligand distribution. Instead, soft NPs tend to find one of three penetration pathways to enter the cell membrane via rearranging their hydrophobic and hydrophilic segments. Finally, we show that the interaction between nanocarriers and drug molecules is also essential for effective drug delivery.
Better approximation guarantees for job-shop scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, L.A.; Paterson, M.; Srinivasan, A.
1997-06-01
Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein & Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work, and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.
Zalesak, J; Todt, J; Pitonak, R; Köpf, A; Weißenbacher, R; Sartory, B; Burghammer, M; Daniel, R; Keckes, J
2016-12-01
Because of the tremendous variability of crystallite sizes and shapes in nano-materials, it is challenging to assess the corresponding size-property relationships and to identify microstructures with particular physical properties or even optimized functions. This task is especially difficult for nanomaterials formed by self-organization, where the spontaneous evolution of microstructure and properties is coupled. In this work, two compositionally graded TiAlN films were (i) grown using chemical vapour deposition by applying a varying ratio of reacting gases and (ii) subsequently analysed using cross-sectional synchrotron X-ray nanodiffraction, electron microscopy and nanoindentation in order to evaluate the microstructure and hardness depth gradients. The results indicate the formation of self-organized hexagonal-cubic and cubic-cubic nanolamellae with varying compositions and thicknesses in the range of ∼3-15 nm across the film thicknesses, depending on the actual composition of the reactive gas mixtures. On the basis of the occurrence of the nanolamellae and their correlation with the local film hardness, progressively narrower ranges of the composition and hardness were refined in three steps. The third film was produced using an AlCl3/TiCl4 precursor ratio of ∼1.9, resulting in the formation of an optimized lamellar microstructure with ∼1.3 nm thick cubic Ti(Al)N and ∼12 nm thick cubic Al(Ti)N nanolamellae which exhibits a maximal hardness of ∼36 GPa and an indentation modulus of ∼522 GPa. The presented approach of an iterative nanoscale search based on the application of cross-sectional synchrotron X-ray nanodiffraction and cross-sectional nanoindentation allows one to refine the relationship between (i) varying deposition conditions, (ii) gradients of microstructure and (iii) gradients of mechanical properties in nanostructured materials prepared as thin films.
This is done in a combinatorial way in order to screen a wide range of deposition conditions, while identifying those that result in the formation of a particular microstructure with optimized functional attributes.
Combinatorial Effects of Arginine and Fluoride on Oral Bacteria
Zheng, X.; Cheng, X.; Wang, L.; Qiu, W.; Wang, S.; Zhou, Y.; Li, M.; Li, Y.; Cheng, L.; Li, J.; Zhou, X.; Xu, X.
2015-01-01
Dental caries is closely associated with the microbial disequilibrium between acidogenic/aciduric pathogens and alkali-generating commensal residents within the dental plaque. Fluoride is a widely used anticaries agent, which promotes tooth hard-tissue remineralization and suppresses bacterial activities. Recent clinical trials have shown that oral hygiene products containing both fluoride and arginine possess a greater anticaries effect compared with those containing fluoride alone, indicating synergy between fluoride and arginine in caries management. Here, we hypothesize that arginine may augment the ecological benefit of fluoride by enriching alkali-generating bacteria in the plaque biofilm and thus synergizes with fluoride in controlling dental caries. Specifically, we assessed the combinatory effects of NaF/arginine on planktonic and biofilm cultures of Streptococcus mutans, Streptococcus sanguinis, and Porphyromonas gingivalis with checkerboard microdilution assays. The optimal NaF/arginine combinations were selected, and their combinatory effects on microbial composition were further examined in single-, dual-, and 3-species biofilm using bacterial species–specific fluorescence in situ hybridization and quantitative polymerase chain reaction. We found that arginine synergized with fluoride in suppressing acidogenic S. mutans in both planktonic and biofilm cultures. In addition, the NaF/arginine combination synergistically reduced S. mutans but enriched S. sanguinis within the multispecies biofilms. More importantly, the optimal combination of NaF/arginine maintained a “streptococcal pressure” against the potential growth of oral anaerobe P. gingivalis within the alkalized biofilm. Taken together, we conclude that the combinatory application of fluoride and arginine has a potential synergistic effect in maintaining a healthy oral microbial equilibrium and thus represents a promising ecological approach to caries management. PMID:25477312
Hypergraph-Based Combinatorial Optimization of Matrix-Vector Multiplication
ERIC Educational Resources Information Center
Wolf, Michael Maclean
2009-01-01
Combinatorial scientific computing plays an important enabling role in computational science, particularly in high performance scientific computing. In this thesis, we will describe our work on optimizing matrix-vector multiplication using combinatorial techniques. Our research has focused on two different problems in combinatorial scientific…
A new graph-based method for pairwise global network alignment
Klau, Gunnar W
2009-01-01
Background: In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results: We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion: Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162
Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space
2015-05-01
ARL-TR-7294 • MAY 2015. US Army Research Laboratory. Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space, by Berend Christopher...
Neural Meta-Memes Framework for Combinatorial Optimization
NASA Astrophysics Data System (ADS)
Song, Li Qin; Lim, Meng Hiot; Ong, Yew Soon
In this paper, we present a Neural Meta-Memes Framework (NMMF) for combinatorial optimization. NMMF is a framework which models basic optimization algorithms as memes and manages them dynamically when solving combinatorial problems. NMMF encompasses neural networks which serve as the overall planner/coordinator to balance the workload between memes. We show the efficacy of the proposed NMMF through empirical study on a classical combinatorial problem, the quadratic assignment problem (QAP).
NASA Astrophysics Data System (ADS)
Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru
We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
Adaptiveness in monotone pseudo-Boolean optimization and stochastic neural computation.
Grossi, Giuliano
2009-08-01
Hopfield neural network (HNN) is a nonlinear computational model successfully applied in finding near-optimal solutions of several difficult combinatorial problems. In many cases, the network energy function is obtained through a learning procedure so that its minima are states falling into a proper subspace (feasible region) of the search space. However, because of the network nonlinearity, a number of undesirable local energy minima emerge from the learning procedure, significantly affecting the network performance. In the neural model analyzed here, we combine both a penalty and a stochastic process in order to enhance the performance of a binary HNN. The penalty strategy allows us to gradually lead the search towards states representing feasible solutions, thus avoiding oscillatory behaviors or asymptotically unstable convergence. The presence of stochastic dynamics potentially prevents the network from falling into shallow local minima of the energy function, i.e., minima quite far from the global optimum. Hence, for a given fixed network topology, the desired final distribution on the states can be reached by carefully modulating this process. The model uses pseudo-Boolean functions to express both the problem constraints and the cost function; a combination of these two functions is then interpreted as the energy of the neural network. A wide variety of NP-hard problems falls in the class of problems that can be solved by the model at hand, particularly those having a monotonic quadratic pseudo-Boolean function as constraint function, that is, functions easily derived from closed algebraic expressions representing the constraint structure and easy (polynomial time) to maximize.
We show the asymptotic convergence properties of this model characterizing its state space distribution at thermal equilibrium in terms of Markov chain and give evidence of its ability to find high quality solutions on benchmarks and randomly generated instances of two specific problems taken from the computational graph theory.
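The combination of a quadratic pseudo-Boolean energy with stochastic binary dynamics can be sketched as a single-bit Metropolis search. The following is a minimal illustration on an assumed 2-variable QUBO matrix; it is not the paper's HNN model or its penalty schedule, just the generic technique.

```python
import math
import random

def stochastic_binary_search(Q, steps=20000, T0=2.0, seed=0):
    """Anneal a binary state vector under energy E(x) = x^T Q x
    using single-bit Metropolis flips with a cooling temperature."""
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    energy = lambda s: sum(Q[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
    e = energy(x)
    best, best_e = x[:], e
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-3   # linear cooling with a small floor
        i = rng.randrange(n)
        x[i] ^= 1                          # propose flipping one bit
        e_new = energy(x)
        if e_new <= e or rng.random() < math.exp((e - e_new) / T):
            e = e_new
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                      # reject the flip
    return best, best_e

# assumed toy instance: diagonal entries act as linear terms, the
# off-diagonal entry penalizes setting both bits
Q = [[-1, 2],
     [0, -1]]
best, best_e = stochastic_binary_search(Q)
print(best, best_e)  # exactly one bit set, energy -1
```

The stochastic acceptance step plays the same role as the abstract's stochastic dynamics: it lets the search escape shallow local minima before the temperature drops.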
Intrinsic optimization using stochastic nanomagnets
Sutton, Brian; Camsari, Kerem Yunus; Behin-Aein, Behtash; Datta, Supriyo
2017-01-01
This paper draws attention to a hardware system which can be engineered so that its intrinsic physics is described by the generalized Ising model and can encode the solution to many important NP-hard problems as its ground state. The basic constituents are stochastic nanomagnets which switch randomly between the ±1 Ising states and can be monitored continuously with standard electronics. Their mutual interactions can be short or long range, and their strengths can be reconfigured as needed to solve specific problems and to anneal the system at room temperature. The natural laws of statistical mechanics guide the network of stochastic nanomagnets at GHz speeds through the collective states with an emphasis on the low energy states that represent optimal solutions. As proof-of-concept, we present simulation results for standard NP-complete examples including a 16-city traveling salesman problem using experimentally benchmarked models for spin-transfer torque driven stochastic nanomagnets. PMID:28295053
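The claim that the Ising ground state encodes the optimization target can be checked by brute force on very small systems. The sketch below enumerates all ±1 configurations of an assumed antiferromagnetic triangle, a classic frustrated instance; the instance is invented for illustration and is not from the paper.

```python
from itertools import product

def ising_ground_states(J, h):
    """Exhaustively minimize E(s) = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i
    over all s in {-1, +1}^n; return the minimum energy and its states."""
    n = len(h)
    best_e, best = None, []
    for s in product([-1, 1], repeat=n):
        e = -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        e -= sum(h[i] * s[i] for i in range(n))
        if best_e is None or e < best_e:
            best_e, best = e, [s]
        elif e == best_e:
            best.append(s)
    return best_e, best

# assumed instance: antiferromagnetic triangle (J = -1 on every edge), no field.
# Frustration leaves 6 degenerate ground states at energy -1.
J = [[0, -1, -1], [0, 0, -1], [0, 0, 0]]
h = [0, 0, 0]
e0, states = ising_ground_states(J, h)
print(e0, len(states))  # -1 6
```

Enumeration is only feasible for a handful of spins; the hardware described above explores the same energy landscape physically instead.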
Wieberger, Florian; Kolb, Tristan; Neuber, Christian; Ober, Christopher K; Schmidt, Hans-Werner
2013-04-08
In this article we present several developed and improved combinatorial techniques to optimize processing conditions and material properties of organic thin films. The combinatorial approach allows investigation of multi-variable dependencies and is an ideal tool for studying organic thin films intended for high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore we demonstrate the smart application of combined composition and processing gradients to create combinatorial libraries. First, a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is carried out in very small areas and arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow identifying precise trends for the optimization of multi-variable-dependent processes, which is demonstrated on the lithographic patterning process. Here we verify conclusively the strong interaction and thus the interdependency of variables in the preparation and properties of complex organic thin film systems. The established gradient preparation techniques are not limited to lithographic patterning. It is possible to utilize and transfer the reported combinatorial techniques to other multi-variable-dependent processes and to investigate and optimize thin film layers and devices for optical, electro-optical, and electronic applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Viswanath, R. N.; Polaki, S. R.; Rajaraman, R.
The scaling behavior of hardness with ligament diameter and vacancy defect concentration in nanoporous Au (np-Au) has been investigated using a combination of Vickers hardness, scanning electron microscopy, and positron lifetime measurements. It is shown that for np-Au, the hardness scales with the ligament diameter with an exponent of −0.3, which is at variance with the conventional Hall-Petch exponent of −0.5 for bulk systems, as seen in the controlled experiments on cold-worked Au with varying grain size. The hardness of np-Au correlates with the vacancy concentration C_V within the ligaments, as estimated from positron lifetime experiments, and scales as C_V^(1/2), pointing to the interaction of dislocations with vacancies. The distinctive Hall-Petch exponent of −0.3 seen for np-Au, with ligament diameters in the range of 5-150 nm, is rationalized by invoking the constrained motion of dislocations along the ligaments.
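The reported exponent is, in effect, the slope of hardness versus ligament diameter on log-log axes. A sketch with synthetic, illustrative numbers (not the paper's measurements) shows how such an exponent is extracted:

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    return num / sum((a - mx) ** 2 for a in lx)

# synthetic H = H0 * d^-0.3 (illustrative numbers, not the paper's data)
diameters = [5, 10, 20, 50, 100, 150]            # ligament diameters, nm
hardness = [4.0 * d ** -0.3 for d in diameters]  # arbitrary prefactor H0 = 4.0
print(round(loglog_slope(diameters, hardness), 3))  # -0.3
```

On real, noisy data the fitted slope would scatter around the underlying exponent rather than recover it exactly.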
Distance-Based Phylogenetic Methods Around a Polytomy.
Davidson, Ruth; Sullivant, Seth
2014-01-01
Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
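For concreteness, UPGMA itself is a short greedy procedure: repeatedly merge the closest pair of clusters and average the remaining distances weighted by cluster size. A minimal sketch on an invented 3-taxon dissimilarity map (not data from the paper):

```python
def upgma(D, labels):
    """Minimal UPGMA sketch: repeatedly merge the closest pair of clusters,
    size-averaging distances; returns a nested tuple (left, right, height)."""
    clusters = {i: (labels[i], 1) for i in range(len(labels))}  # id -> (tree, size)
    dist = {(i, j): D[i][j] for i in range(len(D)) for j in range(i + 1, len(D))}
    nid = len(labels)  # id for the next merged cluster
    while len(clusters) > 1:
        a, b = min(dist, key=dist.get)      # closest pair of cluster ids
        h = dist.pop((a, b)) / 2            # ultrametric merge height
        (ta, na), (tb, nb) = clusters.pop(a), clusters.pop(b)
        for c in list(clusters):
            dac = dist.pop((min(a, c), max(a, c)))
            dbc = dist.pop((min(b, c), max(b, c)))
            dist[(c, nid)] = (na * dac + nb * dbc) / (na + nb)
        clusters[nid] = ((ta, tb, h), na + nb)
        nid += 1
    return next(iter(clusters.values()))[0]

# invented 3-taxon dissimilarity map: A and B are close, C is distant
D = [[0, 2, 6], [2, 0, 6], [6, 6, 0]]
print(upgma(D, ["A", "B", "C"]))  # ('C', ('A', 'B', 1.0), 3.0)
```

The greedy merge order is exactly what determines which maximal cone of the polyhedral fan a given dissimilarity map falls into.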
Characterizing L1-norm best-fit subspaces
NASA Astrophysics Data System (ADS)
Brooks, J. Paul; Dulá, José H.
2017-05-01
Fitting affine objects to data is the basis of many tools and methodologies in statistics, machine learning, and signal processing. The L1 norm is often employed to produce subspaces exhibiting a robustness to outliers and faulty observations. The L1-norm best-fit subspace problem is directly formulated as a nonlinear, nonconvex, and nondifferentiable optimization problem. The case when the subspace is a hyperplane can be solved to global optimality efficiently by solving a series of linear programs. The problem of finding the best-fit line has recently been shown to be NP-hard. We present necessary conditions for optimality for the best-fit subspace problem, and use them to characterize properties of optimal solutions.
Performance comparison of some evolutionary algorithms on job shop scheduling problems
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Rao, C. S. P.
2016-09-01
Job shop scheduling is a state-space search problem in the NP-hard category owing to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music-Based Harmony Search algorithms, are applied and fine-tuned to model and solve job shop scheduling problems. About 250 benchmark instances have been used to evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving job shop scheduling problems are outlined.
On-Orbit Range Set Applications
NASA Astrophysics Data System (ADS)
Holzinger, M.; Scheeres, D.
2011-09-01
The history and methodology of Δv range set computation are briefly reviewed, followed by a short summary of the literature on the Δv-optimal spacecraft servicing problem. Service vehicle placement is approached from a Δv range set viewpoint, providing a framework under which the analysis becomes quite geometric and intuitive. The optimal servicing tour design problem is shown to be a specific instantiation of the metric Traveling Salesman Problem (TSP), which in general is NP-hard. The Δv-TSP is argued to be quite similar to the Euclidean TSP, for which approximate optimal solutions may be found in polynomial time. Applications of range sets are demonstrated using analytical and simulation results.
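The contrast drawn between the NP-hard metric TSP and fast approximate solutions can be illustrated with the simplest such heuristic, greedy nearest neighbor, here on invented planar points rather than Δv costs:

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy nearest-neighbor heuristic for the metric TSP:
    fast, but not optimal in general."""
    unvisited = [i for i in range(len(points)) if i != start]
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(points, tour):
    """Total length of the closed tour through `points` in `tour` order."""
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

# invented instance: the unit square, where the greedy tour happens to be optimal
square = [(0, 0), (0, 1), (1, 1), (1, 0)]
tour = nearest_neighbor_tour(square)
print(tour, tour_length(square, tour))  # [0, 1, 2, 3] 4.0
```

On adversarial instances the greedy tour can be far from optimal; stronger polynomial-time schemes (e.g. for the Euclidean case) trade more computation for guaranteed approximation quality.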
NASA Astrophysics Data System (ADS)
Bai, Danyu; Zhang, Zhihai
2014-08-01
This article investigates the open-shop scheduling problem with the objective of minimising the sum of quadratic completion times. For this NP-hard problem, the asymptotic optimality of the shortest processing time block (SPTB) heuristic is proven in the limit. Moreover, three different improvements, namely the job-insert scheme, tabu search and a genetic algorithm, are introduced to enhance the quality of the original solution generated by the SPTB heuristic. At the end of the article, a series of numerical experiments demonstrates the convergence of the heuristic, the performance of the improvements and the effectiveness of the quadratic objective.
Optimal recombination in genetic algorithms for flowshop scheduling problems
NASA Astrophysics Data System (ADS)
Kovalenko, Julia
2016-10-01
The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted to the classical flowshop scheduling problem and to the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.
Optimization of topological quantum algorithms using Lattice Surgery is hard
NASA Astrophysics Data System (ADS)
Herr, Daniel; Nori, Franco; Devitt, Simon
The traditional method for computation in the surface code or the Raussendorf model is the creation of holes, or ''defects'', within the encoded lattice of qubits, which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn our attention to the lattice surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic and achieves universality without using defects for encoding information. In both braid-based and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving difficult to define, and the classical complexity associated with this problem has yet to be determined. In the context of lattice surgery based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical qubit requirements, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Vazifeh-Noshafagh, Samira; Taleizadeh, Ata Allah; Hajipour, Vahid; Mahmoudi, Amin
2017-01-01
This article presents a new multi-objective model for a facility location problem with congestion and pricing policies. The model considers situations in which immobile service facilities are congested by stochastic demand following M/M/m/k queues. It belongs to the class of mixed-integer nonlinear programming models and of NP-hard problems. To solve such a hard model, a new multi-objective optimization algorithm based on vibration theory, namely multi-objective vibration damping optimization (MOVDO), is developed. To tune the algorithm's parameters, the Taguchi approach with a response metric is implemented. The computational results are compared with those of the non-dominated ranking genetic algorithm and the non-dominated sorting genetic algorithm. The outputs demonstrate the robustness of the proposed MOVDO on large-sized problems.
Hernando, Leticia; Mendiburu, Alexander; Lozano, Jose A
2013-01-01
The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space. The performance of the algorithms strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, the estimation of the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods to estimate the number of local optima in combinatorial optimization problems. The methods reviewed not only come from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation in synthetic as well as real problems is given. We conclude by providing recommendations of methods for several scenarios.
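One estimator of the kind this survey draws from the statistical literature is Chao's nonparametric species-richness estimator, which infers the number of unseen local optima from how often each discovered optimum is rediscovered by repeated hill climbs. A minimal sketch on a random landscape over bit strings (the landscape, the Hamming neighborhood, and the choice of the Chao1 estimator are illustrative assumptions, not the paper's benchmark set):

```python
import random
from collections import Counter

def hill_climb(f, x, n):
    """Greedy ascent over n-bit strings under the Hamming-1 neighborhood."""
    while True:
        best = max((x ^ (1 << i) for i in range(n)), key=f)
        if f(best) <= f(x):
            return x  # x is a local optimum
        x = best

def chao1(counts):
    """Chao's lower-bound estimate of the number of distinct optima."""
    f1 = sum(1 for c in counts.values() if c == 1)  # optima seen once
    f2 = sum(1 for c in counts.values() if c == 2)  # optima seen twice
    d = len(counts)                                 # distinct optima seen
    return d + f1 * f1 / (2 * f2) if f2 else d

random.seed(0)
n = 10
values = [random.random() for _ in range(1 << n)]  # random fitness table
f = values.__getitem__
counts = Counter(hill_climb(f, random.randrange(1 << n), n)
                 for _ in range(200))
estimate = chao1(counts)
```

The estimator never falls below the number of optima actually observed, and grows when many optima were seen only once.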
ERIC Educational Resources Information Center
Kolata, Gina
1985-01-01
To determine how hard it is for computers to solve problems, researchers have classified groups of problems (polynomial hierarchy) according to how much time they seem to require for their solutions. A difficult and complex proof is offered which shows that a combinatorial approach (using Boolean circuits) may resolve the problem. (JN)
On the Complexity of Delaying an Adversary’s Project
2005-01-01
…interdiction models for such problems and show that the resulting problem complexities run the gamut: polynomially solvable, weakly NP-complete, strongly NP-complete or NP-hard. We…
On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems
Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...
2015-10-30
In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete; thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c, and two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n − 1)/(k − 1), where n is the number of vertices in the input graph. The second has an approximation ratio of O(n^δ), for some δ < 1/3.
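The exactly-k variant can be stated directly as a search over vertex subsets. A brute-force sketch on a tiny made-up weighted graph (exponential in the graph size, included only to make the problem definition concrete, not as a practical algorithm):

```python
from itertools import combinations

def min_st_cut_exactly_k(nodes, edges, s, t, k):
    """Cheapest s-t cut whose s-side contains exactly k vertices.

    edges maps frozenset({u, v}) -> weight; the cut weight is the total
    weight of edges with exactly one endpoint on the s-side.
    """
    best_w, best_side = float("inf"), None
    others = [v for v in nodes if v not in (s, t)]
    for extra in combinations(others, k - 1):       # s plus k-1 more vertices
        side = {s, *extra}
        w = sum(wt for e, wt in edges.items() if len(e & side) == 1)
        if w < best_w:
            best_w, best_side = w, side
    return best_w, best_side

nodes = {"s", "a", "b", "c", "t"}
edges = {frozenset(e): w for e, w in [
    (("s", "a"), 2), (("s", "b"), 3), (("a", "b"), 1),
    (("a", "t"), 4), (("b", "c"), 2), (("c", "t"), 1),
]}
weight, side = min_st_cut_exactly_k(nodes, edges, "s", "t", 2)
```

On this instance the cheapest 2-vertex s-side is {s, b}, cutting edges s-a, a-b, and b-c.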
Integer Linear Programming in Computational Biology
NASA Astrophysics Data System (ADS)
Althaus, Ernst; Klau, Gunnar W.; Kohlbacher, Oliver; Lenhof, Hans-Peter; Reinert, Knut
Computational molecular biology (bioinformatics) is a young research field that is rich in NP-hard optimization problems. The problem instances encountered are often huge and comprise thousands of variables. Since their introduction into the field of bioinformatics in 1997, integer linear programming (ILP) techniques have been successfully applied to many optimization problems. These approaches have added much momentum to development and progress in related areas. In particular, ILP-based approaches have become a standard optimization technique in bioinformatics. In this review, we present applications of ILP-based techniques developed by members and former members of Kurt Mehlhorn’s group. These techniques were introduced to bioinformatics in a series of papers and popularized by demonstration of their effectiveness and potential.
A Fast and Scalable Algorithm for Calculating the Achievable Capacity of a Wireless Mesh Network
2016-05-09
…an optimal wireless transmission schedule for a predetermined set of links without the addition of routing is NP-hard [5]. We effectively bypass the… wireless communications have used omni-directional antennas, where a user's transmission interferes with other users in all directions. Different… interference from some particular transmission. Hence, δ = Δ(Ḡ_c) = max_{(i,j)∈E} |F_ij|.
NASA Astrophysics Data System (ADS)
Zittersteijn, Michiel; Schildknecht, Thomas; Vananti, Alessandro; Dolado Perez, Juan Carlos; Martinot, Vincent
2016-07-01
Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the correlation and orbit determination problems simultaneously, and that can efficiently process large data sets with minimal manual intervention. This problem is also known as the Multiple Target Tracking (MTT) problem. The complexity of the MTT problem is defined by its dimension S. Current research tends to focus on the S = 2 MTT problem, because for S = 2 the problem is solvable in polynomial time. However, with S = 2 the decision to associate a set of observations is based on the minimum amount of information; in ambiguous situations (e.g. satellite clusters) this leads to incorrect associations. The S > 2 MTT problem is an NP-hard combinatorial optimization problem. In previous work an Elitist Genetic Algorithm (EGA) was proposed as a method to approximately solve this problem, and it was shown that the EGA is able to find a good approximate solution with polynomial time complexity. The EGA relies on solving the Lambert problem in order to perform the necessary orbit determinations, which means that the algorithm is restricted to orbits described by Keplerian motion. The work presented in this paper focuses on the impact that this restriction has on the algorithm's performance.
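The elitist scheme named above follows the standard genetic-algorithm pattern of carrying the best individual unchanged into the next generation. A generic sketch on a toy bit-string objective (the operators, parameters, and OneMax objective are illustrative assumptions, not the paper's orbit-determination encoding):

```python
import random

def elitist_ga(fitness, n_bits, pop_size=20, generations=100, seed=1):
    """Maximize fitness over bit strings with an elitist GA."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [pop[0][:]]                       # elite survives unchanged
        while len(nxt) < pop_size:
            p, q = rng.sample(pop[:pop_size // 2], 2)  # truncation selection
            cut = rng.randrange(1, n_bits)             # one-point crossover
            child = p[:cut] + q[cut:]
            child[rng.randrange(n_bits)] ^= 1          # single-bit mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = elitist_ga(sum, n_bits=16)   # OneMax: maximize the number of ones
```

Elitism guarantees the best fitness seen so far never decreases between generations, which is the property the abstract's "Elitist" qualifier refers to.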
A Lifetime Maximization Relay Selection Scheme in Wireless Body Area Networks.
Zhang, Yu; Zhang, Bing; Zhang, Shi
2017-06-02
Network lifetime is one of the most important metrics in Wireless Body Area Networks (WBANs). In this paper, a relay selection scheme is proposed under the topology constraints specified in the IEEE 802.15.6 standard to maximize the lifetime of WBANs, through formulating and solving an optimization problem in which the relay selection of each node acts as the optimization variable. Considering the diversity of the sensor nodes in WBANs, the optimization problem takes not only the energy consumption rate but also the energy difference among sensor nodes into account to improve network lifetime. Since the problem is NP-hard and intractable, a heuristic solution is designed to rapidly address it. The simulation results indicate that the proposed relay selection scheme outperforms existing algorithms in network lifetime and that the heuristic solution has low time complexity with only a negligible performance gap from the optimal value. Furthermore, we also conduct simulations based on a general WBAN model to comprehensively illustrate the advantages of the proposed algorithm. At the end of the evaluation, we validate the feasibility of our proposed scheme via an implementation discussion.
Exact parallel algorithms for some members of the traveling salesman problem family
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekny, J.F.
1989-01-01
The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems, so exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), the prize collecting traveling salesman problem (PCTSP), and the resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is itself an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using 14- and 100-processor BBN Butterfly Plus computers. The computational results represent the largest instances ever solved to optimality on any type of computer.
Ryu, Joonghyun; Lee, Mokwon; Cha, Jehyun; Laskowski, Roman A.; Ryu, Seong Eon; Kim, Deok-Soo
2016-01-01
Many applications, such as protein design, homology modeling and flexible docking, require the prediction of a protein's optimal side-chain conformations from just its amino acid sequence and backbone structure. Side-chain prediction (SCP) is an NP-hard energy minimization problem. Here, we present BetaSCPWeb, which efficiently computes a conformation close to optimal using a geometry-prioritization method based on the Voronoi diagram of spherical atoms. Its output is available in visual, textual and PDB file formats. The web server is free and open to all users at http://voronoi.hanyang.ac.kr/betascpweb with no login requirement. PMID:27151195
Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network.
Goto, Hayato
2016-02-22
The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence.
Discrete Optimization Model for Vehicle Routing Problem with Scheduling Side Constraints
NASA Astrophysics Data System (ADS)
Juliandri, Dedy; Mawengkang, Herman; Bu'ulolo, F.
2018-01-01
The Vehicle Routing Problem (VRP) is an important element of many logistic systems that involve the routing and scheduling of vehicles from a depot to a set of customer nodes. It is a hard combinatorial optimization problem whose objective is to find an optimal set of routes used by a fleet of vehicles to serve the demands of a set of customers, with the requirement that the vehicles return to the depot after serving the customers' demand. The problem incorporates time windows, fleet and driver scheduling, and pick-up and delivery over the planning horizon. The goal is to determine the scheduling of fleet and drivers and the routing policies of the vehicles, minimizing the overall cost of all routes over the planning horizon. We model the problem as a linear mixed integer program and develop a combination of heuristics and an exact method for solving the model.
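A common constructive heuristic inside such combined heuristic/exact approaches is capacity-constrained nearest-neighbor route building. A minimal sketch (the coordinates, demands, and capacity are made-up; a real VRP code would layer time windows, driver schedules, and improvement moves on top):

```python
import math

def nearest_neighbor_routes(depot, customers, demands, capacity):
    """Greedily build routes: repeatedly visit the nearest unserved
    customer that still fits in the vehicle, else return to the depot
    and start a new route."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unserved = set(customers)
    routes = []
    while unserved:
        route, load, pos = [], 0, depot
        while True:
            feasible = [c for c in unserved if load + demands[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(pos, customers[c]))
            route.append(nxt)
            load += demands[nxt]
            pos = customers[nxt]
            unserved.remove(nxt)
        routes.append(route)
    return routes

customers = {1: (0, 5), 2: (5, 0), 3: (6, 1), 4: (0, 6)}
demands = {1: 3, 2: 4, 3: 4, 4: 2}
routes = nearest_neighbor_routes((0, 0), customers, demands, capacity=7)
```

Every customer ends up on exactly one route and no route exceeds the vehicle capacity; optimality is not guaranteed, which is why such constructions are paired with exact methods.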
Hardness and Elastic Modulus on Six-Fold Symmetry Gold Nanoparticles
Ramos, Manuel; Ortiz-Jordan, Luis; Hurtado-Macias, Abel; Flores, Sergio; Elizalde-Galindo, José T.; Rocha, Carmen; Torres, Brenda; Zarei-Chaleshtori, Maryam; Chianelli, Russell R.
2013-01-01
The chemical synthesis of gold nanoparticles (NPs) using gold(III) chloride trihydrate (HAuCl4·3H2O) and sodium citrate as a reducing agent in aqueous conditions at 100 °C is presented here. The gold nanoparticles are formed by a galvanic replacement mechanism as described by Lee and Messiel. The morphology of the gold NPs was analyzed by high-resolution transmission electron microscopy; the results indicate a six-fold icosahedral symmetry with an average size distribution of 22 nm. To understand mechanical behavior, such as hardness and elastic modulus, the gold NPs were subjected to nanoindentation measurements, obtaining a hardness value of 1.72 GPa and an elastic modulus of 100 GPa at a displacement of 3–5 nm into the nanoparticle's surface. PMID:28809302
Complex network problems in physics, computer science and biology
NASA Astrophysics Data System (ADS)
Cojocaru, Radu Ionut
There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science, and only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we apply and analyze several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses.
We extend this model to the study of a specific case of a spin glass on the Bethe lattice at zero temperature, and then apply this formalism to the K-SAT problem defined in Chapter 1. The phase transitions that physicists study often correspond to a change in the computational complexity of the corresponding computer science problem. Chapter 3 presents phase transitions specific to the problems discussed in Chapter 1, along with known results for the K-SAT problem. We discuss the replica method and experimental evidence of replica symmetry breaking. The physics approach to hard problems is based on replica methods, which are difficult to understand. In Chapter 4 we develop novel methods for studying hard problems using techniques similar to the message passing methods discussed in Chapter 2. Although we concentrated on the symmetric case, cavity methods show promise for generalizing our methods to the asymmetric case. As has been highlighted by John Hopfield, several key features of biological systems are not shared by physical systems. Although living entities follow the laws of physics and chemistry, the fact that organisms adapt and reproduce introduces an essential ingredient that is missing in the physical sciences. Many algorithms have been developed to extract information from networks. In Chapter 5 we apply polynomial algorithms such as minimum spanning tree to study and construct gene regulatory networks from experimental data. As future work we propose the use of algorithms such as min-cut/max-flow and Dijkstra's algorithm for understanding key properties of these networks.
Approximation algorithms for the min-power symmetric connectivity problem
NASA Astrophysics Data System (ADS)
Plotnikov, Roman; Erzin, Adil; Mladenovic, Nenad
2016-10-01
We consider the NP-hard problem of synthesizing an optimal spanning communication subgraph in a given arbitrary simple edge-weighted graph. This problem arises in wireless networks when minimizing the total transmission power consumption. We propose several new heuristics based on the variable neighborhood search metaheuristic for the approximate solution of the problem. We have performed a numerical experiment in which all proposed algorithms were executed on randomly generated test samples. For these instances, on average, our algorithms outperform the previously known heuristics.
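A standard baseline such heuristics are compared against is the minimum spanning tree, whose induced power assignment is known to be within a factor of 2 of optimal for min-power symmetric connectivity. A stdlib-only sketch of computing that baseline (the edge weights are a made-up instance, and this is the baseline, not the paper's VNS heuristics):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm; edges is a list of (weight, u, v) tuples."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    mst = []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

def total_power(n, tree):
    """Min-power objective: each node pays the weight of its heaviest
    incident tree edge (enough power to reach all its tree neighbors)."""
    power = [0] * n
    for w, u, v in tree:
        power[u] = max(power[u], w)
        power[v] = max(power[v], w)
    return sum(power)

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (2, 2, 3), (5, 1, 3)]
mst = kruskal_mst(4, edges)
cost = total_power(4, mst)
```

Note the objective sums per-node powers rather than edge weights, which is exactly why the MST is only an approximation here rather than optimal.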
Microbatteries for Combinatorial Studies of Conventional Lithium-Ion Batteries
NASA Technical Reports Server (NTRS)
West, William; Whitacre, Jay; Bugga, Ratnakumar
2003-01-01
Integrated arrays of microscopic solid-state batteries have been demonstrated in a continuing effort to develop microscopic sources of power and of voltage reference circuits to be incorporated into low-power integrated circuits. Perhaps even more importantly, arrays of microscopic batteries can be fabricated and tested in combinatorial experiments directed toward optimization and discovery of battery materials. The value of the combinatorial approach to optimization and discovery has been proven in the optoelectronic, pharmaceutical, and bioengineering industries. Depending on the specific application, the combinatorial approach can involve the investigation of hundreds or even thousands of different combinations; hence, it is time-consuming and expensive to attempt to implement the combinatorial approach by building and testing full-size, discrete cells and batteries. The conception of microbattery arrays makes it practical to bring the advantages of the combinatorial approach to the development of batteries.
Block clustering based on difference of convex functions (DC) programming and DC algorithms.
Le, Hoai Minh; Le Thi, Hoai An; Dinh, Tao Pham; Huynh, Van Ngai
2013-10-01
We investigate difference of convex functions (DC) programming and the DC algorithm (DCA) to solve the block clustering problem in the continuous framework, which traditionally requires solving a hard combinatorial optimization problem. DC reformulation techniques and exact penalty in DC programming are developed to build an appropriate equivalent DC program of the block clustering problem. They lead to an elegant and explicit DCA scheme for the resulting DC program. Computational experiments show the robustness and efficiency of the proposed algorithm and its superiority over standard algorithms such as two-mode K-means, two-mode fuzzy clustering, and block classification EM.
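The DCA iteration itself is simple: with f = g − h, both convex, each step linearizes h at the current point and minimizes the resulting convex upper bound of f. A one-dimensional sketch on a double-well function (the function and its DC split are a made-up example, not the block-clustering program):

```python
def dca(x, steps=50):
    """DCA for f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = 2*x**2.

    Each step replaces h by its linearization at the current point x_k
    (slope h'(x_k) = 4*x_k) and minimizes the convex surrogate
    g(x) - 4*x_k*x exactly: setting its derivative 4*x**3 - 4*x_k to
    zero gives the closed-form update x = x_k**(1/3).
    """
    for _ in range(steps):
        x = abs(x) ** (1 / 3) * (1 if x >= 0 else -1)  # real cube root
    return x

x_star = dca(0.5)                 # iterates climb toward the minimizer x = 1
f = lambda x: x**4 - 2 * x**2     # f(1) = -1 is a local minimum
```

Starting from 0.5 the iterates 0.5^(1/3), 0.5^(1/9), ... converge to the local minimizer x = 1, illustrating how DCA descends a nonconvex objective through a sequence of convex subproblems.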
NASA Astrophysics Data System (ADS)
Liu, Jing; Meng, Guowen; Li, Zhongbo; Huang, Zhulin; Li, Xiangdong
2015-10-01
Surface-enhanced Raman scattering (SERS) is considered to be an excellent candidate for analytical detection schemes because of its molecular specificity, rapid response and high sensitivity. Here, SERS substrates of Ag-nanoparticle (Ag-NP) decorated Ge-nanotapers grafted on hexagonally ordered Si-micropillar arrays (denoted Ag-NP@Ge-nanotaper/Si-micropillar) are fabricated via a combinatorial process of two-step etching to achieve hexagonal Si-micropillar arrays, chemical vapor deposition of flocky Ge-nanotapers on each Si-micropillar, and decoration of Ag-NPs onto the Ge-nanotapers through galvanic displacement. With high-density three-dimensional (3D) "hot spots" created by the large quantities of neighboring Ag-NPs and a large-scale uniform morphology, the hierarchical Ag-NP@Ge-nanotaper/Si-micropillar arrays exhibit strong and reproducible SERS activity. Using these hierarchical 3D SERS substrates, both methyl parathion (a commonly used pesticide) and PCB-2 (one congener of the highly toxic polychlorinated biphenyls) have been detected at concentrations down to 10^-7 M and 10^-5 M respectively, showing great potential in SERS-based rapid trace-level detection of toxic organic pollutants in the environment. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06001j
Social interaction as a heuristic for combinatorial optimization problems
NASA Astrophysics Data System (ADS)
Fontanari, José F.
2010-11-01
We investigate the performance of a variant of Axelrod's model for dissemination of culture, the Adaptive Culture Heuristic (ACH), on solving an NP-complete optimization problem, namely, the classification of binary input patterns of size F by a Boolean binary perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent possible solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^{1/4}, so that the number of agents must increase with the fourth power of the problem size, N ∝ F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
Engineering on-chip nanoporous gold material libraries via precision photothermal treatment
NASA Astrophysics Data System (ADS)
Chapman, Christopher A. R.; Wang, Ling; Biener, Juergen; Seker, Erkin; Biener, Monika M.; Matthews, Manyalibo J.
2015-12-01
Libraries of nanostructured materials on a single chip are a promising platform for high-throughput and combinatorial studies of structure-property relationships in the fields of physics and biology. Nanoporous gold (np-Au), produced by an alloy corrosion process, is a nanostructured material specifically suited for such studies because of its self-similar, thermally induced coarsening behavior. However, traditional heat application techniques for the modification of np-Au are bulk processes that cannot be used to generate a library of different pore sizes on a single chip. Here, laser micro-processing offers an attractive solution to this problem by providing a means to apply energy with high spatial and temporal resolution. In the present study we use finite element multiphysics simulations to predict the effects of laser mode (continuous-wave vs. pulsed) and the thermal conductivity of the supporting substrate on the local np-Au film temperatures during photothermal annealing. Based on these results we discuss the mechanisms by which the np-Au network is coarsened. Thermal transport simulations predict that continuous-wave mode laser irradiation of np-Au thin films on a silicon substrate supports the widest range of morphologies that can be created through photothermal annealing of np-Au. Using the guidance provided by the simulations, we successfully fabricate an on-chip material library consisting of 81 np-Au samples of 9 different morphologies for use in the parallel study of structure-property relationships. Electronic supplementary information (ESI) available: Details of sample preparation, fabrication of material libraries, as well as further analysis and supporting scanning electron micrographs can be found in ESI. See DOI: 10.1039/c5nr04580k
On the complexity of some quadratic Euclidean 2-clustering problems
NASA Astrophysics Data System (ADS)
Kel'manov, A. V.; Pyatkin, A. V.
2016-03-01
Some problems of partitioning a finite set of points of Euclidean space into two clusters are considered. In these problems, the following criteria are minimized: (1) the sum over both clusters of the sums of squared pairwise distances between the elements of each cluster and (2) the sum over both clusters of the sums of squared distances from the elements of each cluster to its geometric center, multiplied by the cluster cardinality, where the geometric center (or centroid) of a cluster is defined as the mean value of the elements in that cluster. Additionally, another problem close to (2) is considered, in which the desired center of one of the clusters is given as input, while the center of the other cluster is unknown (it is a variable to be optimized), as in problem (2). Two variants of the problems are analyzed, in which the cardinalities of the clusters are either (1) part of the input or (2) optimization variables. It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).
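The two criteria above are linked by a classical identity: the sum of squared pairwise distances within a cluster equals the cluster cardinality times the sum of squared distances from its elements to the centroid, which is why criterion (2) carries the cardinality factor. A quick numerical check on arbitrary points:

```python
def sq(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def pairwise_sum(points):
    """Sum of squared distances over all unordered pairs."""
    return sum(sq(points[i], points[j])
               for i in range(len(points))
               for j in range(i + 1, len(points)))

def centroid_sum(points):
    """Cardinality times the sum of squared distances to the centroid."""
    d = len(points[0])
    c = [sum(p[k] for p in points) / len(points) for k in range(d)]
    return len(points) * sum(sq(p, c) for p in points)

points = [(0.0, 0.0), (2.0, 1.0), (4.0, -1.0), (1.0, 3.0)]
lhs, rhs = pairwise_sum(points), centroid_sum(points)   # both equal 70
```

The identity means the two criteria coincide on each cluster, so the problems differ only in how the per-cluster terms are combined and constrained.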
Bifurcation-based approach reveals synergism and optimal combinatorial perturbation.
Liu, Yanwei; Li, Shanshan; Liu, Zengrong; Wang, Ruiqi
2016-06-01
Cells accomplish the process of fate decision and form terminal lineages through a series of binary choices in which cells switch stable states from one branch to another as the interacting strengths of regulatory factors continuously vary. Various combinatorial effects may occur because almost all regulatory processes are managed in a combinatorial fashion. Combinatorial regulation is crucial for cell fate decisions because it may effectively integrate many different signaling pathways to meet the higher regulation demand during cell development. However, whether the contribution of combinatorial regulation to a state transition is better than that of a single perturbation and, if so, what the optimal combination strategy is, seem to be significant issues from the point of view of both biology and mathematics. Using the approaches of combinatorial perturbations and bifurcation analysis, we provide a general framework for the quantitative analysis of synergism in molecular networks. Different from known methods, the bifurcation-based approach depends only on stable state responses to stimuli, because the state transition induced by combinatorial perturbations occurs between stable states. More importantly, an optimal combinatorial perturbation strategy can be determined by investigating the relationship between the bifurcation curve of a synergistic perturbation pair and the level set of a specific objective function. The approach is applied to two models, i.e., a theoretical multistable decision model and a biologically realistic CREB model, to show its validity, although the approach holds for a general class of biological systems.
Rao, Srinivas S.; Kong, Wing-Pui; Wei, Chih-Jen; Van Hoeven, Neal; Gorres, J. Patrick; Nason, Martha; Andersen, Hanne; Tumpey, Terrence M.; Nabel, Gary J.
2010-01-01
Efforts to develop a broadly protective vaccine against the highly pathogenic avian influenza A (HPAI) H5N1 virus have focused on highly conserved influenza gene products. The viral nucleoprotein (NP) and ion channel matrix protein (M2) are highly conserved among different strains and various influenza A subtypes. Here, we investigate the relative efficacy of NP and M2 compared to HA in protecting against HPAI H5N1 virus. In mice, previous studies have shown that vaccination with NP and M2 in recombinant DNA and/or adenovirus vectors or with adjuvants confers protection against lethal challenge in the absence of HA. However, we find that the protective efficacy of NP and M2 diminishes as the virulence and dose of the challenge virus are increased. To explore this question in a model relevant to human disease, ferrets were immunized with DNA/rAd5 vaccines encoding NP, M2, HA, NP+M2 or HA+NP+M2. Only HA or HA+NP+M2 vaccination conferred protection against a stringent virus challenge. Therefore, while gene-based vaccination with NP and M2 may provide moderate levels of protection against low challenge doses, it is insufficient to confer protective immunity against high challenge doses of H5N1 in ferrets. These immunogens may require combinatorial vaccination with HA, which confers protection even against very high doses of lethal viral challenge. PMID:20352112
Ryu, Joonghyun; Lee, Mokwon; Cha, Jehyun; Laskowski, Roman A; Ryu, Seong Eon; Kim, Deok-Soo
2016-07-08
Many applications, such as protein design, homology modeling, flexible docking, etc. require the prediction of a protein's optimal side-chain conformations from just its amino acid sequence and backbone structure. Side-chain prediction (SCP) is an NP-hard energy minimization problem. Here, we present BetaSCPWeb which efficiently computes a conformation close to optimal using a geometry-prioritization method based on the Voronoi diagram of spherical atoms. Its outputs are visual, textual and PDB file format. The web server is free and open to all users at http://voronoi.hanyang.ac.kr/betascpweb with no login requirement. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Gobin, Oliver C; Schüth, Ferdi
2008-01-01
Genetic algorithms are widely used to solve and optimize combinatorial problems and are increasingly applied to library design in combinatorial chemistry. Because of their flexibility, however, their implementation can be challenging. In this study, the influence of the representation of solid catalysts on the performance of genetic algorithms was systematically investigated on the basis of a new, constrained, multiobjective, combinatorial test problem with properties common to problems in combinatorial materials science. Constraints were satisfied by penalty functions, repair algorithms, or special representations. The tests were performed using three state-of-the-art evolutionary multiobjective algorithms by performing 100 optimization runs for each algorithm and test case. Experimental data obtained during the optimization of a noble-metal-free solid catalyst system active in the selective catalytic reduction of nitric oxide with propene were used to build a predictive model to validate the results of the theoretical test problem. A significant influence of the representation on the optimization performance was observed. Binary encodings were found to be the preferred encoding in most cases, and depending on the experimental test unit, repair algorithms or penalty functions performed best.
Quantum Resonance Approach to Combinatorial Optimization
NASA Technical Reports Server (NTRS)
Zak, Michail
1997-01-01
It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is that the computing time is independent of the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.
Parallel computation with molecular-motor-propelled agents in nanofabricated networks.
Nicolau, Dan V; Lard, Mercy; Korten, Till; van Delft, Falco C M J M; Persson, Malin; Bengtsson, Elina; Månsson, Alf; Diez, Stefan; Linke, Heiner; Nicolau, Dan V
2016-03-08
The combinatorial nature of many important mathematical problems, including nondeterministic-polynomial-time (NP)-complete problems, places a severe limitation on the problem size that can be solved with conventional, sequentially operating electronic computers. There have been significant efforts in conceiving parallel-computation approaches in the past, for example: DNA computation, quantum computation, and microfluidics-based computation. However, these approaches have not proven, so far, to be scalable and practical from a fabrication and operational perspective. Here, we report the foundations of an alternative parallel-computation system in which a given combinatorial problem is encoded into a graphical, modular network that is embedded in a nanofabricated planar device. Exploring the network in a parallel fashion using a large number of independent, molecular-motor-propelled agents then solves the mathematical problem. This approach uses orders of magnitude less energy than conventional computers, thus addressing issues related to power consumption and heat dissipation. We provide a proof-of-concept demonstration of such a device by solving, in a parallel fashion, the small instance {2, 5, 9} of the subset sum problem, which is a benchmark NP-complete problem. Finally, we discuss the technical advances necessary to make our system scalable with presently available technology.
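The benchmark instance {2, 5, 9} is small enough to enumerate exhaustively; the sketch below (plain Python, standing in for what the motor-propelled agents explore in parallel in the nanofabricated network) lists all achievable subset sums:

```python
from itertools import combinations

def subset_sum_solutions(weights, target):
    """Exhaustively enumerate the subsets of `weights` summing to `target`."""
    sols = []
    for r in range(len(weights) + 1):
        for combo in combinations(weights, r):
            if sum(combo) == target:
                sols.append(combo)
    return sols

# The instance {2, 5, 9} from the paper: every subset corresponds to one
# agent path through the network, and its total is the exit taken.
achievable = sorted({sum(c) for r in range(4) for c in combinations([2, 5, 9], r)})
print(achievable)                          # [0, 2, 5, 7, 9, 11, 14, 16]
print(subset_sum_solutions([2, 5, 9], 7))  # [(2, 5)]
```

The exponential number of subsets is exactly what makes sequential enumeration infeasible at scale and motivates the parallel agent-based exploration.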
Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems
Fonseca Guerra, Gabriel A.; Furber, Steve B.
2017-01-01
Constraint satisfaction problems (CSP) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major unsolved question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map color problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system, which is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart. PMID:29311791
A methodology to find the elementary landscape decomposition of combinatorial optimization problems.
Chicano, Francisco; Whitley, L Darrell; Alba, Enrique
2011-01-01
A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.
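The defining eigenfunction condition can be checked numerically on a toy landscape. In the sketch below (our own example, not from the paper), OneMax on 3-bit strings under the single-bit-flip neighborhood is elementary: the centered objective is an eigenvector of the neighborhood graph's Laplacian with eigenvalue 2.

```python
import numpy as np
from itertools import product

n = 3  # OneMax on bitstrings of length n, single-bit-flip neighborhood
states = list(product([0, 1], repeat=n))
idx = {s: i for i, s in enumerate(states)}

# Adjacency matrix of the Hamming-distance-1 neighborhood graph
A = np.zeros((2 ** n, 2 ** n))
for s in states:
    for b in range(n):
        t = list(s)
        t[b] ^= 1
        A[idx[s], idx[tuple(t)]] = 1
L = n * np.eye(2 ** n) - A  # Laplacian L = D - A (the graph is n-regular)

f = np.array([sum(s) for s in states], dtype=float)  # objective values
g = f - f.mean()                                     # centered objective
# Elementary landscape: L g = lambda * g for a single eigenvalue (here 2)
print(np.allclose(L @ g, 2 * g))  # True
```

For non-elementary objectives, the paper's decomposition expresses g as a sum of such Laplacian eigenvectors, one per elementary component.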
Courses timetabling problem by minimizing the number of less preferable time slots
NASA Astrophysics Data System (ADS)
Oktavia, M.; Aman, A.; Bakhtiar, T.
2017-01-01
In an organization with a large number of resources, timetabling is one of the most important components of management strategy and the one most prone to errors. Producing a good timetable is quite a task, so the aid of operations research and management strategy approaches is essential. Timetabling in educational institutions can roughly be categorized into school timetabling, course timetabling, and examination timetabling, which differ from each other in the entities involved, such as the type of events, the kind of institution, and the type and relative influence of constraints. The educational timetabling problem is generally a complex combinatorial problem consisting of NP-complete sub-problems. The requested timetable is required to fulfill a set of hard and soft constraints of various types. In this paper we consider a course timetabling problem at a university whose objective is to minimize the number of less preferable time slots. By less preferable time slots we mean those in the early morning (07.00-07.50), those in the late afternoon (17.00-17.50), which in fact lie beyond the working hours, those scheduled during the lunch break (12.00-12.50), those on Wednesday 10.00-11.50, which coincide with the Department Meeting, and those on Saturday, which should in fact be devoted to a day off. In some cases, timetables with a number of activities scheduled in the abovementioned time slots are commonly encountered. The course timetabling for the Educational Program of General Competence (PPKU) students in the odd semester at Bogor Agricultural University (IPB) has been modelled in the framework of integer linear programming. We solved the optimization problem heuristically by categorizing all the groups into seven clusters.
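As a toy stand-in for the integer linear program (slot names, courses, and the one-course-per-slot constraint are invented for illustration), minimizing the number of less preferable slots can be brute-forced for a handful of courses:

```python
from itertools import permutations

slots = ["Mon 07:00", "Mon 10:00", "Mon 12:00", "Tue 10:00", "Sat 10:00"]
# Early-morning, lunch-break, and Saturday slots are penalized
less_preferable = {"Mon 07:00", "Mon 12:00", "Sat 10:00"}
courses = ["Calculus", "Physics", "Chemistry"]

# Each course gets a distinct slot; minimize penalized slots used
best = min(permutations(slots, len(courses)),
           key=lambda assign: sum(s in less_preferable for s in assign))
penalty = sum(s in less_preferable for s in best)
print(dict(zip(courses, best)), "penalty =", penalty)
```

With only two preferable slots for three courses, the optimum here necessarily uses one penalized slot (penalty = 1); the real PPKU instance is far too large for enumeration, hence the clustering heuristic.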
Gaussian Mean Field Lattice Gas
NASA Astrophysics Data System (ADS)
Scoppola, Benedetto; Troiani, Alessio
2018-03-01
We study rigorously a lattice gas version of the Sherrington-Kirkpatrick spin glass model. In the discrete optimization literature this problem is known as unconstrained binary quadratic programming, and it is NP-hard. We prove that the fluctuations of the ground state energy tend to vanish in the thermodynamic limit, and we give a lower bound on this ground state energy. We then present a heuristic algorithm, based on a probabilistic cellular automaton, which appears able to find configurations with energy very close to the minimum, even for quite large instances.
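For intuition, the unconstrained binary quadratic programming objective can be minimized exactly for tiny instances by enumeration; the heuristic cellular automaton in the paper targets instances far beyond this reach. A sketch with an illustrative matrix Q (not from the paper):

```python
import numpy as np
from itertools import product

def ubqp_ground_state(Q):
    """Exact minimum of x^T Q x over x in {0,1}^n.

    Exponential in n, so usable only for tiny instances; this is the
    brute-force baseline a heuristic solver is compared against.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in product([0, 1], repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

Q = np.array([[-2.0, 1.0, 1.0],
              [1.0, -2.0, 1.0],
              [1.0, 1.0, -2.0]])
best = ubqp_ground_state(Q)
print(best)  # any single-site configuration attains the minimum energy -2
```

Here every one-hot (and every two-site) configuration reaches energy -2, illustrating the degenerate ground states typical of spin-glass-like instances.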
Optimizing Irregular Applications for Energy and Performance on the Tilera Many-core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chavarría-Miranda, Daniel; Panyala, Ajay R.; Halappanavar, Mahantesh
Optimizing applications simultaneously for energy and performance is a complex problem. High performance, parallel, irregular applications are notoriously hard to optimize due to their data-dependent memory accesses, lack of structured locality and complex data structures and code patterns. Irregular kernels are growing in importance in applications such as machine learning, graph analytics and combinatorial scientific computing. Performance- and energy-efficient implementation of these kernels on modern, energy efficient, multicore and many-core platforms is therefore an important and challenging problem. We present results from optimizing two irregular applications, the Louvain method for community detection (Grappolo) and high-performance conjugate gradient (HPCCG), on the Tilera many-core system. We have significantly extended MIT's OpenTuner auto-tuning framework to conduct a detailed study of platform-independent and platform-specific optimizations to improve performance as well as reduce total energy consumption. We explore the optimization design space along three dimensions: memory layout schemes, compiler-based code transformations, and optimization of parallel loop schedules. Using auto-tuning, we demonstrate whole-node energy savings of up to 41% relative to a baseline instantiation, and up to 31% relative to manually optimized variants.
Fedeli, Chiara; Segat, Daniela; Tavano, Regina; Bubacco, Luigi; De Franceschi, Giorgia; de Laureto, Patrizia Polverino; Lubian, Elisa; Selvestrel, Francesco; Mancin, Fabrizio; Papini, Emanuele
2015-11-14
A coat of strongly-bound host proteins, or hard corona, may influence the biological and pharmacological features of nanotheranostics by altering their cell-interaction selectivity and macrophage clearance. With the goal of identifying specific corona-effectors, we investigated how the capture of amorphous silica nanoparticles (SiO2-NPs; Ø = 26 nm; zeta potential = -18.3 mV) by human lymphocytes, monocytes and macrophages is modulated by the prominent proteins of their plasma corona. LC MS/MS analysis, western blotting and quantitative SDS-PAGE densitometry show that Histidine Rich Glycoprotein (HRG) is the most abundant component of the SiO2-NP hard corona in excess plasma from humans (HP) and mice (MP), together with minor amounts of the homologous Kininogen-1 (Kin-1), while it is remarkably absent in their Foetal Calf Serum (FCS)-derived corona. HRG binds with high affinity to SiO2-NPs (HRG Kd ∼2 nM) and competes with other plasma proteins for the NP surface, so forming a stable and quite homogeneous corona inhibiting nanoparticles binding to the macrophage membrane and their subsequent uptake. Conversely, in the case of lymphocytes and monocytes not only HRG but also several common plasma proteins can interchange in this inhibitory activity. The depletion of HRG and Kin-1 from HP or their plasma exhaustion by increasing NP concentration (>40 μg ml(-1) in 10% HP) lead to a heterogeneous hard corona, mostly formed by fibrinogen (Fibr), HDLs, LDLs, IgGs, Kallikrein and several minor components, allowing nanoparticle binding to macrophages. Consistently, the FCS-derived SiO2-NP hard corona, mainly formed by hemoglobin, α2 macroglobulin and HDLs but lacking HRG, permits nanoparticle uptake by macrophages. Moreover, purified HRG competes with FCS proteins for the NP surface, inhibiting their recruitment in the corona and blocking NP macrophage capture. 
HRG, the main component of the plasma-derived SiO2-NPs' hard corona, has antiopsonin characteristics and uniquely confers to these particles the ability to evade macrophage capture.
Poly(methacrylic acid)-Coated Gold Nanoparticles: Functional Platforms for Theranostic Applications.
Yilmaz, Gokhan; Demir, Bilal; Timur, Suna; Becer, C Remzi
2016-09-12
The integration of drugs with nanomaterials has received significant interest for efficient drug delivery systems. Conventional treatments with therapeutically active drugs may cause undesired side effects; thus, novel strategies that perform these treatments with a combinatorial approach of therapeutic modalities are required. In this study, poly(methacrylic acid)-coated gold nanoparticles (AuNP-PMAA), synthesized by reversible addition-fragmentation chain transfer (RAFT) polymerization, were combined with doxorubicin (DOX) as a model anticancer drug by creating a pH-sensitive hydrazone linkage in the presence of cysteine (Cys) and a cross-linker. Drug-AuNP conjugates were characterized via spectrofluorimetry, dynamic light scattering and zeta potential measurements, as well as X-ray photoelectron spectroscopy. The particle sizes of AuNP-PMAA and the AuNP-PMAA-Cys-DOX conjugate were found to be 104 and 147 nm, respectively. Further experiments under different pH conditions (pH 5.3 and 7.4) also showed that the AuNP-PMAA-Cys-DOX conjugate could release DOX in a pH-sensitive way. Finally, cell culture applications with the human cervix adenocarcinoma cell line (HeLa cells) demonstrated the effective therapeutic impact of the final conjugate for both chemotherapy and radiation therapy in comparison with free DOX and AuNP-PMAA independently. Moreover, a cell imaging study also provided evidence that AuNP-PMAA-Cys-DOX could be a beneficial candidate as a diagnostic agent.
Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall-Anese, Emiliano; Zhao, Changhong; Zamzam, Admed S.
This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and the coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
Lyubetsky, Vassily; Gershgorin, Roman; Gorbunov, Konstantin
2017-12-06
Chromosome structure is a very limited model of the genome including the information about its chromosomes such as their linear or circular organization, the order of genes on them, and the DNA strand encoding a gene. Gene lengths, nucleotide composition, and intergenic regions are ignored. Although highly incomplete, such structure can be used in many cases, e.g., to reconstruct phylogeny and evolutionary events, to identify gene synteny, regulatory elements and promoters (considering highly conserved elements), etc. Three problems are considered; all assume unequal gene content and the presence of gene paralogs. The distance problem is to determine the minimum number of operations required to transform one chromosome structure into another and the corresponding transformation itself including the identification of paralogs in two structures. We use the DCJ model which is one of the most studied combinatorial rearrangement models. Double-, sesqui-, and single-operations as well as deletion and insertion of a chromosome region are considered in the model; the single ones comprise cut and join. In the reconstruction problem, a phylogenetic tree with chromosome structures in the leaves is given. It is necessary to assign the structures to inner nodes of the tree to minimize the sum of distances between terminal structures of each edge and to identify the mutual paralogs in a fairly large set of structures. A linear algorithm is known for the distance problem without paralogs, while the presence of paralogs makes it NP-hard. If paralogs are allowed but the insertion and deletion operations are missing (and special constraints are imposed), the reduction of the distance problem to integer linear programming is known. Apparently, the reconstruction problem is NP-hard even in the absence of paralogs. The problem of contigs is to find the optimal arrangements for each given set of contigs, which also includes the mutual identification of paralogs. 
We proved that these problems can be reduced to integer linear programming formulations, which allows them to be solved with a very special case of an integer linear programming tool. The results were tested on synthetic and biological samples. Three well-known problems were thus reduced to a very special case of integer linear programming, which is a new method for their solution. Integer linear programming is clearly among the main computational methods and, as generally accepted, is fast on average; in particular, computation systems specifically targeted at it are available. The challenges are to reduce the size of the corresponding integer linear programming formulations and to incorporate a more detailed biological concept into our model of the reconstruction.
Nonlinear Multidimensional Assignment Problems Efficient Conic Optimization Methods and Applications
2015-06-24
Arizona State University, School of Mathematical & Statistical Sciences. The major goals of this project were completed: the exact solution of previously unsolved challenging combinatorial optimization problems. A combinatorial optimization problem, the Directional Sensor Problem, was solved in two ways: first, heuristically in an engineering fashion, and second, exactly.
TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization
2016-11-28
Contents include: On the Recoverable Robust Traveling Salesman Problem; A Bicriteria Approach to Robust Optimization. The traveling salesman problem (TSP) is a well-known combinatorial optimization problem. An iterative procedure for the robust traveling salesman problem is considered; while this iterative algorithm results in an optimal solution to the robust TSP, computation...
Bifurcation-based adiabatic quantum computation with a nonlinear oscillator network
Goto, Hayato
2016-01-01
The dynamics of nonlinear systems qualitatively change depending on their parameters, which is called bifurcation. A quantum-mechanical nonlinear oscillator can yield a quantum superposition of two oscillation states, known as a Schrödinger cat state, via quantum adiabatic evolution through its bifurcation point. Here we propose a quantum computer comprising such quantum nonlinear oscillators, instead of quantum bits, to solve hard combinatorial optimization problems. The nonlinear oscillator network finds optimal solutions via quantum adiabatic evolution, where nonlinear terms are increased slowly, in contrast to conventional adiabatic quantum computation or quantum annealing, where quantum fluctuation terms are decreased slowly. As a result of numerical simulations, it is concluded that quantum superposition and quantum fluctuation work effectively to find optimal solutions. It is also notable that the present computer is analogous to neural computers, which are also networks of nonlinear components. Thus, the present scheme will open new possibilities for quantum computation, nonlinear science, and artificial intelligence. PMID:26899997
NASA Astrophysics Data System (ADS)
Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III
2018-04-01
NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a substrate network, is extremely challenging. This paper proposes a constraint-optimization-based mapping method for solving the virtual network mapping problem. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard. A node mapping algorithm and a link mapping algorithm are proposed for the two phases, respectively. The node mapping algorithm follows a greedy strategy and mainly considers two factors: the resources available at the nodes and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts distributed constraint optimization, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and results show that it performs very well.
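A minimal sketch of the greedy idea behind a node mapping phase, with invented demands and capacities and considering only the available-resource factor (the paper's algorithm also weighs inter-node distance, which is omitted here):

```python
def greedy_node_mapping(virtual_demand, substrate_capacity):
    """Greedy node-mapping sketch: place the most demanding virtual node
    on the substrate node with the most remaining capacity."""
    capacity = dict(substrate_capacity)
    mapping = {}
    for vnode, demand in sorted(virtual_demand.items(), key=lambda kv: -kv[1]):
        host = max(capacity, key=capacity.get)  # most free resources
        if capacity[host] < demand:
            return None  # infeasible under this greedy order
        mapping[vnode] = host
        capacity[host] -= demand
    return mapping

vdemand = {"v1": 30, "v2": 20, "v3": 10}   # virtual node CPU demands
scap = {"s1": 50, "s2": 40, "s3": 35}      # substrate node capacities
print(greedy_node_mapping(vdemand, scap))  # {'v1': 's1', 'v2': 's2', 'v3': 's3'}
```

The link mapping phase would then route each virtual link over substrate paths between the chosen hosts, which is where the distributed constraint optimization of the paper comes in.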
Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing
2017-07-19
Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures: it employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a special hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum-minimum criterion to obtain the optimal route to the base station. Various simulation results show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms.
Thread Graphs, Linear Rank-Width and Their Algorithmic Applications
NASA Astrophysics Data System (ADS)
Ganian, Robert
The introduction of tree-width by Robertson and Seymour [7] was a breakthrough in the design of graph algorithms. A lot of research since then has focused on obtaining a width measure that would be more general and still allow efficient algorithms for a wide range of NP-hard problems on graphs of bounded width. To this end, Oum and Seymour have proposed rank-width, which allows the solution of many such hard problems on less restricted graph classes (see e.g. [3,4]). But what about problems which are NP-hard even on graphs of bounded tree-width, or even on trees? The parameter used most often for these exceptionally hard problems is path-width; however, it is extremely restrictive: for example, the graphs of path-width 1 are exactly paths.
Robust optimization with transiently chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Sumi, R.; Molnár, B.; Ercsey-Ravasz, M.
2014-05-01
Efficiently solving hard optimization problems has been a strong motivation for progress in analog computing. In a recent study we presented a continuous-time dynamical system for solving the NP-complete Boolean satisfiability (SAT) problem, with a one-to-one correspondence between its stable attractors and the SAT solutions. While physical implementations could offer great efficiency, the transiently chaotic dynamics raises the question of operability in the presence of noise, unavoidable on analog devices. Here we show that the probability of finding solutions is robust to noise intensities well above those present on real hardware. We also developed a cellular neural network model realizable with analog circuits, which tolerates even larger noise intensities. These methods represent an opportunity for robust and efficient physical implementations.
Smell Detection Agent Based Optimization Algorithm
NASA Astrophysics Data System (ADS)
Vinod Chandra, S. S.
2016-09-01
In this paper, a novel nature-inspired optimization algorithm is proposed in which the trained behaviour of dogs in detecting smell trails is adapted into computational agents for problem solving. The algorithm involves the creation of a surface with smell trails and subsequent iteration of the agents in resolving a path. The algorithm can be applied under different computational constraints that incorporate path-based problems, and its implementation can be treated as a shortest path problem for a variety of datasets. The simulated agents have been used to evolve the shortest path between two nodes in a graph. The algorithm is useful for solving NP-hard problems related to path discovery, as well as many practical optimization problems, and extensions of the algorithm can be applied to shortest path problems.
Sparse signals recovered by non-convex penalty in quasi-linear systems.
Cui, Angang; Li, Haiyang; Wen, Meng; Peng, Jigen
2018-01-01
The goal of compressed sensing is to reconstruct a sparse signal from a few linear measurements, far fewer than the dimension of the ambient space of the signal. However, many real-life applications in physics and the biomedical sciences involve strongly nonlinear structures, and the linear model is no longer suitable. Compared with compressed sensing in the linear setting, this nonlinear compressed sensing is much more difficult, in fact an NP-hard combinatorial problem, because of the discrete and discontinuous nature of the [Formula: see text]-norm and the nonlinearity. To facilitate sparse signal recovery, we assume in this paper that the nonlinear models have a smooth quasi-linear nature, and we study a non-convex fraction function [Formula: see text] in this quasi-linear compressed sensing. We propose an iterative fraction thresholding algorithm to solve the regularization problem [Formula: see text] for all [Formula: see text]. With the change of the parameter [Formula: see text], our algorithm obtains promising results, which is one of its advantages compared with some state-of-the-art algorithms. Numerical experiments show that our method performs much better than some state-of-the-art methods.
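The paper's fraction-function thresholding operator is not reproduced in this abstract, so as a sketch of the same algorithmic template we show the closely related iterative soft-thresholding (ISTA) scheme for the linear l1 surrogate; the fraction-function method replaces the soft-thresholding step with its own non-convex thresholding rule. All data here are synthetic:

```python
import numpy as np

def ista(A, y, lam=0.05, step=None, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - y)        # gradient step on the fit term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))              # 40 measurements, dimension 100
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]          # 3-sparse ground truth
y = A @ x_true
x_hat = ista(A, y)
print(sorted(int(i) for i in np.argsort(-np.abs(x_hat))[:3]))  # recovered support
```

With enough random measurements relative to the sparsity level, the three largest recovered entries land on the true support {5, 37, 80}.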
LateBiclustering: Efficient Heuristic Algorithm for Time-Lagged Bicluster Identification.
Gonçalves, Joana P; Madeira, Sara C
2014-01-01
Identifying patterns in temporal data is key to uncover meaningful relationships in diverse domains, from stock trading to social interactions. Also of great interest are clinical and biological applications, namely monitoring patient response to treatment or characterizing activity at the molecular level. In biology, researchers seek to gain insight into gene functions and dynamics of biological processes, as well as potential perturbations of these leading to disease, through the study of patterns emerging from gene expression time series. Clustering can group genes exhibiting similar expression profiles, but focuses on global patterns denoting rather broad, unspecific responses. Biclustering reveals local patterns, which more naturally capture the intricate collaboration between biological players, particularly under a temporal setting. Despite the general biclustering formulation being NP-hard, considering specific properties of time series has led to efficient solutions for the discovery of temporally aligned patterns. Notably, the identification of biclusters with time-lagged patterns, suggestive of transcriptional cascades, remains a challenge due to the combinatorial explosion of delayed occurrences. Herein, we propose LateBiclustering, a sensible heuristic algorithm enabling a polynomial rather than exponential time solution for the problem. We show that it identifies meaningful time-lagged biclusters relevant to the response of Saccharomyces cerevisiae to heat stress.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gurvits, L.
2002-01-01
Classical matching theory can be defined in terms of matrices with nonnegative entries. The notion of a positive operator, central in quantum theory, is a natural generalization of matrices with nonnegative entries. Based on this point of view, we introduce a definition of perfect quantum (operator) matching. We show that the new notion inherits many 'classical' properties, but not all of them. This new notion goes somewhat beyond matroids: for separable bipartite quantum states it coincides with the full-rank property of the intersection of two corresponding geometric matroids. In the classical situation, permanents are naturally associated with perfect matchings. We introduce an analog of permanents for positive operators, called the Quantum Permanent, and show how this generalization of the permanent is related to quantum entanglement. Among other things, Quantum Permanents provide new rational inequalities necessary for the separability of bipartite quantum states. Using Quantum Permanents, we give a deterministic polynomial-time algorithm to solve the Hidden Matroids Intersection Problem and indicate some 'classical' complexity difficulties associated with quantum entanglement. Finally, we prove that the weak membership problem for the convex set of separable bipartite density matrices is NP-hard.
Human Performance on the Traveling Salesman and Related Problems: A Review
ERIC Educational Resources Information Center
MacGregor, James N.; Chu, Yun
2011-01-01
The article provides a review of recent research on human performance on the traveling salesman problem (TSP) and related combinatorial optimization problems. We discuss what combinatorial optimization problems are, why they are important, and why they may be of interest to cognitive scientists. We next describe the main characteristics of human…
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich; Stahl, Stephanie
2008-01-01
Dynamic programming methods for matrix permutation problems in combinatorial data analysis can produce globally-optimal solutions for matrices up to size 30x30, but are computationally infeasible for larger matrices because of enormous computer memory requirements. Branch-and-bound methods also guarantee globally-optimal solutions, but computation…
Osaba, E; Carballedo, R; Diaz, F; Onieva, E; de la Iglesia, I; Perallos, A
2014-01-01
Since their first formulation, genetic algorithms (GAs) have been one of the most widely used techniques to solve combinatorial optimization problems. The basic structure of the GAs is known by the scientific community, and thanks to their easy application and good performance, GAs are the focus of a lot of research works annually. Although throughout history there have been many studies analyzing various concepts of GAs, in the literature there are few studies that analyze objectively the influence of using blind crossover operators for combinatorial optimization problems. For this reason, in this paper a deep study on the influence of using them is conducted. The study is based on a comparison of nine techniques applied to four well-known combinatorial optimization problems. Six of the techniques are GAs with different configurations, and the remaining three are evolutionary algorithms that focus exclusively on the mutation process. Finally, to perform a reliable comparison of these results, a statistical study of them is made, performing the normal distribution z-test.
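A "blind" crossover recombines parent encodings without using any problem knowledge. The sketch below shows one common way such an operator is applied to permutation-type combinatorial encodings: a one-point cut followed by a simple repair step that restores permutation validity. The operator name, the parents, and the cut point are illustrative, not taken from the paper's nine compared techniques.

```python
def blind_one_point_crossover(p1, p2, cut):
    """One-point crossover on permutations, blind to the objective function.

    The prefix is copied from p1; the remainder is filled with p2's genes
    in their p2 order, skipping duplicates (a minimal repair step so the
    child is again a valid permutation).
    """
    prefix = p1[:cut]
    used = set(prefix)
    child = prefix + [g for g in p2 if g not in used]
    return child

p1 = [0, 1, 2, 3, 4]
p2 = [4, 3, 2, 1, 0]
child = blind_one_point_crossover(p1, p2, 2)
print(child)  # [0, 1, 4, 3, 2]
```

The debate the paper addresses is whether such structure-agnostic recombination helps at all over mutation-only evolutionary algorithms on combinatorial problems.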
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2016-07-01
This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model can reduce the prediction risk of any single model and improve prediction precision. The geographical distribution of the reference value of QT dispersion in Chinese adults was mapped precisely using kriging methods. Once the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated using the optimal weighted combinatorial model, and the reference value anywhere in China can likewise be obtained from the geographical distribution map.
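The paper's specific weighting of the three predictors is not given in this summary. A generic way to fit combination weights, sketched below under the assumption that weights are chosen by least squares subject to summing to one, is to reparametrise the constraint away and solve an ordinary least-squares problem; the model predictions here are synthetic.

```python
import numpy as np

def combination_weights(preds, actual):
    """Least-squares combination weights constrained to sum to 1.

    preds: (n_samples, n_models) array of individual model forecasts.
    Solves min ||preds @ w - actual|| s.t. sum(w) = 1 by writing
    w = w0 + Z @ u, where w0 = e_0 is feasible and the columns of Z
    span the zero-sum directions.
    """
    m = preds.shape[1]
    w0 = np.zeros(m)
    w0[0] = 1.0
    Z = np.zeros((m, m - 1))
    Z[0, :] = -1.0           # each column is e_i - e_0, which sums to zero
    Z[1:, :] = np.eye(m - 1)
    u, *_ = np.linalg.lstsq(preds @ Z, actual - preds @ w0, rcond=None)
    return w0 + Z @ u

# Synthetic check: the target is an equal blend of models 1 and 2.
m1 = np.array([1.0, 2.0, 3.0, 4.0])
m2 = np.array([2.0, 1.0, 4.0, 3.0])
m3 = np.array([0.0, 0.0, 0.0, 1.0])
y = 0.5 * m1 + 0.5 * m2
w = combination_weights(np.column_stack([m1, m2, m3]), y)
print(np.round(w, 6))  # [0.5 0.5 0. ]
```

The recovered weights correctly assign zero to the uninformative third model, which is the "risk reduction" benefit of weighted combination over any single predictor.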
Wang, Lipo; Li, Sa; Tian, Fuyu; Fu, Xiuju
2004-10-01
Recently Chen and Aihara have demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems compared to both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because the chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network. We show the effectiveness of this new approach with two difficult combinatorial optimization problems, i.e., a traveling salesman problem and a channel assignment problem for cellular mobile communications.
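The noisy chaotic neural network itself is beyond a short sketch, but its stochastic ingredient, plain simulated annealing with Metropolis acceptance and geometric cooling, is easy to show on a small TSP instance. The instance, move operator (a simple city swap), and cooling schedule below are illustrative assumptions, not the paper's setup.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, steps=2000, t0=1.0, cooling=0.999, seed=0):
    """Plain stochastic simulated annealing for the TSP with swap moves."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    best = list(tour)
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        cand = list(tour)
        cand[i], cand[j] = cand[j], cand[i]          # propose a city swap
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour = cand                               # Metropolis acceptance
        if tour_length(tour, dist) < tour_length(best, dist):
            best = list(tour)                         # track best-so-far
        t *= cooling                                  # geometric cooling
    return best

# Random symmetric Euclidean instance; best-so-far never exceeds the start tour.
rng = random.Random(1)
n = 8
pts = [(rng.random(), rng.random()) for _ in range(n)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
best = simulated_annealing(dist)
print(tour_length(best, dist) <= tour_length(list(range(n)), dist))  # True
```

Chaotic simulated annealing replaces the random acceptance noise with deterministic chaotic neuron dynamics; the paper's contribution is to combine both noise sources in one network.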
NASA Astrophysics Data System (ADS)
Kumar, Vijay M.; Murthy, ANN; Chandrashekara, K.
2012-05-01
The production planning problem of a flexible manufacturing system (FMS) concerns decisions that have to be made before an FMS begins to produce parts according to a given production plan during an upcoming planning horizon. The main aspect of production planning is the machine loading problem, in which a subset of jobs to be manufactured is selected and their operations are assigned to the relevant machines. Such problems are not only combinatorial optimization problems but also NP-hard, making it difficult to obtain satisfactory solutions using traditional optimization techniques. In this paper, an attempt has been made to address the machine loading problem with the objectives of minimizing system unbalance and maximizing throughput simultaneously, while satisfying system constraints on available machining time and tool slots, by designing and using a meta-hybrid heuristic technique based on genetic algorithms and particle swarm optimization. The results reported in this paper demonstrate the model's efficiency and examine the performance of the system with respect to measures such as throughput and system utilization.
Aerospace applications of integer and combinatorial optimization
NASA Technical Reports Server (NTRS)
Padula, S. L.; Kincaid, R. K.
1995-01-01
Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in solving combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on a large space structure and is expressed as a mixed/integer linear programming problem with more than 1500 design variables.
Structure-based design of combinatorial mutagenesis libraries
Verma, Deeptak; Grigoryan, Gevorg; Bailey-Kellogg, Chris
2015-01-01
The development of protein variants with improved properties (thermostability, binding affinity, catalytic activity, etc.) has greatly benefited from the application of high-throughput screens evaluating large, diverse combinatorial libraries. At the same time, since only a very limited portion of sequence space can be experimentally constructed and tested, an attractive possibility is to use computational protein design to focus libraries on a productive portion of the space. We present a general-purpose method, called “Structure-based Optimization of Combinatorial Mutagenesis” (SOCoM), which can optimize arbitrarily large combinatorial mutagenesis libraries directly based on structural energies of their constituents. SOCoM chooses both positions and substitutions, employing a combinatorial optimization framework based on library-averaged energy potentials in order to avoid explicitly modeling every variant in every possible library. In case study applications to green fluorescent protein, β-lactamase, and lipase A, SOCoM optimizes relatively small, focused libraries whose variants achieve energies comparable to or better than previous library design efforts, as well as larger libraries (previously not designable by structure-based methods) whose variants cover greater diversity while still maintaining substantially better energies than would be achieved by representative random library approaches. By allowing the creation of large-scale combinatorial libraries based on structural calculations, SOCoM promises to increase the scope of applicability of computational protein design and improve the hit rate of discovering beneficial variants. While designs presented here focus on variant stability (predicted by total energy), SOCoM can readily incorporate other structure-based assessments, such as the energy gap between alternative conformational or bound states.
Combinatorial Optimization in Project Selection Using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Dewi, Sari; Sawaluddin
2018-01-01
This paper discusses the problem of project selection in the presence of two objective functions that maximize profit and minimize cost and the existence of some limitations is limited resources availability and time available so that there is need allocation of resources in each project. These resources are human resources, machine resources, raw material resources. This is treated as a consideration to not exceed the budget that has been determined. So that can be formulated mathematics for objective function (multi-objective) with boundaries that fulfilled. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for the selection of the right project. It then described a multi-objective method of genetic algorithm as one method of multi-objective combinatorial optimization approach to simplify the project selection process in a large scope.
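Stripped of the multi-objective machinery, the underlying selection model is a budgeted subset choice. The sketch below solves a tiny single-objective version (maximize profit subject to a cost budget) by exhaustive search; the project names and numbers are illustrative, and a genetic algorithm replaces this enumeration when the project set is too large.

```python
from itertools import combinations

def select_projects(projects, budget):
    """Best subset of projects by profit subject to a cost budget.

    projects: list of (name, profit, cost) tuples.
    Exhaustive search over all 2^n subsets; viable only for small n.
    """
    best_profit, best_subset = 0, ()
    for r in range(len(projects) + 1):
        for subset in combinations(projects, r):
            cost = sum(p[2] for p in subset)
            profit = sum(p[1] for p in subset)
            if cost <= budget and profit > best_profit:
                best_profit, best_subset = profit, subset
    return best_profit, [p[0] for p in best_subset]

projects = [("P1", 60, 10), ("P2", 100, 20), ("P3", 120, 30)]
print(select_projects(projects, budget=50))  # (220, ['P2', 'P3'])
```

With two objectives (profit and cost) treated separately rather than folded into a budget constraint, the GA instead searches for a set of non-dominated trade-off solutions.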
NASA Astrophysics Data System (ADS)
Hartmann, Alexander K.; Weigt, Martin
2005-10-01
A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary basics in required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.
Fitness Probability Distribution of Bit-Flip Mutation.
Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique
2015-01-01
Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax), and an NP-hard problem, MAX-SAT. We also discuss a connection of the results with runtime analysis.
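The paper derives the distribution via landscape theory and Krawtchouk polynomials; for the Onemax special case the same distribution can be checked by direct convolution of two binomials, since a string with k ones moves to fitness k - a + b when a of its ones and b of its zeros flip. A minimal sketch, assuming uniform independent bit flips with probability p:

```python
from math import comb

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def onemax_mutation_pmf(n, k, p):
    """Exact fitness distribution after uniform bit-flip mutation on Onemax.

    Start with k ones among n bits; each bit flips independently with
    probability p.  New fitness = k - a + b, where a ~ Bin(k, p) ones
    flip off and b ~ Bin(n - k, p) zeros flip on.
    """
    pmf = {}
    for a in range(k + 1):
        for b in range(n - k + 1):
            j = k - a + b
            pmf[j] = pmf.get(j, 0.0) + binom_pmf(k, p, a) * binom_pmf(n - k, p, b)
    return pmf

pmf = onemax_mutation_pmf(n=6, k=4, p=0.25)
mean = sum(j * q for j, q in pmf.items())
print(round(sum(pmf.values()), 10), round(mean, 10))  # 1.0 3.5
```

The mean matches the closed form k(1 - 2p) + np, and each probability pmf[j] is indeed a polynomial in p, consistent with the paper's general result.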
Sequential Quadratic Programming Algorithms for Optimization
1989-08-01
Sequential quadratic programming (SQP) seems to be regarded as the best choice for the solution of small, dense problems. Nevertheless, many of these problems are considered hard to solve. Moreover, for some of these problems the assumptions made in Chapter 2 to establish the…
Numerical taxonomy on data: Experimental results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cohen, J.; Farach, M.
1997-12-01
The numerical taxonomy problems associated with most of the optimization criteria described above are NP-hard [3, 5, 1, 4]. An early positive result for numerical taxonomy showed that if e is the distance to the closest tree metric under the L∞ norm, i.e., e = min_T L∞(T − D), then it is possible to construct a tree T such that L∞(T − D) ≤ 3e; that is, a 3-approximation algorithm for this problem. We will refer to this algorithm as the Single Pivot (SP) heuristic.
A parallel-machine scheduling problem with two competing agents
NASA Astrophysics Data System (ADS)
Lee, Wen-Chiung; Chung, Yu-Hsiang; Wang, Jen-Ya
2017-06-01
Scheduling with two competing agents has become popular in recent years. Most of the research has focused on single-machine problems. This article considers a parallel-machine problem, the objective of which is to minimize the total completion time of jobs from the first agent given that the maximum tardiness of jobs from the second agent cannot exceed an upper bound. The NP-hardness of this problem is also examined. A genetic algorithm equipped with local search is proposed to search for the near-optimal solution. Computational experiments are conducted to evaluate the proposed genetic algorithm.
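The GA itself is not reproduced in this summary; the sketch below only evaluates the two agents' criteria, the total completion time of agent-A jobs and the maximum tardiness of agent-B jobs, on a fixed parallel-machine schedule. The instance, encoding, and function name are illustrative assumptions.

```python
def evaluate(schedule):
    """Evaluate a parallel-machine schedule for two competing agents.

    schedule: list of machines; each machine is an ordered list of jobs
    given as (agent, processing_time, due_date) with due_date only used
    for agent 'B'.  Returns (total completion time of agent 'A' jobs,
    maximum tardiness of agent 'B' jobs).
    """
    total_A, max_tard_B = 0, 0
    for machine in schedule:
        t = 0
        for agent, proc, due in machine:
            t += proc  # completion time of this job on its machine
            if agent == "A":
                total_A += t
            else:
                max_tard_B = max(max_tard_B, max(0, t - due))
    return total_A, max_tard_B

# Two machines; A's jobs complete at times 3 and 5, B's at times 5 and 4.
sched = [
    [("A", 3, None), ("B", 2, 4)],
    [("B", 4, 5), ("A", 1, None)],
]
print(evaluate(sched))  # (8, 1)
```

A search heuristic for this problem minimizes the first value while rejecting any schedule whose second value exceeds the given upper bound.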
Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing
2017-01-01
Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures: it employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum–minimum criterion to obtain each node's optimal route to the base station. Various simulation experiments show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms.
Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning
NASA Technical Reports Server (NTRS)
Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.
2002-01-01
We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with a slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.
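Number Partitioning itself is easy to state: split a multiset of numbers into two groups minimizing the difference of their sums. A brute-force sketch over the 2^n sign assignments, exponential exactly as the gap analysis above suggests for exact solution:

```python
from itertools import product

def best_partition_difference(nums):
    """Minimum |sum(S) - sum(complement of S)| over all bipartitions.

    Brute force over sign vectors: +1 puts a number in the first group,
    -1 in the second.  Runs in O(2^n), so only viable for small inputs.
    """
    best = float("inf")
    for signs in product((1, -1), repeat=len(nums)):
        diff = abs(sum(s * x for s, x in zip(signs, nums)))
        best = min(best, diff)
    return best

print(best_partition_difference([8, 7, 6, 5, 4]))  # 0: {8, 7} vs {6, 5, 4}
```

The quantum adiabatic algorithm encodes this difference as the energy of a spin configuration (spin up or down for each number) and tries to reach the minimum-energy state.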
A neural network approach to job-shop scheduling.
Zhou, D N; Cherkassky, V; Baldwin, T R; Olson, D E
1991-01-01
A novel analog computational network is presented for solving NP-complete constraint satisfaction problems, e.g., job-shop scheduling. In contrast to most neural approaches to combinatorial optimization, which are based on quadratic energy cost functions, the authors propose to use linear cost functions. As a result, the network complexity (the number of neurons and the number of resistive interconnections) grows only linearly with problem size, and large-scale implementations become possible. The proposed approach is related to the linear programming network described by D.W. Tank and J.J. Hopfield (1985), which also uses a linear cost function for a simple optimization problem. It is shown how to map a difficult constraint-satisfaction problem onto a simple neural net in which the number of neural processors equals the number of subjobs (operations) and the number of interconnections grows linearly with the total number of operations. Simulations show that the authors' approach produces better solutions than existing neural approaches to job-shop scheduling, namely the traveling-salesman-problem-type Hopfield approach and the integer linear programming approach of J.P.S. Foo and Y. Takefuji (1988), in terms of both solution quality and network complexity.
NASA Astrophysics Data System (ADS)
Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan
2011-11-01
The simultaneous mission assignment and home allocation problem for hospital service robots studied here is a Multidimensional Assignment Problem (MAP) with multiple objectives and multiple constraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search, for exploration and exploitation respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solution selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate its single-objective optimization, its optimization effectiveness (indexed by the S-metric and C-metric), and its optimization efficiency (indexed by computational burden and CPU time) in detail. The BBA outperformed its competitors in almost all of the quantitative indices. Hence the overall scheme, and particularly the search-history-adapted global search strategy, was validated.
Surfactant titration of nanoparticle-protein corona.
Maiolo, Daniele; Bergese, Paolo; Mahon, Eugene; Dawson, Kenneth A; Monopoli, Marco P
2014-12-16
Nanoparticles (NP), when exposed to biological fluids, are coated by specific proteins that form the so-called protein corona. While some adsorbing proteins exchange with the surroundings on a short time scale, described as a "dynamic" corona, others with higher affinity and long-lived interaction with the NP surface form a "hard" corona (HC), which is believed to mediate NP interaction with cellular machineries. In-depth NP protein corona characterization is therefore a necessary step in understanding the relationship between surface layer structure and biological outcomes. In the present work, we evaluate the protein composition and stability over time and we systematically challenge the formed complexes with surfactants. Each challenge is characterized through different physicochemical measurements (dynamic light scattering, ζ-potential, and differential centrifugal sedimentation) alongside proteomic evaluation in titration type experiments (surfactant titration). 100 nm silicon oxide (Si) and 100 nm carboxylated polystyrene (PS-COOH) NPs cloaked by human plasma HC were titrated with 3-[(3-Cholamidopropyl) dimethylammonio]-1-propanesulfonate (CHAPS, zwitterionic), Triton X-100 (nonionic), sodium dodecyl sulfate (SDS, anionic), and dodecyltrimethylammonium bromide (DTAB, cationic) surfactants. Composition and density of HC together with size and ζ-potential of NP-HC complexes were tracked at each step after surfactant titration. Results on Si NP-HC complexes showed that SDS removes most of the HC, while DTAB induces NP agglomeration. Analogous results were obtained for PS NP-HC complexes. Interestingly, CHAPS and Triton X-100, thanks to similar surface binding preferences, enable selective extraction of apolipoprotein AI (ApoAI) from Si NP hard coronas, leaving unaltered the dispersion physicochemical properties. 
These findings indicate that surfactant titration can enable the study of NP-HC stability through surfactant variation and also selective separation of certain proteins from the HC. This approach thus has an immediate analytical value as well as potential applications in HC engineering.
NASA Astrophysics Data System (ADS)
Vecherin, Sergey N.; Wilson, D. Keith; Pettit, Chris L.
2010-04-01
Determination of an optimal configuration (numbers, types, and locations) of a sensor network is an important practical problem. In most applications, complex signal propagation effects and inhomogeneous coverage preferences lead to an optimal solution that is highly irregular and nonintuitive. The general optimization problem can be strictly formulated as a binary linear programming problem. Due to the combinatorial nature of this problem, however, its strict solution requires significant computational resources (NP-complete class of complexity) and is unobtainable for large spatial grids of candidate sensor locations. For this reason, a greedy algorithm for approximate solution was recently introduced [S. N. Vecherin, D. K. Wilson, and C. L. Pettit, "Optimal sensor placement with terrain-based constraints and signal propagation effects," Unattended Ground, Sea, and Air Sensor Technologies and Applications XI, SPIE Proc. Vol. 7333, paper 73330S (2009)]. Here further extensions to the developed algorithm are presented to include such practical needs and constraints as sensor availability, coverage by multiple sensors, and wireless communication of the sensor information. Both communication and detection are considered in a probabilistic framework. Communication signal and signature propagation effects are taken into account when calculating probabilities of communication and detection. Comparison of approximate and strict solutions on reduced-size problems suggests that the approximate algorithm yields quick and good solutions, which thus justifies using that algorithm for full-size problems. Examples of three-dimensional outdoor sensor placement are provided using a terrain-based software analysis tool.
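The cited greedy algorithm handles propagation physics and detection probabilities; stripped to its core, each greedy step adds the candidate sensor that covers the most still-uncovered points. A minimal sketch with an illustrative deterministic coverage map (no probabilistic detection or communication constraints):

```python
def greedy_placement(coverage, k):
    """Pick k sensors greedily to maximise the number of covered points.

    coverage: dict mapping sensor_id -> set of point ids it covers.
    Returns the chosen sensors in selection order.
    """
    covered, chosen = set(), []
    for _ in range(k):
        # marginal gain: points a sensor would add to the covered set
        best = max(coverage, key=lambda s: len(coverage[s] - covered))
        if not coverage[best] - covered:
            break  # no sensor adds anything new
        chosen.append(best)
        covered |= coverage[best]
    return chosen

cov = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6},
}
print(greedy_placement(cov, 2))  # ['s1', 's3'] covers all six points
```

The binary-linear-programming formulation finds the provably optimal subset, but this greedy step is what makes large candidate grids tractable.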
Synthetically lethal nanoparticles for treatment of endometrial cancer
NASA Astrophysics Data System (ADS)
Ebeid, Kareem; Meng, Xiangbing; Thiel, Kristina W.; Do, Anh-Vu; Geary, Sean M.; Morris, Angie S.; Pham, Erica L.; Wongrakpanich, Amaraporn; Chhonker, Yashpal S.; Murry, Daryl J.; Leslie, Kimberly K.; Salem, Aliasger K.
2018-01-01
Uterine serous carcinoma, one of the most aggressive types of endometrial cancer, is characterized by poor outcomes and mutations in the tumour suppressor p53. Our objective was to engender synthetic lethality to paclitaxel (PTX), the frontline treatment for endometrial cancer, in tumours with mutant p53 and enhance the therapeutic efficacy using polymeric nanoparticles (NPs). First, we identified the optimal NP formulation through comprehensive analyses of release profiles and cellular-uptake and cell viability studies. Not only were PTX-loaded NPs superior to PTX in solution, but the combination of PTX-loaded NPs with the antiangiogenic molecular inhibitor BIBF 1120 (BIBF) promoted synthetic lethality specifically in cells with the loss-of-function (LOF) p53 mutation. In a xenograft model of endometrial cancer, this combinatorial therapy resulted in a marked inhibition of tumour progression and extended survival. Together, our data provide compelling evidence for future studies of BIBF- and PTX-loaded NPs as a therapeutic opportunity for LOF p53 cancers.
Aerospace Applications of Integer and Combinatorial Optimization
NASA Technical Reports Server (NTRS)
Padula, S. L.; Kincaid, R. K.
1995-01-01
Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits from this combined expertise in formulating and solving integer and combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on an orbiting platform and is expressed as a mixed-integer linear programming problem with more than 1500 design variables.
Optimal matching for prostate brachytherapy seed localization with dimension reduction.
Lee, Junghoon; Labat, Christian; Jain, Ameet K; Song, Danny Y; Burdette, Everette C; Fichtinger, Gabor; Prince, Jerry L
2009-01-01
In prostate brachytherapy, x-ray fluoroscopy has been used for intra-operative dosimetry to provide qualitative assessment of implant quality. More recent developments have made possible 3D localization of the implanted radioactive seeds. This is usually modeled as an assignment problem and solved by resolving the correspondence of seeds. It is, however, NP-hard, and the problem is even harder in practice due to the significant number of hidden seeds. In this paper, we propose an algorithm that can find an optimal solution from multiple projection images with hidden seeds. It solves an equivalent problem with reduced dimensional complexity, thus allowing us to find an optimal solution in polynomial time. Simulation results show the robustness of the algorithm. It was validated on 5 phantom and 18 patient datasets, successfully localizing the seeds with detection rate of ≥97.6% and reconstruction error of ≤1.2 mm. This is considered to be clinically excellent performance.
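The underlying assignment formulation can be sketched with an exhaustive solver for tiny instances. The cost matrix below is a hypothetical toy stand-in for the paper's reduced-dimension seed matching; real instances require the dimension-reduction and hidden-seed handling the paper describes.

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustively solve the assignment problem for tiny instances.

    cost[i][j] is the mismatch cost of pairing seed projection i in one
    image with projection j in another. Exponential in n; for
    illustration of the formulation only.
    """
    n = len(cost)
    best_perm, best_cost = None, float('inf')
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost
```

In practice the assignment subproblem would be handled by a polynomial-time method such as the Hungarian algorithm; brute force is used here only to keep the sketch self-contained.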
Open shop scheduling problem to minimize total weighted completion time
NASA Astrophysics Data System (ADS)
Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian
2017-01-01
A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.
Methodologies in determining mechanical properties of thin films using nanoindentation
NASA Astrophysics Data System (ADS)
Han, Seung Min Jane
Thin films are critical components of microelectronic and MEMS devices, and evaluating their mechanical properties is of current interest. As the dimensions of the devices become smaller and smaller, however, understanding the mechanical properties of materials at sub-micron length scales becomes more challenging. The conventional methods for evaluating strengths of materials in bulk form cannot be applied, and new methodologies are required for accurately evaluating mechanical properties of thin films. In this work, development of methodologies using the nanoindenter was pursued in three parts: (1) creation of a new method for extracting thin film hardness, (2) use of combinatorial methods for determining compositions with desired mechanical properties, and (3) use of microcompression testing of sub-micron sized pillars to understand plasticity in Al-Sc multilayers. The existing nanoindentation hardness model by Oliver & Pharr is unable to accurately determine the hardness of thin films on substrates with an elastic mismatch. Thus, a new method of analysis for extracting thin film hardness from film/substrate systems, that eliminates the effect of elastic mismatch of the underlying substrate, surface roughness, and also pile-up/sink-in, is needed. Such a method was developed in the first part of this study. The feasibility of using the nanoindentation hardness together with combinatorial methods to efficiently scan through mechanical properties of Ti-Al metallic alloys was examined in the second part of this study. The combinatorial approach provides an efficient method that can be used to determine alloy compositions that might merit further exploration and development as bulk materials. Finally, the mechanical properties of Al-Al3Sc multilayers with bilayer periods ranging from 6-100 nm were examined using microcompression. The sub-micron sized pillars were prepared using the focused ion beam (FIB) and compression tested with the flat tip of the nanoindenter. 
The measured yield strengths show the trend of increasing strength with decreasing bilayer period, and agree with the nanoindentation hardness results using the suitable Tabor correction factor. Strain softening was observed at large strains, and a new model for the true stress and true strain was developed to account for the inhomogeneous deformation geometry.
Khaliq, Zeeshan; Leijon, Mikael; Belák, Sándor; Komorowski, Jan
2016-07-29
The underlying strategies used by influenza A viruses (IAVs) to adapt to new hosts while crossing the species barrier are complex and yet to be understood completely. Several studies have been published identifying singular genomic signatures that indicate such a host switch. The complexity of the problem suggested that in addition to the singular signatures, there might be a combinatorial use of such genomic features, in nature, defining adaptation to hosts. We used computational rule-based modeling to identify combinatorial sets of interacting amino acid (aa) residues in 12 proteins of IAVs of H1N1 and H3N2 subtypes. We built highly accurate rule-based models for each protein that could differentiate between viral aa sequences coming from avian and human hosts. We found 68 host-specific combinations of aa residues, potentially associated to host adaptation on HA, M1, M2, NP, NS1, NEP, PA, PA-X, PB1 and PB2 proteins of the H1N1 subtype and 24 on M1, M2, NEP, PB1 and PB2 proteins of the H3N2 subtypes. In addition to these combinations, we found 132 novel singular aa signatures distributed among all proteins, including the newly discovered PA-X protein, of both subtypes. We showed that HA, NA, NP, NS1, NEP, PA-X and PA proteins of the H1N1 subtype carry H1N1-specific and HA, NA, PA-X, PA, PB1-F2 and PB1 of the H3N2 subtype carry H3N2-specific signatures. M1, M2, PB1-F2, PB1 and PB2 of H1N1 subtype, in addition to H1N1 signatures, also carry H3N2 signatures. Similarly M1, M2, NP, NS1, NEP and PB2 of H3N2 subtype were shown to carry both H3N2 and H1N1 host-specific signatures (HSSs). To sum it up, we computationally constructed simple IF-THEN rule-based models that could distinguish between aa sequences of avian and human IAVs. From the rules we identified HSSs having a potential to affect the adaptation to specific hosts. The identification of combinatorial HSSs suggests that the process of adaptation of IAVs to a new host is more complex than previously suggested. 
The present study provides a basis for further detailed studies with the aim to elucidate the molecular mechanisms providing the foundation for the adaptation process.
Alternative Parameterizations for Cluster Editing
NASA Astrophysics Data System (ADS)
Komusiewicz, Christian; Uhlmann, Johannes
Given an undirected graph G and a nonnegative integer k, the NP-hard Cluster Editing problem asks whether G can be transformed into a disjoint union of cliques by applying at most k edge modifications. In the field of parameterized algorithmics, Cluster Editing has almost exclusively been studied parameterized by the solution size k. Contrastingly, in many real-world instances it can be observed that the parameter k is not really small. This observation motivates our investigation of parameterizations of Cluster Editing different from the solution size k. Our results are as follows. Cluster Editing is fixed-parameter tractable with respect to the parameter "size of a minimum cluster vertex deletion set of G", a typically much smaller parameter than k. Cluster Editing remains NP-hard on graphs with maximum degree six. A restricted but practically relevant version of Cluster Editing is fixed-parameter tractable with respect to the combined parameter "number of clusters in the target graph" and "maximum number of modified edges incident to any vertex in G". Many of our results also transfer to the NP-hard Cluster Deletion problem, where only edge deletions are allowed.
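The Cluster Editing objective itself is easy to state exactly for tiny graphs: minimize, over all vertex partitions, the number of missing intra-cluster edges plus present inter-cluster edges. The brute-force sketch below (Bell-number many partitions, so toy sizes only) illustrates the problem definition, not the fixed-parameter algorithms of the paper.

```python
from itertools import combinations

def partitions(items):
    """Yield all set partitions of a list (exponential; tiny inputs only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def cluster_editing_cost(n, edges):
    """Minimum number of edge edits turning the graph into disjoint cliques."""
    edge_set = {frozenset(e) for e in edges}
    best = float('inf')
    for part in partitions(list(range(n))):
        cluster_of = {v: i for i, c in enumerate(part) for v in c}
        cost = 0
        for u, v in combinations(range(n), 2):
            inside = cluster_of[u] == cluster_of[v]
            present = frozenset((u, v)) in edge_set
            if inside != present:
                cost += 1  # missing intra-cluster edge or surplus inter-cluster edge
        best = min(best, cost)
    return best
```

For instance, a path on three vertices needs exactly one edit (delete an end edge, or complete the triangle), while a triangle is already a clique.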
Liu, Jing; Meng, Guowen; Li, Zhongbo; Huang, Zhulin; Li, Xiangdong
2015-11-21
Surface-enhanced Raman scattering (SERS) is considered to be an excellent candidate for analytical detection schemes, because of its molecular specificity, rapid response and high sensitivity. Here, SERS-substrates of Ag-nanoparticle (Ag-NP) decorated Ge-nanotapers grafted on hexagonally ordered Si-micropillar (denoted as Ag-NP@Ge-nanotaper/Si-micropillar) arrays are fabricated via a combinatorial process of two-step etching to achieve hexagonal Si-micropillar arrays, chemical vapor deposition of flocky Ge-nanotapers on each Si-micropillar and decoration of Ag-NPs onto the Ge-nanotapers through galvanic displacement. With high density three-dimensional (3D) "hot spots" created from the large quantities of the neighboring Ag-NPs and large-scale uniform morphology, the hierarchical Ag-NP@Ge-nanotaper/Si-micropillar arrays exhibit strong and reproducible SERS activity. Using our hierarchical 3D SERS-substrates, both methyl parathion (a commonly used pesticide) and PCB-2 (one congener of highly toxic polychlorinated biphenyls) with concentrations down to 10⁻⁷ M and 10⁻⁵ M have been detected respectively, showing great potential in SERS-based rapid trace-level detection of toxic organic pollutants in the environment.
Haplotyping for disease association: a combinatorial approach.
Lancia, Giuseppe; Ravi, R; Rizzi, Romeo
2008-01-01
We consider a combinatorial problem derived from haplotyping a population with respect to a genetic disease, either recessive or dominant. Given a set of individuals, partitioned into healthy and diseased, and the corresponding sets of genotypes, we want to infer "bad" and "good" haplotypes to account for these genotypes and for the disease. Assume e.g. the disease is recessive. Then, the resolving haplotypes must consist of bad and good haplotypes, so that (i) each genotype belonging to a diseased individual is explained by a pair of bad haplotypes and (ii) each genotype belonging to a healthy individual is explained by a pair of haplotypes of which at least one is good. We prove that the associated decision problem is NP-complete. However, we also prove that there is a simple solution, provided the data satisfy a very weak requirement.
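The notion of a haplotype pair "explaining" a genotype can be sketched concretely. Using the common encoding (0/1 = homozygous site, 2 = heterozygous site), a pair of binary haplotypes explains a genotype if they agree with every homozygous site and differ at every heterozygous site. The encoding and enumeration below are an illustrative convention, not the paper's notation.

```python
from itertools import product

def explains(h1, h2, genotype):
    """Check whether the haplotype pair (h1, h2) resolves a genotype.

    Sites: 0/1 = homozygous (both haplotypes carry that allele),
    2 = heterozygous (the haplotypes must differ at that site).
    """
    for a, b, g in zip(h1, h2, genotype):
        if g == 2:
            if a == b:
                return False
        elif a != g or b != g:
            return False
    return True

def resolving_pairs(genotype):
    """All unordered haplotype pairs explaining a genotype (brute force)."""
    m = len(genotype)
    pairs = []
    for h1 in product((0, 1), repeat=m):
        for h2 in product((0, 1), repeat=m):
            if h1 <= h2 and explains(h1, h2, genotype):
                pairs.append((h1, h2))
    return pairs
```

A genotype with k heterozygous sites has 2^(k-1) resolving pairs (for k ≥ 1), which is the source of the combinatorial explosion that the disease-association constraints must tame.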
Learning from Bees: An Approach for Influence Maximization on Viral Campaigns
Sankar, C. Prem; S., Asharaf
2016-01-01
Maximisation of influence propagation is a key ingredient of any viral marketing or socio-political campaign. However, it is an NP-hard problem, and various approximate algorithms have been suggested to address the issue, though not with great success. In this paper, we propose a bio-inspired approach to selecting the initial set of nodes, which is significant for rapid convergence towards a sub-optimal solution in minimal runtime. The performance of the algorithm is evaluated using the re-tweet network of the hashtag #KissofLove on Twitter, associated with the non-violent protest against moral policing that spread to many parts of India. Compared with existing centrality-based node ranking procedures, the proposed method achieves a significant improvement in influence propagation. The proposed algorithm is one of very few bio-inspired algorithms in network theory. We also report the results of an exploratory analysis of the Kiss of Love campaign network. PMID:27992472
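The baseline that bio-inspired seed selection is typically measured against is greedy marginal-gain seeding. The sketch below reduces influence to a deterministic one-hop model (a seed influences itself and its neighbours), which is a deliberate simplification of the stochastic diffusion models used in influence maximization; the graph is hypothetical.

```python
def greedy_seeds(adj, k):
    """Greedy seed selection under a simple one-hop influence model.

    adj maps each node to its set of neighbours. At each step, pick the
    node whose one-hop neighbourhood adds the most newly influenced
    nodes; stop early if no node adds anything.
    """
    seeds, influenced = [], set()
    for _ in range(k):
        best, gain = None, 0
        for v in adj:
            g = len(({v} | adj[v]) - influenced)
            if g > gain:
                best, gain = v, g
        if best is None:
            break
        seeds.append(best)
        influenced |= {best} | adj[best]
    return seeds
```

Under stochastic cascade models the same greedy template applies with expected spread (estimated by simulation) in place of the one-hop count, at much higher computational cost, which is precisely the runtime pressure that motivates heuristic seed-selection schemes.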
Liu, Zhi-Hua; Xie, Shangxian; Lin, Furong; Jin, Mingjie; Yuan, Joshua S
2018-01-01
Lignin valorization has recently been considered to be an essential process for sustainable and cost-effective biorefineries. Lignin represents a potential new feedstock for value-added products. Oleaginous bacteria such as Rhodococcus opacus can produce intracellular lipids from biodegradation of aromatic substrates. These lipids can be used for biofuel production, which can potentially replace petroleum-derived chemicals. However, the low reactivity of lignin produced from pretreatment and the underdeveloped fermentation technology hindered lignin bioconversion to lipids. In this study, combinatorial pretreatment with an optimized fermentation strategy was evaluated to improve lignin valorization into lipids using R. opacus PD630. As opposed to single pretreatment, combinatorial pretreatment produced a 12.8-75.6% higher lipid concentration in fermentation using lignin as the carbon source. Gas chromatography-mass spectrometry analysis showed that combinatorial pretreatment released more aromatic monomers, which could be more readily utilized by lignin-degrading strains. Three detoxification strategies were used to remove potential inhibitors produced from pretreatment. After heating detoxification of the lignin stream, the lipid concentration further increased by 2.9-9.7%. Different fermentation strategies were evaluated in scale-up lipid fermentation using a 2.0-l fermenter. With laccase treatment of the lignin stream produced from combinatorial pretreatment, the highest cell dry weight and lipid concentration were 10.1 and 1.83 g/l, respectively, in fed-batch fermentation, with a total soluble substrate concentration of 40 g/l. The improvement of the lipid fermentation performance may have resulted from lignin depolymerization by the combinatorial pretreatment and laccase treatment, reduced inhibition effects by fed-batch fermentation, adequate oxygen supply, and an accurate pH control in the fermenter. 
Overall, these results demonstrate that combinatorial pretreatment, together with fermentation optimization, favorably improves lipid production using lignin as the carbon source. Combinatorial pretreatment integrated with fed-batch fermentation was an effective strategy to improve the bioconversion of lignin into lipids, thus facilitating lignin valorization in biorefineries.
Fast Solution in Sparse LDA for Binary Classification
NASA Technical Reports Server (NTRS)
Moghaddam, Baback
2010-01-01
An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (of 2 classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of their combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add to or delete from its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g. 1 vs. 0, functioning vs. malfunctioning, or change vs. no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g. selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes, as compared to days or weeks taken by the prior art. Sparse-LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables, and thus the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques.
The present algorithm exploits this analytic form along with the inherent sequential nature of greedy search itself. Together this enables the use of highly-efficient partitioned-matrix-inverse techniques that result in large speedups of computation in both the forward-selection and backward-elimination stages of greedy algorithms in general.
A path following algorithm for the graph matching problem.
Zaslavskiy, Mikhail; Bach, Francis; Vert, Jean-Philippe
2009-12-01
We propose a convex-concave programming approach for the labeled weighted graph matching problem. The convex-concave programming formulation is obtained by rewriting the weighted graph matching problem as a least-square problem on the set of permutation matrices and relaxing it to two different optimization problems: a quadratic convex and a quadratic concave optimization problem on the set of doubly stochastic matrices. The concave relaxation has the same global minimum as the initial graph matching problem, but the search for its global minimum is also a hard combinatorial problem. We, therefore, construct an approximation of the concave problem solution by following a solution path of a convex-concave problem obtained by linear interpolation of the convex and concave formulations, starting from the convex relaxation. This method allows to easily integrate the information on graph label similarities into the optimization problem, and therefore, perform labeled weighted graph matching. The algorithm is compared with some of the best performing graph matching methods on four data sets: simulated graphs, QAPLib, retina vessel images, and handwritten Chinese characters. In all cases, the results are competitive with the state of the art.
Site Partitioning for Redundant Arrays of Distributed Disks
NASA Technical Reports Server (NTRS)
Mourad, Antoine N.; Fuchs, W. Kent; Saab, Daniel G.
1996-01-01
Redundant arrays of distributed disks (RADD) can be used in a distributed computing system or database system to provide recovery in the presence of disk crashes and temporary and permanent failures of single sites. In this paper, we look at the problem of partitioning the sites of a distributed storage system into redundant arrays in such a way that the communication costs for maintaining the parity information are minimized. We show that the partitioning problem is NP-hard. We then propose and evaluate several heuristic algorithms for finding approximate solutions. Simulation results show that significant reduction in remote parity update costs can be achieved by optimizing the site partitioning scheme.
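A simple flavour of such a partitioning heuristic is agglomerative grouping: grow each array from a seed site by repeatedly adding the remaining site closest to the current members. This is a generic sketch in the spirit of the paper's approximate algorithms, not the specific heuristics it evaluates; the distance matrix stands in for pairwise communication costs.

```python
def partition_sites(dist, group_size):
    """Greedy partition of sites into fixed-size groups.

    dist[i][j] is the communication cost between sites i and j. Each
    group starts from the lowest-numbered unassigned site and adds the
    site with the smallest total cost to the current members.
    """
    n = len(dist)
    remaining = set(range(n))
    groups = []
    while remaining:
        seed = min(remaining)
        group = [seed]
        remaining.discard(seed)
        while len(group) < group_size and remaining:
            nxt = min(remaining,
                      key=lambda s: sum(dist[s][g] for g in group))
            group.append(nxt)
            remaining.discard(nxt)
        groups.append(group)
    return groups
```

Grouping nearby sites keeps parity updates local, which is exactly the remote-parity-update cost the paper's partitioning scheme minimizes.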
Antolini, Ermete
2017-02-13
Combinatorial chemistry and high-throughput screening represent an innovative and rapid tool to prepare and evaluate a large number of new materials, saving time and expense for research and development. Considering that the activity and selectivity of catalysts depend on complex kinetic phenomena, making their development largely empirical in practice, they are prime candidates for combinatorial discovery and optimization. This review presents an overview of recent results of combinatorial screening of low-temperature fuel cell electrocatalysts for methanol oxidation. Optimum catalyst compositions obtained by combinatorial screening were compared with those of bulk catalysts, and the effect of the library geometry on the screening of catalyst composition is highlighted.
Optimal and heuristic algorithms of planning of low-rise residential buildings
NASA Astrophysics Data System (ADS)
Kartak, V. M.; Marchenko, A. A.; Petunin, A. A.; Sesekin, A. N.; Fabarisova, A. I.
2017-10-01
The problem of the optimal layout of a low-rise residential building is considered. Each apartment must be no smaller than the corresponding apartment in the proposed list. All requests must be fulfilled, and the excess of the total area over the total area of the apartments in the list must be minimized. The differences in areas arise from the discreteness of the distances between bearing walls and a number of other technological constraints. It is shown that this problem is NP-hard. The authors built an integer linear programming model and conducted a qualitative analysis of it. The authors also developed a heuristic algorithm for solving high-dimensional instances. A computational experiment was conducted that confirms the efficiency of the proposed approach. Practical recommendations on the use of the proposed algorithms are given.
NASA Astrophysics Data System (ADS)
Santosa, B.; Siswanto, N.; Fiqihesa
2018-04-01
This paper proposes a discrete Particle Swarm Optimization (PSO) algorithm to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed on each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continue to evolve to represent real production systems more accurately. Since flow shop scheduling is an NP-hard problem, the most suitable methods for solving it are metaheuristics. One metaheuristic algorithm is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we need to modify PSO to fit the problem. The modification is done using a probability transition matrix mechanism, while the multi-objective aspect is handled with a Pareto-optimal approach (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set yields a higher probability of finding the optimal solution. Moreover, the MPSO solution set is closer to the optimal solution.
Improved multi-objective ant colony optimization algorithm and its application in complex reasoning
NASA Astrophysics Data System (ADS)
Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing
2013-09-01
The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem. It has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average test complexity. Under the constraints of both the known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimization of the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-time hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to refine the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. 
Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) algorithm can perform reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for solving the problem of multi-constraint and multi-objective fault diagnosis and reasoning of complex systems.
Optimization of Highway Work Zone Decisions Considering Short-Term and Long-Term Impacts
2010-01-01
A heuristic approach is used to find the combination of lane closure and traffic control strategies that minimizes the one-time work zone cost, given the complex and combinatorial nature of this optimization problem. Notation includes NV (number of vehicle classes), NPV (net present value), p'(t) (adjusted traffic diversion rate at time t), and p(t) (natural diversion rate at time t).
Many-to-Many Multicast Routing Schemes under a Fixed Topology
Ding, Wei; Wang, Hongfa; Wei, Xuerui
2013-01-01
Many-to-many multicast routing can be extensively applied in computer or communication networks supporting various continuous multimedia applications. The paper focuses on the case where all users share a common communication channel while each user is both a sender and a receiver of messages in multicasting as well as an end user. In this case, the multicast tree appears as a terminal Steiner tree (TeST). The problem of finding a TeST with a quality-of-service (QoS) optimization is frequently NP-hard. However, we discover that it is a good idea to find a many-to-many multicast tree with QoS optimization under a fixed topology. In this paper, we are concerned with three kinds of QoS optimization objectives of the multicast tree, that is, the minimum cost, minimum diameter, and maximum reliability. Each of the three optimization problems is considered in two versions, centralized and decentralized. This paper uses the dynamic programming method to devise an exact algorithm for the centralized and decentralized versions of each optimization problem. PMID:23589706
Public channel cryptography: chaos synchronization and Hilbert's tenth problem.
Kanter, Ido; Kopelowitz, Evi; Kinzel, Wolfgang
2008-08-22
The synchronization process of two mutually delayed coupled deterministic chaotic maps is demonstrated both analytically and numerically. The synchronization is preserved when the mutually transmitted signals are concealed by two commutative private filters, a convolution of the truncated time-delayed output signals or some powers of the delayed output signals. The task of a passive attacker is mapped onto Hilbert's tenth problem, solving a set of nonlinear Diophantine equations, which was proven to be in the class of NP-complete problems [problems that are both NP (verifiable in nondeterministic polynomial time) and NP-hard (any NP problem can be translated into this problem)]. This bridge between nonlinear dynamics and NP-complete problems opens a horizon for new types of secure public-channel protocols.
NP-hardness of the cluster minimization problem revisited
NASA Astrophysics Data System (ADS)
Adib, Artur B.
2005-10-01
The computational complexity of the 'cluster minimization problem' is revisited (Wille and Vennik 1985 J. Phys. A: Math. Gen. 18 L419). It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analogue of the original problem is formulated, and a new proof for such potentials is provided by polynomial time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more direct consequences to the numerical study of clusters are suggested.
Nanoparticle-Hydrogel: A Hybrid Biomaterial System for Localized Drug Delivery
Gao, Weiwei; Zhang, Yue; Zhang, Qiangzhe; Zhang, Liangfang
2016-01-01
Nanoparticles have offered a unique set of properties for drug delivery including high drug loading capacity, combinatorial delivery, controlled and sustained drug release, prolonged stability and lifetime, and targeted delivery. To further enhance therapeutic index, especially for localized application, nanoparticles have been increasingly combined with hydrogels to form a hybrid biomaterial system for controlled drug delivery. Herein, we review recent progresses in engineering such nanoparticle-hydrogel hybrid system (namely ‘NP-gel’) with a particular focus on its application for localized drug delivery. Specifically, we highlight four research areas where NP-gel has shown great promises, including (1) passively controlled drug release, (2) stimuli-responsive drug delivery, (3) site-specific drug delivery, and (4) detoxification. Overall, integrating therapeutic nanoparticles with hydrogel technologies creates a unique and robust hybrid biomaterial system that enables effective localized drug delivery. PMID:26951462
Sobuś, Jan; Ziółek, Marcin
2014-07-21
A numerical study of optimal bandgaps of light absorbers in tandem solar cell configurations is presented with the main focus on dye-sensitized solar cells (DSSCs) and perovskite solar cells (PSCs). The limits in efficiency and the expected improvements of tandem structures are investigated as a function of total loss-in-potential (V(L)), incident photon to current efficiency (IPCE) and fill factor (FF) of individual components. It is shown that the optimal absorption onsets are significantly smaller than those derived for multi-junction devices. For example, for double-cell devices the onsets are at around 660 nm and 930 nm for DSSCs with iodide based electrolytes and at around 720 nm and 1100 nm for both DSSCs with cobalt based electrolytes and PSCs. Such configurations can increase the total sunlight conversion efficiency by about 35% in comparison to single-cell devices of the same V(L), IPCE and FF. The relevance of such studies for tandem n-p DSSCs and for a proposed new configuration for PSCs is discussed. In particular, it is shown that maximum total losses of 1.7 V for DSSCs and 1.4 V for tandem PSCs are necessary to give any efficiency improvement with respect to the single bandgap device. This means, for example, a tandem n-p DSSC with TiO2 and NiO porous electrodes will hardly work better than the champion single DSSC. A source code of the program used for calculations is also provided.
Solving multiconstraint assignment problems using learning automata.
Horn, Geir; Oommen, B John
2010-02-01
This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. 
This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
Joseph B. Roise; Joosang Chung; Chris B. LeDoux
1988-01-01
Nonlinear programming (NP) is applied to the problem of finding optimal thinning and harvest regimes simultaneously with species mix and diameter class distribution. Optimal results for given cases are reported. Results of the NP optimization are compared with prescriptions developed by Appalachian hardwood silviculturists.
NASA Astrophysics Data System (ADS)
Chandra, Rishabh
Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDE) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems that have only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.
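A QUBO instance asks for a binary vector x minimizing x^T Q x. As a sketch of the target form only (the 3-variable matrix below is illustrative and not derived from any PDE constraint), a brute-force solver makes the mapping's endpoint concrete:

```python
import itertools

def solve_qubo(Q):
    """Exhaustively minimize x^T Q x over binary vectors x (toy sizes only;
    an adiabatic quantum optimizer replaces this search at scale)."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        e = sum(Q[i][j] * bits[i] * bits[j]
                for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = bits, e
    return best_x, best_e

# Illustrative QUBO: diagonal biases plus one coupling term.
Q = [[-1, 2, 0],
     [0, -1, 0],
     [0, 0, -2]]
x, e = solve_qubo(Q)  # minimizer (0, 1, 1) with energy -3
```

The brute-force loop visits all 2^n assignments, which is exactly the exponential search the QUBO formulation hands off to the annealer.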
Stoiber, Tasha; Croteau, Marie-Noële; Römer, Isabella; Tejamaya, Mila; Lead, Jamie R; Luoma, Samuel N
2015-01-01
The release of Ag nanoparticles (AgNPs) into the aquatic environment is likely, but the influence of water chemistry on their impacts and fate remains unclear. Here, we characterize the bioavailability of Ag from AgNO(3) and from AgNPs capped with polyvinylpyrrolidone (PVP AgNP) and thiolated polyethylene glycol (PEG AgNP) in the freshwater snail, Lymnaea stagnalis, after short waterborne exposures. Results showed that water hardness, AgNP capping agents, and metal speciation affected the uptake rate of Ag from AgNPs. Comparison of the results from organisms of similar weight showed that water hardness affected the uptake of Ag from AgNPs, but not that from AgNO(3). Transformation (dissolution and aggregation) of the AgNPs was also influenced by water hardness and the capping agent. Bioavailability of Ag from AgNPs was, in turn, correlated to these physical changes. Water hardness increased the aggregation of AgNPs, especially for PEG AgNPs, reducing the bioavailability of Ag from PEG AgNPs to a greater degree than from PVP AgNPs. Higher dissolved Ag concentrations were measured for the PVP AgNPs (15%) compared to PEG AgNPs (3%) in moderately hard water, enhancing Ag bioavailability of the former. Multiple drivers of bioavailability yielded differences in Ag influx between very hard and deionized water where the uptake rate constants (k(uw), l g(-1) d(-1) ± SE) varied from 3.1 ± 0.7 to 0.2 ± 0.01 for PEG AgNPs and from 2.3 ± 0.02 to 1.3 ± 0.01 for PVP AgNPs. Modeling bioavailability of Ag from NPs revealed that Ag influx into L. stagnalis comprised uptake from the NPs themselves and from newly dissolved Ag.
NASA Astrophysics Data System (ADS)
Mishra, S. K.; Sahithi, V. V. D.; Rao, C. S. P.
2016-09-01
The lot sizing problem deals with finding optimal order quantities that minimize the ordering and holding costs of a product mix. When multiple items at multiple levels with all capacity restrictions are considered, the lot sizing problem becomes NP-hard. Many heuristics developed in the past have failed due to problem size, computational complexity and time. However, the authors have developed a PSO-based technique, namely the iterative improvement binary particle swarm optimization (IIBPSO) technique, to address very large capacitated multi-item multi-level lot sizing (CMIMLLS) problems. First, the binary particle swarm optimization (BPSO) algorithm is used to find a solution in a reasonable time, and then an iterative improvement local search mechanism is employed to improve the solution obtained by the BPSO algorithm. This hybrid mechanism of applying local search to the global solution is found to improve the quality of solutions with respect to time; the IIBPSO method thus shows excellent results.
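A minimal sketch of the binary PSO core described above, applied to a toy onemax objective rather than the paper's CMIMLLS cost model (the objective, swarm parameters and velocity clamp here are illustrative assumptions, not the authors' settings):

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, iters=100, seed=1):
    """Minimal binary PSO: velocities pass through a sigmoid giving each
    bit's probability of being set to 1 (Kennedy-Eberhart style update)."""
    rng = random.Random(seed)
    xs = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vs = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [fitness(x) for x in xs]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                v = vs[i][d] + 2 * r1 * (pbest[i][d] - xs[i][d]) \
                             + 2 * r2 * (gbest[d] - xs[i][d])
                vs[i][d] = max(-4.0, min(4.0, v))  # conventional velocity clamp
                xs[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-vs[i][d])) else 0
            f = fitness(xs[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f > gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f

# Toy maximization objective ("onemax"): count of ones, optimum = n_bits.
best, best_f = binary_pso(lambda x: sum(x), n_bits=12)
```

The iterative-improvement step of IIBPSO would then run a local search around `best`; that refinement is omitted here.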
Study on probability distributions for evolution in modified extremal optimization
NASA Astrophysics Data System (ADS)
Zeng, Guo-Qiang; Lu, Yong-Zai; Mao, Wei-Jie; Chu, Jian
2010-05-01
It is widely believed that the power-law is a proper probability distribution to apply for evolution in τ-EO (extremal optimization), a general-purpose stochastic local-search approach inspired by self-organized criticality, and in its applications to NP-hard problems, e.g., graph partitioning, graph coloring, spin glasses, etc. In this study, we discover that the exponential distributions or hybrid ones (e.g., power-laws with exponential cutoff) popularly used in network science may replace the original power-laws in a modified τ-EO method called the self-organized algorithm (SOA), and provide better performance than other statistical-physics-oriented methods, such as simulated annealing, τ-EO and SOA, based on the experimental results on random Euclidean traveling salesman problems (TSP) and non-uniform instances. From the perspective of optimization, our results appear to demonstrate that the power-law is not the only proper probability distribution for evolution in EO-like methods, at least for the TSP; the exponential and hybrid distributions may be other choices.
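The τ-EO selection step ranks variables from worst to best and picks rank k with probability proportional to k^(-τ); the study above swaps in exponential or cutoff weights. A generic rank sampler (the τ and decay values below are illustrative) makes the distributions interchangeable:

```python
import math
import random

def rank_sampler(n, weight, seed=0):
    """Return a sampler over ranks 1..n with P(k) proportional to weight(k).
    In tau-EO, weight(k) = k**(-tau); the exponential and
    power-law-with-cutoff weights are the alternatives studied above."""
    rng = random.Random(seed)
    w = [weight(k) for k in range(1, n + 1)]
    total = sum(w)
    cum, acc = [], 0.0
    for x in w:
        acc += x
        cum.append(acc / total)  # cumulative distribution over ranks
    def sample():
        r = rng.random()
        for k, c in enumerate(cum, start=1):
            if r <= c:
                return k
        return n
    return sample

power = rank_sampler(100, lambda k: k ** -1.4)          # tau-EO style weights
expo = rank_sampler(100, lambda k: math.exp(-0.1 * k))  # exponential variant
```

Swapping `weight` is the only change needed to move between the distributions compared in the paper.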
Mulder, Samuel A; Wunsch, Donald C
2003-01-01
The Traveling Salesman Problem (TSP) is a very hard optimization problem in the field of operations research. It has been shown to be NP-complete, and is an often-used benchmark for new optimization techniques. One of the main challenges with this problem is that standard, non-AI heuristic approaches such as the Lin-Kernighan algorithm (LK) and the chained LK variant are currently very effective and in wide use for the common fully connected, Euclidean variant that is considered here. This paper presents an algorithm that uses adaptive resonance theory (ART) in combination with a variation of the Lin-Kernighan local optimization algorithm to solve very large instances of the TSP. The primary advantage of this algorithm over traditional LK and chained-LK approaches is the increased scalability and parallelism allowed by the divide-and-conquer clustering paradigm. Tours obtained by the algorithm are lower quality, but scaling is much better and there is a high potential for increasing performance using parallel hardware.
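Lin-Kernighan itself is intricate; its simplest relative, 2-opt, conveys the kind of edge-exchange local optimization the clustering wraps. A sketch on Euclidean points (this is generic 2-opt, not the paper's ART-based algorithm):

```python
import math

def tour_length(pts, tour):
    """Length of the closed tour visiting pts in the given order."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    tour = tour[:]
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # replace edges (a,b),(c,d) by (a,c),(b,d) if that is shorter
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

# Four corners of the unit square; the optimal tour is its perimeter, length 4.
pts = [(0, 0), (1, 1), (0, 1), (1, 0)]
tour = two_opt(pts, [0, 1, 2, 3])
```

Lin-Kernighan generalizes this fixed 2-edge exchange to variable-depth sequences of exchanges, which is why it finds far better tours in practice.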
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such a calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with other previously developed heuristic algorithms based on the evolutionary and middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
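For the deterministic core, the makespan of a permutation flow-shop schedule follows a standard recurrence: a job starts on machine m only once the machine is free and the job has finished machine m-1. A sketch with fixed (not interval) processing times and an illustrative 2x2 instance:

```python
def makespan(perm, p):
    """Makespan of a permutation flow-shop schedule with unlimited buffers;
    p[j][m] is the processing time of job j on machine m."""
    n_machines = len(p[0])
    comp = [0] * n_machines  # completion time of the latest job on each machine
    for j in perm:
        for m in range(n_machines):
            prev = comp[m - 1] if m > 0 else 0
            comp[m] = max(comp[m], prev) + p[j][m]
    return comp[-1]

# Two jobs on two machines; scheduling job 1 first (Johnson's rule) is optimal.
p = [[3, 2], [1, 4]]
```

Under interval uncertainty, this evaluation would be repeated over processing-time scenarios to bound the regret, which is what makes the full problem so expensive.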
A Generalization of SAT and #SAT for Robust Policy Evaluation
2014-06-30
P^NP[1]. Corollary 1. PP^NP[1] ⊆ NP^PP. Proof. Follows from Toda's theorem [Toda, 1991] (middle inclusion): PP^NP[1] ⊆ PP^PH ⊆ P^PP ⊆ NP^PP. ... dynamic programming for first-order POMDPs. In AAAI, 2010. S. Toda. PP is as hard as the polynomial-time hierarchy. SIAM Journal on Computing, 20:865
Connectivity Restoration in Wireless Sensor Networks via Space Network Coding.
Uwitonze, Alfred; Huang, Jiaqing; Ye, Yuanqing; Cheng, Wenqing
2017-04-20
The problem of finding the number and optimal positions of relay nodes for restoring network connectivity in partitioned Wireless Sensor Networks (WSNs) is Non-deterministic Polynomial-time hard (NP-hard), and thus heuristic methods are preferred to solve it. This paper proposes a novel polynomial-time heuristic algorithm, namely Relay Placement using Space Network Coding (RPSNC), to solve this problem, where Space Network Coding, also called Space Information Flow (SIF), is a new research paradigm that studies network coding in Euclidean space, in which extra relay nodes can be introduced to reduce the cost of communication. Unlike contemporary schemes that are often based on the Minimum Spanning Tree (MST), the Euclidean Steiner Minimal Tree (ESMT) or a combination of MST with ESMT, RPSNC is a new min-cost multicast space network coding approach that combines Delaunay triangulation and non-uniform partitioning techniques for generating a number of candidate relay nodes, after which linear programming is applied for choosing the optimal relay nodes and computing their connection links with terminals. Subsequently, an equilibrium method is used to refine the locations of the optimal relay nodes by moving them to balanced positions. RPSNC can adapt to any density distribution of relay nodes and terminals. The performance and complexity of RPSNC are analyzed, and its performance is validated through simulation experiments.
NP-hardness of decoding quantum error-correction codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Le Gall, François
2011-05-01
Although the theory of quantum error correction is intimately related to classical coding theory and, in particular, one can construct quantum error-correction codes (QECCs) from classical codes with the dual-containing property, this does not necessarily imply that the computational complexity of decoding QECCs is the same as that of their classical counterparts. Instead, decoding QECCs can be very different from decoding classical codes due to the degeneracy property. Intuitively, one expects that degeneracy would simplify the decoding, since two different errors might not and need not be distinguished in order to correct them. However, we show that the general quantum decoding problem is NP-hard regardless of whether the quantum codes are degenerate or nondegenerate. This finding implies that no considerably fast decoding algorithm exists for general quantum decoding problems and suggests the existence of a quantum cryptosystem based on the hardness of decoding QECCs.
Space communications scheduler: A rule-based approach to adaptive deadline scheduling
NASA Technical Reports Server (NTRS)
Straguzzi, Nicholas
1990-01-01
Job scheduling is a deceptively complex subfield of computer science. The highly combinatorial nature of the problem, which is NP-complete in nearly all cases, requires a scheduling program to intelligently traverse an immense search tree to create the best possible schedule in a minimal amount of time. In addition, the program must continually make adjustments to the initial schedule when faced with last-minute user requests, cancellations, unexpected device failures, etc. A good scheduler must be quick, flexible, and efficient, even at the expense of generating slightly less-than-optimal schedules. The Space Communication Scheduler (SCS) is an intelligent rule-based scheduling system. SCS is an adaptive deadline scheduler which allocates modular communications resources to meet an ordered set of user-specified job requests on board the NASA Space Station. SCS uses pattern matching techniques to detect potential conflicts through algorithmic and heuristic means. As a result, the system generates and maintains high density schedules without relying heavily on backtracking or blind search techniques. SCS is suitable for many common real-world applications.
NASA Astrophysics Data System (ADS)
Ivković, Zoran; Lloyd, Errol L.
Classic bin packing seeks to pack a given set of items of possibly varying sizes into a minimum number of identical sized bins. A number of approximation algorithms have been proposed for this NP-hard problem for both the on-line and off-line cases. In this chapter we discuss fully dynamic bin packing, where items may arrive (Insert) and depart (Delete) dynamically. In accordance with standard practice for fully dynamic algorithms, it is assumed that the packing may be arbitrarily rearranged to accommodate arriving and departing items. The goal is to maintain an approximately optimal solution of provably high quality in a total amount of time comparable to that used by an off-line algorithm delivering a solution of the same quality.
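For reference, the classical on-line baseline that the fully dynamic setting generalizes is First Fit, which never rearranges the packing (the item sizes below are arbitrary illustrations):

```python
def first_fit(items, capacity=1.0):
    """Place each arriving item into the lowest-indexed bin with room,
    opening a new bin when none fits; the packing is never rearranged."""
    bins = []
    for size in items:
        for b in bins:
            if sum(b) + size <= capacity + 1e-12:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing bin fits: open a new one
    return bins

bins = first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5])
```

Fully dynamic algorithms improve on this by moving already-packed items when Inserts and Deletes arrive, which is exactly the rearrangement First Fit forgoes.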
Distributed Combinatorial Optimization Using Privacy on Mobile Phones
NASA Astrophysics Data System (ADS)
Ono, Satoshi; Katayama, Kimihiro; Nakayama, Shigeru
This paper proposes a method for distributed combinatorial optimization which uses mobile phones as computers. In the proposed method, an ordinary computer generates solution candidates and mobile phones evaluate them by referring to private information and preferences. Users therefore do not have to send their private data to any other computers and do not have to refrain from inputting their preferences; they can thus obtain satisfactory solutions. Experimental results showed that the proposed method solved room assignment problems without sending users' private information to a server.
An Efficient Rank Based Approach for Closest String and Closest Substring
2012-01-01
This paper aims to present a new genetic approach that uses rank distance for solving two known NP-hard problems, and to compare rank distance with other distance measures for strings. The two NP-hard problems we are trying to solve are closest string and closest substring. For each problem we build a genetic algorithm and we describe the genetic operations involved. Both genetic algorithms use a fitness function based on rank distance. We compare our algorithms with other genetic algorithms that use different distance measures, such as Hamming distance or Levenshtein distance, on real DNA sequences. Our experiments show that the genetic algorithms based on rank distance have the best results. PMID:22675483
NASA Astrophysics Data System (ADS)
Fomin, Fedor V.
Preprocessing (data reduction or kernelization) as a strategy of coping with hard problems is universally used in almost every implementation. The history of preprocessing, like applying reduction rules to simplify truth functions, can be traced back to the 1950s [6]. A natural question in this regard is how to measure the quality of preprocessing rules proposed for a specific problem. For a long time the mathematical analysis of polynomial time preprocessing algorithms was neglected. The basic reason for this anomaly was that if we start with an instance I of an NP-hard problem and can show that, in polynomial time, it can be replaced with an equivalent instance I' with |I'| < |I|, then that would imply P = NP in classical complexity.
Combinatorial investigation of Fe–B thin-film nanocomposites
Brunken, Hayo; Grochla, Dario; Savan, Alan; Kieschnick, Michael; Meijer, Jan D; Ludwig, Alfred
2011-01-01
Combinatorial magnetron sputter deposition from elemental targets was used to create Fe–B composition spread type thin film materials libraries on thermally oxidized 4-in. Si wafers. The materials libraries consisting of wedge-type multilayer thin films were annealed at 500 or 700 °C to transform the multilayers into multiphase alloys. The libraries were characterized by nuclear reaction analysis, Rutherford backscattering, nanoindentation, vibrating sample magnetometry, x-ray diffraction (XRD) and transmission electron microscopy (TEM). Young's modulus and hardness values were related to the annealing parameters, structure and composition of the films. The magnetic properties of the films were improved by annealing in a H2 atmosphere, showing a more than tenfold decrease in the coercive field values in comparison to those of the vacuum-annealed films. The hardness values increased from 8 to 18 GPa when the annealing temperature was increased from 500 to 700 °C. The appearance of Fe2B phases, as revealed by XRD and TEM, had a significant effect on the mechanical properties of the films. PMID:27877435
Lin-Gibson, Sheng; Sung, Lipiin; Forster, Aaron M; Hu, Haiqing; Cheng, Yajun; Lin, Nancy J
2009-07-01
Multicomponent formulations coupled with complex processing conditions govern the final properties of photopolymerizable dental composites. In this study, a single test substrate was fabricated to support multiple formulations with a gradient in degree of conversion (DC), allowing the evaluation of multiple processing conditions and formulations on one specimen. Mechanical properties and damage response were evaluated as a function of filler type/content and irradiation. DC, surface roughness, modulus, hardness, scratch deformation and cytotoxicity were quantified using techniques including near-infrared spectroscopy, laser confocal scanning microscopy, depth-sensing indentation, scratch testing and cell viability. Scratch parameters (depth, width, percent recovery) were correlated to composite modulus and hardness. Total filler content, nanofiller and irradiation time/intensity all affected the final properties, with the dominant factor for improved properties being a higher DC. This combinatorial platform accelerates the screening of dental composites through the direct comparison of properties and processing conditions across the same sample.
Lee, Byung-Tae; Ranville, James F
2012-04-30
The stability and uptake by Daphnia magna of citrate-stabilized gold nanoparticles (AuNPs) in three different hardness-adjusted synthetic waters were investigated. Negatively charged AuNPs were found to aggregate and settle in synthetic waters within 24 h. Sedimentation rates depended on initial particle concentrations of 0.02, 0.04, and 0.08 nM AuNPs. Hardness of the synthetic waters affected the aggregation of AuNPs and is explained by the compression of diffuse double layer of AuNPs due to the increasing ionic strength. The fractal dimension of AuNPs in the reaction-limited regime of synthetic waters averaged 2.228±0.126 implying the rigid structures of aggregates driven by the collision of small particles with the growing aggregates. Four-day old D. magna accumulated more than 90% of AuNPs in 0.04 nM AuNP suspensions without any observed mortality. Exposure to pre-aggregated AuNP for 48 h in hard water did not show any significant difference in uptake, suggesting D. magna can also ingest settled AuNP aggregates. D. magna exposed to AuNPs shed their exoskeleton whereas the control did not generate any molts over 48 h. This implies that D. magna removed AuNPs on their exoskeleton by producing molts to decrease any adverse effects of adhered AuNPs. Copyright © 2012 Elsevier B.V. All rights reserved.
FPFH-based graph matching for 3D point cloud registration
NASA Astrophysics Data System (ADS)
Zhao, Jiapeng; Li, Chen; Tian, Lihua; Zhu, Jihua
2018-04-01
Correspondence detection is a vital step in point cloud registration and it can help obtain a reliable initial alignment. In this paper, we put forward an advanced point-feature-based graph matching algorithm to solve the initial alignment problem of rigid 3D point cloud registration with partial overlap. Specifically, Fast Point Feature Histograms are first used to determine the initial possible correspondences. Next, a new objective function is provided to make the graph matching more suitable for partially overlapping point clouds. The objective function is optimized by the simulated annealing algorithm for the final group of correct correspondences. Finally, we present a novel set partitioning method which can transform the NP-hard optimization problem into an O(n^3)-solvable one. Experiments on the Stanford and UWA public data sets indicate that our method can obtain better results in terms of both accuracy and time cost compared with other point cloud registration methods.
Using MOEA with Redistribution and Consensus Branches to Infer Phylogenies.
Min, Xiaoping; Zhang, Mouzhao; Yuan, Sisi; Ge, Shengxiang; Liu, Xiangrong; Zeng, Xiangxiang; Xia, Ningshao
2017-12-26
In recent years, more and more research on inferring phylogenies, which is an NP-hard problem, has focused on metaheuristics. Maximum Parsimony and Maximum Likelihood are two effective ways to conduct inference. Based on these methods, which can also be considered the optimality criteria for phylogenies, various kinds of multi-objective metaheuristics have been used to reconstruct phylogenies. However, combining these two time-consuming methods makes those multi-objective metaheuristics slower than single-objective ones. Therefore, we propose a novel multi-objective optimization algorithm, MOEA-RC, to accelerate the process of rebuilding phylogenies using structural information of elites in current populations. We compare MOEA-RC with two representative multi-objective algorithms, MOEA/D and NSGA-II, and a non-consensus version of MOEA-RC on three real-world datasets. The result is that, within a given number of iterations, MOEA-RC achieves better solutions than the other algorithms.
Genetic Algorithm for Traveling Salesman Problem with Modified Cycle Crossover Operator
Hussain, Abid; Muhammad, Yousaf Shad; Nauman Sajid, M; Hussain, Ijaz; Mohamd Shoukry, Alaa; Gani, Showkat
2017-01-01
Genetic algorithms are evolutionary techniques used for optimization purposes according to the survival-of-the-fittest idea. These methods do not ensure optimal solutions; however, they usually give good approximations in a reasonable time. The genetic algorithms are useful for NP-hard problems, especially the traveling salesman problem. The genetic algorithm depends on selection criteria, crossover, and mutation operators. To tackle the traveling salesman problem using genetic algorithms, there are various representations such as binary, path, adjacency, ordinal, and matrix representations. In this article, we propose a new crossover operator for the traveling salesman problem to minimize the total distance. This approach has been linked with path representation, which is the most natural way to represent a legal tour. Computational results are also reported for some traditional path representation methods, such as partially mapped and order crossovers, along with the new cycle crossover operator for some benchmark TSPLIB instances, and improvements were found. PMID:29209364
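For context, the classical cycle crossover (CX) that the proposed operator modifies partitions the positions into cycles and copies each cycle alternately from the two parents, so every position inherits a city from one parent or the other. The sketch below is the textbook CX, not the article's modified variant:

```python
def cycle_crossover(p1, p2):
    """Classical cycle crossover (CX) for permutation encodings: positions
    are partitioned into cycles, copied alternately from each parent."""
    n = len(p1)
    child1, child2 = [None] * n, [None] * n
    pos = {v: i for i, v in enumerate(p1)}  # value -> index in p1
    remaining = set(range(n))
    cycle = 0
    while remaining:
        i = min(remaining)
        while i in remaining:  # trace one cycle of positions
            remaining.discard(i)
            if cycle % 2 == 0:
                child1[i], child2[i] = p1[i], p2[i]
            else:
                child1[i], child2[i] = p2[i], p1[i]
            i = pos[p2[i]]
        cycle += 1
    return child1, child2

# Textbook example: two parent tours over cities 1..8.
c1, c2 = cycle_crossover([1, 2, 3, 4, 5, 6, 7, 8], [2, 4, 6, 8, 7, 5, 3, 1])
```

Because each position takes its value from one of the parents at that same position, both offspring are guaranteed to be valid permutations.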
Two is better than one; toward a rational design of combinatorial therapy.
Chen, Sheng-Hong; Lahav, Galit
2016-12-01
Drug combination is an appealing strategy for combating the heterogeneity of tumors and the evolution of drug resistance. However, the rationale underlying combinatorial therapy is often not well established, due to a lack of understanding of the specific pathways responding to the drugs and of their temporal dynamics following each treatment. Here we present several emerging trends in harnessing properties of biological systems for the optimal design of drug combinations, including the type of drugs, specific concentration, sequence of addition and the temporal schedule of treatments. We highlight recent studies showing different approaches for efficient design of drug combinations, including single-cell signaling dynamics, adaptation and pathway crosstalk. Finally, we discuss novel and feasible approaches that can facilitate the optimal design of combinatorial therapy. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lai, King C.; Liu, Da-Jiang; Evans, James W.
2017-12-01
For diffusion of two-dimensional homoepitaxial clusters of N atoms on metal (100) surfaces mediated by edge atom hopping, macroscale continuum theory suggests that the diffusion coefficient scales like D_N ~ N^(-β) with β = 3/2. However, we find quite different and diverse behavior in multiple size regimes. These include: (i) facile diffusion for small sizes N < 9; (ii) slow nucleation-mediated diffusion with small β < 1 for "perfect" sizes N = N_p = L^2 or L(L+1), for L = 3, 4, ..., having unique ground-state shapes, for moderate sizes 9 ≤ N ≤ O(10^2); the same also applies for N = N_p + 3, N_p + 4, ...; (iii) facile diffusion but with large β > 2 for N = N_p + 1 and N_p + 2, also for moderate sizes 9 ≤ N ≤ O(10^2); (iv) merging of the above distinct branches and subsequent anomalous scaling with 1 ≲ β < 3/2, reflecting the quasifacetted structure of clusters, for larger N = O(10^2) to N = O(10^3); (v) classic scaling with β = 3/2 for very large N = O(10^3) and above. The specified size ranges apply for typical model parameters. We focus on the moderate size regime where we show that diffusivity cycles quasiperiodically from the slowest branch for N_p + 3 (not N_p) to the fastest branch for N_p + 1. Behavior is quantified by kinetic Monte Carlo simulation of an appropriate stochastic lattice-gas model. However, precise analysis must account for a strong enhancement of diffusivity for short time increments due to back correlation in the cluster motion. Further understanding of this enhancement, of anomalous size scaling behavior, and of the merging of various branches, is facilitated by combinatorial analysis of the number of ground-state and low-lying excited-state cluster configurations, and also of kink populations.
Quantum Optimization of Fully Connected Spin Glasses
NASA Astrophysics Data System (ADS)
Venturelli, Davide; Mandrà, Salvatore; Knysh, Sergey; O'Gorman, Bryan; Biswas, Rupak; Smelyanskiy, Vadim
2015-07-01
Many NP-hard problems can be seen as the task of finding a ground state of a disordered highly connected Ising spin glass. If solutions are sought by means of quantum annealing, it is often necessary to represent those graphs in the annealer's hardware by means of the graph-minor embedding technique, generating a final Hamiltonian consisting of coupled chains of ferromagnetically bound spins, whose binding energy is a free parameter. In order to investigate the effect of embedding on problems of interest, the fully connected Sherrington-Kirkpatrick model with random ±1 couplings is programmed on the D-Wave Two™ annealer using up to 270 qubits interacting on a Chimera-type graph. We present the best embedding prescriptions for encoding the Sherrington-Kirkpatrick problem in the Chimera graph. The results indicate that the optimal choice of embedding parameters could be associated with the emergence of the spin-glass phase of the embedded problem, whose presence was previously uncertain. This optimal parameter setting allows the performance of the quantum annealer to compete with (and potentially outperform, in the absence of analog control errors) optimized simulated annealing algorithms.
New optimization model for routing and spectrum assignment with nodes insecurity
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-04-01
By adopting orthogonal frequency division multiplexing technology, elastic optical networks can provide flexible and variable bandwidth allocation to each connection request and achieve higher spectrum utilization. The routing and spectrum assignment problem in elastic optical networks is a well-known NP-hard problem. In addition, information security has received worldwide attention. We combine these two problems to investigate routing and spectrum assignment with guaranteed security in elastic optical networks, and establish a new optimization model that minimizes the maximum index of the used frequency slots, which is used to determine an optimal routing and spectrum assignment scheme. To solve the model effectively, a hybrid genetic algorithm framework integrating a heuristic algorithm into a genetic algorithm is proposed. The heuristic algorithm is first used to sort the connection requests, and the genetic algorithm is then designed to search for an optimal routing and spectrum assignment scheme. In the genetic algorithm, tailor-made crossover, mutation and local search operators are designed. Moreover, simulation experiments are conducted with three heuristic strategies, and the experimental results indicate the effectiveness of the proposed model and algorithm framework.
Finding long chains in kidney exchange using the traveling salesman problem.
Anderson, Ross; Ashlagi, Itai; Gamarnik, David; Roth, Alvin E
2015-01-20
As of May 2014 there were more than 100,000 patients on the waiting list for a kidney transplant from a deceased donor. Although the preferred treatment is a kidney transplant, every year there are fewer donors than new patients, so the wait for a transplant continues to grow. To address this shortage, kidney paired donation (KPD) programs allow patients with living but biologically incompatible donors to exchange donors through cycles or chains initiated by altruistic (nondirected) donors, thereby increasing the supply of kidneys in the system. In many KPD programs a centralized algorithm determines which exchanges will take place to maximize the total number of transplants performed. This optimization problem has proven challenging both in theory, because it is NP-hard, and in practice, because the algorithms previously used were unable to optimally search over all long chains. We give two new algorithms that use integer programming to optimally solve this problem, one of which is inspired by the techniques used to solve the traveling salesman problem. These algorithms provide the tools needed to find optimal solutions in practice.
Combinatorial optimization in foundry practice
NASA Astrophysics Data System (ADS)
Antamoshkin, A. N.; Masich, I. S.
2016-04-01
A multicriteria mathematical model of foundry production capacity planning is suggested in the paper. The model is formulated in terms of pseudo-Boolean optimization theory. Different search optimization methods were used to solve the obtained problem.
Zhou, Yikang; Li, Gang; Dong, Junkai; Xing, Xin-Hui; Dai, Junbiao; Zhang, Chong
2018-05-01
With the rapidly growing ability to construct combinatorial metabolic pathways, searching for the metabolic sweet spot has become the rate-limiting step. We report here an efficient machine-learning workflow in conjunction with the YeastFab Assembly strategy (MiYA) for combinatorially optimizing the large biosynthetic genotypic space of heterologous metabolic pathways in Saccharomyces cerevisiae. Using the β-carotene biosynthetic pathway as an example, we first demonstrated that MiYA has the power to search only a small fraction (2-5%) of the combinatorial space to precisely tune the expression level of each gene, using an artificial neural network (ANN) ensemble to avoid over-fitting when dealing with a small number of training samples. We then applied MiYA to improve the biosynthesis of violacein. Fed with initial data from a colorimetric plate-based, pre-screened pool of 24 violacein-producing strains, MiYA successfully predicted, and verified experimentally, the existence of a strain that showed a 2.42-fold titer improvement in violacein production among 3125 possible designs. Furthermore, MiYA was able to largely avoid the branch pathway of violacein biosynthesis that makes deoxyviolacein, and thus produces very pure violacein. Together, MiYA combines the advantages of standardized building blocks and machine learning to accelerate the Design-Build-Test-Learn (DBTL) cycle for combinatorial optimization of metabolic pathways, and could significantly accelerate the development of microbial cell factories. Copyright © 2018 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
Combinatorial chemical bath deposition of CdS contacts for chalcogenide photovoltaics
Mokurala, Krishnaiah; Baranowski, Lauryn L.; de Souza Lucas, Francisco W.; ...
2016-08-01
Contact layers play an important role in thin film solar cells, but new material development and optimization of their thickness is usually a long and tedious process. A high-throughput experimental approach has been used to accelerate the rate of research in photovoltaic (PV) light absorbers and transparent conductive electrodes; however, combinatorial research on contact layers is less common. Here, we report on the chemical bath deposition (CBD) of CdS thin films by a combinatorial dip coating technique and apply these contact layers to Cu(In,Ga)Se2 (CIGSe) and Cu2ZnSnSe4 (CZTSe) light absorbers in PV devices. Combinatorial thickness steps of CdS thin films were achieved by removal of the substrate from the chemical bath, at regular intervals of time, and in equal distance increments. The trends in the photoconversion efficiency and in the spectral response of the PV devices as a function of thickness of the CdS contacts were explained with the help of optical and morphological characterization of the CdS thin films. The maximum PV efficiency achieved for the combinatorial dip-coating CBD was similar to that for the PV devices processed using conventional CBD. Finally, the results of this study lead to the conclusion that combinatorial dip coating can be used to accelerate the optimization of PV device performance of CdS and other candidate contact layers for a wide range of emerging absorbers.
Identification of combinatorial drug regimens for treatment of Huntington's disease using Drosophila
NASA Astrophysics Data System (ADS)
Agrawal, Namita; Pallos, Judit; Slepko, Natalia; Apostol, Barbara L.; Bodai, Laszlo; Chang, Ling-Wen; Chiang, Ann-Shyn; Michels Thompson, Leslie; Marsh, J. Lawrence
2005-03-01
We explore the hypothesis that the pathology of Huntington's disease involves multiple cellular mechanisms whose contributions to disease are incrementally additive or synergistic. We provide evidence that the photoreceptor neuron degeneration seen in flies expressing mutant human huntingtin correlates with widespread degenerative events in the Drosophila CNS. We use a Drosophila Huntington's disease model to establish dose regimens and protocols to assess the effectiveness of drug combinations used at low threshold concentrations. These proof-of-principle studies identify at least two potential combinatorial treatment options and illustrate a rapid and cost-effective paradigm for testing and optimizing combinatorial drug therapies while reducing side effects for patients with neurodegenerative disease. The potential for using prescreening in Drosophila to inform combinatorial therapies that are most likely to be effective for testing in mammals is discussed.
It looks easy! Heuristics for combinatorial optimization problems.
Chronicle, Edward P; MacGregor, James N; Ormerod, Thomas C; Burr, Alistair
2006-04-01
Human performance on instances of computationally intractable optimization problems, such as the travelling salesperson problem (TSP), can be excellent. We have proposed a boundary-following heuristic to account for this finding. We report three experiments with TSPs where the capacity to employ this heuristic was varied. In Experiment 1, participants free to use the heuristic produced solutions significantly closer to optimal than did those prevented from doing so. Experiments 2 and 3 together replicated this finding in larger problems and demonstrated that a potential confound had no effect. In all three experiments, performance was closely matched by a boundary-following model. The results implicate global rather than purely local processes. Humans may have access to simple, perceptually based, heuristics that are suited to some combinatorial optimization tasks.
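The boundary-following idea can be loosely imitated in code: begin the tour with the convex hull of the cities, then insert each interior point where it lengthens the tour least. The sketch below is a generic convex-hull plus cheapest-insertion heuristic on invented coordinates, not the model fitted in the experiments:

```python
import math

def hull(points):
    """Andrew's monotone-chain convex hull, counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def dist(a, b):
    return math.hypot(a[0]-b[0], a[1]-b[1])

def hull_insertion_tour(points):
    """Start from the convex hull; insert each interior city where it
    lengthens the tour the least (cheapest insertion)."""
    tour = hull(points)
    rest = [p for p in points if p not in tour]
    while rest:
        best = None
        for p in rest:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                delta = dist(a, p) + dist(p, b) - dist(a, b)
                if best is None or delta < best[0]:
                    best = (delta, p, i + 1)
        _, p, i = best
        tour.insert(i, p)
        rest.remove(p)
    return tour

cities = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]
tour = hull_insertion_tour(cities)
```

The resulting tour visits the four hull corners in boundary order and slots the two interior cities into nearby edges, mirroring the boundary-first behavior the experiments probe.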
Parallel Optimization of Polynomials for Large-scale Problems in Stability and Control
NASA Astrophysics Data System (ADS)
Kamyar, Reza
In this thesis, we focus on some of the NP-hard problems in control theory. Thanks to the converse Lyapunov theory, these problems can often be modeled as optimization over polynomials. To avoid the problem of intractability, we establish a trade off between accuracy and complexity. In particular, we develop a sequence of tractable optimization problems --- in the form of Linear Programs (LPs) and/or Semi-Definite Programs (SDPs) --- whose solutions converge to the exact solution of the NP-hard problem. However, the computational and memory complexity of these LPs and SDPs grow exponentially with the progress of the sequence - meaning that improving the accuracy of the solutions requires solving SDPs with tens of thousands of decision variables and constraints. Setting up and solving such problems is a significant challenge. The existing optimization algorithms and software are only designed to use desktop computers or small cluster computers --- machines which do not have sufficient memory for solving such large SDPs. Moreover, the speed-up of these algorithms does not scale beyond dozens of processors. This in fact is the reason we seek parallel algorithms for setting-up and solving large SDPs on large cluster- and/or super-computers. We propose parallel algorithms for stability analysis of two classes of systems: 1) Linear systems with a large number of uncertain parameters; 2) Nonlinear systems defined by polynomial vector fields. First, we develop a distributed parallel algorithm which applies Polya's and/or Handelman's theorems to some variants of parameter-dependent Lyapunov inequalities with parameters defined over the standard simplex. The result is a sequence of SDPs which possess a block-diagonal structure. We then develop a parallel SDP solver which exploits this structure in order to map the computation, memory and communication to a distributed parallel environment. 
Numerical tests on a supercomputer demonstrate the ability of the algorithm to efficiently utilize hundreds and potentially thousands of processors, and analyze systems with 100+ dimensional state-space. Furthermore, we extend our algorithms to analyze robust stability over more complicated geometries such as hypercubes and arbitrary convex polytopes. Our algorithms can be readily extended to address a wide variety of problems in control such as Hinfinity synthesis for systems with parametric uncertainty and computing control Lyapunov functions.
Huang, Shanjin; Zhang, Yu; Leung, Benjamin; Yuan, Ge; Wang, Gang; Jiang, Hao; Fan, Yingmin; Sun, Qian; Wang, Jianfeng; Xu, Ke; Han, Jung
2013-11-13
Nanoporous (NP) gallium nitride (GaN) as a new class of GaN material has many interesting properties that the conventional GaN material does not have. In this paper, we focus on the mechanical properties of NP GaN, and the detailed physical mechanism of porous GaN in the application of liftoff. A decrease in elastic modulus and hardness was identified in NP GaN compared to the conventional GaN film. The promising application of NP GaN as release layers in the mechanical liftoff of GaN thin films and devices was systematically studied. A phase diagram was generated to correlate the initial NP GaN profiles with the as-overgrown morphologies of the NP structures. The fracture toughness of the NP GaN release layer was studied in terms of the voided-space-ratio. It is shown that the transformed morphologies and fracture toughness of the NP GaN layer after overgrowth strongly depends on the initial porosity of NP GaN templates. The mechanical separation and transfer of a GaN film over a 2 in. wafer was demonstrated, which proves that this technique is useful in practical applications.
ERIC Educational Resources Information Center
Kittredge, Kevin W.; Marine, Susan S.; Taylor, Richard T.
2004-01-01
A molecule possessing other functional groups that could be hydrogenated is examined, with a variety of metal catalysts evaluated under similar reaction conditions. Optimizing organic reactions is both time and labor intensive, and the use of a combinatorial parallel synthesis reactor proved a great time-saving device.
Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem.
Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue
2015-01-01
As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogenous and heterogeneous fleet problems respectively and solve the models by MIP solver CPLEX. The bus type-based formulation for heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods.
Charleston, M A
1995-01-01
This article introduces a coherent language base for describing and working with characteristics of combinatorial optimization problems, which is at once general enough to be used in all such problems and precise enough to allow subtle concepts in this field to be discussed unambiguously. An example is provided of how this nomenclature is applied to an instance of the phylogeny problem. Also noted is the beneficial effect, on the landscape of the solution space, of transforming the observed data to account for multiple changes of character state.
Concept of combinatorial de novo design of drug-like molecules by particle swarm optimization.
Hartenfeller, Markus; Proschak, Ewgenij; Schüller, Andreas; Schneider, Gisbert
2008-07-01
We present a fast stochastic optimization algorithm for fragment-based molecular de novo design (COLIBREE, Combinatorial Library Breeding). The search strategy is based on a discrete version of particle swarm optimization. Molecules are represented by a scaffold, which remains constant during optimization, and variable linkers and side chains. Different linkers represent virtual chemical reactions. Side-chain building blocks were obtained from pseudo-retrosynthetic dissection of large compound databases. Here, ligand-based design was performed using chemically advanced template search (CATS) topological pharmacophore similarity to reference ligands as fitness function. A weighting scheme was included for particle swarm optimization-based molecular design, which permits the use of many reference ligands and allows for positive and negative design to be performed simultaneously. In a case study, the approach was applied to the de novo design of potential peroxisome proliferator-activated receptor subtype-selective agonists. The results demonstrate the ability of the technique to cope with large combinatorial chemistry spaces and its applicability to focused library design. The technique was able to perform exploitation of a known scheme and at the same time explorative search for novel ligands within the framework of a given molecular core structure. It thereby represents a practical solution for compound screening in the early hit and lead finding phase of a drug discovery project.
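The discrete particle swarm idea can be illustrated generically: each particle is a bit vector of selected building blocks, and velocities bias stochastic bit flips toward personal and global bests. The toy fitness below stands in for the CATS pharmacophore similarity; this is a sketch of standard binary PSO, not the COLIBREE implementation:

```python
import math
import random

random.seed(1)

def fitness(bits):
    """Toy objective: reward matching a target pattern (a stand-in for
    topological pharmacophore similarity to reference ligands)."""
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    return sum(1 for b, t in zip(bits, target) if b == t)

def binary_pso(n_bits=8, n_particles=12, iters=60):
    X = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = max(pbest, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = random.random(), random.random()
                # Velocity pulls toward personal and global bests.
                V[i][d] += 2.0 * r1 * (pbest[i][d] - X[i][d]) \
                         + 2.0 * r2 * (gbest[d] - X[i][d])
                V[i][d] = max(-4.0, min(4.0, V[i][d]))   # clamp velocity
                prob = 1.0 / (1.0 + math.exp(-V[i][d]))  # sigmoid -> flip prob.
                X[i][d] = 1 if random.random() < prob else 0
            if fitness(X[i]) > fitness(pbest[i]):
                pbest[i] = X[i][:]
        gbest = max(pbest, key=fitness)[:]
    return gbest

best = binary_pso()
```

In the real setting each bit position would index a linker or side-chain building block rather than a raw bit, but the update rule is the same.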
A preliminary study to metaheuristic approach in multilayer radiation shielding optimization
NASA Astrophysics Data System (ADS)
Arif Sazali, Muhammad; Rashid, Nahrul Khair Alang Md; Hamzah, Khaidzir
2018-01-01
Metaheuristics are high-level algorithmic concepts that can be used to develop heuristic optimization algorithms. One of their applications is to find optimal or near-optimal solutions to combinatorial optimization problems (COPs) such as scheduling, vehicle routing, and timetabling. Combinatorial optimization deals with finding optimal combinations or permutations of a given set of problem components when exhaustive search is not feasible. A radiation shield made of several layers of different materials can be regarded as a COP. The time taken to optimize the shield may be too high when several parameters are involved, such as the number of materials, the thickness of layers, and the arrangement of materials. Metaheuristics can be applied to reduce the optimization time, trading guaranteed optimal solutions for near-optimal solutions found in a comparably short amount of time. The application of metaheuristics to radiation shield optimization is lacking. In this paper, we present a review of the suitability of metaheuristics for multilayer shielding design, specifically the genetic algorithm and the ant colony optimization (ACO) algorithm. We also propose an optimization model based on the ACO method.
A Combinatorial Platform for the Optimization of Peptidomimetic Methyl-Lysine Reader Antagonists
NASA Astrophysics Data System (ADS)
Barnash, Kimberly D.
Post-translational modification of histone N-terminal tails mediates chromatin compaction and, consequently, DNA replication, transcription, and repair. While numerous post-translational modifications decorate histone tails, lysine methylation is an abundant mark important for both gene activation and repression. Methyl-lysine (Kme) readers function through binding mono-, di-, or trimethyl-lysine. Chemical intervention of Kme readers faces numerous challenges due to the broad surface-groove interactions between readers and their cognate histone peptides; yet, the increasing interest in understanding chromatin-modifying complexes suggests tractable lead compounds for Kme readers are critical for elucidating the mechanisms of chromatin dysregulation in disease states and validating the druggability of these domains and complexes. The successful discovery of a peptide-derived chemical probe, UNC3866, for the Polycomb repressive complex 1 (PRC1) chromodomain Kme readers has proven the potential for selective peptidomimetic inhibition of reader function. Unfortunately, the systematic modification of peptides-to-peptidomimetics is a costly and inefficient strategy for target-class hit discovery against Kme readers. Through the exploration of biased chemical space via combinatorial on-bead libraries, we have developed two concurrent methodologies for Kme reader chemical probe discovery. We employ biased peptide combinatorial libraries as a hit discovery strategy with subsequent optimization via iterative targeted libraries. Peptide-to-peptidomimetic optimization through targeted library design was applied based on structure-guided library design around the interaction of the endogenous peptide ligand with three target Kme readers. 
Efforts targeting the WD40 reader EED led to the discovery of the 3-mer peptidomimetic ligand UNC5115 while combinatorial repurposing of UNC3866 for off-target chromodomains resulted in the discovery of UNC4991, a CDYL/2-selective ligand, and UNC4848, a MPP8 and CDYL/2 ligand. Ultimately, our efforts demonstrate the generalizability of a peptidomimetic combinatorial platform for the optimization of Kme reader ligands in a target class manner.
Network of time-multiplexed optical parametric oscillators as a coherent Ising machine
NASA Astrophysics Data System (ADS)
Marandi, Alireza; Wang, Zhe; Takata, Kenta; Byer, Robert L.; Yamamoto, Yoshihisa
2014-12-01
Finding the ground states of the Ising Hamiltonian maps to various combinatorial optimization problems in biology, medicine, wireless communications, artificial intelligence and social networks. So far, no efficient classical or quantum algorithm is known for these problems, and intensive research is focused on creating physical systems—Ising machines—capable of finding the absolute or approximate ground states of the Ising Hamiltonian. Here, we report an Ising machine using a network of degenerate optical parametric oscillators (OPOs). Spins are represented with above-threshold binary phases of the OPOs and the Ising couplings are realized by mutual injections. The network is implemented in a single OPO ring cavity with multiple trains of femtosecond pulses and configurable mutual couplings, and operates at room temperature. We programmed a small NP-hard problem on a 4-OPO Ising machine, and in 1,000 runs no computational error was detected.
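The usual classical baseline for such Ising machines is simulated annealing. The following generic sketch (random ±1 couplings, single-spin flips, geometric cooling; not a model of the OPO network itself) shows the objective such machines minimize:

```python
import math
import random

def energy(s, J):
    """Ising energy E = -sum_{i<j} J[i][j] * s_i * s_j."""
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def anneal(J, steps=5000, T0=2.0, T1=0.05, seed=3):
    """Simulated annealing with geometric cooling and single-spin flips."""
    rng = random.Random(seed)
    n = len(J)
    s = [rng.choice((-1, 1)) for _ in range(n)]
    E = energy(s, J)
    best_s, best_E = s[:], E
    for t in range(steps):
        T = T0 * (T1 / T0) ** (t / steps)
        i = rng.randrange(n)
        # Exact energy change from flipping spin i (J is symmetric).
        dE = 2 * s[i] * sum(J[i][j] * s[j] for j in range(n) if j != i)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i] = -s[i]
            E += dE
            if E < best_E:
                best_s, best_E = s[:], E
    return best_s, best_E

# Random symmetric +/-1 couplings on 10 spins.
rng = random.Random(7)
n = 10
J = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        J[i][j] = J[j][i] = rng.choice((-1, 1))

spins, E_best = anneal(J)
```

At the low final temperature the walk is essentially greedy descent, so the returned energy is at or near a local minimum of the Hamiltonian.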
Differential geometric treewidth estimation in adiabatic quantum computation
NASA Astrophysics Data System (ADS)
Wang, Chi; Jonckheere, Edmond; Brun, Todd
2016-10-01
The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture; this embedding problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth, based on the differential geometric concept of Ollivier-Ricci curvature. This approximation could significantly reduce the overall complexity of determining whether a QUBO problem is minor-embeddable, and thus solvable, on the D-Wave architecture.
Sorting by Cuts, Joins, and Whole Chromosome Duplications.
Zeira, Ron; Shamir, Ron
2017-02-01
Genome rearrangement problems have been extensively studied due to their importance in biology. Most studied models assumed a single copy per gene. However, in reality, duplicated genes are common, most notably in cancer. In this study, we make a step toward handling duplicated genes by considering a model that allows the atomic operations of cut, join, and whole chromosome duplication. Given two linear genomes, A with one copy per gene and B with two copies per gene, we give a linear-time algorithm for computing a shortest sequence of operations transforming A into B such that all intermediate genomes are linear. We also show that computing an optimal sequence with fewest duplications is NP-hard.
Kuang, Hua; Ma, Wei; Xu, Liguang; Wang, Libing; Xu, Chuanlai
2013-11-19
Polymerase chain reaction (PCR) is an essential tool in biotechnology laboratories and is becoming increasingly important in other areas of research. Extensive data obtained over the last 12 years have shown that the combination of PCR with nanoscale dispersions can resolve issues in the preparation of DNA-based materials that include both inorganic and organic nanoscale components. Unlike conventional DNA hybridization and antibody-antigen complexes, PCR provides a new, effective assembly platform that both increases the yield of DNA-based nanomaterials and allows researchers to program and control assembly with predesigned parameters, including those assisted and automated by computers. As a result, this method allows researchers to optimize the combinatorial selection of the DNA strands for their nanoparticle conjugates. We have developed a PCR approach for producing various nanoscale assemblies including organic motifs such as small molecules and macromolecules, and inorganic building blocks such as nanorods (NRs), metal, semiconductor, and magnetic nanoparticles (NPs). We start with a nanoscale primer and then modify that building block using the automated steps of PCR-based assembly including initialization, denaturation, annealing, extension, final elongation, and final hold. The intermediate steps of denaturation, annealing, and extension are cyclic, and we use computer control so that the assembled superstructures reach their predetermined complexity. The structures assembled using a small number of PCR cycles show a lower polydispersity than similar discrete structures obtained by direct hybridization between the nanoscale building blocks. 
Using different building blocks, we assembled the following structural motifs by PCR: (1) discrete nanostructures (NP dimers, NP multimers including trimers, pyramids, tetramers or hexamers, etc.), (2) branched NP superstructures and heterochains, (3) NP satellite-like superstructures, (4) Y-shaped nanostructures and DNA networks, (5) protein-DNA co-assembly structures, and (6) DNA block copolymers including trimers and pentamers. These results affirm that this method can produce a variety of chemical structures in tunable yields. Using PCR-based preparation of DNA-bridged nanostructures, we can program the assembly of the nanoscale blocks through the adjustment of the primer intensity on the assembled units, the number of PCR cycles, or both. The resulting structures are highly complex and diverse and have interesting dynamics and collective properties. Potential applications of these materials include chiroptical materials, probe fabrication, and environmental and biomedical sensors.
Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm
NASA Astrophysics Data System (ADS)
Gen, Mitsuo; Lin, Lin; Cheng, Runwei
Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network. For example, cost and flow measures are both important in networks. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach using a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the possible maximum flow with minimum cost. The approach also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems show the effectiveness of the proposed method.
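A priority-based chromosome decodes a vector of node priorities into a path: from the current node, always step to the unvisited neighbor with the highest priority. A minimal decoding sketch on a hypothetical toy network (not one of the paper's test instances):

```python
def decode_path(priority, adj, source, sink):
    """Decode a priority vector into a source-sink path: from the current
    node, always step to the unvisited neighbor with the highest priority."""
    path = [source]
    visited = {source}
    node = source
    while node != sink:
        candidates = [v for v in adj[node] if v not in visited]
        if not candidates:
            return None  # dead end: infeasible chromosome
        node = max(candidates, key=lambda v: priority[v])
        path.append(node)
        visited.add(node)
    return path

# Toy directed network: node 0 is the source, node 5 the sink.
adj = {0: [1, 2], 1: [2, 3], 2: [3, 4], 3: [5], 4: [5], 5: []}
priority = {0: 7, 1: 3, 2: 6, 3: 2, 4: 5, 5: 1}
path = decode_path(priority, adj, 0, 5)
```

A GA then evolves the priority vectors; crossover and mutation operate on priorities while decoding guarantees each chromosome maps to a concrete path (or is penalized as infeasible).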
Parallel evolutionary computation in bioinformatics applications.
Pinho, Jorge; Sobral, João Luis; Rocha, Miguel
2013-05-01
A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of the parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Split diversity in constrained conservation prioritization using integer linear programming.
Chernomor, Olga; Minh, Bui Quang; Forest, Félix; Klaere, Steffen; Ingram, Travis; Henzinger, Monika; von Haeseler, Arndt
2015-01-01
Phylogenetic diversity (PD) is a measure of biodiversity based on the evolutionary history of species. Here, we discuss several optimization problems related to the use of PD, and the more general measure split diversity (SD), in conservation prioritization. Depending on the conservation goal and the information available about species, one can construct optimization routines that incorporate various conservation constraints. We demonstrate how this information can be used to select sets of species for conservation action. Specifically, we discuss the use of species' geographic distributions, the choice of candidates under economic pressure, and the use of predator-prey interactions between the species in a community to define viability constraints. Although such optimization problems fall into the class of NP-hard problems, it is possible to solve them in a reasonable amount of time using integer programming. We apply integer linear programming to a variety of models for conservation prioritization that incorporate the SD measure. We show exemplary results for two data sets: the Cape region of South Africa and a Caribbean coral reef community. Finally, we provide user-friendly software at http://www.cibiv.at/software/pda.
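On very small instances, the budget-constrained PD selection that the authors solve by integer programming can be checked by exhaustive search. The sketch below uses a hypothetical four-taxon tree and invented conservation costs, purely for illustration:

```python
from itertools import combinations

# Tiny rooted tree: child -> (parent, branch_length). Leaves are a-d.
tree = {
    "a": ("u", 2.0), "b": ("u", 1.0),
    "c": ("v", 3.0), "d": ("v", 1.5),
    "u": ("root", 1.0), "v": ("root", 2.5),
}
cost = {"a": 3, "b": 1, "c": 2, "d": 2}  # invented conservation costs

def pd(taxa):
    """Phylogenetic diversity: total branch length of the subtree
    connecting the chosen taxa to the root."""
    edges = set()
    for t in taxa:
        node = t
        while node in tree:
            edges.add(node)  # the edge above `node`
            node = tree[node][0]
    return sum(tree[e][1] for e in edges)

def best_subset(budget):
    """Exhaustive search; fine for a handful of taxa (the paper uses ILP)."""
    leaves = list(cost)
    best = (0.0, ())
    for k in range(len(leaves) + 1):
        for S in combinations(leaves, k):
            if sum(cost[t] for t in S) <= budget:
                best = max(best, (pd(S), S))
    return best

value, subset = best_subset(budget=5)
```

For realistic species counts the number of subsets explodes, which is why the ILP formulation is the practical tool.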
Harmony search algorithm: application to the redundancy optimization problem
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Thien-My, Dao
2010-09-01
The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm. The first is limited to the binary series-parallel system, where the problem consists of selecting elements and redundancy levels to maximize system reliability given various system-level constraints. The second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared, and show that the HSA can provide very good solutions compared to those obtained through other approaches.
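A minimal sketch of the harmony search loop on a hypothetical binary series-parallel instance (component reliabilities, costs, and budget are invented). The memory consideration (HMCR), pitch adjustment (PAR), and random selection steps follow the generic HSA scheme, not the article's exact implementation.

```python
import random

R = [0.7, 0.8, 0.9]      # component reliabilities (hypothetical)
COSTS = [2.0, 3.0, 4.0]  # per-component costs (hypothetical)
BUDGET = 20.0
MAX_K = 4                # redundancy levels 1..MAX_K

def reliability(k):
    """Series system of parallel blocks: product of block reliabilities."""
    out = 1.0
    for r, n in zip(R, k):
        out *= 1.0 - (1.0 - r) ** n
    return out

def feasible(k):
    return sum(c * n for c, n in zip(COSTS, k)) <= BUDGET

def harmony_search(iters=2000, hm_size=10, hmcr=0.9, par=0.3, seed=1):
    rng = random.Random(seed)
    def rand_k():
        while True:
            k = [rng.randint(1, MAX_K) for _ in R]
            if feasible(k):
                return k
    memory = [rand_k() for _ in range(hm_size)]
    for _ in range(iters):
        new = []
        for i in range(len(R)):
            if rng.random() < hmcr:              # memory consideration
                v = rng.choice(memory)[i]
                if rng.random() < par:           # pitch adjustment
                    v = min(MAX_K, max(1, v + rng.choice([-1, 1])))
            else:                                # random selection
                v = rng.randint(1, MAX_K)
            new.append(v)
        if feasible(new):
            worst = min(memory, key=reliability)
            if reliability(new) > reliability(worst):
                memory[memory.index(worst)] = new
    return max(memory, key=reliability)
```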
Multi-objective based spectral unmixing for hyperspectral images
NASA Astrophysics Data System (ADS)
Xu, Xia; Shi, Zhenwei
2017-02-01
Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of several pure spectra from an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0-norm-based optimization problem. Existing methods usually resort to a relaxation of the original l0 norm. However, the relaxation may introduce sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem with two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within a limited number of iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
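The binary-coded, flipping-based search can be sketched as a tiny Pareto-archive loop over library supports. The 4-band, 5-signature library below is invented, and the uniform-abundance fit is a crude stand-in for the least-squares step a real unmixer would use.

```python
import random

# Toy spectral library A (4 bands x 5 signatures) and a pixel y mixed from
# signatures 0 and 2 -- all numbers are illustrative only.
A = [[1.0, 0.0, 0.5, 0.2, 0.9],
     [0.0, 1.0, 0.5, 0.8, 0.1],
     [1.0, 1.0, 0.0, 0.3, 0.4],
     [0.5, 0.2, 1.0, 0.6, 0.7]]
y = [0.6 * A[b][0] + 0.4 * A[b][2] for b in range(4)]

def recon_error(z):
    """Squared error of a uniform-abundance fit for binary support z."""
    idx = [j for j, bit in enumerate(z) if bit]
    if not idx:
        return sum(v * v for v in y)
    w = 1.0 / len(idx)
    return sum((y[b] - w * sum(A[b][j] for j in idx)) ** 2 for b in range(4))

def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and p != q

def moea_flip(iters=500, n=5, seed=0):
    """Keep a nondominated archive over (reconstruction error, sparsity)."""
    rng = random.Random(seed)
    archive = [[0] * n]
    for _ in range(iters):
        child = list(rng.choice(archive))
        child[rng.randrange(n)] ^= 1           # random bit flip
        f_child = (recon_error(child), sum(child))
        if any(dominates((recon_error(z), sum(z)), f_child) for z in archive):
            continue
        archive = [z for z in archive
                   if not dominates(f_child, (recon_error(z), sum(z)))]
        if child not in archive:
            archive.append(child)
    return archive
```

The returned archive approximates the Pareto front trading reconstruction error against the number of active endmembers.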
Combinatorial Methods for Exploring Complex Materials
NASA Astrophysics Data System (ADS)
Amis, Eric J.
2004-03-01
Combinatorial and high-throughput methods have changed the paradigm of pharmaceutical synthesis and have begun to have a similar impact on materials science research. Already there are examples of combinatorial methods used for inorganic materials, catalysts, and polymer synthesis. For many investigations the primary goal has been discovery of new material compositions that optimize properties such as phosphorescence or catalytic activity. In the midst of the excitement generated to "make things", another opportunity arises for materials science to "understand things" by using the efficiency of combinatorial methods. We have shown that combinatorial methods hold potential for rapid and systematic generation of experimental data over the multi-parameter space typical of investigations in polymer physics. We have applied the combinatorial approach to studies of polymer thin films, biomaterials, polymer blends, filled polymers, and semicrystalline polymers. By combining library fabrication, high-throughput measurements, informatics, and modeling we can demonstrate validation of the methodology, new observations, and developments toward predictive models. This talk will present some of our latest work with applications to coating stability, multi-component formulations, and nanostructure assembly.
Guaranteed Discrete Energy Optimization on Large Protein Design Problems.
Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas
2015-12-08
In Computational Protein Design (CPD), assuming a rigid backbone and amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree-decomposition to provenly identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10(234). This is achieved on a single core of a standard computing server, requiring a maximum of 66GB RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.
Social Milieu Oriented Routing: A New Dimension to Enhance Network Security in WSNs.
Liu, Lianggui; Chen, Li; Jia, Huiling
2016-02-19
In large-scale wireless sensor networks (WSNs), in order to enhance network security, it is crucial for a trustor node to perform social milieu oriented routing to a target trustee node to carry out trust evaluation. This challenging social milieu oriented routing with more than one end-to-end Quality of Trust (QoT) constraint has been proved NP-complete. Heuristic algorithms with polynomial and pseudo-polynomial-time complexities are often used to deal with this challenging problem. However, existing solutions cannot guarantee the efficiency of the search; that is, they can hardly avoid settling on partial (local) optima during the search process. Quantum annealing (QA) uses delocalization and tunneling to avoid falling into local minima without sacrificing execution time, and has proved a promising approach to many optimization problems in the recent literature. In this paper, for the first time, with the help of a novel approach, namely configuration path-integral Monte Carlo (CPIMC) simulations, a QA-based optimal social trust path (QA_OSTP) selection algorithm is applied to the extraction of the optimal social trust path in large-scale WSNs. Extensive experiments have been conducted, and the results demonstrate that QA_OSTP outperforms its heuristic counterparts.
Multiobjective optimization of combinatorial libraries.
Agrafiotis, D K
2002-01-01
Combinatorial chemistry and high-throughput screening have caused a fundamental shift in the way chemists contemplate experiments. Designing a combinatorial library is a controversial art that involves a heterogeneous mix of chemistry, mathematics, economics, experience, and intuition. Although there seems to be little agreement as to what constitutes an ideal library, one thing is certain: only one property or measure seldom defines the quality of the design. In most real-world applications, a good experiment requires the simultaneous optimization of several, often conflicting, design objectives, some of which may be vague and uncertain. In this paper, we discuss a class of algorithms for subset selection rooted in the principles of multiobjective optimization. Our approach is to employ an objective function that encodes all of the desired selection criteria, and then use a simulated annealing or evolutionary approach to identify the optimal (or a nearly optimal) subset from among the vast number of possibilities. Many design criteria can be accommodated, including diversity, similarity to known actives, predicted activity and/or selectivity determined by quantitative structure-activity relationship (QSAR) models or receptor binding models, enforcement of certain property distributions, reagent cost and availability, and many others. The method is robust, convergent, and extensible, offers the user full control over the relative significance of the various objectives in the final design, and permits the simultaneous selection of compounds from multiple libraries in full- or sparse-array format.
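A minimal simulated-annealing subset selection in the spirit described here, scalarizing two objectives (spread in a descriptor space versus reagent cost) with hypothetical weights over a synthetic compound pool; a sketch, not the paper's algorithm.

```python
import math
import random

# Hypothetical compound pool: (x, y) descriptor coordinates and a cost.
POOL = [((i * 0.37) % 1.0, (i * 0.71) % 1.0, 1.0 + (i % 3)) for i in range(20)]
K = 5
W_DIV, W_COST = 1.0, 0.05  # relative weights of the two objectives

def score(sel):
    """Scalarized objective: reward minimum pairwise spread, penalize cost."""
    pts = [POOL[i][:2] for i in sel]
    div = min(math.dist(p, q) for i, p in enumerate(pts) for q in pts[i + 1:])
    cost = sum(POOL[i][2] for i in sel)
    return W_DIV * div - W_COST * cost  # higher is better

def anneal(iters=3000, t0=1.0, cooling=0.999, seed=7):
    rng = random.Random(seed)
    sel = set(rng.sample(range(len(POOL)), K))
    best, best_s, t = set(sel), score(sel), t0
    for _ in range(iters):
        out_i = rng.choice(sorted(sel))                       # swap one compound
        in_i = rng.choice([i for i in range(len(POOL)) if i not in sel])
        cand = (sel - {out_i}) | {in_i}
        d = score(cand) - score(sel)
        if d >= 0 or rng.random() < math.exp(d / t):          # Metropolis step
            sel = cand
            if score(sel) > best_s:
                best, best_s = set(sel), score(sel)
        t *= cooling
    return best, best_s
```

Extra criteria (similarity to actives, QSAR predictions, property distributions) would simply become additional weighted terms inside `score`.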
A Unified View of Global Instability of Compressible Flow over Open Cavities
2006-03-28
in terms of the number of steps realized by the DNS code per second (S/sec) as the number of processors (np) increases. For this comparison the "new...computations). It may clearly be seen that both solutions performed comparably well at low numbers of processors; however, as np increased, the Myrinet...has subsequently been designed, hard-coded and validated at nu modelling. Design characteristics of the code have been a) high-accuracy, b
Multichromosomal median and halving problems under different genomic distances
Tannier, Eric; Zheng, Chunfang; Sankoff, David
2009-01-01
Background Genome median and genome halving are combinatorial optimization problems that aim at reconstructing ancestral genomes as well as the evolutionary events leading from the ancestor to extant species. Exploring complexity issues is a first step towards devising efficient algorithms. The complexity of the median problem for unichromosomal genomes (permutations) has been settled for both the breakpoint distance and the reversal distance. Although the multichromosomal case has often been assumed to be a simple generalization of the unichromosomal case, it is also a relaxation so that complexity in this context does not follow from existing results, and is open for all distances. Results We settle here the complexity of several genome median and halving problems, including a surprising polynomial result for the breakpoint median and guided halving problems in genomes with circular and linear chromosomes, showing that the multichromosomal problem is actually easier than the unichromosomal problem. Still other variants of these problems are NP-complete, including the DCJ double distance problem, previously mentioned as an open question. We list the remaining open problems. Conclusion This theoretical study clears up a wide swathe of the algorithmical study of genome rearrangements with multiple multichromosomal genomes. PMID:19386099
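For the unichromosomal (permutation) case mentioned here, the breakpoint distance is straightforward to compute; the sketch below counts adjacencies of one linear, unsigned genome that are missing from another, using artificial end markers.

```python
def adjacencies(perm):
    """Unordered adjacent pairs of a linear, unsigned genome,
    with artificial end markers 0 and n+1."""
    ext = [0] + list(perm) + [len(perm) + 1]
    return {frozenset(p) for p in zip(ext, ext[1:])}

def breakpoint_distance(p, q):
    """Number of adjacencies of p that are absent from q."""
    return len(adjacencies(p) - adjacencies(q))

# identity vs a reversal of the middle block
print(breakpoint_distance([1, 2, 3, 4, 5], [1, 4, 3, 2, 5]))  # → 2
```

The breakpoint median problem asks for a genome minimizing the sum of such distances to three given genomes; the paper's results concern how the multichromosomal generalization changes the complexity of that question.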
Wang, Kaikai; He, Junhui
2018-04-04
Thin films that integrate antireflective and antibacterial dual functions are not only scientifically interesting but also highly desired in many practical applications. Unfortunately, very few studies have been devoted to the preparation of thin films with both antireflective and antibacterial properties. In this study, mesoporous silica (MSiO 2 ) thin films with uniformly dispersed Ag nanoparticles (Ag NPs) were prepared through a one-pot process, which simultaneously shows high transmittance, excellent antibacterial activity, and mechanical robustness. The optimal thin-film-coated glass substrate demonstrates a maximum transmittance of 98.8% and an average transmittance of 97.1%, respectively, in the spectral range of 400-800 nm. The growth and multiplication of typical bacteria, Escherichia coli ( E. coli), were effectively inhibited on the coated glass. Pencil hardness test, tape adhesion test, and sponge washing test showed favorable mechanical robustness with 5H pencil hardness, 5A grade adhesion, and functional durability of the coating, which promises great potential for applications in various touch screens, windows for hygiene environments, and optical apparatuses for medical uses such as endoscope, and so on.
Microneedles: A New Frontier in Nanomedicine Delivery.
Larrañeta, Eneko; McCrudden, Maelíosa T C; Courtenay, Aaron J; Donnelly, Ryan F
2016-05-01
This review aims to concisely chart the development of two individual research fields, namely nanomedicines, with specific emphasis on nanoparticles (NP) and microparticles (MP), and microneedle (MN) technologies, which have, in the recent past, been exploited in combinatorial approaches for the efficient delivery of a variety of medicinal agents across the skin. This is an emerging and exciting area of pharmaceutical sciences research within the remit of transdermal drug delivery and as such will undoubtedly continue to grow with the emergence of new formulation and fabrication methodologies for particles and MN. Firstly, the fundamental aspects of skin architecture and structure are outlined, with particular reference to their influence on NP and MP penetration. Following on from this, a variety of different particles are described, as are the diverse range of MN modalities currently under development. The review concludes by highlighting some of the novel delivery systems which have been described in the literature exploiting these two approaches and directs the reader towards emerging uses for nanomedicines in combination with MN.
Scheduling Non-Preemptible Jobs to Minimize Peak Demand
Yaw, Sean; Mumey, Brendan
2017-10-28
Our paper examines an important problem in smart grid energy scheduling: peaks in power demand are proportionally more expensive to generate and provision for. The issue is exacerbated in local microgrids that do not benefit from the aggregate smoothing experienced by large grids. Demand-side scheduling can reduce these peaks by taking advantage of the fact that there is often flexibility in job start times. We focus attention on the case where jobs are non-preemptible, meaning that once started, they run to completion. The associated optimization problem is called the peak demand minimization problem, and has previously been shown to be NP-hard. Our results include an optimal fixed-parameter tractable algorithm, a polynomial-time approximation algorithm, and an effective heuristic that can also be used in an online setting of the problem. Simulation results show that these methods can reduce peak demand by up to 50% versus on-demand scheduling for household power jobs.
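One way the flexible start times can be exploited is a greedy placement heuristic (a sketch in the spirit of, but not identical to, the paper's algorithms): schedule the least-flexible job first, and start each job at the time where it adds least to the running peak.

```python
def schedule_jobs(jobs, horizon):
    """Greedy peak-minimizing placement over discrete time slots.
    jobs: list of (demand, duration, earliest_start, latest_start).
    Returns (start times by job index, resulting peak load)."""
    load = [0.0] * horizon
    # least flexible first: smallest start-time window
    order = sorted(range(len(jobs)), key=lambda i: jobs[i][3] - jobs[i][2])
    starts = {}
    for i in order:
        demand, dur, es, ls = jobs[i]
        def peak_if(s):
            # peak over this job's own interval if it started at s
            return max(load[t] + demand for t in range(s, s + dur))
        s = min(range(es, ls + 1), key=peak_if)
        starts[i] = s
        for t in range(s, s + dur):
            load[t] += demand
    return starts, max(load)
```

On the toy instance in the test, two unit-demand jobs shift away from an inflexible demand-2 job, giving peak 2 instead of the on-demand peak of 4.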
Extremal Optimization for Quadratic Unconstrained Binary Problems
NASA Astrophysics Data System (ADS)
Boettcher, S.
We present an implementation of τ-EO for quadratic unconstrained binary optimization (QUBO) problems. To this end, we transform QUBO from its conventional Boolean representation into a spin glass with a random external field on each site. These fields tend to be rather large compared to the typical coupling, presenting EO with a challenging two-scale problem: exploring smaller differences in couplings effectively while sufficiently aligning with those strong external fields. However, we also find a simple solution to that problem, which indicates that those external fields tilt the energy landscape to such a degree that global minima become easier to find than those of spin glasses without (or with very small) fields. We explore the impact of the weight distributions of QUBO formulations from the operations research literature and analyze their meaning in spin-glass language. This is significant because QUBO problems are considered among the main contenders for NP-hard problems that could be solved efficiently on a quantum computer such as D-Wave.
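The Boolean-to-spin-glass transformation referred to here is the standard substitution x_i = (1 + s_i)/2, which turns a QUBO energy into Ising couplings J, local fields h, and a constant offset; the random fields arise from the linear (diagonal) QUBO terms. A sketch:

```python
def qubo_to_ising(Q):
    """Map E(x) = sum_{i<=j} Q[i][j] * x_i * x_j  (x in {0,1}, Q upper
    triangular) to E(s) = sum_{i<j} J[i][j] s_i s_j + sum_i h[i] s_i + offset
    (s in {-1,+1}) via x_i = (1 + s_i) / 2."""
    n = len(Q)
    h = [0.0] * n
    J = [[0.0] * n for _ in range(n)]
    offset = 0.0
    for i in range(n):
        # linear term: Q_ii x_i = Q_ii/2 + (Q_ii/2) s_i
        h[i] += Q[i][i] / 2.0
        offset += Q[i][i] / 2.0
        for j in range(i + 1, n):
            # quadratic term: Q_ij x_i x_j = Q_ij/4 (1 + s_i + s_j + s_i s_j)
            J[i][j] += Q[i][j] / 4.0
            h[i] += Q[i][j] / 4.0
            h[j] += Q[i][j] / 4.0
            offset += Q[i][j] / 4.0
    return h, J, offset
```

Note how every off-diagonal weight Q_ij contributes to both fields h_i and h_j, which is why the resulting fields can dwarf the individual couplings J_ij = Q_ij/4.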
Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment
NASA Astrophysics Data System (ADS)
Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.
2017-03-01
Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper presents a heuristic approach to lot streaming based on critical machine considerations for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine; the second-stage machine is considered critical for valid reasons, and such problems are known to be NP-hard. A mathematical model was developed for the selected problem, and simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining the optimal lot streaming schedule, and eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments: all possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule consistently in all eleven cases. An identification procedure for selecting the best lot streaming strategy is also suggested.
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
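The tractable case underlying the tree-based bounds discussed here — exact max-sum on a chain — is solvable by Viterbi-style dynamic programming. A self-contained sketch:

```python
def max_sum_chain(unary, pairwise):
    """Exact MAP for a chain-structured max-sum problem.
    unary[t][k]: score of label k at node t;
    pairwise[t][k][l]: score of (label k at node t, label l at node t+1).
    Returns (optimal value, optimal labeling)."""
    n = len(unary)
    best = list(unary[0])   # best score of a prefix ending in each label
    back = []               # backpointers per node
    for t in range(1, n):
        new, ptr = [], []
        for l in range(len(unary[t])):
            cand = [best[k] + pairwise[t - 1][k][l] for k in range(len(best))]
            k_star = max(range(len(cand)), key=cand.__getitem__)
            new.append(cand[k_star] + unary[t][l])
            ptr.append(k_star)
        best, back = new, back + [ptr]
    l = max(range(len(best)), key=best.__getitem__)
    labels = [l]
    for ptr in reversed(back):   # trace the optimum backwards
        l = ptr[l]
        labels.append(l)
    return max(best), labels[::-1]
```

On a general graph the same recursion no longer applies, which is exactly why upper bounds built from combinations of trees (and their minimization by equivalent transformations) are useful.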
NASA Astrophysics Data System (ADS)
Kassa, Semu Mitiku; Tsegay, Teklay Hailay
2017-08-01
Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of the hierarchy. Such problems are common in management, engineering design and decision-making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm is proposed by applying a fuzzy goal programming approach and by reformulating the fractional constraints as equivalent but non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. Numerical results on various illustrative examples demonstrate that the proposed algorithm is very promising and can also be used to solve larger as well as n-level problems of similar structure.
NASA Astrophysics Data System (ADS)
Chaves-González, José M.; Vega-Rodríguez, Miguel A.; Gómez-Pulido, Juan A.; Sánchez-Pérez, Juan M.
2011-08-01
This article analyses the use of a novel parallel evolutionary strategy to solve complex optimization problems. The work developed here has been focused on a relevant real-world problem from the telecommunication domain to verify the effectiveness of the approach. The problem, known as frequency assignment problem (FAP), basically consists of assigning a very small number of frequencies to a very large set of transceivers used in a cellular phone network. Real data FAP instances are very difficult to solve due to the NP-hard nature of the problem, therefore using an efficient parallel approach which makes the most of different evolutionary strategies can be considered as a good way to obtain high-quality solutions in short periods of time. Specifically, a parallel hyper-heuristic based on several meta-heuristics has been developed. After a complete experimental evaluation, results prove that the proposed approach obtains very high-quality solutions for the FAP and beats any other result published.
NASA Astrophysics Data System (ADS)
Hosomi, Kei; Ozaki, Koichi; Nishiyama, Fumitaka; Takahiro, Katsumi
2018-01-01
Silver nanoparticles (Ag NPs) tarnish easily upon exposure to ambient air, and eventually lose their ability to act as a plasmonic sensor via weakened localized surface plasmon resonance (LSPR). We have demonstrated the enhancement, by Ar plasma exposure, of the plasmonic sensitivity of tarnished Ag NP aggregates to vapors of volatile organic compounds (VOCs) such as ethanol and butanol. The response of Ag NP aggregates to the VOC vapors was examined by measuring the change in optical extinction spectra before and after exposure to the vapors. The sensitivity of Ag NP aggregates decreased gradually when stored in ambient air. The performance of tarnished Ag NPs for ethanol sensing was recovered by exposure to argon (Ar) plasma for 15 s. The reduction from oxidized Ag to metallic Ag was recognized, while morphological change was hardly noticeable after the plasma exposure. We conclude, therefore, that a compositional change rather than a morphological change on the Ag NP surfaces enhances the sensing ability of tarnished Ag NP aggregates to the VOC vapors.
Tuning the physical properties of amorphous In–Zn–Sn–O thin films using combinatorial sputtering
Ndione, Paul F.; Zakutayev, A.; Kumar, M.; ...
2016-12-05
Transparent conductive oxides and amorphous oxide semiconductors are important materials for many modern technologies. Here, we explore the ternary indium zinc tin oxide (IZTO) using combinatorial synthesis and spatially resolved characterization. The electrical conductivity, work function, absorption onset, mechanical hardness, and elastic modulus of the optically transparent (>85%) amorphous IZTO thin films were found to be in the range of 10–2415 S/cm, 4.6–5.3 eV, 3.20–3.34 eV, 9.0–10.8 GPa, and 111–132 GPa, respectively, depending on the cation composition and the deposition conditions. Furthermore, this study enables control of IZTO performance over a broad range of cation compositions.
USDA-ARS?s Scientific Manuscript database
Ant Colony Optimization (ACO) refers to the family of algorithms inspired by the behavior of real ants and used to solve combinatorial problems such as the Traveling Salesman Problem (TSP).Optimal Foraging Theory (OFT) is an evolutionary principle wherein foraging organisms or insect parasites seek ...
Processing time tolerance-based ACO algorithm for solving job-shop scheduling problem
NASA Astrophysics Data System (ADS)
Luo, Yabo; Waden, Yongo P.
2017-06-01
The Job Shop Scheduling Problem (JSSP) is an NP-hard problem whose uncertainty and complexity cannot be handled by linear methods, so current studies concentrate mainly on applying different ways of improving heuristics for optimizing the JSSP. However, efficient optimization of the JSSP still suffers from low efficiency and poor reliability, which can easily trap the optimization process in local optima. To address this, a study of an Ant Colony Optimization (ACO) algorithm combined with constraint handling tactics is carried out in this paper. The work is subdivided into three parts: (1) analysis of processing time tolerance-based constraint features in the JSSP, performed with a constraint satisfaction model; (2) satisfaction of the constraints using consistency technology and a constraint spreading algorithm to improve the performance of the ACO algorithm, from which the JSSP model based on the improved ACO algorithm is constructed; and (3) demonstration of the effectiveness of the proposed method, in terms of reliability and efficiency, through comparative experiments on benchmark problems. The results obtained by the proposed method are better, and the technique can be applied to optimizing the JSSP.
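The ACO core can be sketched on a stripped-down stand-in for a shop problem — here a hypothetical single-machine total-tardiness instance rather than a full JSSP: ants sample job sequences from position-to-job pheromones, and the incumbent best solution reinforces them.

```python
import random

# Hypothetical single-machine instance: processing times and due dates.
PROC = [4, 2, 6, 3, 5]
DUE = [5, 4, 20, 7, 12]

def total_tardiness(seq):
    t, tard = 0, 0
    for j in seq:
        t += PROC[j]
        tard += max(0, t - DUE[j])
    return tard

def aco(n_ants=10, iters=100, rho=0.1, seed=3):
    rng = random.Random(seed)
    n = len(PROC)
    tau = [[1.0] * n for _ in range(n)]   # pheromone: position -> job
    best, best_seq = float("inf"), None
    for _ in range(iters):
        for _ in range(n_ants):
            remaining, seq = set(range(n)), []
            for pos in range(n):
                jobs = sorted(remaining)
                w = [tau[pos][j] for j in jobs]      # pheromone-biased choice
                j = rng.choices(jobs, weights=w)[0]
                remaining.remove(j)
                seq.append(j)
            cost = total_tardiness(seq)
            if cost < best:
                best, best_seq = cost, seq
        # evaporate, then reinforce the incumbent best sequence
        tau = [[(1 - rho) * v for v in row] for row in tau]
        for pos, j in enumerate(best_seq):
            tau[pos][j] += 1.0 / (1.0 + best)
    return best, best_seq
```

The paper's constraint handling would enter where ants pick the next job: consistency checks would prune candidates in `jobs` before the weighted draw.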
NASA Astrophysics Data System (ADS)
Guo, Peng; Cheng, Wenming; Wang, Yi
2014-10-01
The quay crane scheduling problem (QCSP) determines the handling sequence of tasks at ship bays by a set of cranes assigned to a container vessel such that the vessel's service time is minimized. A number of heuristics or meta-heuristics have been proposed to obtain the near-optimal solutions to overcome the NP-hardness of the problem. In this article, the idea of generalized extremal optimization (GEO) is adapted to solve the QCSP with respect to various interference constraints. The resulting GEO is termed the modified GEO. A randomized searching method for neighbouring task-to-QC assignments to an incumbent task-to-QC assignment is developed in executing the modified GEO. In addition, a unidirectional search decoding scheme is employed to transform a task-to-QC assignment to an active quay crane schedule. The effectiveness of the developed GEO is tested on a suite of benchmark problems introduced by K.H. Kim and Y.M. Park in 2004 (European Journal of Operational Research, Vol. 156, No. 3). Compared with other well-known existing approaches, the experiment results show that the proposed modified GEO is capable of obtaining the optimal or near-optimal solution in a reasonable time, especially for large-sized problems.
Estimates of the absolute error and a scheme for an approximate solution to scheduling problems
NASA Astrophysics Data System (ADS)
Lazarev, A. A.
2009-02-01
An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems, such as minimizing the maximum lateness on one or several machines and minimizing the makespan. The concept of a metric (distance) between instances of a problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
NASA Astrophysics Data System (ADS)
Bai, Danyu
2015-08-01
This paper discusses the flow shop scheduling problem to minimise the total quadratic completion time (TQCT) with release dates in offline and online environments. For this NP-hard problem, the investigation is focused on the performance of two online algorithms based on the Shortest Processing Time among Available jobs rule. Theoretical results indicate the asymptotic optimality of the algorithms as the problem scale is sufficiently large. To further enhance the quality of the original solutions, the improvement scheme is provided for these algorithms. A new lower bound with performance guarantee is provided, and computational experiments show the effectiveness of these heuristics. Moreover, several results of the single-machine TQCT problem with release dates are also obtained for the deduction of the main theorem.
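The "Shortest Processing Time among Available jobs" dispatching rule the algorithms are built on is simple to state in code; the sketch below applies it to the single-machine case with release dates and reports the total quadratic completion time.

```python
import heapq

def spt_available(jobs):
    """Online list scheduling on one machine: whenever the machine is free,
    start the shortest *released* job. jobs: list of (release, processing).
    Returns the total quadratic completion time, sum of C_j ** 2."""
    jobs = sorted(jobs)                  # by release date
    t, i, ready, tqct = 0, 0, [], 0
    while i < len(jobs) or ready:
        if not ready and t < jobs[i][0]:
            t = jobs[i][0]               # idle until the next release
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, jobs[i][1])
            i += 1
        p = heapq.heappop(ready)         # shortest available job
        t += p
        tqct += t * t
    return tqct
```

In the flow shop setting the same rule is applied per machine; the paper's asymptotic-optimality result says the gap to the optimum vanishes relative to the objective as instances grow.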
Efficient Bounding Schemes for the Two-Center Hybrid Flow Shop Scheduling Problem with Removal Times
Hidri, Lotfi; Gharbi, Anis; Louly, Mohamed Aly
2014-01-01
We focus on the two-center hybrid flow shop scheduling problem with identical parallel machines and removal times. The job removal time is the required duration to remove it from a machine after its processing. The objective is to minimize the maximum completion time (makespan). A heuristic and a lower bound are proposed for this NP-hard problem. These procedures are based on the optimal solution of the parallel machine scheduling problem with release dates and delivery times. The heuristic is composed of two phases: the first is a constructive phase in which an initial feasible solution is provided, while the second is an improvement phase. Intensive computational experiments confirm the good performance of the proposed procedures. PMID:25610911
NASA Astrophysics Data System (ADS)
Gutin, Gregory; Kim, Eun Jung; Soleimanfallah, Arezou; Szeider, Stefan; Yeo, Anders
The NP-hard general factor problem asks, given a graph and for each vertex a list of integers, whether the graph has a spanning subgraph where each vertex has a degree that belongs to its assigned list. The problem remains NP-hard even if the given graph is bipartite with partition U ⊎ V, and each vertex in U is assigned the list {1}; this subproblem appears in the context of constraint programming as the consistency problem for the extended global cardinality constraint. We show that this subproblem is fixed-parameter tractable when parameterized by the size of the second partite set V. More generally, we show that the general factor problem for bipartite graphs, parameterized by |V |, is fixed-parameter tractable as long as all vertices in U are assigned lists of length 1, but becomes W[1]-hard if vertices in U are assigned lists of length at most 2. We establish fixed-parameter tractability by reducing the problem instance to a bounded number of acyclic instances, each of which can be solved in polynomial time by dynamic programming.
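For intuition, the general factor problem can be checked by brute force on toy instances; the lists in the test echo the bipartite subproblem where every vertex of U is assigned the list {1}.

```python
from itertools import combinations

def has_general_factor(n, edges, lists):
    """Brute force over edge subsets: does a spanning subgraph exist in
    which every vertex's degree lies in its assigned list? Exponential --
    the problem is NP-hard in general -- but fine for toy instances."""
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            deg = [0] * n
            for u, v in sub:
                deg[u] += 1
                deg[v] += 1
            if all(deg[v] in lists[v] for v in range(n)):
                return True
    return False
```

The fixed-parameter algorithm replaces this enumeration with dynamic programming over a bounded number of acyclic instances, so the running time depends exponentially only on |V|, not on the graph size.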
Accurate multiple sequence-structure alignment of RNA sequences using combinatorial optimization.
Bauer, Markus; Klau, Gunnar W; Reinert, Knut
2007-07-27
The discovery of functional non-coding RNA sequences has led to an increasing interest in algorithms related to RNA analysis. Traditional sequence alignment algorithms, however, fail at computing reliable alignments of low-homology RNA sequences. The spatial conformation of RNA sequences largely determines their function, and therefore RNA alignment algorithms have to take structural information into account. We present a graph-based representation for sequence-structure alignments, which we model as an integer linear program (ILP). We sketch how we compute an optimal or near-optimal solution to the ILP using methods from combinatorial optimization, and present results on a recently published benchmark set for RNA alignments. The implementation of our algorithm yields better alignments in terms of two published scores than the other programs that we tested: This is especially the case with an increasing number of input sequences. Our program LARA is freely available for academic purposes from http://www.planet-lisa.net.
A gradient system solution to Potts mean field equations and its electronic implementation.
Urahama, K; Ueno, S
1993-03-01
A gradient system solution method is presented for solving Potts mean field equations for combinatorial optimization problems subject to winner-take-all constraints. In the proposed solution method the optimum solution is searched by using gradient descent differential equations whose trajectory is confined within the feasible solution space of optimization problems. This gradient system is proven theoretically to always produce a legal local optimum solution of combinatorial optimization problems. An elementary analog electronic circuit implementing the presented method is designed on the basis of current-mode subthreshold MOS technologies. The core constituent of the circuit is the winner-take-all circuit developed by Lazzaro et al. Correct functioning of the presented circuit is exemplified with simulations of circuits implementing the scheme for solving shortest-path problems.
Xiang, X D
Combinatorial materials synthesis methods and high-throughput evaluation techniques have been developed to accelerate the process of materials discovery and optimization and phase-diagram mapping. Analogous to integrated circuit chips, integrated materials chips containing thousands of discrete different compositions or continuous phase diagrams, often in the form of high-quality epitaxial thin films, can be fabricated and screened for interesting properties. A microspot x-ray method, various optical measurement techniques, and a novel evanescent microwave microscope have been used to characterize the structural, optical, magnetic, and electrical properties of samples on the materials chips. These techniques are routinely used to discover/optimize and map phase diagrams of ferroelectric, dielectric, optical, magnetic, and superconducting materials.
Chandran, Parwathy; Riviere, Jim E; Monteiro-Riviere, Nancy A
2017-05-01
This study investigated the role of nanoparticle size and surface chemistry on biocorona composition and its effect on uptake, toxicity and cellular responses in human umbilical vein endothelial cells (HUVEC), employing 40 and 80 nm gold nanoparticles (AuNP) with branched polyethyleneimine (BPEI), lipoic acid (LA) and polyethylene glycol (PEG) coatings. Proteomic analysis identified 59 hard corona proteins among the various AuNP, revealing largely surface chemistry-dependent signature adsorbomes exhibiting human serum albumin (HSA) abundance. Size distribution analysis revealed the relative instability and aggregation inducing potential of bare and corona-bound BPEI-AuNP, over LA- and PEG-AuNP. Circular dichroism analysis showed surface chemistry-dependent conformational changes of proteins binding to AuNP. Time-dependent uptake of bare, plasma corona (PC) and HSA corona-bound AuNP (HSA-AuNP) showed significant reduction in uptake with PC formation. Cell viability studies demonstrated dose-dependent toxicity of BPEI-AuNP. Transcriptional profiling studies revealed 126 genes, from 13 biological pathways, to be differentially regulated by 40 nm bare and PC-bound BPEI-AuNP (PC-BPEI-AuNP). Furthermore, PC formation relieved the toxicity of cationic BPEI-AuNP by modulating expression of genes involved in DNA damage and repair, heat shock response, mitochondrial energy metabolism, oxidative stress and antioxidant response, and ER stress and unfolded protein response cascades, which were aberrantly expressed in bare BPEI-AuNP-treated cells. NP surface chemistry is shown to play the dominant role over size in determining the biocorona composition, which in turn modulates cell uptake, and biological responses, consequently defining the potential safety and efficacy of nanoformulations.
Turkett, Jeremy A; Bicker, Kevin L
2017-04-10
Growing prevalence of antibiotic resistant bacterial infections necessitates novel antimicrobials, which could be rapidly identified from combinatorial libraries. We report the use of the peptoid library agar diffusion (PLAD) assay to screen peptoid libraries against the ESKAPE pathogens, including the optimization of assay conditions for each pathogen. Work presented here focuses on the tailoring of combinatorial peptoid library design through a detailed study of how peptoid lipophilicity relates to antibacterial potency and mammalian cell toxicity. The information gleaned from this optimization was then applied using the aforementioned screening method to examine the relative potency of peptoid libraries against Staphylococcus aureus, Acinetobacter baumannii, and Enterococcus faecalis prior to and following functionalization with long alkyl tails. The data indicate that overall peptoid hydrophobicity and not simply alkyl tail length is strongly correlated with mammalian cell toxicity. Furthermore, this work demonstrates the utility of the PLAD assay in rapidly evaluating the effect of molecular property changes in similar libraries.
Two-Stage orders sequencing system for mixed-model assembly
NASA Astrophysics Data System (ADS)
Zemczak, M.; Skolud, B.; Krenczyk, D.
2015-11-01
In the paper, the authors focus on the NP-hard problem of orders sequencing, formulated similarly to the Car Sequencing Problem (CSP). The object of the research is an assembly line in an automotive industry company, on which a few different models of products, each in a certain number of versions, are assembled on shared resources set in a line. Such a production type is usually referred to as mixed-model production, and it arose from the necessity of manufacturing customized products on the basis of very specific orders from single clients. Producers are nowadays obliged to let each client specify a large number of features of the product they are willing to buy, as competition in the automotive market is intense. Because the problem is NP-hard, only satisfactory solutions are sought in the given time period, as no method for finding the optimal solution is known. Most researchers who applied inexact methods (e.g. evolutionary algorithms) to sequencing problems abandoned the research after the testing phase, as they were not able to obtain reproducible results and encountered problems in assessing the quality of the solutions received. Therefore a new approach to solving the problem, presented in this paper as a sequencing system, is being developed. The sequencing system consists of a set of defined rules implemented in a computer environment. The system works in two stages. The first determines the place in the storage buffer to which particular production orders should be sent. In the second stage, precise sets of sequences are determined for particular parts of the storage buffer and evaluated against given criteria.
NASA Astrophysics Data System (ADS)
Zhuo, Zhao; Cai, Shi-Min; Tang, Ming; Lai, Ying-Cheng
2018-04-01
One of the most challenging problems in network science is to accurately detect communities at distinct hierarchical scales. Most existing methods are based on structural analysis and manipulation, which are NP-hard. We articulate an alternative, dynamical evolution-based approach to the problem. The basic principle is to computationally implement a nonlinear dynamical process on all nodes in the network with a general coupling scheme, creating a networked dynamical system. Under a proper system setting and with an adjustable control parameter, the community structure of the network would "come out" or emerge naturally from the dynamical evolution of the system. As the control parameter is systematically varied, the community hierarchies at different scales can be revealed. As a concrete example of this general principle, we exploit clustered synchronization as a dynamical mechanism through which the hierarchical community structure can be uncovered. In particular, for quite arbitrary choices of the nonlinear nodal dynamics and coupling scheme, decreasing the coupling parameter from the global synchronization regime, in which the dynamical states of all nodes are perfectly synchronized, can lead to a weaker type of synchronization organized as clusters. We demonstrate the existence of optimal choices of the coupling parameter for which the synchronization clusters encode accurate information about the hierarchical community structure of the network. We test and validate our method using a standard class of benchmark modular networks with two distinct hierarchies of communities and a number of empirical networks arising from the real world. Our method is computationally extremely efficient, eliminating completely the NP-hard difficulty associated with previous methods. 
The basic principle of exploiting dynamical evolution to uncover hidden community organizations at different scales represents a game-changing approach to the problem of community detection in complex networks.
Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem
Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue
2015-01-01
As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method of mixed integer programming (MIP) and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for the homogeneous and heterogeneous fleet problems respectively and solve the models with the MIP solver CPLEX. The bus type-based formulation for the heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both datasets our metaheuristic method significantly outperforms the respective state-of-the-art methods. PMID:26176764
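The two-stage minimize-buses-then-distance idea can be illustrated with a generic simulated-annealing skeleton over the trip order. The greedy packing rule, the toy trip format (start time, end time, start location, end location) and all parameter values below are hypothetical stand-ins, not the authors' implementation:

```python
import math
import random

def schedule_cost(order, trips, gap=10):
    """Greedily pack trips, taken in the given order, onto buses.
    A bus can take a trip if it becomes free at least `gap` minutes
    before the trip starts. Returns (buses used, total deadhead)."""
    buses = []          # per bus: (time it becomes free, current location)
    deadhead = 0.0
    for i in order:
        start, end, loc_a, loc_b = trips[i]
        for b, (free, loc) in enumerate(buses):
            if free + gap <= start:
                deadhead += abs(loc - loc_a)   # empty travel to next trip
                buses[b] = (end, loc_b)
                break
        else:
            buses.append((end, loc_b))         # open a new bus
    return len(buses), deadhead

def anneal(trips, iters=3000, t0=5.0, seed=0):
    """Simulated annealing over the trip order; the bus count dominates
    the deadhead distance, mirroring a two-stage objective."""
    rng = random.Random(seed)
    order = list(range(len(trips)))
    scalar = lambda c: c[0] * 1e6 + c[1]       # buses outweigh distance
    cur = schedule_cost(order, trips)
    best, best_order = cur, order[:]
    for k in range(iters):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        cand = schedule_cost(order, trips)
        t = t0 * (1 - k / iters) + 1e-9
        if cand <= cur or rng.random() < math.exp((scalar(cur) - scalar(cand)) / t):
            cur = cand
            if scalar(cand) < scalar(best):
                best, best_order = cand, order[:]
        else:
            order[i], order[j] = order[j], order[i]   # undo the swap
    return best, best_order
```

A real implementation would add the local-search stage and time-window feasibility checks described in the paper; this skeleton only shows the annealing loop and the lexicographic bi-objective.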
Optimization of TiNP/Ti Content for Si3N4/42CrMo Joints Brazed With Ag-Cu-Ti+TiNP Composite Filler
NASA Astrophysics Data System (ADS)
Wang, Tianpeng; Zhang, Jie; Liu, Chunfeng
The Si3N4 ceramic was brazed to 42CrMo steel using a braze modified with TiN particles, and the proportion of TiNp reinforcement and active element Ti was optimized to improve the joint strength. The brazed joints were examined by means of SEM and EDS investigations. Microstructural examination showed that a TiN+Ti5Si3 reaction layer was adjacent to the Si3N4, whereas TiC was formed in the 42CrMo/filler reaction layer. The Ag-Cu-Ti brazing alloy showed intimate bonding with the TiNp, and Cu-Ti intermetallics precipitated in the joint. The strength tests demonstrated that the mechanical properties of the joints first increased and then decreased with increasing TiNp content when a low Ti content (6 wt.%) was supplied. When sufficient Ti (>6 wt.%) was supplied, the joint strength first decreased and then remained stable with increasing TiNp content. The maximum four-point bending strength (221 MPa) was obtained when the contents of TiNp and Ti were 10 vol.% and 6 wt.%, respectively.
Static electric dipole polarizabilities of tri- and tetravalent U, Np, and Pu ions.
Parmar, Payal; Peterson, Kirk A; Clark, Aurora E
2013-11-21
High-quality static electric dipole polarizabilities have been determined for the ground states of the hard-sphere cations of U, Np, and Pu in the III and IV oxidation states. The polarizabilities have been calculated using the numerical finite field technique in a four-component relativistic framework. Methods including Fock-space coupled cluster (FSCC) and Kramers-restricted configuration interaction (KRCI) have been performed in order to account for electron correlation effects. Comparisons between polarizabilities calculated using Dirac-Hartree-Fock (DHF), FSCC, and KRCI methods have been made using both triple- and quadruple-ζ basis sets for U(4+). In addition to the ground state, this study also reports the polarizability data for the first two excited states of U(3+/4+), Np(3+/4+), and Pu(3+/4+) ions at different levels of theory. The values reported in this work are the most accurate to date calculations for the dipole polarizabilities of the hard-sphere tri- and tetravalent actinide ions and may serve as reference values, aiding in the calculation of various electronic and response properties (for example, intermolecular forces, optical properties, etc.) relevant to the nuclear fuel cycle and material science applications.
Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju
2017-01-01
Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligence optimization algorithms, have received much attention in recent years. Both of them have shown outstanding performance in solving NP-hard optimization problems. However, they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary. HS has strong global exploration power but slow convergence. Conversely, TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms to synergistically solve complex optimization problems using a self-adaptive selection strategy. In the HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities, where HS aims mainly to explore the unknown regions and TLBO aims to rapidly exploit high-precision solutions in the known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants and show better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications. PMID:28403224
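For readers unfamiliar with the HS half of the hybrid, a minimal plain Harmony Search sketch minimizing the sphere function looks like the following. This is not the paper's modified HSTLBO; all parameter values are illustrative defaults:

```python
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=0):
    """Minimal Harmony Search: keep a memory of `hms` solutions,
    improvise new ones by mixing memory values (rate hmcr), pitch-
    adjusting them (rate par, width bw) or sampling at random, and
    replace the worst memory member whenever the new one is better."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:               # draw from memory
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:            # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                 # random exploration
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

sphere = lambda x: sum(v * v for v in x)   # toy objective to minimize
```

The TLBO half would add teacher and learner phases over the same memory; the hybrid's self-adaptive switch between the two is the paper's contribution and is not reproduced here.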
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708
Combinatorial Multiobjective Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Crossley, William A.; Martin, Eric T.
2002-01-01
The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm into an N-branch genetic algorithm, then the N-branch GA was compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete / continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time aircraft design to a heavy-weight, short trip-time aircraft design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50 seat passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.
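The two-branch tournament idea generalized in this work can be sketched compactly. This shows only the selection step, not the full GA, and the toy objective pair below is hypothetical:

```python
import random

def two_branch_tournament(pop, objs, rng):
    """Two-branch tournament selection for two objectives to be
    minimized: half of the parent slots are filled by binary
    tournaments on objective 0, half on objective 1, so each objective
    exerts its own selection pressure. The N-branch generalization
    simply runs N such branches, one per objective.
    `objs` maps an individual to a tuple of objective values."""
    parents = []
    half = len(pop) // 2
    for branch in (0, 1):
        for _ in range(half):
            a, b = rng.sample(pop, 2)
            parents.append(a if objs(a)[branch] <= objs(b)[branch] else b)
    return parents

# hypothetical two-objective toy: minimize x^2 and (x - 2)^2
objs = lambda x: (x * x, (x - 2.0) ** 2)
```

Crossover and mutation over the selected parents proceed as in a standard GA; the branch structure is what pushes the population toward different regions of the tradeoff front.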
Optimized Reaction Conditions for Amide Bond Formation in DNA-Encoded Combinatorial Libraries.
Li, Yizhou; Gabriele, Elena; Samain, Florent; Favalli, Nicholas; Sladojevich, Filippo; Scheuermann, Jörg; Neri, Dario
2016-08-08
DNA-encoded combinatorial libraries are increasingly being used as tools for the discovery of small organic binding molecules to proteins of biological or pharmaceutical interest. In the majority of cases, synthetic procedures for the formation of DNA-encoded combinatorial libraries incorporate at least one step of amide bond formation between amino-modified DNA and a carboxylic acid. We investigated reaction conditions and established a methodology by using 1-ethyl-3-(3-(dimethylamino)propyl)carbodiimide, 1-hydroxy-7-azabenzotriazole and N,N'-diisopropylethylamine (EDC/HOAt/DIPEA) in combination, which provided conversions greater than 75% for 423/543 (78%) of the carboxylic acids tested. These reaction conditions were efficient with a variety of primary and secondary amines, as well as with various types of amino-modified oligonucleotides. The reaction conditions, which also worked efficiently over a broad range of DNA concentrations and reaction scales, should facilitate the synthesis of novel DNA-encoded combinatorial libraries.
Exact model reduction of combinatorial reaction networks
Conzelmann, Holger; Fey, Dirk; Gilles, Ernst D
2008-01-01
Background: Receptors and scaffold proteins usually possess a high number of distinct binding domains inducing the formation of large multiprotein signaling complexes. Due to combinatorial reasons the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several millions. Even by including only a limited number of components and binding domains the resulting models are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models. Results: We introduce methods that extend and complete the already introduced approach. For instance, we provide techniques to handle the formation of multi-scaffold complexes as well as receptor dimerization. Furthermore, we discuss a new modeling approach that allows the direct generation of exactly reduced model structures. The developed methods are used to reduce a model of EGF and insulin receptor crosstalk comprising 5,182 ordinary differential equations (ODEs) to a model with 87 ODEs. Conclusion: The methods, presented in this contribution, significantly enhance the available methods to exactly reduce models of combinatorial reaction networks. PMID:18755034
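The combinatorial blow-up described above, and the flavor of an exact reduction, can be illustrated with a toy scaffold model: n binding sites yield 2^n distinguishable species, yet when the sites are independent, n per-site occupancy probabilities describe the system exactly. The independence assumption is a simplification for illustration; the paper's method handles far more general networks:

```python
from itertools import product

def num_microstates(n_sites, states_per_site=2):
    """Every combination of site states is a distinguishable species,
    so the count grows exponentially with the number of binding domains."""
    return states_per_site ** n_sites

def microstate_prob(config, p_bind):
    """For independent sites the full microstate distribution factorizes
    into per-site occupancy levels: n numbers describe all 2^n species
    exactly, which is the essence of an exact model reduction."""
    prob = 1.0
    for occupied, p in zip(config, p_bind):
        prob *= p if occupied else 1.0 - p
    return prob
```

With 20 binding domains the unreduced model already has over a million species, matching the abstract's "several millions" remark for realistic receptor models.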
NASA Astrophysics Data System (ADS)
Wu, Fei; Shao, Shihai; Tang, Youxi
2016-10-01
We consider simultaneous multicast downlink transmit and receive operations on the same frequency band, i.e., full-duplex links between an access point and mobile users. The problem of minimizing the total power of multicast transmit beamforming is considered from the viewpoint of ensuring sufficient suppression of the near-field line-of-sight self-interference and guaranteeing a prescribed minimum signal-to-interference-plus-noise ratio (SINR) at each receiver of the multicast groups. Based on earlier results for multicast group beamforming, the joint problem is easily shown to be NP-hard. A semidefinite relaxation (SDR) technique with a linear-programming power adjustment method is proposed to solve the NP-hard problem. Simulations show that the proposed method is feasible even when the near-field local receive antenna and the far-field mobile user are in the same direction.
Antimicrobial acrylic materials with in situ generated silver nanoparticles.
Oei, James D; Zhao, William W; Chu, Lianrui; DeSilva, Mauris N; Ghimire, Abishek; Rawls, H Ralph; Whang, Kyumin
2012-02-01
Polymethyl methacrylate (PMMA) is widely used to treat traumatic head injuries (cranioplasty) and orthopedic injuries (bone cement), but there is a problem with implant-centered infections. With organisms such as Acinetobacter baumannii and methicillin-resistant Staphylococcus aureus developing resistance to antibiotics, there is a need for novel antimicrobial delivery mechanisms without the risk of developing resistant organisms. The aim was to develop a novel antimicrobial implant material by generating silver nanoparticles (AgNP) in situ in PMMA. All PMMA samples with AgNPs (AgNP-PMMA) released Ag(+) ions in vitro for over 28 days. In vitro antimicrobial assays revealed that these samples (even samples with the slowest release rate) inhibited 99.9% of bacteria for four different bacterial strains. A long-term antimicrobial assay showed a continued antibacterial effect past 28 days. Some AgNP-loaded PMMA groups had Durometer-D hardness (a measure of degree of cure) and modulus comparable to control PMMA, but all experimental groups had slightly lower ultimate transverse strengths. AgNP-PMMA demonstrated a broad-spectrum, intermediate-to-long-term antimicrobial effect with mechanical properties comparable to control PMMA. Current efforts are focused on further improving mechanical properties by reducing AgNP loading and assessing fatigue properties. Copyright © 2011 Wiley Periodicals, Inc.
Dhat, Shalaka; Pund, Swati; Kokare, Chandrakant; Sharma, Pankaj; Shrivastava, Birendra
2017-01-01
The rapidly evolving technical and regulatory landscape of pharmaceutical product development necessitates risk management with application of multivariate analysis using Process Analytical Technology (PAT) and Quality by Design (QbD). The poorly soluble, high-dose drug satranidazole was optimally nanoprecipitated (SAT-NP) employing principles of Formulation by Design (FbD). The potential risk factors influencing the critical quality attributes (CQA) of SAT-NP were identified using an Ishikawa diagram. A Plackett-Burman screening design was adopted to screen the eight critical formulation and process parameters influencing the mean particle size, zeta potential and dissolution efficiency at 30 min in pH 7.4 dissolution medium. Pareto charts (individual and cumulative) revealed the three most critical factors influencing the CQA of SAT-NP, viz. the aqueous stabilizer (polyvinyl alcohol), the release modifier (Eudragit® S 100) and the volume of the aqueous phase. The levels of these three critical formulation attributes were optimized by FbD within the established design space to minimize mean particle size and polydispersity index, and to maximize encapsulation efficiency of SAT-NP. Lenth's and Bayesian analysis along with mathematical modeling of the results allowed identification and quantification of the critical formulation attributes significantly active on the selected CQAs. The optimized SAT-NP exhibited a mean particle size of 216 nm, polydispersity index of 0.250, zeta potential of -3.75 mV and encapsulation efficiency of 78.3%. The product was lyophilized using mannitol to form a readily redispersible powder. X-ray diffraction analysis confirmed the conversion of crystalline SAT to the amorphous form. In vitro release of SAT-NP in gradually pH-changing media showed <20% release at pH 1.2 and pH 6.8 over 5 h, while complete release (>95%) occurred at pH 7.4 in the next 3 h, indicative of burst release after a lag time.
This investigation demonstrated effective application of risk management and QbD tools in developing site-specific release SAT-NP by nanoprecipitation. Copyright © 2016 Elsevier B.V. All rights reserved.
A Discriminative Sentence Compression Method as Combinatorial Optimization Problem
NASA Astrophysics Data System (ADS)
Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki
In the study of automatic summarization, the main research topic used to be `important sentence extraction', but nowadays `sentence compression' is a hot research topic. Conventional sentence compression methods usually transform a given sentence into a parse tree or a dependency tree and modify it to get a shorter sentence. However, this approach is sometimes too rigid. In this paper, we regard sentence compression as a combinatorial optimization problem that extracts an optimal subsequence of words. Hori et al. also proposed a similar method, but they used only a small number of features and their weights were tuned by hand. We introduce a large number of features such as part-of-speech bigrams and word position in the sentence. Furthermore, we train the system by discriminative learning. According to our experiments, our method obtained a better score than other methods with statistical significance.
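Extracting an optimal subsequence of words can be sketched as a small dynamic program. The scoring functions below are hypothetical stand-ins for the paper's discriminatively trained features:

```python
def best_subsequence(words, k, unigram, bigram):
    """Compression as subsequence extraction: pick an optimal length-k
    subsequence of `words`, scoring every kept word (unigram) and every
    adjacent kept pair (bigram).
    dp[j][i] = best score of a length-j subsequence ending at word i."""
    n = len(words)
    NEG = float("-inf")
    dp = [[NEG] * n for _ in range(k + 1)]
    back = [[None] * n for _ in range(k + 1)]
    for i in range(n):
        dp[1][i] = unigram(words[i])
    for j in range(2, k + 1):
        for i in range(n):
            for prev in range(i):
                if dp[j - 1][prev] == NEG:
                    continue
                s = dp[j - 1][prev] + unigram(words[i]) + bigram(words[prev], words[i])
                if s > dp[j][i]:
                    dp[j][i], back[j][i] = s, prev
    end = max(range(n), key=lambda i: dp[k][i])
    seq, i, j = [], end, k
    while i is not None:           # follow back-pointers to recover words
        seq.append(words[i])
        i, j = back[j][i], j - 1
    return list(reversed(seq)), dp[k][end]
```

The real system additionally learns the feature weights discriminatively and handles variable output lengths; the DP above fixes k for simplicity.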
A set partitioning reformulation for the multiple-choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Voß, Stefan; Lalla-Ruiz, Eduardo
2016-05-01
The Multiple-choice Multidimensional Knapsack Problem (MMKP) is a well-known NP-hard combinatorial optimization problem that has received a lot of attention from the research community as it can be easily translated to several real-world problems arising in areas such as allocating resources, reliability engineering, cognitive radio networks, cloud computing, etc. In this regard, an exact model that is able to provide high-quality feasible solutions for solving it or being partially included in algorithmic schemes is desirable. The MMKP basically consists of finding a subset of objects that maximizes the total profit while observing some capacity restrictions. In this article a reformulation of the MMKP as a set partitioning problem is proposed to allow for new insights into modelling the MMKP. The computational experimentation provides new insights into the problem itself and shows that the new model is able to improve on the best of the known results for some of the most common benchmark instances.
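For very small instances, the MMKP as described (exactly one item per class, multidimensional capacities) can be solved by plain enumeration, which makes the problem definition concrete. This brute force is illustrative only and unusable at benchmark scale:

```python
from itertools import product

def mmkp_brute_force(classes, capacities):
    """Exact MMKP by enumeration (exponential, so tiny instances only):
    choose exactly one (profit, weights) item per class and maximize
    total profit subject to the multidimensional capacity limits."""
    best_profit, best_choice = None, None
    for choice in product(*classes):
        loads = [sum(item[1][d] for item in choice)
                 for d in range(len(capacities))]
        if all(l <= c for l, c in zip(loads, capacities)):
            profit = sum(item[0] for item in choice)
            if best_profit is None or profit > best_profit:
                best_profit, best_choice = profit, choice
    return best_profit, best_choice
```

The set-partitioning reformulation replaces this explicit enumeration with columns in an exact model; the one-item-per-class constraint above is exactly the "multiple choice" part that makes the MMKP harder than the plain multidimensional knapsack.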
Hierarchical heuristic search using a Gaussian mixture model for UAV coverage planning.
Lin, Lanny; Goodrich, Michael A
2014-12-01
During unmanned aerial vehicle (UAV) search missions, efficient use of UAV flight time requires flight paths that maximize the probability of finding the desired subject. The probability of detecting the desired subject based on UAV sensor information can vary in different search areas due to environment elements like varying vegetation density or lighting conditions, making it likely that the UAV can only partially detect the subject. This adds another dimension of complexity to the already difficult (NP-hard) problem of finding an optimal search path. We present a new class of algorithms that account for partial detection in the form of a task difficulty map and produce paths that approximate the payoff of optimal solutions. The algorithms use the mode goodness ratio heuristic, which uses a Gaussian mixture model to prioritize search subregions. The algorithms search for effective paths through the parameter space at different levels of resolution. We compare the performance of the new algorithms against two published algorithms (Bourgault's algorithm and the LHC-GW-CONV algorithm) in simulated searches with three real search and rescue scenarios, and show that the new algorithms significantly outperform the existing algorithms and can yield efficient paths with payoffs near the optimum.
Achieving Crossed Strong Barrier Coverage in Wireless Sensor Network.
Han, Ruisong; Yang, Wei; Zhang, Li
2018-02-10
Barrier coverage has been widely used to detect intrusions in wireless sensor networks (WSNs). It can fulfill the monitoring task while extending the lifetime of the network. Though barrier coverage in WSNs has been intensively studied in recent years, previous research failed to consider the problem of intrusion in transversal directions. If an intruder knows the deployment configuration of sensor nodes, then there is a high probability that it may traverse the whole target region from particular directions, without being detected. In this paper, we introduce the concept of crossed barrier coverage that can overcome this defect. We prove that the problem of finding the maximum number of crossed barriers is NP-hard and integer linear programming (ILP) is used to formulate the optimization problem. The branch-and-bound algorithm is adopted to determine the maximum number of crossed barriers. In addition, we also propose a multi-round shortest path algorithm (MSPA) to solve the optimization problem, which works heuristically to guarantee efficiency while maintaining near-optimal solutions. Several conventional algorithms for finding the maximum number of disjoint strong barriers are also modified to solve the crossed barrier problem and for the purpose of comparison. Extensive simulation studies demonstrate the effectiveness of MSPA.
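The multi-round shortest path idea can be sketched on an unweighted connectivity graph: repeatedly extract a shortest source-to-sink path and block its interior nodes, so each round yields one node-disjoint barrier. This is a simplified stand-in for MSPA, which operates on weighted coverage graphs with the crossing constraints described above:

```python
from collections import deque

def bfs_path(adj, src, dst, blocked):
    """Shortest (fewest-hops) src-to-dst path avoiding blocked nodes,
    or None if no such path exists."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and v not in blocked:
                prev[v] = u
                q.append(v)
    return None

def disjoint_barriers(adj, src, dst):
    """Multi-round sketch: extract node-disjoint barriers between the
    virtual source and sink by repeatedly taking a shortest path and
    blocking its interior (sensor) nodes."""
    blocked, barriers = set(), []
    while True:
        path = bfs_path(adj, src, dst, blocked)
        if path is None:
            return barriers
        barriers.append(path[1:-1])      # keep only the sensor nodes
        blocked.update(path[1:-1])
```

Because a greedy shortest path can block later rounds, this heuristic is near-optimal rather than exact, matching the trade-off the abstract describes for MSPA.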
Rate Adaptive Based Resource Allocation with Proportional Fairness Constraints in OFDMA Systems
Yin, Zhendong; Zhuang, Shufeng; Wu, Zhilu; Ma, Bo
2015-01-01
Orthogonal frequency division multiple access (OFDMA), which is widely used in wireless sensor networks, allows different users to obtain different subcarriers according to their subchannel gains. Therefore, how to assign subcarriers and power to different users to achieve a high system sum rate is an important research area in OFDMA systems. In this paper, the focus is on rate adaptive (RA) based resource allocation with proportional fairness constraints. Since resource allocation is an NP-hard and non-convex optimization problem, a new efficient resource allocation algorithm, ACO-SPA, is proposed, which combines ant colony optimization (ACO) and suboptimal power allocation (SPA). To reduce the computational complexity, the optimization problem of resource allocation in OFDMA systems is separated into two steps. In the first, the ant colony optimization algorithm is performed to solve the subcarrier allocation. Then, the suboptimal power allocation algorithm is developed with strict proportional fairness, based on the principle that the sums of power and the reciprocal of the channel-to-noise ratio for each user in different subchannels are equal. Plenty of simulation results are presented in support. In contrast with root-finding and linear methods, the proposed method provides better performance in solving the proportional resource allocation problem in OFDMA systems. PMID:26426016
A hybrid heuristic for the multiple choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd
2013-08-01
In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
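The multiple choice structure described above (exactly one item per class, several resource dimensions) can be made concrete with a tiny exact solver. The sketch below is an illustrative brute force on a made-up instance, not the article's hybrid heuristic, which is aimed at instances far too large for enumeration.

```python
from itertools import product

def solve_mcmkp(classes, capacities):
    """Exhaustively solve a tiny multiple-choice multidimensional knapsack:
    pick exactly one (profit, weights) item per class so that the summed
    weight vector respects every capacity, maximizing total profit."""
    best_profit, best_choice = None, None
    for choice in product(*classes):              # exactly one item per class
        totals = [sum(ws) for ws in zip(*(item[1] for item in choice))]
        if all(t <= c for t, c in zip(totals, capacities)):
            profit = sum(item[0] for item in choice)
            if best_profit is None or profit > best_profit:
                best_profit, best_choice = profit, choice
    return best_profit, best_choice

# Two classes, two resource dimensions; an item is (profit, (w1, w2)).
classes = [
    [(6, (3, 2)), (4, (1, 1))],
    [(5, (2, 4)), (3, (2, 1))],
]
print(solve_mcmkp(classes, capacities=(5, 5)))  # → (9, ((6, (3, 2)), (3, (2, 1))))
```

Enumeration is exponential in the number of classes, which is exactly why the article resorts to LP relaxations and variable fixing at scale.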
On simulated annealing phase transitions in phylogeny reconstruction.
Strobl, Maximilian A R; Barker, Daniel
2016-08-01
Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparably little attention, for phylogeny or other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose this reflects differences in the search landscape and can serve as a measure for problem difficulty and for suitability of the algorithm's parameters. We discuss application in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry.
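The specific-heat idea can be illustrated on any toy energy landscape. The sketch below is an assumption-laden stand-in, annealing a number-partitioning instance rather than tree space, and records Var(E)/T² at each temperature level, the quantity whose elevation signals a phase-transition-like regime.

```python
import math, random

def anneal_partition(weights, t0=10.0, cooling=0.9, sweeps=300, seed=7):
    """Simulated annealing for number partitioning (a toy NP-hard problem),
    recording Var(E)/T^2 at each temperature -- a simple proxy for the
    specific heat, whose peaks mark phase-transition-like search regimes."""
    rng = random.Random(seed)
    sign = [rng.choice((-1, 1)) for _ in weights]
    total = sum(s * w for s, w in zip(sign, weights))   # energy = |total|
    heat_profile, temp = [], t0
    while temp > 0.05:
        energies = []
        for _ in range(sweeps):
            i = rng.randrange(len(weights))
            cand = total - 2 * sign[i] * weights[i]     # flip element i
            if abs(cand) <= abs(total) or rng.random() < math.exp((abs(total) - abs(cand)) / temp):
                sign[i], total = -sign[i], cand
            energies.append(abs(total))
        mean = sum(energies) / len(energies)
        var = sum((e - mean) ** 2 for e in energies) / len(energies)
        heat_profile.append((temp, var / temp ** 2))
        temp *= cooling
    return abs(total), heat_profile

best, profile = anneal_partition([8, 7, 6, 5, 4, 3, 2, 1])
print(best, len(profile))
```

Plotting the second column of `profile` against temperature gives the specific-heat curve; on hard instances its peak is where the search "freezes", the analogue of the transitions the paper studies in tree space.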
Wang, Zimeng; Meenach, Samantha A
2017-12-01
Nanocomposite microparticle (nCmP) systems exhibit promising potential in the application of therapeutics for pulmonary drug delivery. This work aimed at identifying the optimal spray-drying condition(s) to prepare nCmP with specific drug delivery properties including small aerodynamic diameter, effective nanoparticle (NP) redispersion upon nCmP exposure to an aqueous solution, high drug loading, and low water content. Acetalated dextran (Ac-Dex) was used to form NPs, curcumin was used as a model drug, and mannitol was the excipient in the nCmP formulation. Box-Behnken design was applied using Design-Expert software for nCmP parameter optimization. NP ratio (NP%) and feed concentration (Fc) are significant parameters that affect the aerodynamic diameters of nCmP systems. NP% is also a significant parameter that affects the drug loading. Fc is the only parameter that influenced the water content of the particles significantly. All nCmP systems could be completely redispersed into the parent NPs, indicating that none of the factors have an influence on this property within the design range. The optimal spray-drying condition to prepare nCmP with a small aerodynamic diameter, redispersion of the NPs, low water content, and high drug loading is 80% NP%, 0.5% Fc, and an inlet temperature lower than 130°C.
Preparation of nanodispersions by solvent displacement using the Venturi tube.
García-Salazar, Gilberto; de la Luz Zambrano-Zaragoza, María; Quintanar-Guerrero, David
2018-05-02
The Venturi tube (VT) is an apparatus that produces turbulence, which can be exploited to produce nanoparticles (NP) by solvent displacement. The objective of this study was to evaluate the potential of this device for preparing NP of poly-ε-caprolactone. Response Surface Methodology was used to determine the effect of the operating conditions and to optimize them. The NP produced by VT were characterized by Dynamic Light-Scattering to determine their particle size distribution (PS) and polydispersity index (PDI). Results showed that the Reynolds number (Re) has a strong effect on both PS and process yield (PY). The turbulence regime is key to the efficient formation of NP. The optimal conditions for obtaining NP were a polymer concentration of 1.6 w/v, a recirculation rate of 4.8 L/min, and a stabilizer concentration of 1.1 w/v. The predicted response of the PY was 99.7%, with a PS of 333 nm and a PDI of 0.2. Maintaining the same preparation conditions will make it possible to obtain NP using other polymers with similar properties. Our results show that VT is a reproducible and versatile method for manufacturing NP, and so may be a feasible method for industrial-scale nanoprecipitation production.
Learning Search Control Knowledge for Deep Space Network Scheduling
NASA Technical Reports Server (NTRS)
Gratch, Jonathan; Chien, Steve; DeJong, Gerald
1993-01-01
While most scheduling problems are NP-hard in worst-case complexity, in practice, for specific distributions of problems and constraints, domain-specific solutions have been shown to run in much better than exponential time.
Li, Yanhui; Guo, Hao; Wang, Lin; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and closely related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can reenter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing problem model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms a GA in computing time, optimal solution, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment.
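The abstract does not specify the HGSAA's operators, so as a rough illustration of the hybrid idea only (a GA population whose offspring are polished by short simulated-annealing runs), here is a minimal sketch on a toy traveling-salesman instance; every parameter and operator choice is invented for illustration.

```python
import math, random

def tour_len(tour, pts):
    """Length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def sa_improve(tour, pts, rng, temp=1.0, cooling=0.9, steps=60):
    """Short simulated-annealing polish using random city swaps."""
    cur, cur_len = tour[:], tour_len(tour, pts)
    while temp > 0.01:
        for _ in range(steps):
            i, j = rng.sample(range(len(cur)), 2)
            cur[i], cur[j] = cur[j], cur[i]
            new_len = tour_len(cur, pts)
            if new_len < cur_len or rng.random() < math.exp((cur_len - new_len) / temp):
                cur_len = new_len
            else:
                cur[i], cur[j] = cur[j], cur[i]     # revert the swap
        temp *= cooling
    return cur, cur_len

def hybrid_ga_sa(pts, pop_size=8, generations=15, seed=3):
    """Toy hybrid: each generation mutates the best tour and polishes the
    offspring with SA, replacing the worst population member."""
    rng = random.Random(seed)
    n = len(pts)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_len(t, pts))
        child = pop[0][:]
        i, j = rng.sample(range(n), 2)
        child[i], child[j] = child[j], child[i]     # mutation
        child, _ = sa_improve(child, pts, rng)      # SA local search
        pop[-1] = child
    return min(pop, key=lambda t: tour_len(t, pts))

pts = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
best = hybrid_ga_sa(pts)
print(round(tour_len(best, pts), 3))
```

The division of labour mirrors the general hybrid design: the GA explores the solution space broadly while the SA step intensifies the search around each offspring.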
NASA Astrophysics Data System (ADS)
Green, Martin L.; Takeuchi, Ichiro; Hattrick-Simpers, Jason R.
2013-06-01
High throughput (combinatorial) materials science methodology is a relatively new research paradigm that offers the promise of rapid and efficient materials screening, optimization, and discovery. The paradigm started in the pharmaceutical industry but was rapidly adopted to accelerate materials research in a wide variety of areas. High throughput experiments are characterized by synthesis of a "library" sample that contains the materials variation of interest (typically composition), and rapid and localized measurement schemes that result in massive data sets. Because the data are collected at the same time on the same "library" sample, they can be highly uniform with respect to fixed processing parameters. This article critically reviews the literature pertaining to applications of combinatorial materials science for electronic, magnetic, optical, and energy-related materials. It is expected that high throughput methodologies will facilitate commercialization of novel materials for these critically important applications. Despite the overwhelming evidence presented in this paper that high throughput studies can effectively inform commercial practice, in our perception, it remains an underutilized research and development tool. Part of this perception may be due to the inaccessibility of proprietary industrial research and development practices, but clearly the initial cost and availability of high throughput laboratory equipment plays a role. Combinatorial materials science has traditionally been focused on materials discovery, screening, and optimization to combat the extremely high cost and long development times for new materials and their introduction into commerce. 
Going forward, combinatorial materials science will also be driven by other needs such as materials substitution and experimental verification of materials properties predicted by modeling and simulation, which have recently received much attention with the advent of the Materials Genome Initiative. Thus, the challenge for combinatorial methodology will be the effective coupling of synthesis, characterization and theory, and the ability to rapidly manage large amounts of data in a variety of formats.
Focusing on the golden ball metaheuristic: an extended study on a wider set of problems.
Osaba, E; Diaz, F; Carballedo, R; Onieva, E; Perallos, A
2014-01-01
Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community, and a large number of techniques of this kind can be found in the literature; recently proposed examples include the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, the Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested on two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than the previously used ones: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queens problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms, and two statistical tests are conducted to compare these results.
Puthiyedth, Nisha; Riveros, Carlos; Berretta, Regina; Moscato, Pablo
2015-01-01
Background: The joint study of multiple datasets has become a common technique for increasing statistical power in detecting biomarkers obtained from smaller studies. The approach generally followed is based on the fact that as the total number of samples increases, we expect to have greater power to detect associations of interest. This methodology has been applied to genome-wide association and transcriptomic studies due to the availability of datasets in the public domain. While this approach is well established in biostatistics, the introduction of new combinatorial optimization models to address this issue has not been explored in depth. In this study, we introduce a new model for the integration of multiple datasets and we show its application in transcriptomics.
Methods: We propose a new combinatorial optimization problem that addresses the core issue of biomarker detection in integrated datasets. Optimal solutions for this model deliver a feature selection from a panel of prospective biomarkers. The model we propose is a generalised version of the (α,β)-k-Feature Set problem. We illustrate the performance of this new methodology via a challenging meta-analysis task involving six prostate cancer microarray datasets. The results are then compared to the popular RankProd meta-analysis tool and to what can be obtained by analysing the individual datasets by statistical and combinatorial methods alone.
Results: Application of the integrated method resulted in a more informative signature than the rank-based meta-analysis or individual dataset results, and overcomes problems arising from real world datasets. The set of genes identified is highly significant in the context of prostate cancer. The method used does not rely on homogenisation or transformation of values to a common scale, and at the same time is able to capture markers associated with subgroups of the disease. PMID:26106884
Kwok, T; Smith, K A
2000-09-01
The aim of this paper is to study both the theoretical and experimental properties of chaotic neural network (CNN) models for solving combinatorial optimization problems. Previously we have proposed a unifying framework which encompasses the three main model types, namely, Chen and Aihara's chaotic simulated annealing (CSA) with decaying self-coupling, Wang and Smith's CSA with decaying timestep, and the Hopfield network with chaotic noise. Each of these models can be represented as a special case under the framework for certain conditions. This paper combines the framework with experimental results to provide new insights into the effect of the chaotic neurodynamics of each model. By solving the N-queen problem of various sizes with computer simulations, the CNN models are compared in different parameter spaces, with optimization performance measured in terms of feasibility, efficiency, robustness and scalability. Furthermore, characteristic chaotic neurodynamics crucial to effective optimization are identified, together with a guide to choosing the corresponding model parameters.
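The "transient chaos, then convergence" behaviour these models share can be sketched with a single neuron. The update below loosely follows Chen and Aihara's decaying self-coupling scheme; the parameter values are illustrative guesses, not taken from the paper.

```python
import math

def transiently_chaotic_neuron(steps=1500, k=0.9, eps=0.004,
                               z0=0.08, beta=0.003, i0=0.65):
    """Single neuron with a decaying self-coupling z: while z is large the
    output wanders erratically; as z anneals away the dynamics settle."""
    y, z, xs = 0.283, z0, []
    for _ in range(steps):
        x = 1.0 / (1.0 + math.exp(-y / eps))   # steep sigmoid output
        y = k * y - z * (x - i0)               # internal state update
        z *= 1.0 - beta                        # annealing of self-coupling
        xs.append(x)
    return xs

xs = transiently_chaotic_neuron()
early = max(xs[:400]) - min(xs[:400])          # wide swings while z is large
late = max(xs[-200:]) - min(xs[-200:])         # settled after annealing
print(early, late)
```

In a full optimization network the same neuron would also receive weighted inputs encoding the problem constraints; the annealed self-coupling is what distinguishes CSA from a plain Hopfield update.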
Podlewska, Sabina; Czarnecki, Wojciech M; Kafel, Rafał; Bojarski, Andrzej J
2017-02-27
The growing computational abilities of various tools that are applied in the broadly understood field of computer-aided drug design have led to the extreme popularity of virtual screening in the search for new biologically active compounds. Most often, the source of such molecules consists of commercially available compound databases, but they can also be searched for within libraries of structures generated in silico from existing ligands. Various computational combinatorial approaches are based solely on the chemical structure of compounds, using different types of substitutions for the formation of new molecules. In this study, the starting point for combinatorial library generation was the fingerprint referring to the optimal substructural composition in terms of activity toward a considered target, which was obtained using a machine learning-based optimization procedure. The systematic enumeration of all possible connections between preferred substructures resulted in the formation of target-focused libraries of new potential ligands. The compounds were initially assessed by machine learning methods using a hashed fingerprint to represent molecules; the distribution of their physicochemical properties was also investigated, as well as their synthetic accessibility. The examination of various fingerprints and machine learning algorithms indicated that the Klekota-Roth fingerprint and support vector machine were an optimal combination for such experiments. This study was performed for 8 protein targets, and the obtained compound sets and their characterization are publicly available at http://skandal.if-pan.krakow.pl/comb_lib/.
Robust learning for optimal treatment decision with NP-dimensionality
Shi, Chengchun; Song, Rui; Lu, Wenbin
2016-01-01
In order to identify important variables involved in making optimal treatment decisions, Lu, Zhang and Zeng (2013) proposed a penalized least squares regression framework for a fixed number of predictors, which is robust against misspecification of the conditional mean model. Two problems arise: (i) in a world of explosively big data, effective methods are needed to handle ultra-high dimensional datasets, for example, where the dimension of predictors is of non-polynomial (NP) order of the sample size; (ii) both the propensity score and conditional mean models need to be estimated from data under NP dimensionality. In this paper, we propose a robust procedure for estimating the optimal treatment regime under NP dimensionality. In both estimation steps, penalized regressions are employed with a non-concave penalty function, where the conditional mean model of the response given predictors may be misspecified. The asymptotic properties, such as weak oracle properties, selection consistency and oracle distributions, of the proposed estimators are investigated. In addition, we study the limiting distribution of the estimated value function for the obtained optimal treatment regime. The empirical performance of the proposed estimation method is evaluated by simulations and an application to a depression dataset from the STAR*D study. PMID:28781717
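As a reduced illustration of penalized regression for variable selection, here is a plain coordinate-descent lasso on made-up synthetic data. This is a convex L1 stand-in, not the paper's non-concave-penalty estimator, but it shows the mechanism by which irrelevant coefficients are driven exactly to zero.

```python
import random

def lasso_cd(X, y, lam, iters=200):
    """Coordinate-descent lasso: minimize 0.5*||y - X b||^2 + lam*||b||_1
    by cycling soft-thresholding updates over the coordinates."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            # soft-thresholding keeps small-signal coordinates at exactly 0
            if rho > lam:
                beta[j] = (rho - lam) / z
            elif rho < -lam:
                beta[j] = (rho + lam) / z
            else:
                beta[j] = 0.0
    return beta

rng = random.Random(0)
X = [[rng.gauss(0, 1) for _ in range(5)] for _ in range(80)]
y = [2.0 * row[0] - 1.5 * row[1] + rng.gauss(0, 0.1) for row in X]
beta = lasso_cd(X, y, lam=5.0)
print([round(b, 2) for b in beta])
```

Only the first two coefficients (the true signals) survive; the remaining three are thresholded to zero, which is the selection behaviour the paper's penalties generalise to NP dimensionality.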
Optimizing Perioperative Decision Making: Improved Information for Clinical Workflow Planning
Doebbeling, Bradley N.; Burton, Matthew M.; Wiebke, Eric A.; Miller, Spencer; Baxter, Laurence; Miller, Donald; Alvarez, Jorge; Pekny, Joseph
2012-01-01
Perioperative care is complex and involves multiple interconnected subsystems. Delayed starts, prolonged cases and overtime are common. Surgical procedures account for 40–70% of hospital revenues and 30–40% of total costs. Most planning and scheduling in healthcare is done without modern planning tools, which have the potential to improve access by assisting in operations planning support. We identified key planning scenarios of interest to perioperative leaders in order to examine the feasibility of applying combinatorial optimization software to some of those planning issues in the operative setting. Perioperative leaders desire a broad range of tools for planning and assessing alternate solutions. Our modeled solutions generated feasible solutions that varied as expected, based on resource and policy assumptions, and found better utilization of scarce resources. Combinatorial optimization modeling can effectively evaluate alternatives to support key decisions for planning clinical workflow and improving care efficiency and satisfaction. PMID:23304284
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awwal, Abdul; Diaz-Ramirez, Victor H.; Cuevas, Andres; ...
2014-10-23
Composite correlation filters are used for solving a wide variety of pattern recognition problems. These filters are given by a combination of several training templates chosen by a designer in an ad hoc manner. In this work, we present a new approach for the design of composite filters based on multi-objective combinatorial optimization. Given a vast search space of training templates, an iterative algorithm is used to synthesize a filter with an optimized performance in terms of several competing criteria. Furthermore, by employing a suggested binary-search procedure, a filter bank with a minimum number of filters can be constructed for a prespecified trade-off of performance metrics. Computer simulation results obtained with the proposed method in recognizing geometrically distorted versions of a target in cluttered and noisy scenes are discussed and compared in terms of recognition performance and complexity with existing state-of-the-art filters.
Rationally reduced libraries for combinatorial pathway optimization minimizing experimental effort.
Jeschek, Markus; Gerngross, Daniel; Panke, Sven
2016-03-31
Rational flux design in metabolic engineering approaches remains difficult since important pathway information is frequently not available. Therefore empirical methods are applied that randomly change absolute and relative pathway enzyme levels and subsequently screen for variants with improved performance. However, screening is often limited on the analytical side, generating a strong incentive to construct small but smart libraries. Here we introduce RedLibs (Reduced Libraries), an algorithm that allows for the rational design of smart combinatorial libraries for pathway optimization thereby minimizing the use of experimental resources. We demonstrate the utility of RedLibs for the design of ribosome-binding site libraries by in silico and in vivo screening with fluorescent proteins and perform a simple two-step optimization of the product selectivity in the branched multistep pathway for violacein biosynthesis, indicating a general applicability for the algorithm and the proposed heuristics. We expect that RedLibs will substantially simplify the refactoring of synthetic metabolic pathways.
Automatic Summarization as a Combinatorial Optimization Problem
NASA Astrophysics Data System (ADS)
Hirao, Tsutomu; Suzuki, Jun; Isozaki, Hideki
We derived the oracle summary with the highest ROUGE score that can be achieved by integrating sentence extraction with sentence compression from the reference abstract. Analysis of the oracle revealed that summarization systems have to assign an appropriate compression rate to each sentence in the document. In accordance with this observation, this paper formulates summarization as a combinatorial optimization problem: selecting the set of sentences that maximizes the sum of the sentence scores from a pool consisting of the sentences at various compression rates, subject to length constraints. The score of a sentence is defined by its compression rate, content words and positional information. The parameters for the compression rates and positional information are optimized by minimizing the loss between the scores of oracles and those of candidates. The results obtained on the TSC-2 corpus showed that our method outperformed previous systems with statistical significance.
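The selection step the authors describe is an instance of the multiple-choice knapsack problem and can be solved exactly by dynamic programming. The sketch below uses invented scores and lengths, and a score model far simpler than the paper's.

```python
def select_summary(sentences, budget):
    """Each sentence offers several compressed variants (score, length);
    choose at most one variant per sentence so the total length fits the
    budget and the total score is maximal (multiple-choice knapsack DP)."""
    dp = [0.0] * (budget + 1)              # best score within length l
    for variants in sentences:
        new_dp = dp[:]                     # option: skip this sentence
        for score, length in variants:
            for l in range(length, budget + 1):
                cand = dp[l - length] + score
                if cand > new_dp[l]:
                    new_dp[l] = cand
        dp = new_dp
    return dp[budget]

# (score, length) pairs: the first entry is the full sentence, the rest
# are progressively compressed versions with lower scores.
sentences = [
    [(3.0, 5), (2.0, 3)],
    [(4.0, 6), (2.5, 2)],
    [(1.5, 4)],
]
print(select_summary(sentences, budget=8))  # → 5.5
```

Reading `dp` from the old generation while writing `new_dp` is what enforces "at most one variant per sentence", i.e. at most one compression rate is chosen for each.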
Combinatorial optimization games
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, X.; Ibaraki, Toshihide; Nagamochi, Hiroshi
1997-06-01
We introduce a general integer programming formulation for a class of combinatorial optimization games, which immediately allows us to improve the algorithmic result for finding imputations in the core (an important solution concept in cooperative game theory) of the network flow game on simple networks by Kalai and Zemel. An interesting result is a general theorem that the core for this class of games is nonempty if and only if a related linear program has an integer optimal solution. We study the properties for this mathematical condition to hold for several interesting problems, and apply them to resolve algorithmic and complexity issues for their cores along the following lines: decide whether the core is empty; if the core is not empty, find an imputation in the core; given an imputation x, test whether x is in the core. We also explore the properties of totally balanced games in this succinct formulation of cooperative games.
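The membership question in that list can be made concrete for a small game. The sketch below checks core membership directly by enumerating coalition constraints, which is only feasible for a handful of players; the paper's contribution is handling such questions efficiently through the LP formulation.

```python
from itertools import combinations

def in_core(v, x, tol=1e-9):
    """Check core membership: allocation x must split v(N) exactly and give
    every coalition S at least its stand-alone worth v(S) (0 if unlisted).
    `v` maps frozenset coalitions of player indices to their worth."""
    players = sorted(set().union(*v))
    if abs(sum(x) - v[frozenset(players)]) > tol:
        return False                       # must distribute v(N) exactly
    for r in range(1, len(players)):
        for coalition in combinations(players, r):
            if sum(x[i] for i in coalition) + tol < v.get(frozenset(coalition), 0.0):
                return False               # coalition would defect
    return True

# Three-player game: every pair is worth 1, the grand coalition 1.5.
v = {frozenset(pair): 1.0 for pair in combinations(range(3), 2)}
v[frozenset(range(3))] = 1.5
print(in_core(v, [0.5, 0.5, 0.5]))  # → True
print(in_core(v, [1.5, 0.0, 0.0]))  # → False (pair {1, 2} gets 0 < 1)
```

The number of constraints grows as 2^n, which is why succinctly represented games need the structural results the abstract describes rather than brute-force checks.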
Sequential Test Strategies for Multiple Fault Isolation
NASA Technical Reports Server (NTRS)
Shakeri, M.; Pattipati, Krishna R.; Raghavan, V.; Patterson-Hine, Ann; Kell, T.
1997-01-01
In this paper, we consider the problem of constructing near optimal test sequencing algorithms for diagnosing multiple faults in redundant (fault-tolerant) systems. The computational complexity of solving the optimal multiple-fault isolation problem is super-exponential, that is, it is much more difficult than the single-fault isolation problem, which, by itself, is NP-hard. By employing concepts from information theory and Lagrangian relaxation, we present several static and dynamic (on-line or interactive) test sequencing algorithms for the multiple fault isolation problem that provide a trade-off between the degree of suboptimality and computational complexity. Furthermore, we present novel diagnostic strategies that generate a static diagnostic directed graph (digraph), instead of a static diagnostic tree, for multiple fault diagnosis. Using this approach, the storage complexity of the overall diagnostic strategy reduces substantially. Computational results based on real-world systems indicate that the size of a static multiple fault strategy is strictly related to the structure of the system, and that the use of an on-line multiple fault strategy can diagnose faults in systems with as many as 10,000 failure sources.
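A minimal sketch of the information-theoretic idea: choose the next test to minimize the expected posterior entropy over the candidate faults. This toy version assumes a single fault, uniform priors, and made-up test signatures, which is far simpler than the multiple-fault algorithms in the paper.

```python
import math

def entropy(k):
    """Entropy (bits) of a uniform distribution over k possibilities."""
    return math.log2(k) if k else 0.0

def next_test(candidates, tests):
    """Greedy information-gain choice: pick the test with the lowest
    expected posterior entropy; tests[t][f] is True if test t fails
    when fault f is present."""
    n = len(candidates)
    best_t, best_h = None, float("inf")
    for t, outcome in tests.items():
        n_fail = sum(1 for f in candidates if outcome[f])
        n_pass = n - n_fail
        expected_h = (n_fail / n) * entropy(n_fail) + (n_pass / n) * entropy(n_pass)
        if expected_h < best_h:
            best_t, best_h = t, expected_h
    return best_t

faults = ["f1", "f2", "f3", "f4"]
tests = {
    "t1": {"f1": True, "f2": True, "f3": False, "f4": False},  # splits 2/2
    "t2": {"f1": True, "f2": True, "f3": True, "f4": False},   # splits 3/1
}
print(next_test(faults, tests))  # → t1
```

The even 2/2 split wins because it halves the candidate set regardless of outcome; repeating the choice after each observation yields the kind of static or interactive test tree the paper generalises to multiple simultaneous faults.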
Al-Khatib, Ra'ed M; Rashid, Nur'Aini Abdul; Abdullah, Rosni
2011-08-01
The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.
Partial branch and bound algorithm for improved data association in multiframe processing
NASA Astrophysics Data System (ADS)
Poore, Aubrey B.; Yan, Xin
1999-07-01
A central problem in multitarget, multisensor, and multiplatform tracking remains that of data association. Lagrangian relaxation methods have been shown to yield near-optimal answers in real time. The necessary improvement in the quality of these solutions warrants a continuing interest in these methods. These problems are NP-hard; the only known methods for solving them optimally are enumerative in nature, with branch-and-bound being the most efficient. Thus, methods short of a full branch-and-bound search are needed to improve solution quality. Methods such as K-best, local search, and randomized search have been proposed to improve the quality of the relaxation solution. Here, a partial branch-and-bound technique along with appropriate branching and ordering rules is developed. Lagrangian relaxation is used as a branching method and as a method to calculate the lower bound for subproblems. The results show that the branch-and-bound framework greatly improves the solution quality of the Lagrangian relaxation algorithm and yields better multiple solutions in less time than relaxation alone.
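The pruning logic behind such a partial branch-and-bound can be sketched generically: expand nodes best-first, prune any node whose lower bound cannot beat the incumbent, and stop after a fixed node budget. The skeleton below is illustrative only; it uses a simple combinatorial bound on an invented "pick 2 of 5 items" toy instance, not the paper's Lagrangian-relaxation bound or its multiframe assignment model.

```python
import heapq

def partial_branch_and_bound(n, cost, lower_bound, node_budget=1000):
    """Best-first branch and bound over binary vectors of length n,
    expanding at most `node_budget` nodes (a "partial" search).
    `lower_bound(prefix)` must not exceed the cost of any completion."""
    best_x, best_cost = None, float("inf")
    heap = [(lower_bound(()), ())]       # (bound, partial assignment)
    expanded = 0
    while heap and expanded < node_budget:
        bound, prefix = heapq.heappop(heap)
        if bound >= best_cost:
            continue                     # prune: cannot beat incumbent
        if len(prefix) == n:             # complete assignment reached
            c = cost(prefix)
            if c < best_cost:
                best_x, best_cost = prefix, c
            continue
        expanded += 1
        for bit in (0, 1):               # branch on the next variable
            child = prefix + (bit,)
            b = lower_bound(child)
            if b < best_cost:
                heapq.heappush(heap, (b, child))
    return best_x, best_cost

# toy instance (invented): pick exactly 2 of these items at minimum total cost
costs = [4.0, 2.0, 7.0, 1.0, 5.0]

def cost(x):
    return sum(c for c, b in zip(costs, x) if b) if sum(x) == 2 else float("inf")

def lb(prefix):
    """Admissible bound: chosen cost so far + cheapest way to finish."""
    chosen = sum(c for c, b in zip(costs, prefix) if b)
    need = 2 - sum(prefix)
    rest = sorted(costs[len(prefix):])
    if need < 0 or need > len(rest):
        return float("inf")
    return chosen + sum(rest[:need])

print(partial_branch_and_bound(5, cost, lb))  # -> ((0, 1, 0, 1, 0), 3.0)
```

With a budget large enough to exhaust the tree, the search is exact; a small budget yields the "partial" behavior described above, returning the best complete assignment found so far.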
Interactive Data Exploration with Smart Drill-Down
Joglekar, Manas; Garcia-Molina, Hector; Parameswaran, Aditya
2017-01-01
We present smart drill-down, an operator for interactively exploring a relational table to discover and summarize “interesting” groups of tuples. Each group of tuples is described by a rule. For instance, the rule (a, b, ⋆, 1000) tells us that there are a thousand tuples with value a in the first column and b in the second column (and any value in the third column). Smart drill-down presents an analyst with a list of rules that together describe interesting aspects of the table. The analyst can tailor the definition of interesting, and can interactively apply smart drill-down on an existing rule to explore that part of the table. We demonstrate that the underlying optimization problems are NP-Hard, and describe an algorithm for finding the approximately optimal list of rules to display when the user uses a smart drill-down, and a dynamic sampling scheme for efficiently interacting with large tables. Finally, we perform experiments on real datasets on our experimental prototype to demonstrate the usefulness of smart drill-down and study the performance of our algorithms. PMID:28210096
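The rule notation above (⋆ as a per-column wildcard) can be made concrete in a few lines. This is only an illustration of how a rule covers tuples, not the paper's smart drill-down algorithm:

```python
STAR = "*"  # wildcard: matches any value in its column

def count_covered(table, rule):
    """Number of tuples a rule covers; STAR in a position matches anything.
    Mirrors the (a, b, *, count) notation above."""
    return sum(
        all(r == STAR or r == v for r, v in zip(rule, row))
        for row in table
    )

table = [("a", "b", "x"), ("a", "b", "y"), ("a", "c", "x")]
print(count_covered(table, ("a", "b", STAR)))   # -> 2
print(count_covered(table, ("a", STAR, STAR)))  # -> 3
```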
Dynamic cellular manufacturing system considering machine failure and workload balance
NASA Astrophysics Data System (ADS)
Rabbani, Masoud; Farrokhi-Asl, Hamed; Ravanbakhsh, Mohammad
2018-02-01
Machines are a key element in the production system and their failure causes irreparable effects in terms of cost and time. In this paper, a new multi-objective mathematical model for a dynamic cellular manufacturing system (DCMS) is provided with consideration of machine reliability and alternative process routes. In this dynamic model, we attempt to resolve the problem of integrated family (part/machine cell) formation as well as the operators' assignment to the cells. The first objective minimizes the costs associated with the DCMS. The second objective optimizes the labor utilization and, finally, a minimum value of the variance of workload between different cells is obtained by the third objective function. Due to the NP-hard nature of the cellular manufacturing problem, the model is first validated by the GAMS software on small-sized problems, and then solved by two well-known meta-heuristic methods, non-dominated sorting genetic algorithm and multi-objective particle swarm optimization, on large-scale problems. Finally, the results of the two algorithms are compared with respect to five different comparison metrics.
A Hybrid Cellular Genetic Algorithm for Multi-objective Crew Scheduling Problem
NASA Astrophysics Data System (ADS)
Jolai, Fariborz; Assadipour, Ghazal
Crew scheduling is one of the important problems of the airline industry. The problem is to assign crew members to a set of flights such that every flight is covered. In a robust schedule, the assignment should be such that the total cost, delays, and unbalanced utilization are minimized. As the problem is NP-hard and the objectives conflict with each other, a multi-objective meta-heuristic called CellDE, a hybrid cellular genetic algorithm, is implemented as the optimization method. The proposed algorithm provides the decision maker with a set of non-dominated or Pareto-optimal solutions, enabling them to choose the best one according to their preferences. A set of problems of different sizes is generated and solved using the proposed algorithm. To evaluate the performance of the proposed algorithm, three metrics are suggested, and the diversity and convergence of the achieved Pareto front are appraised. Finally, a comparison is made between CellDE and PAES, another meta-heuristic algorithm. The results show the superiority of CellDE.
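The set of non-dominated (Pareto-optimal) solutions such an algorithm returns is defined by the dominance relation, which is easy to state in code. A minimal sketch for minimized objectives; the (cost, delay) pairs are invented for illustration:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every (minimized) objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. (cost, delay) pairs for candidate crew schedules (made-up values)
sols = [(3, 5), (2, 7), (4, 4), (3, 4), (5, 6)]
print(pareto_front(sols))  # -> [(2, 7), (3, 4)]
```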
Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim
2014-01-01
In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem. PMID:25844012
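The alternating selection process can be simulated directly. Below is one plausible reading of the game with both agents playing the greedy strategy analyzed above (always propose the largest remaining item that still fits); it is a sketch, not the authors' formal model:

```python
def play_greedy_game(items_a, items_b, capacity):
    """Two agents alternately insert their largest remaining item that
    still fits; the game ends when neither agent can move. Returns the
    total weight packed by each agent."""
    hands = [sorted(items_a, reverse=True), sorted(items_b, reverse=True)]
    packed = [0, 0]
    turn, stuck = 0, 0
    while stuck < 2:                      # two consecutive passes end the game
        choice = next((w for w in hands[turn] if w <= capacity), None)
        if choice is None:
            stuck += 1                    # this agent cannot fit any item
        else:
            hands[turn].remove(choice)
            packed[turn] += choice
            capacity -= choice
            stuck = 0
        turn = 1 - turn
    return packed

print(play_greedy_game([5, 4, 1], [6, 3], capacity=10))  # -> [6, 3]
```

In this toy run agent A packs 5 then 1, agent B packs 3, and B's remaining item of weight 6 never fits.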
Redundancy allocation problem for k-out-of-n systems with a choice of redundancy strategies
NASA Astrophysics Data System (ADS)
Aghaei, Mahsa; Zeinal Hamadani, Ali; Abouei Ardakan, Mostafa
2017-03-01
To increase the reliability of a specific system, using redundant components is a common method, known as the redundancy allocation problem (RAP). Some RAP studies have focused on k-out-of-n systems. However, all of these studies assumed predetermined active or standby strategies for each subsystem. In this paper, for the first time, we propose a k-out-of-n system with a choice of redundancy strategies. Therefore, a k-out-of-n series-parallel system is considered in which the redundancy strategy can be chosen for each subsystem. In other words, in the proposed model the redundancy strategy is considered an additional decision variable, and an exact method based on integer programming is used to obtain the optimal solution of the problem. As the optimization of RAP belongs to the NP-hard class of problems, a modified version of the genetic algorithm (GA) is also developed. The exact method and the proposed GA are implemented on a well-known test problem, and the results demonstrate the efficiency of the new approach compared with previous studies.
NASA Astrophysics Data System (ADS)
Suess, Daniel; Rudnicki, Łukasz; maciel, Thiago O.; Gross, David
2017-09-01
The outcomes of quantum mechanical measurements are inherently random. It is therefore necessary to develop stringent methods for quantifying the degree of statistical uncertainty about the results of quantum experiments. For the particularly relevant task of quantum state tomography, it has been shown that a significant reduction in uncertainty can be achieved by taking the positivity of quantum states into account. However—the large number of partial results and heuristics notwithstanding—no efficient general algorithm is known that produces an optimal uncertainty region from experimental data, while making use of the prior constraint of positivity. Here, we provide a precise formulation of this problem and show that the general case is NP-hard. Our result leaves room for the existence of efficient approximate solutions, and therefore does not in itself imply that the practical task of quantum uncertainty quantification is intractable. However, it does show that there exists a non-trivial trade-off between optimality and computational efficiency for error regions. We prove two versions of the result: one for frequentist and one for Bayesian statistics.
Surveillance of a 2D Plane Area with 3D Deployed Cameras
Fu, Yi-Ge; Zhou, Jie; Deng, Lei
2014-01-01
As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution, and a cost as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard, and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under more realistic assumptions: (1) the cameras are deployed in 3D space while the surveillance area is restricted to a 2D ground plane; (2) the minimal number of cameras is deployed to obtain maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution requirements. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regularization term in the cost function. The simulation results showed the effectiveness of the proposed PI-BPSO algorithm. PMID:24469353
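For context, the classic binary PSO update (Kennedy-Eberhart style) that variants like PI-BPSO build on passes each velocity through a sigmoid to obtain a bit probability. The sketch below shows that family on a stand-in objective; it is not the PI-BPSO variant or its camera-coverage cost function, and all parameters are illustrative:

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=20, iters=100, seed=0):
    """Minimal binary PSO: velocities are updated toward personal and
    global bests, then squashed by a sigmoid into bit probabilities.
    Maximizes `fitness` over binary vectors. Illustrative parameters."""
    rng = random.Random(seed)
    X = [[rng.randrange(2) for _ in range(n_bits)] for _ in range(n_particles)]
    V = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pfit = [fitness(x) for x in X]
    g = max(range(n_particles), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                V[i][d] = (0.7 * V[i][d]                                  # inertia
                           + 1.4 * rng.random() * (pbest[i][d] - X[i][d])  # cognitive
                           + 1.4 * rng.random() * (gbest[d] - X[i][d]))    # social
                # sample the bit with probability sigmoid(velocity)
                X[i][d] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-V[i][d])) else 0
            f = fitness(X[i])
            if f > pfit[i]:
                pbest[i], pfit[i] = X[i][:], f
                if f > gfit:
                    gbest, gfit = X[i][:], f
    return gbest, gfit

# maximize the number of ones ("coverage") as a stand-in objective
best, fit = binary_pso(sum, n_bits=10)
print(fit)
```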
Genetic algorithms for protein threading.
Yadgari, J; Amir, A; Unger, R
1998-01-01
Despite many years of efforts, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major advance reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods depend on such constraints to make their calculations feasible, but the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof cannot yet be given, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.
Parallel tempering for the traveling salesman problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Percus, Allon; Wang, Richard; Hyman, Jeffrey
We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximation to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
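The scheme compared above — Metropolis moves at several temperatures plus swap attempts between adjacent replicas — can be sketched for the TSP as follows. This is a toy illustration with invented parameters (temperatures, step counts, 2-opt proposals), not the paper's benchmarked implementation:

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def parallel_tempering_tsp(dist, temps=(0.1, 0.5, 2.0), steps=2000, seed=0):
    """Toy parallel tempering for the TSP: one replica per temperature,
    2-opt (segment reversal) proposals with Metropolis acceptance, and
    swap attempts between adjacent temperatures."""
    rng = random.Random(seed)
    n = len(dist)
    reps = [list(range(n)) for _ in temps]          # one replica per temperature
    best, best_len = reps[0][:], tour_length(reps[0], dist)
    for _ in range(steps):
        # Metropolis 2-opt move in every replica
        for r, t in zip(reps, temps):
            i, j = sorted(rng.sample(range(n), 2))
            cand = r[:i] + r[i:j + 1][::-1] + r[j + 1:]
            delta = tour_length(cand, dist) - tour_length(r, dist)
            if delta < 0 or rng.random() < math.exp(-delta / t):
                r[:] = cand
                if tour_length(r, dist) < best_len:
                    best, best_len = r[:], tour_length(r, dist)
        # attempt a swap between a random pair of adjacent temperatures
        k = rng.randrange(len(temps) - 1)
        d_e = tour_length(reps[k], dist) - tour_length(reps[k + 1], dist)
        arg = d_e * (1 / temps[k] - 1 / temps[k + 1])
        if rng.random() < min(1.0, math.exp(min(50.0, arg))):
            reps[k], reps[k + 1] = reps[k + 1], reps[k]
    return best, best_len

# four cities on a unit square: the optimal tour is the perimeter, length 4
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour, length = parallel_tempering_tsp(dist)
print(length)  # -> 4.0
```

The swap acceptance probability exp((1/T_k - 1/T_{k+1})(E_k - E_{k+1})) lets hot replicas escape local minima while cold replicas refine good tours.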
NASA Astrophysics Data System (ADS)
Pi, E. I.; Siegel, E.
2010-03-01
Siegel[AMS Natl.Mtg.(2002)-Abs.973-60-124] digits logarithmic-law inversion to ONLY BEQS BEC:Quanta/Bosons=#: EMP-like SEVERE VULNERABILITY of ONLY #-networks(VS.ANALOG INvulnerability) via Barabasi NP(VS.dynamics[Not.AMS(5/2009)] critique);(so called)``quantum-computing''(QC) = simple-arithmetic (sans division); algorithmic complexities:INtractibility/UNdecidability/INefficiency/NONcomputability/HARDNESS(so MIScalled) ``noise''-induced-phase-transition(NIT)ACCELERATION:Cook-Levin theorem Reducibility = RG fixed-points; #-Randomness DEFINITION via WHAT? Query(VS. Goldreich[Not.AMS(2002)] How? mea culpa)= ONLY MBCS hot-plasma v #-clumping NON-random BEC; Modular-Arithmetic Congruences = Signal x Noise PRODUCTS = clock-model; NON-Shor[Physica A,341,586(04)]BEC logarithmic-law inversion factorization: Watkins #-theory U statistical-physics); P=/=NP C-S TRIVIAL Proof: Euclid!!! [(So Miscalled) computational-complexity J-O obviation(3 millennia AGO geometry: NO:CC,``CS'';``Feet of Clay!!!'']; Query WHAT?:Definition: (so MIScalled)``complexity''=UTTER-SIMPLICITY!! v COMPLICATEDNESS MEASURE(S).
Investigations of quantum heuristics for optimization
NASA Astrophysics Data System (ADS)
Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui
We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numeric investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.
NASA Astrophysics Data System (ADS)
Xue, Wei; Wang, Qi; Wang, Tianyu
2018-04-01
This paper presents an improved parallel combinatory spread spectrum (PC/SS) communication system based on the method of double information matching (DIM). Compared with the conventional PC/SS system, the new model inherits the advantages of high transmission speed, large information capacity and high security. Moreover, the conventional system suffers from a high bit error rate (BER) caused by its data-sequence mapping algorithm. By optimizing the mapping algorithm, the proposed model achieves a lower BER and higher efficiency.
Nanoparticles-cell association predicted by protein corona fingerprints
NASA Astrophysics Data System (ADS)
Palchetti, S.; Digiacomo, L.; Pozzi, D.; Peruzzi, G.; Micarelli, E.; Mahmoudi, M.; Caracciolo, G.
2016-06-01
In a physiological environment (e.g., blood and interstitial fluids) nanoparticles (NPs) will bind proteins, shaping a ``protein corona'' layer. The long-lived protein layer tightly bound to the NP surface is referred to as the hard corona (HC) and encodes information that controls NP bioactivity (e.g. cellular association, cellular signaling pathways, biodistribution, and toxicity). Decrypting this complex code has become a priority to predict the NP biological outcomes. Here, we use a library of 16 lipid NPs of varying size (Ø ~ 100-250 nm) and surface chemistry (unmodified and PEGylated) to investigate the relationships between NP physicochemical properties (nanoparticle size, aggregation state and surface charge), protein corona fingerprints (PCFs), and NP-cell association. We found that none of the NPs' physicochemical properties alone was able to account for association with the human cervical cancer cell line (HeLa). For the entire library of NPs, a total of 436 distinct serum proteins were detected. We developed a predictive-validation modeling that provides a means of assessing the relative significance of the identified corona proteins. Interestingly, a minor fraction of the HC, consisting of only 8 PCFs, was identified as the main promoter of NP association with HeLa cells. Remarkably, the identified PCFs have several receptors with a high level of expression on the plasma membrane of HeLa cells.
Optimization of liquid media and biosafety assessment for algae-lysing bacterium NP23.
Liao, Chunli; Liu, Xiaobo; Shan, Linna
2014-09-01
To control algal bloom caused by nutrient pollution, a wild-type algae-lysing bacterium was isolated from the Baiguishan reservoir in Henan province of China and identified as Enterobacter sp. strain NP23. The algal culture medium was optimized by applying a Plackett-Burman design to obtain a high cell concentration of NP23. Three minerals (i.e., 0.6% KNO3, 0.001% MnSO4·H2O, and 0.3% K2HPO4) were found to be independent factors critical for obtaining the highest cell concentration of 10^13 CFU/mL, which was 10^4 times that of the control. In the algae-lysing experiment, the strain exhibited a high lysis rate for the 4 algal test species, namely, Chlorella vulgaris, Scenedesmus, Microcystis wesenbergii, and Chlorella pyrenoidosa. Acute toxicity and mutagenicity tests showed that the bacterium NP23 had no toxic or mutagenic effects on fish, even in large doses such as 10^7 or 10^9 CFU/mL. Thus, Enterobacter sp. strain NP23 has strong potential for application in microbial algae-lysing projects.
Blue light potentiates neurogenesis induced by retinoic acid-loaded responsive nanoparticles.
Santos, Tiago; Ferreira, Raquel; Quartin, Emanuel; Boto, Carlos; Saraiva, Cláudia; Bragança, José; Peça, João; Rodrigues, Cecília; Ferreira, Lino; Bernardino, Liliana
2017-09-01
Neurogenic niches constitute a powerful endogenous source of new neurons that can be used for brain repair strategies. Neuronal differentiation of these cells can be regulated by molecules such as retinoic acid (RA) or by mild levels of reactive oxygen species (ROS) that are also known to upregulate RA receptor alpha (RARα) levels. Data showed that neural stem cells from the subventricular zone (SVZ) exposed to blue light (405 nm laser) transiently induced NADPH oxidase-dependent ROS, resulting in β-catenin activation and neuronal differentiation, and increased RARα levels. Additionally, the same blue light stimulation was capable of triggering the release of RA from light-responsive nanoparticles (LR-NP). The synergy between blue light and LR-NP led to amplified neurogenesis both in vitro and in vivo, while offering a temporal and spatial control of RA release. In conclusion, this combinatory treatment offers great advantages to potentiate neuronal differentiation, and provides an innovative and efficient application for brain regenerative therapies. Controlling the differentiation of stem cells would support the development of promising brain regenerative therapies; our approach, relying on the modulation of endogenous stem cells for the generation of new neurons, may support the development of novel clinical therapies.
Ghaedi, M; Ghaedi, A M; Ansari, A; Mohammadi, F; Vafaei, A
2014-11-11
The influence of variables, namely initial dye concentration, adsorbent dosage (g), stirrer speed (rpm) and contact time (min), on the removal of methyl orange (MO) by gold nanoparticles loaded on activated carbon (Au-NP-AC) and Tamarisk was investigated using multiple linear regression (MLR) and an artificial neural network (ANN), and the variables were optimized by particle swarm optimization (PSO). Comparison of the results showed that the ANN model was better than the MLR model for prediction of methyl orange removal using Au-NP-AC and Tamarisk. Using the optimal ANN model, the coefficients of determination (R2) for the test data set were 0.958 and 0.989, and the mean squared error (MSE) values were 0.00082 and 0.0006 for the Au-NP-AC and Tamarisk adsorbents, respectively. In this study a novel and green approach was reported for the synthesis of gold nanoparticles and activated carbon from Tamarisk. The material was characterized using techniques such as SEM, TEM, XRD and BET. The usability of Au-NP-AC and Tamarisk activated carbon (AC) for the removal of methyl orange from aqueous solutions was investigated. The effects of variables such as pH, initial dye concentration, adsorbent dosage (g) and contact time (min) on methyl orange removal were studied. Fitting the experimental equilibrium data to various isotherm models such as the Langmuir, Freundlich, Tempkin and Dubinin-Radushkevich models shows the suitability and applicability of the Langmuir model. Kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intraparticle diffusion models indicate that the second-order equation and intraparticle diffusion models control the kinetics of the adsorption process. A small amount of the proposed Au-NP-AC and activated carbon (0.015 g and 0.75 g) is applicable for successful removal of methyl orange (>98%) in a short time (20 min for Au-NP-AC and 45 min for Tamarisk-AC) with high adsorption capacities of 161 mg g^-1 for Au-NP-AC and 3.84 mg g^-1 for Tamarisk-AC.
Improved hybrid optimization algorithm for 3D protein structure prediction.
Zhou, Changjun; Hou, Caixia; Wei, Xiaopeng; Zhang, Qiang
2014-07-01
A new hybrid optimization algorithm, PGATS, based on the toy off-lattice model, is presented for three-dimensional protein structure prediction problems. The algorithm combines particle swarm optimization (PSO), the genetic algorithm (GA), and tabu search (TS). In addition, several improvement strategies are adopted: a stochastic disturbance factor is introduced into the particle swarm optimization to improve its search ability; the crossover and mutation operations of the genetic algorithm are replaced with a random linear method; and the tabu search algorithm is improved by appending a mutation operator. Through the combination of these strategies and algorithms, protein structure prediction (PSP) in a 3D off-lattice model is achieved. The PSP problem is NP-hard, but it can be cast as a global optimization problem with many extrema and many parameters. This is the theoretical principle of the hybrid optimization algorithm proposed in this paper. The algorithm combines local search and global search, which overcomes the shortcomings of a single algorithm, giving full play to the advantages of each. The method is validated on the standard Fibonacci benchmark sequences and on real protein sequences. Experiments show that the proposed method outperforms single algorithms on the accuracy of calculating the protein sequence energy value, proving it to be an effective way to predict the structure of proteins.
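For reference, the "toy" off-lattice (AB) model mentioned above is commonly formulated, in its 2D Stillinger-style form, as a bend-angle term plus a species-dependent Lennard-Jones-like term over non-adjacent residue pairs. The constants below follow that common formulation and are an assumption; the paper's exact parameterization is not reproduced here:

```python
import math

def ab_energy(angles, seq):
    """Energy of a 2D AB off-lattice chain with unit bond lengths.
    angles[k] is the bend at interior residue k+1; seq is a string of
    'A'/'B' residues. E = sum (1 - cos theta)/4  +  sum 4(r^-12 - C r^-6),
    with C = 1 (AA), 0.5 (BB), -0.5 (AB) — assumed standard constants."""
    assert len(angles) == len(seq) - 2
    # build chain coordinates from successive bend angles
    pts = [(0.0, 0.0), (1.0, 0.0)]
    heading = 0.0
    for a in angles:
        heading += a
        x, y = pts[-1]
        pts.append((x + math.cos(heading), y + math.sin(heading)))

    def c(i, j):  # species-dependent Lennard-Jones coefficient
        if seq[i] == seq[j]:
            return 1.0 if seq[i] == "A" else 0.5
        return -0.5

    n = len(seq)
    bend = sum((1 - math.cos(a)) / 4 for a in angles)
    lj = sum(
        4 * (r ** -12 - c(i, j) * r ** -6)
        for i in range(n - 2)
        for j in range(i + 2, n)
        for r in [math.dist(pts[i], pts[j])]
    )
    return bend + lj
```

A hybrid optimizer like the one described above would search over the vector of bend angles to minimize this energy.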
Hybrid Nested Partitions and Math Programming Framework for Large-scale Combinatorial Optimization
2010-03-31
optimization problems: 1) exact algorithms and 2) metaheuristic algorithms. This project will integrate concepts from these two technologies to develop...optimal solutions within an acceptable amount of computation time, and 2) metaheuristic algorithms such as genetic algorithms, tabu search, and the...integer programming decomposition approaches, such as Dantzig-Wolfe decomposition and Lagrangian relaxation, and metaheuristics such as the Nested
Li, Pei; Gan, Yibo; Wang, Haoming; Xu, Yuan; Song, Lei; Wang, Liyuan; Ouyang, Bin; Zhou, Qiang
2017-11-01
Various research models have been developed to study the biology of disc cells. Recently, the adult disc nucleus pulposus (NP) has been well studied. However, the immature NP is underinvestigated due to a lack of a suitable model. This study aimed to establish an organ culture of immature porcine disc by optimizing culture conditions and using a self-developed substance exchanger-based bioreactor. Immature porcine discs were first cultured in the bioreactor for 7 days at various levels of glucose (low, medium, high), osmolarity (hypo-, iso-, hyper-) and serum (5, 10, 20%) to determine the respective optimal level. The porcine discs were then cultured under the optimized conditions in the novel bioreactor, and were compared with fresh discs at day 14. For high-glucose, iso-osmolarity, or 10% serum, cell viability, the gene expression profile (for anabolic genes and catabolic genes), and glycosaminoglycan (GAG) and hydroxyproline (HYP) contents were more favorable than for other levels of glucose, osmolarity, and serum. When the immature discs were cultured under the optimized conditions using the novel bioreactor for 14 days, the viability of the immature NP was maintained based on histology, cell viability, GAG and HYP contents, and matrix molecule expression. In conclusion, the viability of the immature NP in organ culture could be maintained under the optimized culture conditions (high-glucose, iso-osmolarity, and 10% serum) in the substance exchanger-based bioreactor. © 2017 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Development of the PEBLebl Traveling Salesman Problem Computerized Testbed
ERIC Educational Resources Information Center
Mueller, Shane T.; Perelman, Brandon S.; Tan, Yin Yin; Thanasuan, Kejkaew
2015-01-01
The traveling salesman problem (TSP) is a combinatorial optimization problem that requires finding the shortest path through a set of points ("cities") that returns to the starting point. Because humans provide heuristic near-optimal solutions to Euclidean versions of the problem, it has sometimes been used to investigate human visual…
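A common heuristic baseline for such testbeds, and one simple model of "nearest city next" behavior, is the greedy nearest-neighbor construction. This is an assumption chosen for illustration, not the PEBL testbed's scoring method:

```python
import math

def nearest_neighbor_tour(cities):
    """Greedy nearest-neighbor TSP construction: start at the first city
    and repeatedly visit the closest unvisited one."""
    unvisited = list(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

print(nearest_neighbor_tour([(0, 0), (5, 5), (1, 0), (1, 1)]))  # -> [0, 2, 3, 1]
```

Nearest-neighbor tours are typically within tens of percent of optimal on Euclidean instances, which makes them a useful yardstick for human solutions.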
CombiROC: an interactive web tool for selecting accurate marker combinations of omics data.
Mazzara, Saveria; Rossi, Riccardo L; Grifantini, Renata; Donizetti, Simone; Abrignani, Sergio; Bombaci, Mauro
2017-03-30
Diagnostic accuracy can be improved considerably by combining multiple markers, whose performance in identifying diseased subjects is usually assessed via receiver operating characteristic (ROC) curves. The selection of multimarker signatures is a complicated process that requires integration of data signatures with sophisticated statistical methods. We developed a user-friendly tool, called CombiROC, to help researchers accurately determine optimal marker combinations from diverse omics methods. With CombiROC, data from different domains, such as proteomics and transcriptomics, can be analyzed using sensitivity/specificity filters: the number of candidate marker panels arising from combinatorial analysis is easily optimized, bypassing limitations imposed by the nature of different experimental approaches. Leaving the user full control over initial selection stringency, CombiROC computes sensitivity and specificity for all marker combinations, the performance of the best combinations, and ROC curves for automatic comparisons, all visualized in a graphic interface. CombiROC was designed without hard-coded thresholds, allowing a custom fit to each specific data set: this dramatically reduces the computational burden and lowers the false negative rates given by fixed thresholds. The application was validated with published data, confirming the marker combination already originally described or even finding new ones. CombiROC is a novel tool for the scientific community freely available at http://CombiROC.eu.
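The sensitivity/specificity filters applied to each marker combination reduce to simple confusion-matrix counts at a score threshold. An illustrative helper with made-up scores, not CombiROC code:

```python
def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity and specificity of a score-threshold classifier:
    a sample is called positive when its score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

scores = [0.9, 0.8, 0.4, 0.3, 0.2]   # e.g. a combined-marker score
labels = [1, 1, 1, 0, 0]             # 1 = diseased, 0 = healthy
print(sensitivity_specificity(scores, labels, 0.5))  # -> (0.6666666666666666, 1.0)
```

Sweeping the threshold and plotting (1 - specificity, sensitivity) pairs yields the ROC curve used to compare combinations.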
Combinatorial therapy discovery using mixed integer linear programming.
Pang, Kaifang; Wan, Ying-Wooi; Choi, William T; Donehower, Lawrence A; Sun, Jingchun; Pant, Dhruv; Liu, Zhandong
2014-05-15
Combinatorial therapies play increasingly important roles in combating complex diseases. Owing to the huge cost associated with experimental methods for identifying optimal drug combinations, computational approaches can provide a guide to limit the search space and reduce cost. However, few computational approaches have been developed for this purpose, and thus there is a great need for new algorithms for drug combination prediction. Here we propose to formulate the optimal combinatorial therapy problem as two complementary mathematical problems, Balanced Target Set Cover (BTSC) and Minimum Off-Target Set Cover (MOTSC). Given a disease gene set, BTSC seeks a balanced solution that maximizes coverage of the disease genes while minimizing off-target hits. MOTSC seeks full coverage of the disease gene set while minimizing the off-target set. Through simulation, both BTSC and MOTSC demonstrated much faster running times than exhaustive search with the same accuracy. When applied to real disease gene sets, our algorithms not only identified known drug combinations, but also predicted novel drug combinations that are worth further testing. In addition, we developed a web-based tool that allows users to iteratively search for optimal drug combinations given a user-defined gene set. Our tool is freely available for noncommercial use at http://www.drug.liuzlab.org/. Contact: zhandong.liu@bcm.edu. Supplementary data are available at Bioinformatics online.
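The MOTSC idea (cover every disease gene while keeping off-target hits low) can be sketched with a greedy heuristic. Note the assumptions: the drug and gene names below are hypothetical, and the paper solves the problem with mixed integer linear programming, not greedily; this is only an illustration of the objective.

```python
# Hypothetical drug-target sets; g* are disease genes, x* are off-targets.
disease_genes = {"g1", "g2", "g3", "g4"}
drug_targets = {
    "dA": {"g1", "g2", "x1"},
    "dB": {"g3", "x1", "x2"},
    "dC": {"g3", "g4"},
}

def greedy_motsc(disease, drugs):
    """Greedy heuristic for Minimum Off-Target Set Cover: repeatedly pick
    the drug covering the most uncovered disease genes, breaking ties in
    favor of fewer new off-target hits."""
    chosen, covered, off_targets = [], set(), set()
    while covered != disease:
        def score(drug):
            gain = len(drugs[drug] & (disease - covered))
            cost = len((drugs[drug] - disease) - off_targets)
            return (gain, -cost)
        candidates = [d for d in drugs if d not in chosen]
        if not candidates:
            break
        best = max(candidates, key=score)
        if score(best)[0] == 0:
            break  # no remaining drug adds disease-gene coverage
        chosen.append(best)
        covered |= drugs[best] & disease
        off_targets |= drugs[best] - disease
    return chosen, off_targets
```

On the toy data, the heuristic first picks `dC` (two disease genes, no off-targets) and then `dA`, covering all four disease genes with a single off-target hit.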
Construction of a scFv Library with Synthetic, Non-combinatorial CDR Diversity.
Bai, Xuelian; Shim, Hyunbo
2017-01-01
Many large synthetic antibody libraries have been designed and constructed, and have successfully generated high-quality antibodies suitable for various demanding applications. While synthetic antibody libraries have many advantages, such as optimized framework sequences and a broader sequence landscape than natural antibodies, their sequence diversity is typically generated by random combinatorial synthesis, which incorporates many undesired CDR sequences. Here, we describe the construction of a synthetic scFv library using oligonucleotide mixtures that contain predefined, non-combinatorially synthesized CDR sequences. Each CDR is first inserted into a master scFv framework sequence, and the resulting single-CDR libraries are subjected to a round of proofread panning. The proofread CDR sequences are then assembled to produce the final scFv library with six diversified CDRs.
Tag SNP selection via a genetic algorithm.
Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh
2010-10-01
Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for complex human diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, algorithms that construct full haplotype patterns from the small amount of available data (the tag SNP selection problem) are convenient and attractive. This problem is proved to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find a reasonable solution within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on a brute-force approach, results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.
ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms
NASA Astrophysics Data System (ADS)
Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François
2015-10-01
Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.
Efficient greedy algorithms for economic manpower shift planning
NASA Astrophysics Data System (ADS)
Nearchou, A. C.; Giannikos, I. C.; Lagodimos, A. G.
2015-01-01
Consideration is given to the economic manpower shift planning (EMSP) problem, an NP-hard capacity planning problem appearing in various industrial settings including the packing stage of production in process industries and maintenance operations. EMSP aims to determine the manpower needed in each available workday shift of a given planning horizon so as to complete a set of independent jobs at minimum cost. Three greedy heuristics are presented for the EMSP solution. These practically constitute adaptations of an existing algorithm for a simplified version of EMSP which had shown excellent performance in terms of solution quality and speed. Experimentation shows that the new algorithms perform very well in comparison to the results obtained by both the CPLEX optimizer and an existing metaheuristic. Statistical analysis is deployed to rank the algorithms in terms of their solution quality and to identify the effects that critical planning factors may have on their relative efficiency.
Guo, Hao; Fu, Jing
2013-01-01
Facility location, inventory control, and vehicle route scheduling are critical and highly related problems in the design of logistics systems for e-business. Meanwhile, the return ratio in Internet sales is significantly higher than in traditional business, and much of the returned merchandise has no quality defects and can re-enter sales channels after a simple repackaging process. Focusing on this problem in e-commerce logistics systems, we formulate a location-inventory-routing model with no-quality-defect returns. To solve this NP-hard problem, an effective hybrid genetic simulated annealing algorithm (HGSAA) is proposed. Results of numerical examples show that HGSAA outperforms GA in computing time, solution quality, and computing stability. The proposed model is very useful in helping managers make the right decisions in an e-supply chain environment. PMID:24489489
An improved genetic algorithm and its application in the TSP problem
NASA Astrophysics Data System (ADS)
Li, Zheng; Qin, Jinlei
2011-12-01
The concept and current research status of genetic algorithms are introduced in detail. The simple genetic algorithm and an improved algorithm are then described and applied to an example TSP instance, demonstrating the advantage of genetic algorithms in solving this NP-hard problem. In addition, based on the partial matching crossover operator, the crossover method is improved into an extended crossover operator to increase efficiency when solving the TSP. In the extended crossover method, crossover can be performed between random positions of two random individuals, unrestricted by chromosome position. Finally, the nine-city TSP is solved using the improved genetic algorithm with the extended crossover method; the solution process is much more efficient, and the optimal solution is found much faster.
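A partially matched crossover (PMX) for TSP tours with randomly chosen cut positions can be sketched as below. This is the standard PMX operator with random cuts; the paper's exact "extended" operator may differ in detail.

```python
import random

def pmx(p1, p2, i, j):
    """Partially matched crossover: copy p1[i:j] into the child, then
    place the remaining cities following p2's ordering, resolving
    conflicts through the PMX mapping so the result stays a valid tour."""
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    for k in range(i, j):
        city = p2[k]
        if city not in child[i:j]:
            pos = k
            while i <= pos < j:          # follow the mapping out of the slice
                pos = p2.index(p1[pos])
            child[pos] = city
    for k, city in enumerate(p2):        # fill the untouched positions
        if child[k] is None:
            child[k] = city
    return child

def extended_crossover(p1, p2):
    """Crossover between random positions of two individuals."""
    i, j = sorted(random.sample(range(len(p1) + 1), 2))
    return pmx(p1, p2, i, j)
```

Because the conflict-resolution mapping always places each city exactly once, the child is guaranteed to remain a valid permutation (tour) for any choice of cut points.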
Typical performance of approximation algorithms for NP-hard problems
NASA Astrophysics Data System (ADS)
Takabe, Satoshi; Hukushima, Koji
2016-11-01
Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds. In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicitly some examples for the difference in thresholds.
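Of the three algorithms named above, leaf removal is the simplest to sketch for minimum vertex cover: while some vertex has degree 1, put its unique neighbor into the cover and delete both; if a leaf-free core remains, the algorithm alone cannot finish. This is a minimal illustration, not the paper's analysis framework.

```python
from collections import defaultdict

def leaf_removal_cover(edges):
    """Leaf removal for minimum vertex cover: while some vertex has
    degree 1, put its unique neighbor into the cover and delete both.
    Returns the partial cover and the remaining leaf-free core."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cover = set()
    while True:
        leaf = next((v for v in adj if len(adj[v]) == 1), None)
        if leaf is None:
            break
        (hub,) = adj[leaf]              # the leaf's unique neighbor
        cover.add(hub)
        for w in list(adj[hub]):        # remove hub (and hence the leaf)
            adj[w].discard(hub)
            if not adj[w]:
                del adj[w]
        del adj[hub]
    core = set(adj)
    return cover, core
```

On a path graph the core empties and the cover is optimal; on a triangle, no leaf exists and the whole graph is returned as the core, matching the threshold behavior the abstract describes for dense graphs.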
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete (Wang and Jiang, 1994). Traditionally, point estimations of hypothetical ancestral sequences have been used to gain heuristic upper bounds on cladogram cost. These include procedures with such diverse approaches as non-additive optimization of multiple sequence alignment, direct optimization (Wheeler, 1996), and fixed-state character optimization (Wheeler, 1999). A method is proposed here which, by extending fixed-state character optimization, replaces the estimation process with a search. This form of optimization examines a diversity of potential state solutions for cost-efficient hypothetical ancestral sequences and can result in substantially more parsimonious cladograms. Additionally, such an approach can be applied to other NP-complete phylogenetic optimization problems such as genomic break-point analysis.
Directed differentiation of embryonic stem cells using a bead-based combinatorial screening method.
Tarunina, Marina; Hernandez, Diana; Johnson, Christopher J; Rybtsov, Stanislav; Ramathas, Vidya; Jeyakumar, Mylvaganam; Watson, Thomas; Hook, Lilian; Medvinsky, Alexander; Mason, Chris; Choo, Yen
2014-01-01
We have developed a rapid, bead-based combinatorial screening method to determine optimal combinations of variables that direct stem cell differentiation to produce known or novel cell types having pre-determined characteristics. Here we describe three experiments comprising stepwise exposure of mouse or human embryonic cells to 10,000 combinations of serum-free differentiation media, through which we discovered multiple novel, efficient and robust protocols to generate a number of specific hematopoietic and neural lineages. We further demonstrate that the technology can be used to optimize existing protocols in order to substitute costly growth factors with bioactive small molecules and/or increase cell yield, and to identify in vitro conditions for the production of rare developmental intermediates such as an embryonic lymphoid progenitor cell that has not previously been reported.
Liao, Chunli; Liu, Xiaobo
2016-03-01
With increasing anthropogenic nutrient loading, the frequency and impact of harmful algal blooms (HABs) have intensified in recent years. To biocontrol HABs, many algal-lysing bacteria have been isolated. However, few studies address effective preservation of algal-lysing cultures to prevent cell death and, in particular, degradation of their algicidal ability. We developed an optimized cryopreservation method and validated its ability to prevent algicidal degradation and maintain the survival rate of the Scenedesmus-lysing bacterium Enterobacter NP23, isolated from a Scenedesmus sp. community in China; its effect on the algicidal dynamics against Scenedesmus wuhanensis was also investigated. The optimized cryoprotectant consists of 30.0 g/L gelatin, 48.5 g/L sucrose, and 28.4 g/L glycerol. Using this approach, the survival rate of NP23 cells remains above 90 % and the algicidal rate declines by only 4 % after 18 months of cryopreservation. Moreover, a 16-generation passage experiment showed significant (p < 0.05) genetic stability of algicidal capacity after 18 months. The growth dynamics of S. wuhanensis were investigated in a 5-L bioreactor over 132 h in the absence or presence of NP23. NP23 significantly (p < 0.05) inhibited S. wuhanensis growth when injected into the algal culture in the exponential phase at the 60th hour. In addition, S. wuhanensis cultures started with NP23 grew slowly, showing a prolonged lag phase without a clear stationary phase, and then declined rapidly. These findings, combined with the method's capacity to prevent degradation of algicidal ability, suggest that this optimized cryopreservation is a promising strategy for maintaining algicidal cells.
NASA Astrophysics Data System (ADS)
Apriandanu, D. O. B.; Yulizar, Y.
2017-04-01
An environmentally friendly method for the green synthesis of gold nanoparticles (AuNP) using an aqueous leaf extract of Tinospora crispa (TLE) is reported. TLE acts as both reducing and capping agent for the AuNP. Active compounds in the extract were identified by phytochemical analysis and Fourier transform infrared spectroscopy (FTIR). AuNP-TLE growth was monitored using a UV-Vis spectrophotometer, and the particle size and distribution of the AuNP were confirmed by a particle size analyzer (PSA). AuNP-TLE formation was optimized by varying the extract concentration and the synthesis time. The UV-Vis absorption spectrum at optimum AuNP formation displayed a surface plasmon resonance at a maximum wavelength of λmax 536 nm. The PSA results showed that the AuNP have a size distribution of 80.60 nm and are stable for up to 21 days. TEM images showed that the size of the AuNP is about 25 nm.
Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel
Sakin, Sayef Azad; Alamri, Atif; Tran, Nguyen H.
2017-01-01
Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is an NP-hard one. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium has been presented. Performance improvements in terms of data-rate with a degree of fairness compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach have been shown through simulation studies. PMID:29215591
Self-Coexistence among IEEE 802.22 Networks: Distributed Allocation of Power and Channel.
Sakin, Sayef Azad; Razzaque, Md Abdur; Hassan, Mohammad Mehedi; Alamri, Atif; Tran, Nguyen H; Fortino, Giancarlo
2017-12-07
Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is an NP-hard one. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service and minimum interference for customer-premises-equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of networks are treated as players in a game, where they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium has been presented. Performance improvements in terms of data-rate with a degree of fairness compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach have been shown through simulation studies.
Anatomy of the Attraction Basins: Breaking with the Intuition.
Hernando, Leticia; Mendiburu, Alexander; Lozano, Jose A
2018-05-22
Solving combinatorial optimization problems efficiently requires the development of algorithms that consider the specific properties of the problems. In this sense, local search algorithms are designed over a neighborhood structure that partially accounts for these properties. Considering a neighborhood, the space is usually interpreted as a natural landscape, with valleys and mountains. Under this perception, it is commonly believed that, if maximizing, the solutions located in the slopes of the same mountain belong to the same attraction basin, with the peaks of the mountains being the local optima. Unfortunately, this is a widespread erroneous visualization of a combinatorial landscape. Thus, our aim is to clarify this aspect, providing a detailed analysis of, first, the existence of plateaus where the local optima are involved, and second, the properties that define the topology of the attraction basins, picturing a reliable visualization of the landscapes. Some of the features explored in this paper have never been examined before. Hence, new findings about the structure of the attraction basins are shown. The study is focused on instances of permutation-based combinatorial optimization problems considering the 2-exchange and the insert neighborhoods. As a consequence of this work, we break away from the extended belief about the anatomy of attraction basins.
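The basin-of-attraction computation underlying such analyses can be sketched on a toy permutation problem: run best-improvement local search under the 2-exchange neighborhood from every solution, and group solutions by the local optimum they reach. The objective below is hypothetical and chosen so that, for it, the search happens to have a single basin; real landscapes, as the abstract stresses, are far less intuitive.

```python
from itertools import permutations

def f(p):
    # Hypothetical objective to maximize; its unique local optimum
    # under 2-exchange is the identity permutation.
    return sum(i * v for i, v in enumerate(p))

def two_exchange_neighbors(p):
    """All permutations reachable by swapping two positions."""
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            q = list(p)
            q[i], q[j] = q[j], q[i]
            yield tuple(q)

def hill_climb(p):
    """Best-improvement local search; returns the local optimum reached."""
    while True:
        best = max(two_exchange_neighbors(p), key=f)
        if f(best) <= f(p):
            return p
        p = best

# The preimages of each local optimum form its attraction basin.
basins = {}
for p in permutations(range(4)):
    basins.setdefault(hill_climb(p), []).append(p)
```

Replacing `f` with a multimodal objective (e.g. a TSP tour length) splits `basins` into several entries whose shapes can then be inspected directly.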
In situ frustum indentation of nanoporous copper thin films
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ran; Pathak, Siddhartha; Mook, William M.
Mechanical properties of thin films are often obtained solely from nanoindentation. At the same time, such measurements are characterized by a substantial amount of uncertainty, especially when mean pressure or hardness is used to infer uniaxial yield stress. In this paper we demonstrate that indentation with a pyramidal flat-tip (frustum) indenter near the free edge of a sample can provide a significantly better estimate of the uniaxial yield strength than the frequently used Berkovich indenter. This is first demonstrated using a numerical model for a material with an isotropic pressure-sensitive yield criterion. Numerical simulations confirm that the indenter geometry provides a clear distinction of the mean pressure at which a material transitions to inelastic behavior. The mean critical pressure is highly dependent on the plastic Poisson ratio ν_p, so that at the 1% offset of normalized indent depth, the critical pressure p_mc normalized to the uniaxial yield strength σ_0 satisfies 1 < p_mc/σ_0 < 1.3 for materials with 0 < ν_p < 0.5. Choice of a frustum over a Berkovich indenter reduces uncertainty in hardness by a factor of 3. These results are used to interpret frustum indentation experiments on nanoporous (NP) copper with struts of typical diameter 45 nm. An estimate of the yield strength of NP copper of 230 MPa < σ_0 < 300 MPa is obtained. Edge indentation further allows one to obtain in-plane strain maps near the critical pressure. Finally, comparison of the experimentally obtained in-plane strain maps of NP Cu during deformation with the strain fields for different plastic Poisson ratios suggests that this material has a plastic Poisson ratio of the order of 0.2–0.3. However, existing constitutive models may not adequately capture post-yield behavior of NP metals.
Evidence for TiO2 nanoparticle transfer in a hard-rock aquifer.
Cary, Lise; Pauwels, Hélène; Ollivier, Patrick; Picot, Géraldine; Leroy, Philippe; Mougin, Bruno; Braibant, Gilles; Labille, Jérôme
2015-08-01
Water flow and TiO2 nanoparticle (NP) transfer in a fractured hard-rock aquifer were studied in a tracer test experiment at a pilot site in Brittany, France. Results from the Br tracer test show that the schist aquifer can be represented by a two-layer medium comprising i) fractures with low longitudinal dispersivity, in which water and solute transport is relatively fast, and ii) a network of small fissures with high longitudinal dispersivity, in which transport is slower. Although a large amount of NPs was retained within the aquifer, a significant TiO2 concentration was measured in a well 15 m downstream of the NP injection well, clearly confirming the potential for TiO2 NPs to be transported in groundwater. The Ti concentration profile in the downstream well was modelled using a two-layer medium approach. The delay of the TiO2 NP simulation relative to the Br concentration profiles in the downstream well indicates that the aggregated TiO2 NPs interacted with the rock. Unlike Br, NPs do not penetrate the entire pore network during transfer, owing to electrostatic interactions between NP aggregates and the rock as well as to the aggregate size and the hydrodynamic conditions, especially where the porosity is very low; NPs with a weak negative charge can attach to the rock surface, particularly onto the positively charged iron oxyhydroxides that coat the main pathways as a result of natural denitrification. Nevertheless, TiO2 NPs are mobile and transfer within both fracture and fissure media. Any modification of the aquifer's chemical conditions is likely to affect the groundwater pH, the nitrate content, and the denitrification process, and thus NP aggregation and attachment.
Abdulhamid, Shafi’i Muhammad; Abd Latiff, Muhammad Shafie; Abdul-Salaam, Gaddafi; Hussain Madni, Syed Hamid
2016-01-01
A cloud computing system is a huge cluster of interconnected servers residing in a datacenter and dynamically provisioned to clients on demand via a front-end interface. Scientific application scheduling in the cloud computing environment is identified as an NP-hard problem due to the dynamic nature of heterogeneous resources. Recently, a number of metaheuristic optimization schemes have been applied to address the challenges of application scheduling in the cloud system, without much emphasis on the issue of secure global scheduling. In this paper, a scientific application scheduling technique using the Global League Championship Algorithm (GBLCA) optimization technique is presented for global task scheduling in the cloud environment. The experiment is carried out using the CloudSim simulator. The experimental results show that the proposed GBLCA technique produced a remarkable improvement in makespan, ranging from 14.44% to 46.41%. It also shows a significant reduction in the time taken to securely schedule applications, measured in terms of the response time. In view of the experimental results, the proposed technique provides better-quality scheduling solutions for scientific application task execution in the cloud computing environment than the MinMin, MaxMin, Genetic Algorithm (GA) and Ant Colony Optimization (ACO) scheduling techniques. PMID:27384239
Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny
NASA Astrophysics Data System (ADS)
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice are fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable by prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
Deng, Qianwang; Gong, Guiliang; Gong, Xuran; Zhang, Like; Liu, Wei; Ren, Qinghua
2017-01-01
The flexible job-shop scheduling problem (FJSP) is an NP-hard problem that inherits the characteristics of the job-shop scheduling problem (JSP). This paper presents a bee evolutionary guiding nondominated sorting genetic algorithm II (BEG-NSGA-II) for multiobjective FJSP (MO-FJSP) with the objectives of minimizing the maximal completion time, the workload of the most loaded machine, and the total workload of all machines. It adopts a two-stage optimization mechanism. In the first stage, the NSGA-II algorithm with T iteration times is used to obtain the initial population N, in which a bee evolutionary guiding scheme is presented to exploit the solution space extensively. In the second stage, the NSGA-II algorithm with GEN iteration times is used again to obtain the Pareto-optimal solutions. To enhance the searching ability and avoid premature convergence, an updating mechanism is employed in this stage; more specifically, its population consists of three parts, each of which changes with the iteration times. Moreover, numerical simulations are carried out based on published benchmark instances. Finally, the effectiveness of the proposed BEG-NSGA-II algorithm is shown by comparing its experimental results with those of several well-known existing algorithms.
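The nondominated sorting at the heart of NSGA-II-style algorithms can be sketched as follows. This is a minimal, quadratic-per-front version for illustration, not the fast nondominated sort used in NSGA-II and not the BEG-NSGA-II code.

```python
def pareto_fronts(points):
    """Split objective vectors (minimization) into successive Pareto fronts:
    front 0 holds the nondominated points, front 1 those dominated only by
    front 0, and so on."""
    def dominates(a, b):
        # a dominates b if it is no worse in every objective and differs.
        return a != b and all(x <= y for x, y in zip(a, b))
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts
```

For the three objectives above (makespan, critical-machine workload, total workload), each individual's objective tuple would be ranked this way before selection.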
Wang, Peng; Zhang, Cheng; Liu, Hong-Wen; Xiong, Mengyi; Yin, Sheng-Yan; Yang, Yue; Hu, Xiao-Xiao; Yin, Xia; Zhang, Xiao-Bing; Tan, Weihong
2017-12-01
Fluorescence quantitative analyses for vital biomolecules are in great demand in biomedical science owing to their unique detection advantages with rapid, sensitive, non-damaging and specific identification. However, available fluorescence strategies for quantitative detection are usually hard to design and achieve. Inspired by supramolecular chemistry, a two-photon-excited fluorescent supramolecular nanoplatform (TPSNP) was designed for quantitative analysis with three parts: host molecules (β-CD polymers), a guest fluorophore of sensing probes (Np-Ad) and a guest internal reference (NpRh-Ad). In this strategy, the TPSNP possesses the merits of (i) improved water-solubility and biocompatibility; (ii) increased tissue penetration depth for bioimaging by two-photon excitation; (iii) quantitative and tunable assembly of functional guest molecules to obtain optimized detection conditions; (iv) a common approach to avoid the limitation of complicated design by adjustment of sensing probes; and (v) accurate quantitative analysis by virtue of reference molecules. As a proof-of-concept, we utilized the two-photon fluorescent probe NHS-Ad-based TPSNP-1 to realize accurate quantitative analysis of hydrogen sulfide (H2S), with high sensitivity and good selectivity in live cells, deep tissues and ex vivo-dissected organs, suggesting that the TPSNP is an ideal quantitative indicator for clinical samples. What's more, TPSNP will pave the way for designing and preparing advanced supramolecular sensors for biosensing and biomedicine.
Fast and Efficient Discrimination of Traveling Salesperson Problem Stimulus Difficulty
ERIC Educational Resources Information Center
Dry, Matthew J.; Fontaine, Elizabeth L.
2014-01-01
The Traveling Salesperson Problem (TSP) is a computationally difficult combinatorial optimization problem. In spite of its relative difficulty, human solvers are able to generate close-to-optimal solutions in a close-to-linear time frame, and it has been suggested that this is due to the visual system's inherent sensitivity to certain geometric…
NASA Astrophysics Data System (ADS)
Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.
2012-09-01
Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.
Limpoco, F Ted; Bailey, Ryan C
2011-09-28
We directly monitor in parallel and in real time the temporal profiles of polymer brushes simultaneously grown via multiple ATRP reaction conditions on a single substrate using arrays of silicon photonic microring resonators. In addition to probing relative polymerization rates, we show the ability to evaluate the dynamic properties of the in situ grown polymers. This presents a powerful new platform for studying modified interfaces that may allow for the combinatorial optimization of surface-initiated polymerization conditions.
Efficient search, mapping, and optimization of multi-protein genetic systems in diverse bacteria
Farasat, Iman; Kushwaha, Manish; Collens, Jason; Easterbrook, Michael; Guido, Matthew; Salis, Howard M
2014-01-01
Developing predictive models of multi-protein genetic systems to understand and optimize their behavior remains a combinatorial challenge, particularly when measurement throughput is limited. We developed a computational approach to build predictive models and identify optimal sequences and expression levels, while circumventing combinatorial explosion. Maximally informative genetic system variants were first designed by the RBS Library Calculator, an algorithm to design sequences for efficiently searching a multi-protein expression space across a > 10,000-fold range with tailored search parameters and well-predicted translation rates. We validated the algorithm's predictions by characterizing 646 genetic system variants, encoded in plasmids and genomes, expressed in six gram-positive and gram-negative bacterial hosts. We then combined the search algorithm with system-level kinetic modeling, requiring the construction and characterization of 73 variants to build a sequence-expression-activity map (SEAMAP) for a biosynthesis pathway. Using model predictions, we designed and characterized 47 additional pathway variants to navigate its activity space, find optimal expression regions with desired activity response curves, and relieve rate-limiting steps in metabolism. Creating sequence-expression-activity maps accelerates the optimization of many protein systems and allows previous measurements to quantitatively inform future designs. PMID:24952589
Mondal, Milon; Radeva, Nedyalka; Fanlo‐Virgós, Hugo; Otto, Sijbren; Klebe, Gerhard
2016-01-01
Fragment-based drug design (FBDD) affords active compounds for biological targets. While there are numerous reports on FBDD by fragment growing/optimization, fragment linking has rarely been reported. Dynamic combinatorial chemistry (DCC) has become a powerful hit-identification strategy for biological targets. We report the synergistic combination of fragment linking and DCC to identify inhibitors of the aspartic protease endothiapepsin. Based on X-ray crystal structures of endothiapepsin in complex with fragments, we designed a library of bis-acylhydrazones and used DCC to identify potent inhibitors. The most potent inhibitor exhibits an IC50 value of 54 nM, which represents a 240-fold improvement in potency compared to the parent hits. Subsequent X-ray crystallography validated the predicted binding mode, thus demonstrating the efficiency of the combination of fragment linking and DCC as a hit-identification strategy. This approach could be applied to a range of biological targets, and holds the potential to facilitate hit-to-lead optimization. PMID:27400756
Unprovable Security of Two-Message Zero Knowledge
2012-12-19
functions on NP-hardness. In STOC ’06, pages 701–710, 2006. [BCC88] Gilles Brassard, David Chaum, and Claude Crépeau. Minimum disclosure proofs... security of several primitives (e.g., Schnorr’s identification scheme, commitment schemes secure under weak notions of selective opening, Chaum blind
A Decomposition Approach for Shipboard Manpower Scheduling
2009-01-01
generalizes the bin-packing problem with no conflicts (BPP), which is known to be NP-hard (Garey and Johnson 1979). Hence our focus is to obtain a lower...to the BPP; while the so-called constrained packing lower bound also takes conflict constraints into account. Their computational study indicates
NASA Astrophysics Data System (ADS)
Khehra, Baljit Singh; Pharwaha, Amar Partap Singh
2017-04-01
Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selecting an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets is 2^n, so the selection problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from the benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier, and a support vector machine is used as the classifier. From the experimental results, it is observed that the PSO-based and BBO-based algorithms select an optimal subset of features for classifying MCCs as benign or malignant better than the GA-based algorithm.
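The wrapper-style search described in this abstract (a GA evolving binary feature masks, scored by a classifier's accuracy) can be sketched compactly. The sketch below is a toy illustration, not the paper's implementation: the SVM classification-rate fitness is replaced by a synthetic stand-in over 10 hypothetical features, three of which are assumed informative.

```python
import random

random.seed(1)

N_FEATURES = 10            # stand-in for the paper's 50 MCC features
INFORMATIVE = {0, 3, 7}    # hypothetical "useful" features for this toy fitness

def fitness(mask):
    # Toy stand-in for the paper's fitness (SVM classification rate):
    # reward informative features, lightly penalize subset size.
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.05 * sum(mask)

def select(pop):
    # binary tournament selection
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randrange(1, N_FEATURES)
    return p1[:cut] + p2[cut:]

def mutate(mask, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in mask]

def ga(pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = max(pop, key=fitness)          # keep the best mask each round
        pop = [elite] + [mutate(crossover(select(pop), select(pop)))
                         for _ in range(pop_size - 1)]
    return max(pop, key=fitness)

best = ga()
```

Swapping the toy `fitness` for a cross-validated classifier score turns this into the wrapper selection scheme the paper compares against PSO and BBO.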
Azimi, Sayyed M; Sheridan, Steven D; Ghannad-Rezaie, Mostafa; Eimon, Peter M; Yanik, Mehmet Fatih
2018-05-01
Identification of optimal transcription-factor expression patterns to direct cellular differentiation along a desired pathway presents significant challenges. We demonstrate massively combinatorial screening of temporally-varying mRNA transcription factors to direct differentiation of neural progenitor cells using a dynamically-reconfigurable magnetically-guided spotting technology for localizing mRNA, enabling experiments on millimetre-size spots. In addition, we present a time-interleaved delivery method that dramatically reduces fluctuations in the delivered transcription-factor copy-numbers per cell. We screened combinatorial and temporal delivery of a pool of midbrain-specific transcription factors to augment the generation of dopaminergic neurons. We show that the combinatorial delivery of LMX1A, FOXA2 and PITX3 is highly effective in generating dopaminergic neurons from midbrain progenitors. We show that LMX1A significantly increases TH-expression levels when delivered to neural progenitor cells either during proliferation or after induction of neural differentiation, while FOXA2 and PITX3 increase expression only when delivered prior to induction, demonstrating temporal dependence of factor addition. © 2018, Azimi et al.
Gatidou, Georgia; Thomaidis, Nikolaos S; Stasinakis, Athanasios S; Lekkas, Themistokles D
2007-01-05
An integrated analytical method for the simultaneous determination of 4-n-nonylphenol (4-n-NP), nonylphenol monoethoxylate (NP1EO), nonylphenol diethoxylate (NP2EO), bisphenol A (BPA) and triclosan (TCS) in wastewater (dissolved and particulate phase) and sewage sludge was developed based on gas chromatography-mass spectrometry. Chromatographic analysis was achieved after derivatization with bis(trimethylsilyl)trifluoroacetamide (BSTFA). Extraction from water samples was performed by solid-phase extraction (SPE). The optimization of SPE procedure included the type of sorbent and the type of the organic solvent used for the elution. Referred to solid samples, the target compounds were extracted by sonication. In this case the optimization of the extraction procedure included the variation of the amount of the extracted biomass, the duration and the temperature of sonication and the type of the extraction organic solvent. The developed extraction procedures resulted in good repeatability and reproducibility with relative standard deviations (RSDs) less than 13% for all the tested compounds for both types of samples. Satisfactory recoveries were obtained (>60%) for all the compounds in both liquid and solid samples, except for 4-n-NP, which gave recoveries up to 35% in wastewater samples and up to 63% in sludge samples. The limits of detection (LODs) of the target compounds varied from 0.03 (4-n-NP) to 0.41 microg l(-1) (NP2EO) and from 0.04 (4-n-NP) to 0.96 microg kg(-1) (NP2EO) for liquid and solid samples, respectively. The developed methods were successfully applied to the analysis of the target compounds in real samples.
Scheduling in the Face of Uncertain Resource Consumption and Utility
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Frank, Jeremy; Dearden, Richard
2003-01-01
We discuss the problem of scheduling tasks that consume a resource with known capacity and where the tasks have varying utility. We consider problems in which the resource consumption and utility of each activity is described by probability distributions. In these circumstances, we would like to find schedules that exceed a lower bound on the expected utility when executed. We first show that while some of these problems are NP-complete, others are only NP-hard. We then describe various heuristic search algorithms to solve these problems and their drawbacks. Finally, we present empirical results that characterize the behavior of these heuristics over a variety of problem classes.
Dynamical analysis of continuous higher-order hopfield networks for combinatorial optimization.
Atencia, Miguel; Joya, Gonzalo; Sandoval, Francisco
2005-08-01
In this letter, the ability of higher-order Hopfield networks to solve combinatorial optimization problems is assessed by means of a rigorous analysis of their properties. The stability of the continuous network is almost completely clarified: (1) hyperbolic interior equilibria, which are unfeasible, are unstable; (2) the state cannot escape from the unitary hypercube; and (3) a Lyapunov function exists. Numerical methods used to implement the continuous equation on a computer should be designed with the aim of preserving these favorable properties. The case of nonhyperbolic fixed points, which occur when the Hessian of the target function is the null matrix, requires further study. We prove that these nonhyperbolic interior fixed points are unstable in networks with three neurons and order two. The conjecture that interior equilibria are unstable in the general case is left open.
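A two-neuron toy version of the continuous dynamics analyzed in this letter can make the stability claims concrete. The weights and biases below are hypothetical and purely illustrative; the sketch only shows that the output state stays inside the unit hypercube and that the energy of the target function decreases along this trajectory (the network's full Lyapunov function additionally contains an integral term of the inverse activation, as in the standard continuous Hopfield analysis).

```python
import math

# Hypothetical two-neuron target: E(x) = -0.5 x^T W x - b.x
W = [[0.0, 1.0], [1.0, 0.0]]   # symmetric weights (assumed, for illustration)
b = [0.1, -0.1]

def sigmoid(v):
    return 1 / (1 + math.exp(-v))

def energy(x):
    quad = sum(W[i][j] * x[i] * x[j] for i in range(2) for j in range(2))
    return -0.5 * quad - sum(b[i] * x[i] for i in range(2))

def step(u, dt=0.05):
    # Continuous Hopfield dynamics: du_i/dt = sum_j W_ij g(u_j) + b_i - u_i,
    # with output x = g(u); x cannot escape the open unit hypercube.
    g = [sigmoid(v) for v in u]
    return [ui + dt * (sum(W[i][j] * g[j] for j in range(2)) + b[i] - ui)
            for i, ui in enumerate(u)]

u = [0.2, -0.3]
energies = []
for _ in range(400):
    u = step(u)
    x = [sigmoid(v) for v in u]
    energies.append(energy(x))
```

The explicit Euler step used here is one of the numerical methods the letter cautions about: the step size must be small enough to preserve the continuous system's favorable properties.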
Directed Differentiation of Embryonic Stem Cells Using a Bead-Based Combinatorial Screening Method
Tarunina, Marina; Hernandez, Diana; Johnson, Christopher J.; Rybtsov, Stanislav; Ramathas, Vidya; Jeyakumar, Mylvaganam; Watson, Thomas; Hook, Lilian; Medvinsky, Alexander; Mason, Chris; Choo, Yen
2014-01-01
We have developed a rapid, bead-based combinatorial screening method to determine optimal combinations of variables that direct stem cell differentiation to produce known or novel cell types having pre-determined characteristics. Here we describe three experiments comprising stepwise exposure of mouse or human embryonic cells to 10,000 combinations of serum-free differentiation media, through which we discovered multiple novel, efficient and robust protocols to generate a number of specific hematopoietic and neural lineages. We further demonstrate that the technology can be used to optimize existing protocols in order to substitute costly growth factors with bioactive small molecules and/or increase cell yield, and to identify in vitro conditions for the production of rare developmental intermediates such as an embryonic lymphoid progenitor cell that has not previously been reported. PMID:25251366
Combinatorial optimization problem solution based on improved genetic algorithm
NASA Astrophysics Data System (ADS)
Zhang, Peng
2017-08-01
Traveling salesman problem (TSP) is a classic combinatorial optimization problem and a simplified form of many more complex problems. Study of the algorithm shows that the parameters affecting the performance of a genetic algorithm mainly include the quality of the initial population, the population size, and the crossover and mutation probability values. Accordingly, an improved genetic algorithm for solving TSP is put forward. The population is graded according to individual similarity, and different operations are performed on different levels of individuals. In addition, an elitist retention strategy is adopted at each level, and the crossover and mutation operators are improved. Several experiments were designed to verify the feasibility of the algorithm. Analysis of the experimental results shows that the improved algorithm improves the accuracy and efficiency of the solution.
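The GA ingredients this abstract mentions (elitist retention, crossover and mutation on tours) can be illustrated with a minimal sketch on a hypothetical six-city instance. This is not the paper's improved algorithm; in particular it omits the similarity-based population grading, and the cities and parameters are invented for the example.

```python
import math
import random

random.seed(0)

# Hypothetical city coordinates (not from the paper's experiments)
CITIES = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 1.5), (1.5, 0.5)]

def tour_length(tour):
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def ordered_crossover(p1, p2):
    # OX crossover: copy a slice of p1, fill the remaining cities in p2's order
    a, b = sorted(random.sample(range(len(p1)), 2))
    hole = set(p1[a:b])
    fill = [c for c in p2 if c not in hole]
    return fill[:a] + p1[a:b] + fill[a:]

def mutate(tour, rate=0.2):
    tour = tour[:]
    if random.random() < rate:              # swap two cities
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]
    return tour

def ga(pop_size=40, generations=150):
    pop = [random.sample(range(len(CITIES)), len(CITIES))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)
        elite = pop[:2]                     # elitist retention
        parents = pop[:pop_size // 2]
        children = [mutate(ordered_crossover(*random.sample(parents, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=tour_length)

best = ga()
```

OX crossover is used because naive one-point crossover on permutations produces invalid tours; it always yields a valid permutation of the cities.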
On the Critical Behaviour, Crossover Point and Complexity of the Exact Cover Problem
NASA Technical Reports Server (NTRS)
Morris, Robin D.; Smelyanskiy, Vadim N.; Shumow, Daniel; Koga, Dennis (Technical Monitor)
2003-01-01
Research into quantum algorithms for NP-complete problems has rekindled interest in the detailed study of a broad class of combinatorial problems. A recent paper applied the quantum adiabatic evolution algorithm to the Exact Cover problem for 3-sets (EC3) and provided empirical evidence that the algorithm was polynomial. In this paper we provide a detailed study of the characteristics of the exact cover problem. We present the annealing approximation applied to EC3, which gives an over-estimate of the phase transition point, and we identify the phase transition point empirically. We also study the complexity of two classical algorithms on this problem: Davis-Putnam and Simulated Annealing. For these algorithms, EC3 is significantly easier than 3-SAT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shendruk, Tyler N., E-mail: tyler.shendruk@physics.ox.ac.uk; Bertrand, Martin; Harden, James L.
2014-12-28
Given the ubiquity of depletion effects in biological and other soft matter systems, it is desirable to have coarse-grained Molecular Dynamics (MD) simulation approaches appropriate for the study of complex systems. This paper examines the use of two common truncated Lennard-Jones (Weeks-Chandler-Andersen (WCA)) potentials to describe a pair of colloidal particles in a thermal bath of depletants. The shifted-WCA model is the steeper of the two repulsive potentials considered, while the combinatorial-WCA model is the softer. It is found that the depletion-induced well depth for the combinatorial-WCA model is significantly deeper than the shifted-WCA model because the resulting overlap of the colloids yields extra accessible volume for depletants. For both shifted- and combinatorial-WCA simulations, the second virial coefficients and pair potentials between colloids are demonstrated to be well approximated by the Morphometric Thermodynamics (MT) model. This agreement suggests that the presence of depletants can be accurately modelled in MD simulations by implicitly including them through simple, analytical MT forms for depletion-induced interactions. Although both WCA potentials are found to be effective generic coarse-grained simulation approaches for studying depletion effects in complicated soft matter systems, combinatorial-WCA is the more efficient approach as depletion effects are enhanced at lower depletant densities. The findings indicate that for soft matter systems that are better modelled by potentials with some compressibility, predictions from hard-sphere systems could greatly underestimate the magnitude of depletion effects at a given depletant density.
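The steeper of the two potentials, the shifted-WCA form, is the standard Lennard-Jones potential cut at its minimum and shifted up by ε so it is purely repulsive and continuous at the cutoff. The sketch below is the generic textbook form, not the paper's simulation code; the combinatorial-WCA variant differs in how ε and σ are combined between species and is not reproduced here.

```python
def lj(r, eps=1.0, sigma=1.0):
    # Standard 12-6 Lennard-Jones potential
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 * sr6 - sr6)

R_CUT = 2 ** (1 / 6)   # location of the LJ minimum, in units of sigma

def shifted_wca(r, eps=1.0, sigma=1.0):
    # LJ truncated at its minimum and shifted up by eps: purely repulsive,
    # zero at and beyond the cutoff r = 2^(1/6) * sigma.
    if r >= R_CUT * sigma:
        return 0.0
    return lj(r, eps, sigma) + eps
```

At contact (r = σ) the potential equals ε exactly, and it vanishes smoothly at the cutoff, which is why WCA is the usual choice for a soft repulsive core in coarse-grained MD.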
Carbon-coated nanoparticle superlattices for energy applications
NASA Astrophysics Data System (ADS)
Li, Jun; Yiliguma, Affa; Wang, Yifei; Zheng, Gengfeng
2016-07-01
Nanoparticle (NP) superlattices represent a unique material architecture for energy conversion and storage. Recent reports on carbon-coated NP superlattices have shown exciting electrochemical properties attributed to their rationally designed compositions and structures, fast electron transport, short diffusion length, and abundant reactive sites via enhanced coupling between close-packed NPs, which are distinctive from their isolated or disordered NP or bulk counterparts. In this minireview, we summarize the recent developments of highly-ordered and interconnected carbon-coated NP superlattices featuring high surface area, tailorable and uniform doping, high conductivity, and structure stability. We then introduce the precisely-engineered NP superlattices by tuning/studying specific aspects, including intermetallic structures, long-range ordering control, and carbon coating methods. In addition, these carbon-coated NP superlattices exhibit promising characteristics in energy-oriented applications, in particular, in the fields of lithium-ion batteries, fuel cells, and electrocatalysis. Finally, the challenges and perspectives are discussed to further explore the carbon-coated NP superlattices for optimized electrochemical performances.
Quantum versus simulated annealing in wireless interference network optimization.
Wang, Chi; Chen, Huo; Jonckheere, Edmond
2016-05-16
Quantum annealing (QA) serves as a specialized optimizer that is able to solve many NP-hard problems and that is believed to have a theoretical advantage over simulated annealing (SA) via quantum tunneling. With the introduction of the D-Wave programmable quantum annealer, a considerable amount of effort has been devoted to detecting and quantifying quantum speedup. While the debate over speedup remains inconclusive as of now, instead of attempting to show general quantum advantage, here we focus on a novel real-world application of D-Wave in wireless networking: more specifically, the scheduling of the activation of the air-links for maximum throughput subject to interference avoidance near network nodes. In addition, the D-Wave implementation is made error insensitive by a novel Hamiltonian extra penalty weight adjustment that enlarges the gap and substantially reduces the occurrence of interference violations resulting from inevitable spin bias and coupling errors. The major result of this paper is that quantum annealing benefits more than simulated annealing from this gap expansion process, both in terms of ST99 speedup and network queue occupancy. It is the hope that this could become a real-world application niche where potential benefits of quantum annealing could be objectively assessed.
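The penalty-Hamiltonian formulation described above can be illustrated with a small simulated-annealing sketch: links are activated to maximize throughput while a penalty weight discourages interfering pairs. The conflict graph, penalty weight, and cooling schedule below are hypothetical, not taken from the paper or from D-Wave's embedding.

```python
import math
import random

random.seed(7)

# Hypothetical conflict graph: nodes are air-links, edges are interference pairs
N = 8
CONFLICTS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0),
             (0, 4), (2, 6)]
PENALTY = 2.0   # analogous to the extra penalty weight on interference terms

def hamiltonian(active):
    # Maximize activated links (negative reward) subject to penalized conflicts
    conflicts = sum(1 for a, b in CONFLICTS if active[a] and active[b])
    return -sum(active) + PENALTY * conflicts

def simulated_annealing(steps=5000, t0=2.0, t1=0.01):
    state = [0] * N
    for k in range(steps):
        t = t0 * (t1 / t0) ** (k / steps)        # geometric cooling
        cand = state[:]
        cand[random.randrange(N)] ^= 1           # flip one link on/off
        d_e = hamiltonian(cand) - hamiltonian(state)
        if d_e <= 0 or random.random() < math.exp(-d_e / t):
            state = cand
    return state

best = simulated_annealing()
```

With PENALTY > 1, switching off either endpoint of a violated pair always lowers the energy, so low-temperature states converge to conflict-free (independent-set) schedules; enlarging the penalty weight is the classical analogue of the gap-expansion trick the paper studies.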
PMID:27181056
NASA Astrophysics Data System (ADS)
Kim, Hyo-Su; Kim, Dong-Hoi
The dynamic channel allocation (DCA) scheme in multi-cell systems causes a serious inter-cell interference (ICI) problem for some existing calls when channels for new calls are allocated. Such a problem can be addressed by an advanced centralized DCA design that is able to minimize ICI. Thus, in this paper, a centralized DCA is developed for the downlink of multi-cell orthogonal frequency division multiple access (OFDMA) systems with full spectral reuse. In practice, however, as the search space of channel assignment for a centralized DCA scheme in multi-cell systems grows exponentially with the number of required calls, channels, and cells, the problem is NP-hard and it is currently impractical to find an optimum channel allocation. In this paper, we propose an ant colony optimization (ACO) based DCA scheme using a low-complexity ACO algorithm, a kind of heuristic algorithm, in order to solve the aforementioned problem. Simulation results demonstrate significant performance improvements compared to the existing schemes in terms of the grade of service (GoS) performance and the forced termination probability of existing calls, without degrading the system performance of the average throughput.
Amoeba-Inspired Heuristic Search Dynamics for Exploring Chemical Reaction Paths.
Aono, Masashi; Wakabayashi, Masamitsu
2015-09-01
We propose a nature-inspired model for simulating chemical reactions in a computationally resource-saving manner. The model was developed by extending our previously proposed heuristic search algorithm, called "AmoebaSAT [Aono et al. 2013]," which was inspired by the spatiotemporal dynamics of a single-celled amoeboid organism that exhibits sophisticated computing capabilities in adapting to its environment efficiently [Zhu et al. 2013]. AmoebaSAT is used for solving an NP-complete combinatorial optimization problem [Garey and Johnson 1979], "the satisfiability problem," and finds a constraint-satisfying solution at a speed that is dramatically faster than one of the conventionally known fastest stochastic local search methods [Iwama and Tamaki 2004] for a class of randomly generated problem instances [ http://www.cs.ubc.ca/~hoos/5/benchm.html ]. In cases where the problem has more than one solution, AmoebaSAT exhibits dynamic transition behavior among a variety of the solutions. Inheriting these features of AmoebaSAT, we formulate "AmoebaChem," which explores a variety of metastable molecules in which several constraints determined by input atoms are satisfied and generates dynamic transition processes among the metastable molecules. AmoebaChem and its developed forms will be applied to the study of the origins of life, to discover reaction paths for which expected or unexpected organic compounds may be formed via unknown unstable intermediates and to estimate the likelihood of each of the discovered paths.
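AmoebaSAT itself is not reproduced here, but the class of stochastic local search methods it is benchmarked against can be sketched with a WalkSAT-style solver. The tiny 3-SAT instance below is a hand-made illustration, not one of the cited benchmark instances.

```python
import random

random.seed(3)

# Tiny satisfiable 3-SAT instance; literal k > 0 means variable k is true,
# k < 0 means variable k is false.
CLAUSES = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, 3), (1, 2, -3)]

def unsatisfied(assign, clauses):
    return [c for c in clauses
            if not any(assign[abs(l)] == (l > 0) for l in c)]

def walksat(clauses, n_vars=3, max_flips=1000, p=0.5):
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = unsatisfied(assign, clauses)
        if not unsat:
            return assign                    # constraint-satisfying solution
        clause = random.choice(unsat)
        if random.random() < p:              # random-walk move
            var = abs(random.choice(clause))
        else:                                # greedy move: best flip in clause
            var = min((abs(l) for l in clause),
                      key=lambda v: len(unsatisfied(
                          {**assign, v: not assign[v]}, clauses)))
        assign[var] = not assign[var]
    return None

model = walksat(CLAUSES)
```

The random-walk/greedy mix is the standard noise mechanism that lets such solvers escape local minima; AmoebaSAT replaces this stochastic escape with the amoeba-inspired spatiotemporal dynamics described in the abstract.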
Evolution, learning, and cognition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Y.C.
1988-01-01
The book comprises more than fifteen articles in the areas of neural networks and connectionist systems, classifier systems, adaptive network systems, genetic algorithm, cellular automata, artificial immune systems, evolutionary genetics, cognitive science, optical computing, combinatorial optimization, and cybernetics.
Development of New Sensing Materials Using Combinatorial and High-Throughput Experimentation
NASA Astrophysics Data System (ADS)
Potyrailo, Radislav A.; Mirsky, Vladimir M.
New sensors with improved performance characteristics are needed for applications as diverse as bedside continuous monitoring, tracking of environmental pollutants, monitoring of food and water quality, monitoring of chemical processes, and safety in industrial, consumer, and automotive settings. Typical requirements in sensor improvement are selectivity, long-term stability, sensitivity, response time, reversibility, and reproducibility. Design of new sensing materials is the important cornerstone in the effort to develop new sensors. Often, sensing materials are too complex to predict their performance quantitatively in the design stage. Thus, combinatorial and high-throughput experimentation methodologies provide an opportunity to generate new required data to discover new sensing materials and/or to optimize existing material compositions. The goal of this chapter is to provide an overview of the key concepts of experimental development of sensing materials using combinatorial and high-throughput experimentation tools, and to promote additional fruitful interactions between computational scientists and experimentalists.
Using Online Algorithms to Solve NP-Hard Problems More Efficiently in Practice
2007-12-01
bounds. For the openstacks, TPP, and pipesworld domains, our results were qualitatively different: most instances in these domains were either easy...between our results in these two sets of domains. For most instances in the openstacks domain we found no k values that elicited a “yes” answer in
The mechanisms of temporal inference
NASA Technical Reports Server (NTRS)
Fox, B. R.; Green, S. R.
1987-01-01
The properties of a temporal language are determined by its constituent elements: the temporal objects which it can represent, the attributes of those objects, the relationships between them, the axioms which define the default relationships, and the rules which define the statements that can be formulated. The methods of inference which can be applied to a temporal language are derived in part from a small number of axioms which define the meaning of equality and order and how those relationships can be propagated. More complex inferences involve detailed analysis of the stated relationships. Perhaps the most challenging area of temporal inference is reasoning over disjunctive temporal constraints. Simple forms of disjunction do not sufficiently increase the expressive power of a language while unrestricted use of disjunction makes the analysis NP-hard. In many cases a set of disjunctive constraints can be converted to disjunctive normal form and familiar methods of inference can be applied to the conjunctive sub-expressions. This process itself is NP-hard but it is made more tractable by careful expansion of a tree-structured search space.
A combinatorial approach to the design of vaccines.
Martínez, Luis; Milanič, Martin; Legarreta, Leire; Medvedev, Paul; Malaina, Iker; de la Fuente, Ildefonso M
2015-05-01
We present two new combinatorial optimization problems and discuss their applications to the computational design of vaccines. In the shortest λ-superstring problem, given a family S1,...,S(k) of strings over a finite alphabet, a set Τ of "target" strings over that alphabet, and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ of the target strings that occur as substrings of S(i). In the shortest λ-cover superstring problem, given a collection X1,...,X(n) of finite sets of strings over a finite alphabet and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ elements of X(i) as substrings. The two problems are polynomially equivalent, and the shortest λ-cover superstring problem is a common generalization of two well-known combinatorial optimization problems, the shortest common superstring problem and the set cover problem. We present two approaches to obtain exact or approximate solutions to the shortest λ-superstring and λ-cover superstring problems: one based on integer programming, and a hill-climbing algorithm. An application is given to the computational design of vaccines, and the algorithms are applied to experimental data taken from patients infected by H5N1 and HIV-1.
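The hill-climbing approach can be illustrated roughly as follows: start from a feasible concatenation and greedily delete characters while every X(i) still has at least λ members as substrings. This is a toy sketch under stated assumptions, not the authors' algorithm; all names are illustrative.

```python
import random

def covers(s, sets, lam):
    """True if, for each collection X_i, at least lam of its strings
    occur as substrings of s (the lambda-cover constraint)."""
    return all(sum(x in s for x in X) >= lam for X in sets)

def hill_climb(sets, lam, seed=0):
    """Toy hill-climbing for the shortest lambda-cover superstring:
    begin with the feasible concatenation of all strings, then delete
    characters one at a time while feasibility is preserved."""
    rng = random.Random(seed)
    s = "".join(x for X in sets for x in X)   # trivially feasible start
    improved = True
    while improved:
        improved = False
        order = list(range(len(s)))
        rng.shuffle(order)
        for i in order:
            t = s[:i] + s[i + 1:]
            if covers(t, sets, lam):
                s, improved = t, True
                break
    return s
```

A real implementation would also exploit overlaps between strings when building the starting solution, which this sketch deliberately omits.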
Integration of capillary electrophoresis with gold nanoparticle-based colorimetry.
Li, Tong; Wu, Zhenglong; Qin, Weidong
2017-12-01
A method integrating capillary electrophoresis (CE) and gold nanoparticle aggregation-based colorimetry (AuNP-ABC) was described. By using a dual-sheath interface, the running buffer was isolated from the colorimetric reaction solution so that CE and AuNP-ABC would not interfere with each other. The proof-of-concept was validated by assay of polyamidoamine (PAMAM) dendrimers that were fortified in human urine samples. The factors influencing the CE-AuNP-ABC performances were investigated and optimized. Under the optimal conditions, the dendrimers were separated within 8 min, with detection limits of 0.5, 1.2 and 2.6 μg mL⁻¹ for PAMAM G1.0, G2.0 and G3.0, respectively. The sensitivity of CE-AuNP-ABC was comparable to or even better than those of liquid chromatography-fluorimetry and liquid chromatography-mass spectrometry. The results suggested that the proposed strategy can be applied to facile and quick determination of analytes of similar properties in complex matrices. Copyright © 2017 Elsevier B.V. All rights reserved.
Onoda, Atsuto; Takeda, Ken; Umezawa, Masakazu
2018-09-01
Recent cohort studies have revealed that perinatal exposure to particulate air pollution, including carbon-based nanoparticles, increases the risk of brain disorders. Although developmental neurotoxicity is currently a major issue in the toxicology of nanoparticles, critical information for understanding the mechanisms underlying the developmental neurotoxicity of airway exposure to carbon black nanoparticle (CB-NP) is still lacking. In order to investigate these mechanisms, we comprehensively analyzed fluctuations in the gene expression profile of the frontal cortex of offspring mice exposed maternally to CB-NP, using microarray analysis combined with Gene Ontology information. We also analyzed differences in the enriched functions of genes dysregulated by maternal CB-NP exposure with and without ascorbic acid pretreatment to refine specific alterations in gene expression induced by CB-NP. A total of 652 and 775 genes were dysregulated by CB-NP in the frontal cortex of 6- and 12-week-old offspring mice, respectively. Among the genes dysregulated by CB-NP, those related to extracellular matrix structural constituent, cellular response to interferon-beta, muscle organ development, and cysteine-type endopeptidase inhibitor activity were ameliorated by ascorbic acid pretreatment. A large proportion of the dysregulated genes, categorized in hemostasis, growth factor, chemotaxis, cell proliferation, blood vessel, and dopaminergic neurotransmission, were, however, not ameliorated by ascorbic acid pretreatment. The lack of effects of ascorbic acid on the dysregulation of genes following maternal CB-NP exposure suggests that the contribution of oxidative stress to the effects of CB-NP on these biological functions, i.e., cell migration and proliferation, blood vessel maintenance, and the dopaminergic neuron system, may be limited. At the least, ascorbic acid pretreatment is unlikely to protect the brains of offspring from the developmental neurotoxicity of CB-NP.
The present study provides insight into the mechanisms underlying developmental neurotoxicity following maternal nanoparticle exposure. Copyright © 2018 Elsevier B.V. All rights reserved.
1997-01-01
create a dependency tree containing an optimum set of n-1 first-order dependencies. To do this, first, we select an arbitrary bit Xroot to place at the...the root to an arbitrary bit Xroot. - For all other bits Xi, set bestMatchingBitInTree[Xi] to Xroot. - While not all bits have been
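The excerpt above outlines a Prim-style construction. A minimal sketch consistent with it, in which `weights[i][j]` stands in for a symmetric pairwise dependency measure such as mutual information, might look like this (the function name and the use of index 0 as the root are assumptions):

```python
def build_dependency_tree(weights):
    """Prim-style dependency-tree construction matching the excerpt:
    pick an arbitrary root, track bestMatchingBitInTree for each bit
    still outside the tree, and repeatedly attach the outside bit with
    the heaviest edge into the tree."""
    n = len(weights)
    root = 0                                          # arbitrary X_root
    parent = {root: None}
    # bestMatchingBitInTree[i]: the in-tree bit best matched to bit i
    best = {i: root for i in range(n) if i != root}
    while best:
        i = max(best, key=lambda i: weights[i][best[i]])
        parent[i] = best.pop(i)                       # attach to the tree
        for j in best:                                # update best matches
            if weights[j][i] > weights[j][best[j]]:
                best[j] = i
    return parent                                     # child -> parent map
```

Each added edge is the strongest remaining dependency, so the result is a maximum-weight spanning tree of first-order dependencies.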
Interference Aware Routing Using Spatial Reuse in Wireless Sensor Networks
2013-12-01
practice there is no optimal STDMA algorithm due to the computational complexity of the STDMA implementation; therefore, the common approach is to...Applications, Springer Berlin Heidelberg, pp. 653–657, 2001. [26] B. Korte and J. Vygen, “Shortest Paths,” Combinatorial Optimization: Theory and...Naval Postgraduate School, Monterey, California. Thesis. Approved for public release; distribution is unlimited.
NASA Astrophysics Data System (ADS)
Rahman, P. A.
2018-05-01
This paper deals with a model of the knapsack optimization problem and a method for solving it based on directed combinatorial search in Boolean space. The author's specialized mathematical model for decomposing the search zone into separate search spheres, and the algorithm for distributing the search spheres across the cores of a multi-core processor, are also discussed. The paper provides an example of decomposing the search zone into several search spheres and distributing them across the cores of a quad-core processor. Finally, the author gives a formula for estimating the theoretical maximum computational acceleration achievable by parallelizing the search zone into search spheres over an unlimited number of processor cores.
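The decomposition idea can be illustrated by fixing the leading bits of the Boolean solution vector, with one "search sphere" per assignment of those bits; each sphere could then be handed to a separate core. This toy 0/1-knapsack sketch is an illustration under stated assumptions, not the author's actual model.

```python
from itertools import product

def solve_sphere(prefix, weights, values, capacity):
    """Exhaust one search sphere: all completions of a fixed bit prefix."""
    n, k = len(weights), len(prefix)
    best = 0
    for tail in product((0, 1), repeat=n - k):
        x = prefix + tail
        w = sum(wi for wi, xi in zip(weights, x) if xi)
        if w <= capacity:
            best = max(best, sum(vi for vi, xi in zip(values, x) if xi))
    return best

def parallel_knapsack(weights, values, capacity, fixed_bits=2):
    """Decompose the Boolean search zone into 2**fixed_bits spheres,
    one per assignment of the leading bits; here they are solved
    sequentially, but each call could run on its own core."""
    spheres = product((0, 1), repeat=fixed_bits)
    return max(solve_sphere(p, weights, values, capacity) for p in spheres)
```

Since the spheres are independent, the ideal speedup on enough cores is bounded by the number of spheres, which echoes the flavor of the acceleration estimate the paper derives.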
j5 DNA assembly design automation.
Hillson, Nathan J
2014-01-01
Modern standardized methodologies, described in detail in the previous chapters of this book, have enabled the software-automated design of optimized DNA construction protocols. This chapter describes how to design (combinatorial) scar-less DNA assembly protocols using the web-based software j5. j5 assists biomedical and biotechnological researchers in constructing DNA by automating the design of optimized protocols for flanking-homology as well as type IIS endonuclease-mediated DNA assembly methodologies. Unlike any other software tool available today, j5 designs scar-less combinatorial DNA assembly protocols, performs a cost-benefit analysis to identify which portions of an assembly process would be less expensive to outsource to a DNA synthesis service provider, and designs hierarchical DNA assembly strategies to mitigate anticipated poor assembly junction sequence performance. Software tools integrated with j5 add significant value to the j5 design process through graphical user-interface enhancements and downstream liquid-handling robotic laboratory automation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernal, Andrés; Patiny, Luc; Castillo, Andrés M.
2015-02-21
Nuclear magnetic resonance (NMR) assignment of small molecules is presented as a typical example of a combinatorial optimization problem in chemical physics. Three strategies that help improve the efficiency of the solution search by the branch-and-bound method are presented: 1. reduction of the size of the solution space by resort to a condensed structure formula, wherein symmetric nuclei are grouped together; 2. partitioning of the solution space based on symmetry, which becomes the basis for an efficient branching procedure; and 3. a criterion for the selection of input restrictions that leads to increased gaps between branches and thus faster pruning of non-viable solutions. Although the examples chosen to illustrate this work focus on small-molecule NMR assignment, the results are generic and might help solve other combinatorial optimization problems.
The complexity of divisibility.
Bausch, Johannes; Cubitt, Toby
2016-09-01
We address two sets of long-standing open questions in linear algebra and probability theory, from a computational complexity perspective: stochastic matrix divisibility, and divisibility and decomposability of probability distributions. We prove that finite divisibility of stochastic matrices is an NP-complete problem, and extend this result to nonnegative matrices, and completely-positive trace-preserving maps, i.e. the quantum analogue of stochastic matrices. We further prove a complexity hierarchy for the divisibility and decomposability of probability distributions, showing that finite distribution divisibility is in P, but decomposability is NP-hard. For the former, we give an explicit polynomial-time algorithm. All results on distributions extend to weak-membership formulations, proving that the complexity of these problems is robust to perturbations.
On Maximizing the Throughput of Packet Transmission under Energy Constraints.
Wu, Weiwei; Dai, Guangli; Li, Yan; Shan, Feng
2018-06-23
More and more Internet of Things (IoT) wireless devices have been providing ubiquitous services over recent years. Since most of these devices are powered by batteries, a fundamental trade-off to be addressed is between the energy depleted and the data throughput achieved in wireless data transmission. By exploiting the rate-adaptive capacities of wireless devices, most existing works on energy-efficient data transmission try to design rate-adaptive transmission policies to maximize the amount of transmitted data bits under the energy constraints of devices. Such solutions, however, cannot apply to scenarios where data packets have individual deadlines and only integrally transmitted data packets count. Thus, this paper introduces a notion of weighted throughput, which measures how much total value of data packets is successfully and integrally transmitted before the packets' own deadlines. By designing efficient rate-adaptive transmission policies, this paper aims to make the best use of the energy and maximize the weighted throughput. More challenging but of practical significance, we consider the fading effect of wireless channels in both offline and online scenarios. In the offline scenario, we develop an algorithm that computes the optimal solution in pseudo-polynomial time, which is the best possible since the problem undertaken is NP-hard. In the online scenario, we propose an efficient heuristic algorithm based on optimality properties derived for the offline solution. Simulation results validate the efficiency of the proposed algorithm.
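The pseudo-polynomial offline flavor of the problem can be illustrated with a value-maximizing dynamic program over discrete energy units: choose which packets to transmit integrally within the energy budget. Deadlines and channel fading, which the paper does handle, are deliberately omitted from this sketch, and all names are assumptions.

```python
def max_weighted_throughput(costs, values, energy):
    """0/1 dynamic program over discrete energy units: best[e] is the
    maximum total packet value achievable with energy budget e. Each
    packet is either transmitted integrally (paying its full energy
    cost) or dropped -- partial transmissions contribute nothing."""
    best = [0] * (energy + 1)
    for c, v in zip(costs, values):
        for e in range(energy, c - 1, -1):   # reverse scan: use packet once
            best[e] = max(best[e], best[e - c] + v)
    return best[energy]
```

The table has size proportional to the numeric energy budget rather than the input length, which is exactly why the running time is pseudo-polynomial.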
History of the Army Ground Forces. Study Number 13. Activation and Early Training of ’D’ Division
1948-06-30
Contents: Prefatory Note; Preactivation Period; Activation; Basic Training; The NP Test; Unit Training; Platoon...of the hard days ahead. "A leader must be fair; he must be interested in his men; above all, he must know his job. No man wants to follow a stumbler or...said, "Be sure you don't shoot me, Joe." Lieutenant O'Keefe did not have the best control of his squads as they advanced, but he worked hard, dashing
Jiménez-Moreno, Ester; Montalvillo-Jiménez, Laura; Santana, Andrés G; Gómez, Ana M; Jiménez-Osés, Gonzalo; Corzana, Francisco; Bastida, Agatha; Jiménez-Barbero, Jesús; Cañada, Francisco Javier; Gómez-Pinto, Irene; González, Carlos; Asensio, Juan Luis
2016-05-25
Development of strong and selective binders from promiscuous lead compounds represents one of the most expensive and time-consuming tasks in drug discovery. We herein present a novel fragment-based combinatorial strategy for the optimization of multivalent polyamine scaffolds as DNA/RNA ligands. Our protocol provides a quick access to a large variety of regioisomer libraries that can be tested for selective recognition by combining microdialysis assays with simple isotope labeling and NMR experiments. To illustrate our approach, 20 small libraries comprising 100 novel kanamycin-B derivatives have been prepared and evaluated for selective binding to the ribosomal decoding A-Site sequence. Contrary to the common view of NMR as a low-throughput technique, we demonstrate that our NMR methodology represents a valuable alternative for the detection and quantification of complex mixtures, even integrated by highly similar or structurally related derivatives, a common situation in the context of a lead optimization process. Furthermore, this study provides valuable clues about the structural requirements for selective A-site recognition.
Mondal, Milon; Radeva, Nedyalka; Fanlo-Virgós, Hugo; Otto, Sijbren; Klebe, Gerhard; Hirsch, Anna K H
2016-08-01
Fragment-based drug design (FBDD) affords active compounds for biological targets. While there are numerous reports on FBDD by fragment growing/optimization, fragment linking has rarely been reported. Dynamic combinatorial chemistry (DCC) has become a powerful hit-identification strategy for biological targets. We report the synergistic combination of fragment linking and DCC to identify inhibitors of the aspartic protease endothiapepsin. Based on X-ray crystal structures of endothiapepsin in complex with fragments, we designed a library of bis-acylhydrazones and used DCC to identify potent inhibitors. The most potent inhibitor exhibits an IC50 value of 54 nm, which represents a 240-fold improvement in potency compared to the parent hits. Subsequent X-ray crystallography validated the predicted binding mode, thus demonstrating the efficiency of the combination of fragment linking and DCC as a hit-identification strategy. This approach could be applied to a range of biological targets, and holds the potential to facilitate hit-to-lead optimization. © 2016 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
Smolensky, Paul; Goldrick, Matthew; Mathis, Donald
2014-08-01
Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance. Copyright © 2013 Cognitive Science Society, Inc.
Design of focused and restrained subsets from extremely large virtual libraries.
Jamois, Eric A; Lin, Chien T; Waldman, Marvin
2003-11-01
With the current and ever-growing offering of reagents, along with the vast palette of organic reactions, virtual libraries accessible to combinatorial chemists can reach sizes of billions of compounds or more. Extracting practical-size subsets for experimentation has remained an essential step in the design of combinatorial libraries. A typical approach to computational library design involves enumeration of structures and properties for the entire virtual library, which may be impractical for such large libraries. This study describes a new approach, termed on-the-fly optimization (OTFO), where descriptors are computed as needed within the subset optimization cycle and without intermediate enumeration of structures. Results reported herein highlight the advantages of coupling an ultra-fast descriptor calculation engine to subset optimization capabilities. We also show that enumeration of properties for the entire virtual library may be not only impractical but also wasteful. Successful design of focused and restrained subsets can be achieved while sampling only a small fraction of the virtual library. We also investigate the stability of the method and compare results obtained from simulated annealing (SA) and genetic algorithms (GA).
Ömeroğlu, Seçil; Murdoch, Fadime Kara; Sanin, F Dilek
2015-01-01
Nonylphenol ethoxylates (NPEOs) have drawn significant attention within the last decade for both scientific and legislative reasons. In Turkey, the Regulation Regarding the Use of Domestic and Urban Sludges on Land states a limit value for the sum of nonylphenol (NP), nonylphenol monoethoxylate (NP1EO) and nonylphenol diethoxylate (NP2EO), expressed as NPE (NPE = NP + NP1EO + NP2EO). Unfortunately, a standard method for the determination of these chemicals has not yet been set by the authorities, and no data exist on the concentrations of NP and NPEOs in sewage sludge in Turkey. The aim of this study is to propose simple and easily applicable extraction and measurement techniques for 4-n-nonylphenol (4-n-NP), NP, NP1EO and NP2EO in sewage sludge samples and to investigate the year-round concentrations in a metropolitan wastewater treatment plant (WWTP) in Turkey. Different extraction techniques and GC/MS methods for sewage sludge were tested. The best extraction method for these compounds was found to be ultrasonication (5 min) using acetone as the solvent. The optimized extraction method showed good repeatability, with relative standard deviations (RSDs) less than 6%, and recoveries of analytes within the acceptable limits suggested by USEPA and other studies. The limits of detection (LODs) were 6 µg kg⁻¹ for NP and NP1EO, 12 µg kg⁻¹ for NP2EO and 0.03 µg kg⁻¹ for 4-n-NP. The developed method was applied to sewage sludge samples obtained from the Central WWTP in Ankara, Turkey. The sum NPE (NP + NP1EO + NP2EO) was found to be between 5.5 µg kg⁻¹ and 19.5 µg kg⁻¹, values which are in compliance with Turkish and European regulations. Copyright © 2014 Elsevier B.V. All rights reserved.
Radiochemical determination of 237Np in soil samples contaminated with weapon-grade plutonium
NASA Astrophysics Data System (ADS)
Antón, M. P.; Espinosa, A.; Aragón, A.
2006-01-01
The Palomares terrestrial ecosystem (Spain) constitutes a natural laboratory for studying transuranics. The area has been partially contaminated with weapon-grade plutonium since the burn-up and fragmentation of two thermonuclear bombs accidentally dropped in 1966. While performing radiometric measurements in the field, we observed the possible presence of 237Np through its 29 keV gamma emission. To accomplish a detailed characterization of the source term in the contaminated area using the Pu-Am-Np isotopic ratios, the radiochemical isolation and quantification of 237Np by alpha spectrometry was initiated. The selected radiochemical procedure involves separation of Np from Am, U and Pu with ionic resins, given that in soil samples from Palomares the 239+240Pu levels are several orders of magnitude higher than those of 237Np. Neptunium is then isolated using TEVA organic resins. After electrodeposition, quantification is performed by alpha spectrometry. Different tests were done with blank solutions spiked with 236Pu and 237Np, with solutions resulting from the total dissolution of radioactive particles, and with soil samples. Results indicate that the optimal sequential radionuclide separation order is Pu-Np, with decontamination percentages obtained with the ionic resins ranging from 98% to 100%. Also, the addition of NaNO2 proved necessary, acting as a stabilizer of the Pu-Np valences.
Manipulating Tabu List to Handle Machine Breakdowns in Job Shop Scheduling Problems
NASA Astrophysics Data System (ADS)
Nababan, Erna Budhiarti; Sitompul, Opim Salim
2011-06-01
Machine breakdowns in a production schedule may occur on a random basis, making the well-known hard combinatorial problem of the Job Shop Scheduling Problem (JSSP) even more complex. One of the most popular techniques used to solve such combinatorial problems is Tabu Search. In this technique, moves that are not allowed to be revisited are retained in a tabu list in order to avoid regaining solutions that have been obtained previously. In this paper, we propose an algorithm that employs a second tabu list to keep broken machines, in addition to the tabu list that keeps the moves. The period for which the broken machines are kept on the list is categorized using a fuzzy membership function. Our technique is tested on the benchmark JSSP data available in the OR-Library. From the experiments, we found that our algorithm is promising in helping a decision maker handle machine breakdowns.
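The dual-list bookkeeping can be sketched as follows. This is an illustrative skeleton, not the authors' algorithm: the fuzzy membership function is reduced here to a crude severity-to-tenure mapping, and all names are assumptions.

```python
from collections import deque

class DualTabu:
    """Two tabu lists: a short-term list of recent moves, plus a second
    list of broken machines whose tenure counts down each iteration."""

    def __init__(self, move_tenure=7):
        self.moves = deque(maxlen=move_tenure)  # recent moves are tabu
        self.broken = {}                        # machine -> remaining tenure

    def tenure_for(self, severity):
        # stand-in for the fuzzy membership categorization of breakdowns
        return {"minor": 2, "moderate": 5, "major": 10}[severity]

    def machine_down(self, machine, severity):
        self.broken[machine] = self.tenure_for(severity)

    def allowed(self, move, machine):
        """A move is admissible if it is not tabu and its machine is up."""
        return move not in self.moves and machine not in self.broken

    def step(self, move):
        """Record the accepted move and age the broken-machine list."""
        self.moves.append(move)
        self.broken = {m: t - 1 for m, t in self.broken.items() if t > 1}
```

A surrounding Tabu Search loop would call `allowed` when scanning the neighborhood and `step` once per accepted move.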
Understanding the Kinetics of Protein-Nanoparticle Corona Formation.
Vilanova, Oriol; Mittag, Judith J; Kelly, Philip M; Milani, Silvia; Dawson, Kenneth A; Rädler, Joachim O; Franzese, Giancarlo
2016-12-27
When a pristine nanoparticle (NP) encounters a biological fluid, biomolecules spontaneously form adsorption layers around the NP, called the "protein corona". The corona composition depends on the time-dependent environmental conditions and determines the NP's fate within living organisms. Understanding how the corona evolves is fundamental in nanotoxicology as well as medical applications. However, studying corona formation is challenging due to the large number of molecules involved and to the large span of relevant time scales, ranging from 100 μs, hard to probe in experiments, to hours, out of reach of all-atom simulations. Here we combine experiments, simulations, and theory to study (i) the corona kinetics (over 10⁻³–10³ s) and (ii) its final composition for silica NPs in a model plasma made of three blood proteins (human serum albumin, transferrin, and fibrinogen). When computer simulations are calibrated by experimental protein-NP binding affinities measured in single-protein solutions, the theoretical model correctly reproduces competitive protein replacement as proven by independent experiments. When we change the order of administration of the three proteins, we observe a memory effect in the final corona composition that we can explain within our model. Our combined experimental and computational approach is a step toward the development of systematic prediction and control of protein-NP corona composition based on a hierarchy of equilibrium protein binding constants.
Azizi, Ebrahim; Namazi, Alireza; Haririan, Ismaeil; Fouladdel, Shamileh; Khoshayand, Mohammad R; Shotorbani, Parisa Y; Nomani, Alireza; Gazori, Taraneh
2010-01-01
Chitosan/alginate nanoparticles which had been optimized in our previous study using two different N/P ratios were chosen and their ability to release epidermal growth factor receptor (EGFR) antisense was investigated. In addition, the stability of these nanoparticles in aqueous medium and after freeze-drying was investigated. In the case of both N/P ratios (5, 25), nanoparticles started releasing EGFR antisense as soon as they were exposed to the medium and the release lasted for approximately 50 hours. Nanoparticle size, shape, zeta potential, and release profile did not show any significant change after the freeze-drying process (followed by reswelling). The nanoparticles were reswellable again after freeze-drying in phosphate buffer with a pH of 7.4 over a period of six hours. Agarose gel electrophoresis of the nanoparticles with the two different N/P ratios showed that these nanoparticles could protect EGFR antisense molecules for six hours. PMID:20957167
Modified kinetics of enzymes interacting with nanoparticles
NASA Astrophysics Data System (ADS)
Díaz, Sebastián. A.; Breger, Joyce C.; Malanoski, Anthony; Claussen, Jonathan C.; Walper, Scott A.; Ancona, Mario G.; Brown, Carl W.; Stewart, Michael H.; Oh, Eunkeu; Susumu, Kimihiro; Medintz, Igor L.
2015-08-01
Enzymes are important players in multiple applications, be it bioremediation, biosynthesis, or as reporters. The business of catalysis and inhibition of enzymes is a multibillion dollar industry and understanding the kinetics of commercial enzymes can have a large impact on how these systems are optimized. Recent advances in nanotechnology have opened up the field of nanoparticle (NP) and enzyme conjugates and two principal architectures for NP conjugate systems have been developed. In the first example the enzyme is bound to the NP in a persistent manner; here we find that key factors such as directed enzyme conjugation allow for enhanced kinetics. Through controlled comparative experiments we begin to tease out specific mechanisms that may account for the enhancement. The second system is based on dynamic interactions of the enzymes with the NP. The enzyme substrate is bound to the NP and the enzyme is free in solution. Here again we find that there are many variables, such as substrate positioning and NP selection, that modify the kinetics.
Single product lot-sizing on unrelated parallel machines with non-decreasing processing times
NASA Astrophysics Data System (ADS)
Eremeev, A.; Kovalyov, M.; Kuznetsov, P.
2018-01-01
We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and the lots have to be assigned to unrelated parallel machines for processing. In one version of the problem, the maximum machine completion time is to be minimized; in another, the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is assumed to be either continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small and are therefore not considered. We derive optimal polynomial-time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial-time approximation scheme. An application of the problem to energy-efficient processor scheduling is considered.
Capacity planning of a wide-sense nonblocking generalized survivable network
NASA Astrophysics Data System (ADS)
Ho, Kwok Shing; Cheung, Kwok Wai
2006-06-01
Generalized survivable networks (GSNs) have two interesting properties that are essential attributes for future backbone networks: full survivability against link failures and support for dynamic traffic demands. GSNs incorporate the nonblocking network concept into survivable network models. Given a set of nodes and a topology that is at least two-edge connected, a certain minimum capacity is required on each edge to form a GSN. The edge capacity is bounded because each node has an input-output capacity limit that serves as a constraint on any allowable traffic demand matrix. The GSN capacity planning problem is NP-hard. We first give a rigorous mathematical framework; then we offer two different solution approaches. The two-phase approach is fast, but the joint optimization approach yields a better bound. We carried out numerical computations for eight networks with different topologies and found that the cost of a GSN is only a fraction (from 52% to 89%) more than that of a static survivable network.
An approximation algorithm for the Noah's Ark problem with random feature loss.
Hickey, Glenn; Blanchette, Mathieu; Carmi, Paz; Maheshwari, Anil; Zeh, Norbert
2011-01-01
The phylogenetic diversity (PD) of a set of species is a measure of their evolutionary distinctness based on a phylogenetic tree. PD is increasingly being adopted as an index of biodiversity in ecological conservation projects. The Noah's Ark Problem (NAP) is an NP-hard optimization problem that abstracts a fundamental conservation challenge by asking to maximize the expected PD of a set of taxa given a fixed budget, where each taxon is associated with a cost of conservation and a probability of extinction. Only simplified instances of the problem, where one or more parameters are fixed as constants, have as yet been addressed in the literature. Furthermore, it has been argued that PD is not an appropriate metric for models that allow information to be lost along paths in the tree. We therefore generalize the NAP to incorporate a proposed model of feature loss according to an exponential distribution and term this problem NAP with Loss (NAPL). In this paper, we present a pseudopolynomial time approximation scheme for NAPL.
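The objective being maximized can be made concrete: under independent extinctions, each edge of the tree contributes its length times the probability that at least one taxon below it survives. The sketch below computes this expected PD for a given conservation outcome; it is an illustration of the classic NAP objective and does not model the exponential feature loss of NAPL. All names are assumptions.

```python
def expected_pd(edges, survive):
    """Expected phylogenetic diversity. `edges` is a list of
    (length, leaves_below) pairs; `survive` maps each taxon to its
    survival probability. An edge counts iff some taxon below it
    survives, so its expected contribution is
    length * (1 - prod(1 - p_leaf))."""
    total = 0.0
    for length, leaves in edges:
        p_all_lost = 1.0
        for leaf in leaves:
            p_all_lost *= 1.0 - survive[leaf]
        total += length * (1.0 - p_all_lost)
    return total
```

The NAP then asks which taxa's survival probabilities to boost (at a cost) so that this quantity is maximized within the budget.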
An Exact Algorithm to Compute the Double-Cut-and-Join Distance for Genomes with Duplicate Genes.
Shao, Mingfu; Lin, Yu; Moret, Bernard M E
2015-05-01
Computing the edit distance between two genomes is a basic problem in the study of genome evolution. The double-cut-and-join (DCJ) model has formed the basis for most algorithmic research on rearrangements over the last few years. The edit distance under the DCJ model can be computed in linear time for genomes without duplicate genes, while the problem becomes NP-hard in the presence of duplicate genes. In this article, we propose an integer linear programming (ILP) formulation to compute the DCJ distance between two genomes with duplicate genes. We also provide an efficient preprocessing approach to simplify the ILP formulation while preserving optimality. Comparison on simulated genomes demonstrates that our method outperforms MSOAR in computing the edit distance, especially when the genomes contain long duplicated segments. We also apply our method to assign orthologous gene pairs among human, mouse, and rat genomes, where once again our method outperforms MSOAR.
TemperSAT: A new efficient fair-sampling random k-SAT solver
NASA Astrophysics Data System (ADS)
Fang, Chao; Zhu, Zheng; Katzgraber, Helmut G.
The set membership problem is of great importance to many applications and, in particular, database searches for target groups. Recently, an approach to speed up set membership searches based on the NP-hard constraint-satisfaction problem (random k-SAT) has been developed. However, the bottleneck of the approach lies in finding the solution to a large SAT formula efficiently and, in particular, a large number of independent solutions is needed to reduce the probability of false positives. Unfortunately, traditional random k-SAT solvers such as WalkSAT are biased when seeking solutions to the Boolean formulas. By porting parallel tempering Monte Carlo to the sampling of binary optimization problems, we introduce a new algorithm (TemperSAT) whose performance is comparable to current state-of-the-art SAT solvers for large k with the added benefit that theoretically it can find many independent solutions quickly. We illustrate our results by comparing to the currently fastest implementation of WalkSAT, WalkSATlm.
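The porting of parallel tempering to Boolean formulas can be sketched with the energy defined as the number of unsatisfied clauses. The following is a minimal illustration of the generic technique, not the authors' TemperSAT implementation; the temperatures and all names are assumptions.

```python
import math
import random

def unsat(clauses, x):
    """Energy = number of unsatisfied clauses. A literal v > 0 asks
    x[v-1] to be True; v < 0 asks x[-v-1] to be False."""
    return sum(all(x[abs(v) - 1] != (v > 0) for v in c) for c in clauses)

def temper_sat(clauses, n, betas=(0.2, 1.0, 5.0), sweeps=2000, seed=1):
    """One replica per inverse temperature; single-bit Metropolis moves
    plus replica-exchange swaps between neighbouring temperatures."""
    rng = random.Random(seed)
    reps = [[rng.random() < 0.5 for _ in range(n)] for _ in betas]
    for _ in range(sweeps):
        for b, x in zip(betas, reps):        # Metropolis step per replica
            i = rng.randrange(n)
            e0 = unsat(clauses, x)
            x[i] = not x[i]
            e1 = unsat(clauses, x)
            if e1 > e0 and rng.random() >= math.exp(-b * (e1 - e0)):
                x[i] = not x[i]              # reject the uphill move
        for k in range(len(betas) - 1):      # replica-exchange step
            d = (betas[k + 1] - betas[k]) * (
                unsat(clauses, reps[k + 1]) - unsat(clauses, reps[k]))
            if rng.random() < math.exp(min(0.0, d)):
                reps[k], reps[k + 1] = reps[k + 1], reps[k]
    return min(reps, key=lambda x: unsat(clauses, x))
```

Hot replicas roam the landscape while cold ones settle into low-energy assignments; the exchange moves let configurations migrate between the two regimes, which is what helps sample diverse solutions.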
NASA Astrophysics Data System (ADS)
Mirabi, Mohammad; Fatemi Ghomi, S. M. T.; Jolai, F.
2014-04-01
The flow-shop scheduling problem (FSP) deals with the scheduling of a set of n jobs that visit a set of m machines in the same order. As the FSP is NP-hard, there is no known efficient algorithm to reach the optimal solution of the problem. To minimize the holding, delay and setup costs of large permutation flow-shop scheduling problems with sequence-dependent setup times on each machine, this paper develops a novel hybrid genetic algorithm (HGA) with three genetic operators. The proposed HGA applies a modified approach to generate a pool of initial solutions, and also uses an improved heuristic called the iterated swap procedure to improve the initial solutions. We consider a make-to-order production approach in which some job sequences are treated as tabu based on a maximum allowable setup cost. In addition, the results are compared to some recently developed heuristics, and computational experiments show that the proposed HGA performs very competitively with respect to accuracy and efficiency of solution.
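The iterated-swap idea can be illustrated on the plain permutation flow shop: repeatedly try pairwise job swaps and keep any that reduce the makespan. A hedged sketch (the paper's procedure and its setup-cost structure are richer; the processing-time data here are invented):

```python
import itertools

def makespan(perm, proc):
    """Permutation flow-shop makespan; proc[j][k] = time of job j on machine k."""
    m = len(proc[0])
    c = [0] * m  # completion time of the last scheduled job on each machine
    for j in perm:
        for k in range(m):
            # job j starts on machine k when both the machine and the job are free
            c[k] = max(c[k], c[k - 1] if k else 0) + proc[j][k]
    return c[-1]

def iterated_swap(perm, proc):
    """Apply improving pairwise swaps until no swap shortens the makespan."""
    best, best_c = list(perm), makespan(perm, proc)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(best)), 2):
            cand = list(best)
            cand[i], cand[j] = cand[j], cand[i]
            c = makespan(cand, proc)
            if c < best_c:
                best, best_c, improved = cand, c, True
    return best, best_c
```

On a toy 3-job, 2-machine instance this local search already recovers the optimal sequence; in an HGA it would be applied to the GA's offspring rather than run standalone.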
NASA Astrophysics Data System (ADS)
Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter
2018-05-01
This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, a restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics. They outperform their original versions and the benchmarked methods, and are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
Optimal placement of tuning masses on truss structures by genetic algorithms
NASA Technical Reports Server (NTRS)
Ponslet, Eric; Haftka, Raphael T.; Cudney, Harley H.
1993-01-01
Optimal placement of tuning masses, actuators and other peripherals on large space structures is a combinatorial optimization problem. This paper surveys several techniques for solving this problem. The genetic algorithm approach to the solution of the placement problem is described in detail. An example of minimizing the difference between the two lowest frequencies of a laboratory truss by adding tuning masses is used for demonstrating some of the advantages of genetic algorithms. The relative efficiencies of different codings are compared using the results of a large number of optimization runs.
Performance evaluation of coherent Ising machines against classical neural networks
NASA Astrophysics Data System (ADS)
Haribara, Yoshitaka; Ishikawa, Hitoshi; Utsunomiya, Shoko; Aihara, Kazuyuki; Yamamoto, Yoshihisa
2017-12-01
The coherent Ising machine is expected to find near-optimal solutions to various combinatorial optimization problems, which has been experimentally confirmed with optical parametric oscillators and a field-programmable gate array circuit. Similar mathematical models were proposed three decades ago by Hopfield et al in the context of classical neural networks. In this article, we compare the computational performance of both models.
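The classical side of this comparison, Hopfield-style dynamics, can be sketched as asynchronous sign updates that monotonically decrease the Ising energy E = -½ Σᵢⱼ Jᵢⱼ sᵢ sⱼ (for symmetric J with zero diagonal). This is only the discrete neural-network model, not a coherent Ising machine, and the coupling matrix below is an invented toy:

```python
import random

def energy(J, s):
    """Ising energy E = -1/2 * sum_ij J_ij s_i s_j."""
    n = len(s)
    return -0.5 * sum(J[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n))

def hopfield_descend(J, seed=0):
    """Asynchronous sign updates; each update never increases the energy."""
    rng = random.Random(seed)
    n = len(J)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            h = sum(J[i][j] * s[j] for j in range(n) if j != i)  # local field
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i], changed = new, True
    return s, energy(J, s)
```

For an antiferromagnetic pair (J₀₁ = J₁₀ = -1) the dynamics settles into the anti-aligned ground state at energy -1.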
Lessel, Uta; Wellenzohn, Bernd; Fischer, J Robert; Rarey, Matthias
2012-02-27
A case study is presented illustrating the design of a focused CDK2 library. The scaffold of the library was detected by a feature trees search in a fragment space based on reactions from combinatorial chemistry. For the design the software LoFT (Library optimizer using Feature Trees) was used. The special feature called FTMatch was applied to restrict the parts of the queries where the reagents are permitted to match. This way a 3D scoring function could be simulated. Results were compared with alternative designs by GOLD docking and ROCS 3D alignments.
Silva, A L; Rosalia, R A; Sazak, A; Carstens, M G; Ossendorp, F; Oostendorp, J; Jiskoot, W
2013-04-01
Overlapping synthetic long peptides (SLPs) hold great promise for immunotherapy of cancer. Poly(lactic-co-glycolic acid) (PLGA) nanoparticles (NPs) are being developed as delivery systems to improve the potency of peptide-based therapeutic cancer vaccines. Our aim was to optimize PLGA NP for SLP delivery with respect to encapsulation and release, using OVA24, a 24-residue long synthetic antigenic peptide covering a CTL epitope of ovalbumin (SIINFEKL), as a model antigen. Peptide-loaded PLGA NPs were prepared by a double emulsion/solvent evaporation technique. Using standard conditions (acidic inner aqueous phase), we observed that either encapsulation was very low (1-30%), or burst release extremely high (>70%) upon resuspension of NP in physiological buffers. By adjusting formulation and process parameters, we uncovered that the pH of the first emulsion was critical to efficient encapsulation and controlled release. In particular, an alkaline inner aqueous phase resulted in circa 330 nm sized NP with approximately 40% encapsulation efficiency and low (<10%) burst release. These NP showed enhanced MHC class I restricted T cell activation in vitro when compared to high-burst releasing NP and soluble OVA24, proving that efficient entrapment of the antigen is crucial to induce a potent cellular immune response. Copyright © 2012 Elsevier B.V. All rights reserved.
Chang, Bea-Ven; Chang, Yi-Ming
2016-04-01
The toxic chemicals bisphenol A (BPA), bisphenol F (BPF), nonylphenol (NP), and tetrabromobisphenol A (TBBPA) are endocrine-disrupting chemicals that have consequently drawn much concern regarding their effect on the environment. The objectives of this study were to investigate the degradation of BPA, BPF, NP, and TBBPA by enzymes from Pleurotus eryngii in submerged fermentation (SmF) and solid-state fermentation (SSF), and also to assess the removal of toxic chemicals in spent mushroom compost (SMC). BPA and BPF were analyzed by high-performance liquid chromatography; NP and TBBPA were analyzed by gas chromatography. NP degradation was enhanced by adding CuSO4 (1 mM), MnSO4 (0.5 mM), gallic acid (1 mM), tartaric acid (20 mM), citric acid (20 mM), guaiacol (1 mM), or 2,2'-azino-bis- (3-ethylbenzothiazoline-6-sulfonic acid; 1 mM), with the last yielding a higher NP degradation rate than the other additives from SmF. The optimal conditions for enzyme activity from SSF were a sawdust/wheat bran ratio of 1:4 and a moisture content of 5 mL/g. The enzyme activities were higher with sawdust/wheat bran than with sawdust/rice bran. The optimal conditions for the extraction of enzyme from SMC required using sodium acetate buffer (pH 5.0, solid/solution ratio 1:5), and extraction over 3 hours. The removal rates of toxic chemicals by P. eryngii, in descending order of magnitude, were SSF > SmF > SMC. The removal rates were BPF > BPA > NP > TBBPA. Copyright © 2014. Published by Elsevier B.V.
Decorated Heegaard Diagrams and Combinatorial Heegaard Floer Homology
NASA Astrophysics Data System (ADS)
Hammarsten, Carl
Heegaard Floer homology is a collection of invariants for closed oriented three-manifolds, introduced by Ozsvath and Szabo in 2001. The simplest version is defined as the homology of a chain complex coming from a Heegaard diagram of the three-manifold. In the original definition, the differentials count the number of points in certain moduli spaces of holomorphic disks, which are hard to compute in general. More recently, Sarkar and Wang (2006) and Ozsvath, Stipsicz and Szabo (2009) have determined combinatorial methods for computing this homology with Z2 coefficients. Both methods rely on the construction of very specific Heegaard diagrams for the manifold, which are generally very complicated. Given a decorated Heegaard diagram H for a closed oriented 3-manifold Y, that is, a Heegaard diagram together with a collection of embedded paths satisfying certain criteria, we describe a combinatorial recipe for a chain complex CF'[special character omitted](H). If H satisfies some technical constraints, we show that this chain complex is homotopy equivalent to the Heegaard Floer chain complex CF[special character omitted](H) and hence has the Heegaard Floer homology HF[special character omitted](Y) as its homology groups. Using branched spines, we give an algorithm to construct a decorated Heegaard diagram which satisfies the necessary technical constraints for every closed oriented 3-manifold Y. We present this diagram graphically in the form of a strip diagram.
NASA Astrophysics Data System (ADS)
Yan, Zongkai; Zhang, Xiaokun; Li, Guang; Cui, Yuxing; Jiang, Zhaolian; Liu, Wen; Peng, Zhi; Xiang, Yong
2018-01-01
Conventional wet-process methods for designing and preparing thin films remain challenging due to drawbacks such as being time-consuming and inefficient, which hinders the development of novel materials. Herein, we present a high-throughput combinatorial technique for continuous thin film preparation based on chemical bath deposition (CBD). The method is ideally suited to preparing high-throughput combinatorial material libraries with low decomposition temperatures and high water- or oxygen-sensitivity at relatively high temperature. To validate this system, a Cu(In, Ga)Se (CIGS) thin film library doped with 0-19.04 at.% of antimony (Sb) was taken as an example to evaluate the effect of varying Sb doping concentration on the grain growth, structure, morphology and electrical properties of CIGS thin films systematically. Combined with Energy Dispersive Spectrometry (EDS), X-ray Photoelectron Spectroscopy (XPS), automated X-ray Diffraction (XRD) for rapid screening, and Localized Electrochemical Impedance Spectroscopy (LEIS), it was confirmed that this combinatorial high-throughput system can identify the composition with the optimal grain orientation growth, microstructure and electrical properties by accurately monitoring the doping content and material composition. Based on the characterization results, an Sb2Se3 quasi-liquid-phase-promoted CIGS film-growth model is put forward. Beyond the CIGS thin films reported here, combinatorial CBD could also be applied to the high-throughput screening of other sulfide thin film material systems.
New physics with the lepton flavor violating decay τ → 3μ
NASA Astrophysics Data System (ADS)
Calcuttawala, Zaineb; Kundu, Anirban; Nandi, Soumitra; Patra, Sunando Kumar
2018-05-01
Lepton flavor violating (LFV) processes are a smoking-gun signal of new physics (NP). If the semileptonic B decay anomalies are indeed due to some NP, such operators can potentially lead to LFV decays involving the second and third generation leptons, like τ → 3μ. In this paper, we explore how far the nature of NP can be unraveled at next-generation B-factories like Belle-II, provided the decay τ → 3μ has been observed. We use four observables with which the differentiation among NP operators may be achieved to a high confidence level. The possible presence of multiple NP operators is also analyzed with the optimal observable technique. While the analysis can be improved even further if the final-state muon polarizations are measured, we present this work as a motivational tool for the experimentalists, as well as a template for the analysis of similar processes.
An Introduction to Simulated Annealing
ERIC Educational Resources Information Center
Albright, Brian
2007-01-01
An attempt to model the physical process of annealing led to the development of a type of combinatorial optimization algorithm that addresses the problem of getting trapped in a local minimum. The author presents a Microsoft Excel spreadsheet that illustrates how this works.
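The mechanism the article illustrates, accepting an occasional uphill move with probability exp(-Δ/T) while the temperature T is slowly lowered, can be sketched outside the spreadsheet as follows (the objective function, cooling rate and step size are arbitrary choices for this example):

```python
import math
import random

def simulated_annealing(f, x0, t0=5.0, cooling=0.995, step=2.0,
                        n_steps=4000, seed=1):
    """Minimize f by Metropolis moves on a geometrically cooled temperature."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best, fbest = x, fx
    for _ in range(n_steps):
        y = x + rng.gauss(0, step)        # propose a random neighbor
        fy = f(y)
        # downhill moves always accepted; uphill with probability exp(-delta/t)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                      # geometric cooling schedule
    return best, fbest

# double-well objective: local minimum near x = 2, global minimum near x = -2
f = lambda x: (x * x - 4) ** 2 + x
```

Started in the worse well at x = 2, the uphill-acceptance rule lets the search cross the barrier at x = 0 and settle near the global minimum, which greedy descent would never do.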
Chemical Compound Design Using Nuclear Charge Distributions
2012-03-01
Finding optimal solutions to design problems in chemistry is hampered by the combinatorially large search space. We develop a general theoretical framework for finding chemical compounds with prescribed properties using nuclear charge distributions. The key is the reformulation of the design
NASA Astrophysics Data System (ADS)
Tang, Dunbing; Dai, Min
2015-09-01
Traditional production planning and scheduling problems consider performance indicators like time, cost and quality as optimization objectives in manufacturing processes. However, environmentally-friendly factors like the energy consumption of production have not been fully taken into consideration. Against this background, this paper addresses an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor, where machine tools can work at different cutting speeds. It adjusts the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption in the job shop floor while retaining the optimal value of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution, since the problem is strongly NP-hard. Finally, the effectiveness of the approach is evaluated on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while retaining the optimal makespan in small-size instances; the average maximum energy saving ratio can reach 13%. It can also save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting a near-optimal makespan in large-size instances. The proposed research provides an interesting starting point for exploring energy-aware schedule optimization for traditional production planning and scheduling problems.
Stability of Solutions to Classes of Traveling Salesman Problems.
Niendorf, Moritz; Kabamba, Pierre T; Girard, Anouck R
2016-04-01
By performing stability analysis on an optimal tour for problems belonging to classes of the traveling salesman problem (TSP), this paper derives margins of optimality for a solution with respect to disturbances in the problem data. Specifically, we consider the asymmetric sequence-dependent TSP, where the sequence dependence is driven by the dynamics of a stack. This is a generalization of the symmetric, non-sequence-dependent version of the TSP. Furthermore, we also consider the symmetric sequence-dependent variant and the asymmetric non-sequence-dependent variant. Among others, these problems have applications in logistics and unmanned aircraft mission planning. Changing external conditions such as traffic or weather may alter task costs, which can render an initially optimal itinerary suboptimal. Instead of optimizing the itinerary every time task costs change, stability criteria allow for fast evaluation of whether itineraries remain optimal. This paper develops a method to compute stability regions for the best tour in a set of tours for the symmetric TSP and extends the results to the asymmetric problem as well as their sequence-dependent counterparts. As the TSP is NP-hard, heuristic methods are frequently used to solve it. The presented approach is also applicable to analyzing stability regions for a tour obtained through application of the k-opt heuristic with respect to the k-neighborhood. A dimensionless criticality metric for edges is proposed, such that a high criticality of an edge indicates that the optimal tour is more susceptible to cost changes in that edge. Multiple examples demonstrate the application of the developed stability computation method as well as the edge criticality measure, which facilitates an intuitive assessment of instances of the TSP.
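The k-opt neighborhood discussed above is easiest to see for k = 2: a 2-opt move deletes two tour edges and reconnects by reversing the intermediate segment, and is kept when it shortens the tour. A minimal Euclidean sketch (the generic heuristic only, not the paper's stability analysis; the city coordinates are invented):

```python
import math

def tour_length(pts, tour):
    """Total length of a closed tour over 2-D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts, tour):
    """Repeatedly reverse segments while any reversal shortens the tour."""
    tour = list(tour)
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            # skip the pair of edges adjacent around the wrap when i == 0
            for j in range(i + 2, n if i > 0 else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # cost change of replacing edges (a,b),(c,d) by (a,c),(b,d)
                delta = (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                         - math.dist(pts[a], pts[b]) - math.dist(pts[c], pts[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

On a unit square, the crossing tour 0-2-1-3 is uncrossed by a single 2-opt move into the optimal perimeter tour of length 4.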
Denora, Nunzio; Lopedota, Angela; Perrone, Mara; Laquintana, Valentino; Iacobazzi, Rosa M; Milella, Antonella; Fanizza, Elisabetta; Depalo, Nicoletta; Cutrignelli, Annalisa; Lopalco, Antonio; Franco, Massimo
2016-10-01
This work describes N-acetylcysteine (NAC)- and glutathione (GSH)-glycol chitosan (GC) polymer conjugates engineered as a potential platform for formulating micro- (MP) and nano- (NP) particles via spray-drying techniques. These conjugates are mucoadhesive over the range of urine pH, 5.0-7.0, which makes them advantageous for intravesical drug delivery and the treatment of local bladder diseases. NAC- and GSH-GC conjugates were generated with a synthetic approach that optimizes reaction times and purification in order to minimize the oxidation of thiol groups. In this way, the resulting amounts of free thiol groups immobilized per gram of NAC- and GSH-GC conjugates were 6.3 and 3.6 mmol, respectively. These polymers were fully characterized by molecular weight, surface sulfur content, solubility at different pH values, and degree of substitution and swelling. Mucoadhesion properties were evaluated in artificial urine by turbidimetric and zeta (ζ)-potential measurements, demonstrating good mucoadhesion, in particular for NAC-GC at pH 5.0. Starting from the thiolated polymers, MP and NP were prepared using the Büchi B-191 and Nano Büchi B-90 spray dryers, respectively. The resulting two formulations were evaluated for yield, size, oxidation of thiol groups and ex-vivo mucoadhesion. The new spray-drying technique provided NP of suitable size (<1 μm) for catheter administration, a low degree of oxidation, and sufficient mucoadhesion, with 9% and 18% of GSH- and NAC-GC based NP retained on pig bladder mucosa after 3 h of exposure, respectively. The aim of the present study was first to optimize the synthesis of NAC-GC and GSH-GC, and to preserve the oxidation state of the thiol moieties, by introducing several optimizations of the previously reported synthetic procedures that increase the mucoadhesive properties and avoid pH-dependent aggregation.
Second, starting from these optimized thiomers, we studied the feasibility of manufacturing MP and NP by spray-drying techniques. The aim of this second step was to produce mucoadhesive drug delivery systems of adequate size for vesical administration by catheter, and comparable mucoadhesive properties with respect to the processed polymers, avoiding thiolic oxidation during the formulation. MP with acceptable size produced by spray-dryer Büchi B-191 were compared with NP made with the apparatus Nano Büchi B-90. Copyright © 2016 Acta Materialia Inc. All rights reserved.
Optimization Strategies for Sensor and Actuator Placement
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; Kincaid, Rex K.
1999-01-01
This paper provides a survey of actuator and sensor placement problems from a wide range of engineering disciplines and a variety of applications. Combinatorial optimization methods are recommended as a means for identifying sets of actuators and sensors that maximize performance. Several sample applications from NASA Langley Research Center, such as active structural acoustic control, are covered in detail. Laboratory and flight tests of these applications indicate that actuator and sensor placement methods are effective and important. Lessons learned in solving these optimization problems can guide future research.
Artificial immune algorithm for multi-depot vehicle scheduling problems
NASA Astrophysics Data System (ADS)
Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling
2008-10-01
In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery costs or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as exact algorithms (EA), one-stage approaches (OSA), two-phase heuristic methods (TPHM), tabu search algorithms (TSA), genetic algorithms (GA) and hierarchical multiplex structures (HIMS). Most of the methods mentioned above are time consuming and run a high risk of ending in a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.
Interactive machine learning for health informatics: when do we need the human-in-the-loop?
Holzinger, Andreas
2016-06-01
Machine learning (ML) is the fastest growing field in computer science, and health informatics is among its greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain we are sometimes confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet in wide use, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and assistance of a human agent involved in the learning phase.
Characterization of the probabilistic traveling salesman problem.
Bowler, Neill E; Fink, Thomas M A; Ball, Robin C
2003-09-01
We show that stochastic annealing can be successfully applied to gain new results on the probabilistic traveling salesman problem. The probabilistic "traveling salesman" must decide on an a priori order in which to visit n cities (randomly distributed over a unit square) before learning that some cities can be omitted. We find the optimized average length of the pruned tour follows E(L_pruned) = sqrt(np) (0.872 - 0.105p) f(np), where p is the probability of a city needing to be visited, and f(np) → 1 as np → ∞. The average length of the a priori tour (before omitting any cities) is found to follow E(L_a priori) = sqrt(n/p) β(p), where β(p) = 1/[1.25 - 0.82 ln(p)] is measured for 0.05 ≤ p ≤ 0.6. Scaling arguments and indirect measurements suggest that β(p) tends towards a constant for p < 0.03. Our stochastic annealing algorithm is based on limited sampling of the pruned tour lengths, exploiting the sampling error to provide the analog of thermal fluctuations in simulated (thermal) annealing. The method has general application to the optimization of functions whose cost to evaluate rises with the precision required.
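The fitted scaling laws quoted above are easy to evaluate numerically. A small helper, taking f(np) ≈ 1 (which the abstract justifies only for large np) and remembering that β(p) was measured only for 0.05 ≤ p ≤ 0.6, so values outside that range are extrapolations:

```python
import math

def expected_pruned_length(n, p):
    """E(L_pruned) ≈ sqrt(n*p) * (0.872 - 0.105*p), taking f(np) ≈ 1."""
    return math.sqrt(n * p) * (0.872 - 0.105 * p)

def expected_a_priori_length(n, p):
    """E(L_a_priori) ≈ sqrt(n/p) * beta(p), beta(p) = 1/(1.25 - 0.82*ln p)."""
    return math.sqrt(n / p) / (1.25 - 0.82 * math.log(p))
```

At p = 1 (every city must be visited, i.e. the deterministic TSP, outside the measured β range) both formulas reduce to simple constants times sqrt(n).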
Ghaedi, M; Ansari, A; Bahari, F; Ghaedi, A M; Vafaei, A
2015-02-25
In the present study, zinc sulfide nanoparticles loaded on activated carbon (ZnS-NP-AC) were synthesized in the presence of ultrasound and characterized using different techniques such as SEM and BET analysis. This material was then used for brilliant green (BG) removal. The dependency of the BG removal percentage on various parameters including pH, adsorbent dosage, initial dye concentration and contact time was examined and optimized. The mechanism and rate of adsorption were ascertained by fitting experimental data at various times to conventional kinetic models such as the pseudo-first-order, pseudo-second-order, Elovich and intra-particle diffusion models. Comparison according to general criteria such as the relative error in adsorption capacity and the correlation coefficient confirms the suitability of the pseudo-second-order kinetic model for explaining the data. The Langmuir model efficiently explains the behavior of the adsorption system and gives full information about the interaction of BG with ZnS-NP-AC. A multiple linear regression (MLR) model and a hybrid artificial neural network and particle swarm optimization (ANN-PSO) model were used for the prediction of brilliant green adsorption onto ZnS-NP-AC. Comparison of the results confirms the higher ability of the ANN model compared to the MLR model for the prediction of BG adsorption onto ZnS-NP-AC. Using the optimal ANN-PSO model, the coefficients of determination (R²) were 0.9610 and 0.9506, and the mean squared error (MSE) values were 0.0020 and 0.0022, for the training and testing data sets, respectively. Copyright © 2014 Elsevier B.V. All rights reserved.
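The pseudo-second-order model that best described the kinetics has the linearized form t/qₜ = 1/(k·qₑ²) + t/qₑ, so the equilibrium capacity qₑ and rate constant k fall out of an ordinary least-squares line in (t, t/qₜ). A generic sketch with synthetic data, not the study's measurements:

```python
def fit_pseudo_second_order(times, qt):
    """Fit t/qt = 1/(k*qe**2) + t/qe by least squares.
    Slope = 1/qe, intercept = 1/(k*qe**2)."""
    xs = list(times)
    ys = [t / q for t, q in zip(times, qt)]     # linearized response t/qt
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    qe = 1.0 / slope
    k = 1.0 / (intercept * qe ** 2)
    return qe, k
```

On noise-free data generated from qₜ = k·qₑ²·t / (1 + k·qₑ·t) the fit recovers the generating qₑ and k exactly, which is a useful sanity check before applying it to measured uptake curves.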
2013-01-01
Background: Phylogeny estimation from aligned haplotype sequences has attracted more and more attention in recent years due to its importance in the analysis of many fine-scale genetic data. Its application fields range from medical research, to drug discovery, to epidemiology, to population dynamics. The literature on molecular phylogenetics proposes a number of criteria for selecting a phylogeny from among plausible alternatives. Usually, such criteria can be expressed by means of objective functions, and the phylogenies that optimize them are referred to as optimal. One of the most important estimation criteria is parsimony, which states that the optimal phylogeny T∗ for a set H of n haplotype sequences over a common set of variable loci is the one that satisfies the following requirements: (i) it has the shortest length and (ii) it is such that, for each pair of distinct haplotypes hi,hj∈H, the sum of the edge weights belonging to the path from hi to hj in T∗ is not smaller than the observed number of changes between hi and hj. Finding the most parsimonious phylogeny for H involves solving an optimization problem, called the Most Parsimonious Phylogeny Estimation Problem (MPPEP), which is NP-hard in many of its versions. Results: In this article we investigate a recent version of the MPPEP that arises when input data consist of single nucleotide polymorphism haplotypes extracted from a population of individuals on a common genomic region. Specifically, we explore the prospects for improving on the implicit enumeration strategy used in previous work, using a novel problem formulation and a series of strengthening valid inequalities and preliminary symmetry breaking constraints to more precisely bound the solution space and accelerate implicit enumeration of possible optimal phylogenies. We present the basic formulation and then introduce a series of provably valid constraints to reduce the solution space.
We then prove that these constraints can often lead to significant reductions in the gap between the optimal solution and its non-integral linear programming bound relative to the prior art, as well as often substantially faster processing of moderately hard problem instances. Conclusion: We provide an indication of the conditions under which such an optimal enumeration approach is likely to be feasible, suggesting that these strategies are usable for relatively large numbers of taxa, although with stricter limits on numbers of variable sites. The work thus provides methodology suitable for provably optimal solution of some harder instances that resist all prior approaches. PMID:23343437
NASA Astrophysics Data System (ADS)
Islam, Mohammad; Khalid, Yasir; Ahmad, Iftikhar; Almajid, Abdulhakim A.; Achour, Amine; Dunn, Theresa J.; Akram, Aftab; Anwar, Saqib
2018-04-01
Silicon carbide (SiC) nanoparticles (NP) and/or graphene nanoplatelets (GNP) were incorporated into the aluminum matrix through colloidal dispersion and mixing of the powders, followed by consolidation using a high-frequency induction heat sintering process. All the nanocomposite samples exhibited high densification (> 96 pct) with a maximum increase in Vickers microhardness by 92 pct relative to that of pure aluminum. The tribological properties of the samples were determined at the normal frictional forces of 10 and 50 N. At relatively low load of 10 N, the adhesive wear was found to be the predominant wear mechanism, whereas in the case of a 50 N normal load, there was significant contribution from abrasive wear possibly by hard SiC NP. From wear tests, the values for the coefficient of friction (COF) and the normalized wear rate were determined. The improvement in hardness and wear resistance may be attributed to multiple factors, including high relative density, uniform SiC and GNP dispersion in the aluminum matrix, grain refinement through GNP pinning, as well as inhibition of dislocation movement by SiC NP. The nanocomposite sample containing 10 SiC and 0.5 GNP (by wt pct) yielded the maximum wear resistance at 10 N normal load. Microstructural characterization of the nanocomposite surfaces and wear debris was performed using scanning electron microscope (SEM) and transmission electron microscope (TEM). The synergistic effect of the GNP and SiC nanostructures accounts for superior wear resistance in the aluminum matrix nanocomposites.
Comparative modeling of InP solar cell structures
NASA Technical Reports Server (NTRS)
Jain, R. K.; Weinberg, I.; Flood, D. J.
1991-01-01
The comparative modeling of p(+)n and n(+)p indium phosphide solar cell structures is studied using the numerical program PC-1D. The optimal design study predicted that the p(+)n structure offers improved cell efficiency compared with the n(+)p structure, owing to higher open-circuit voltage. The cell material and process parameters required to achieve maximum cell efficiency are reported. The effect of some of the cell parameters on InP cell I-V characteristics was studied. The available radiation resistance data on n(+)p and p(+)n InP solar cells are also critically discussed.
Latimer, Luke N; Dueber, John E
2017-06-01
A common challenge in metabolic engineering is rapidly identifying rate-controlling enzymes in heterologous pathways for subsequent production improvement. We demonstrate a workflow to address this challenge and apply it to improving xylose utilization in Saccharomyces cerevisiae. For eight reactions required for conversion of xylose to ethanol, we screened enzymes for functional expression in S. cerevisiae, followed by a combinatorial expression analysis to achieve pathway flux balancing and identification of limiting enzymatic activities. In the next round of strain engineering, we increased the copy number of these limiting enzymes and again tested the eight-enzyme combinatorial expression library in this new background. This workflow yielded a strain that has a ∼70% increase in biomass yield and ∼240% increase in xylose utilization. Finally, we chromosomally integrated the expression library. This library enriched for strains with multiple integrations of the pathway, which likely were the result of tandem integrations mediated by promoter homology. Biotechnol. Bioeng. 2017;114: 1301-1309. © 2017 Wiley Periodicals, Inc.
Ito, Yoichiro; Yamanishi, Mamoru; Ikeuchi, Akinori; Imamura, Chie; Matsuyama, Takashi
2015-01-01
Combinatorial screening used together with a broad library of gene expression cassettes is expected to produce a powerful tool for the optimization of the simultaneous expression of multiple enzymes. Recently, we proposed a highly tunable protein expression system that utilized multiple genome-integrated target genes to fine-tune enzyme expression in yeast cells. This tunable system included a library of expression cassettes each composed of three gene-expression control elements that in different combinations produced a wide range of protein expression levels. In this study, four gene expression cassettes with graded protein expression levels were applied to the expression of three cellulases: cellobiohydrolase 1, cellobiohydrolase 2, and endoglucanase 2. After combinatorial screening for transgenic yeasts simultaneously secreting these three cellulases, we obtained strains with higher cellulase expressions than a strain harboring three cellulase-expression constructs within one high-performance gene expression cassette. These results show that our method will be of broad use throughout the field of metabolic engineering. PMID:26692026
Combinatorial Strategies for the Development of Bulk Metallic Glasses
NASA Astrophysics Data System (ADS)
Ding, Shiyan
The systematic identification of multi-component alloys within the vast composition space is still a daunting task, especially in the development of bulk metallic glasses, which are typically based on three or more elements. To address this challenge, combinatorial approaches have been proposed. However, previous attempts have not successfully coupled the synthesis of combinatorial libraries with high-throughput characterization methods. The goal of my dissertation is to develop efficient high-throughput characterization methods optimized to identify glass formers systematically. Here, two approaches have been developed. One is to measure the nucleation temperature in parallel for up to 800 compositions; the composition with the lowest nucleation temperature agrees reasonably well with the best-known glass-forming composition. In addition, the thermoplastic formability of a metallic glass forming system is determined by blow molding a compositional library. Our results reveal that the composition with the largest thermoplastic deformation correlates well with the best-known formability composition. I have demonstrated both methods as powerful tools for developing new bulk metallic glasses.
Bioengineering Strategies for Designing Targeted Cancer Therapies
Wen, Xuejun
2014-01-01
The goals of bioengineering strategies for targeted cancer therapies are (1) to deliver a high dose of an anticancer drug directly to a cancer tumor, (2) to enhance drug uptake by malignant cells, and (3) to minimize drug uptake by nonmalignant cells. Effective cancer-targeting therapies will require both passive- and active targeting strategies and a thorough understanding of physiologic barriers to targeted drug delivery. Designing a targeted therapy includes the selection and optimization of a nanoparticle delivery vehicle for passive accumulation in tumors, a targeting moiety for active receptor-mediated uptake, and stimuli-responsive polymers for control of drug release. The future direction of cancer targeting is a combinatorial approach, in which targeting therapies are designed to use multiple targeting strategies. The combinatorial approach will enable combination therapy for delivery of multiple drugs and dual ligand targeting to improve targeting specificity. Targeted cancer treatments in development and the new combinatorial approaches show promise for improving targeted anticancer drug delivery and improving treatment outcomes. PMID:23768509
Sukkeaw, Wittawat; Kritpet, Thanomwong; Bunyaratavej, Narong
2015-09-01
To compare the effects of aerobic dance training on a mini-trampoline and on a hard wooden surface on bone resorption, health-related physical fitness, balance, and foot plantar pressure in Thai working women. Sixty-three female volunteers aged 35-45 years participated in the study and were divided into 3 groups: A) aerobic dance on mini-trampoline (21 females), B) aerobic dance on hard wooden surface (21 females), and C) control group (21 females). All subjects in the aerobic dance groups wore heart rate monitors during exercise. The aerobic dance groups worked out 3 times a week, 40 minutes a day, for 12 weeks. The intensity was set at 60-80% of the maximum heart rate. The control group engaged in routine physical activity. The collected data were bone formation (N-terminal propeptide of procollagen type I: P1NP), bone resorption (telopeptide cross-linked: β-CrossLaps), health-related physical fitness, balance, and foot plantar pressure. The data obtained from pre- and post-training were compared and analyzed by paired-samples t-test and one-way analysis of covariance. Significance was set at the 0.05 level. After the 12-week training, the biochemical bone markers of both the mini-trampoline and hard-wooden-surface aerobic dance subjects showed decreased bone resorption (β-CrossLaps) and increased bone formation (P1NP). Health-related physical fitness, balance, and foot plantar pressure were not only better than the pre-test results but also significantly different from the control group (p < 0.05). The aerobic dance on mini-trampoline showed that leg muscular strength, balance, and foot plantar pressure were significantly better than the aerobic dance on hard wooden surface (p < 0.05). Aerobic dance on both surfaces had positive effects on biochemical bone markers; however, the mini-trampoline produced greater leg muscular strength and balance as well as lower foot plantar pressure. It is considered an appropriate exercise program for working women.
Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun
2018-07-01
This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many common MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as permutation-based MOCOPs share the inherent similarity that the structure of their search space usually takes the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To accommodate the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure; problem-related heuristic information is introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. Besides, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
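The decomposition step described above can be made concrete with a minimal sketch: a bi-objective problem is split into single-objective subproblems via weight vectors, each scalarized by a weighted sum. This is only an illustration of the decomposition idea, not the MS-PSO/D implementation; the candidate solutions and objective values below are invented.

```python
# Sketch of decomposition via weighted-sum scalarization (assumed
# setting: minimize both objectives; data is hypothetical).

def weight_vectors(n):
    """n evenly spread weight vectors for a bi-objective problem."""
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]

def scalarize(objectives, weights):
    """Weighted-sum scalarization of an objective vector."""
    return sum(w * f for w, f in zip(weights, objectives))

# Hypothetical objective vectors for three candidate permutations:
candidates = {"pi1": (10.0, 4.0), "pi2": (7.0, 7.0), "pi3": (3.0, 12.0)}

# Each weight vector defines one single-objective subproblem.
for w in weight_vectors(3):
    best = min(candidates, key=lambda c: scalarize(candidates[c], w))
    print(w, "->", best)
```

Different weight vectors select different Pareto-optimal candidates, which is how decomposition-based methods spread their search across the front.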
NASA Astrophysics Data System (ADS)
Petryk, Alicia A.; Misra, Adwiteeya; Kastner, Elliot J.; Mazur, Courtney M.; Petryk, James D.; Hoopes, P. Jack
2015-03-01
The use of hyperthermia to treat cancer is well studied and has utilized numerous delivery techniques, including microwaves, radio frequency, focused ultrasound, induction heating, infrared radiation, warmed perfusion liquids (combined with chemotherapy), and, recently, metallic nanoparticles (NP) activated by near infrared radiation (NIR) and alternating magnetic field (AMF) based platforms. Many research groups have demonstrated that ablative temperatures and cytotoxicity can be produced with local NP-based hyperthermia, and such ablative NP techniques have shown the potential for success. Much attention has also been given to the fact that NP may be administered systemically, resulting in a broader cancer therapy approach, a lower level of tumor NP content, and a different type of NP cancer therapy (most likely in the adjuvant setting). To use NP-based hyperthermia successfully as a cancer treatment, the technique and its goal must be understood and utilized in the appropriate clinical context. The parameters include, but are not limited to, NP access to the tumor (large vs. small quantity), cancer cell-specific targeting, drug-carrying capacity, potential as an ionizing radiation sensitizer, and material properties (magnetic characteristics, size, and charge). In addition to their potential for cytotoxicity, the material properties of the NP must also be optimized for imaging, detection, and direction. In this paper we discuss the differences between, and potential applications for, ablative and non-ablative magnetic nanoparticle hyperthermia.
Xia, Futing; Zhu, Hua
2012-02-01
Density functional theory calculations have been used to investigate the intra-molecular attack of 2'-hydroxypropyl-p-nitrophenyl phosphate (HPpNP) and its analogous compound 2-thiouridyl-p-nitrophenyl phosphate (s-2'pNP). Bulk solvent effect has been tested at the geometry optimization level with the polarized continuum model. It is found that the P-path involving the intra-molecular attack at the phosphorus atom and C-path involving the attack at the beta carbon atom proceed through the S(N)2-type mechanism for HPpNP and s-2'pNP. The calculated results indicate that the P-path with the free energy barrier of about 11 kcal/mol is more accessible than the C-path for the intra-molecular attack of HPpNP, which favors the formation of the five-membered phosphate diester. While for s-2'pNP, the C-path with the free energy barrier of about 21 kcal/mol proceeds more favorably than the P-path. The calculated energy barriers of the favorable pathways for HPpNP and s-2'pNP are both in agreement with the experimental results. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
Study on surface-enhanced Raman scattering efficiency of Ag core-Au shell bimetallic nanoparticles
NASA Astrophysics Data System (ADS)
Dong, Xiao; Gu, Huaimin; Kang, Jian; Yuan, Xiaojuan
2009-08-01
In this article, the relationship between the states of Ag core-Au shell (core-shell) nanoparticles (NP) and the intensity of Raman scattering of analytes dissolved in water and adsorbed on the NP was studied. The core-shell NP were synthesised by coating Au layers over Ag seeds using the "seed-growth" method. To highlight the advantage of the core-shell NP, an Ag colloid and an Au colloid were chosen for comparison. The analyte chosen for this test was methylene blue (MB), because MB gives a very strong signal in surface-enhanced Raman scattering (SERS). The SERS activity of the optimized states of the Ag and Au colloids was compared with that of the core-shell NP when MB was used as the analyte. In this study, sodium chloride, sodium sulfate, and sodium nitrate were used as aggregating agents for the Ag and Au colloids and the core-shell NP, because anions have a strong influence on SERS efficiency and colloid stability. The results indicate that core-shell NP can markedly enhance the SERS of MB. The aim of this study is to show that, compared with a metal colloid, the core-shell NP is a highly efficient SERS-active substrate.
Synthesis of polymer-lipid nanoparticles for image-guided delivery of dual modality therapy.
Mieszawska, Aneta J; Kim, YongTae; Gianella, Anita; van Rooy, Inge; Priem, Bram; Labarre, Matthew P; Ozcan, Canturk; Cormode, David P; Petrov, Artiom; Langer, Robert; Farokhzad, Omid C; Fayad, Zahi A; Mulder, Willem J M
2013-09-18
For advanced treatment of diseases such as cancer, multicomponent, multifunctional nanoparticles hold great promise. In the current study we report the synthesis of a complex nanoparticle (NP) system with dual drug loading as well as diagnostic properties. To that aim we present a methodology where chemically modified poly(lactic-co-glycolic) acid (PLGA) polymer is formulated into a polymer-lipid NP that contains a cytotoxic drug doxorubicin (DOX) in the polymeric core and an anti-angiogenic drug sorafenib (SRF) in the lipidic corona. The NP core also contains gold nanocrystals (AuNCs) for imaging purposes and cyclodextrin molecules to maximize the DOX encapsulation in the NP core. In addition, a near-infrared (NIR) Cy7 dye was incorporated in the coating. To fabricate the NP we used a microfluidics-based technique that offers unique NP synthesis conditions, which allowed for encapsulation and fine-tuning of optimal ratios of all the NP components. NP phantoms could be visualized with computed tomography (CT) and near-infrared (NIR) fluorescence imaging. We observed timed release of the encapsulated drugs, with fast release of the corona drug SRF and delayed release of a core drug DOX. In tumor bearing mice intravenously administered NPs were found to accumulate at the tumor site by fluorescence imaging.
Altenburg, A F; Magnusson, S E; Bosman, F; Stertman, L; de Vries, R D; Rimmelzwaan, G F
2017-10-01
Because of the high variability of seasonal influenza viruses and the eminent threat of influenza viruses with pandemic potential, there is great interest in the development of vaccines that induce broadly protective immunity. Most probably, broadly protective influenza vaccines are based on conserved proteins, such as nucleoprotein (NP). NP is a vaccine target of interest as it has been shown to induce cross-reactive antibody and T cell responses. Here we tested and compared various NP-based vaccine preparations for their capacity to induce humoral and cellular immune responses to influenza virus NP. The immunogenicity of protein-based vaccine preparations with Matrix-M™ adjuvant as well as recombinant viral vaccine vector modified Vaccinia virus Ankara (MVA) expressing the influenza virus NP gene, with or without modifications that aim at optimization of CD8 + T cell responses, was addressed in BALB/c mice. Addition of Matrix-M™ adjuvant to NP wild-type protein-based vaccines significantly improved T cell responses. Furthermore, recombinant MVA expressing the influenza virus NP induced strong antibody and CD8 + T cell responses, which could not be improved further by modifications of NP to increase antigen processing and presentation. © 2017 British Society for Immunology.
Finding the probability of infection in an SIR network is NP-Hard
Shapiro, Michael; Delgado-Eckert, Edgar
2012-01-01
It is the purpose of this article to review results that have long been known to communications network engineers and have direct application to epidemiology on networks. A common approach in epidemiology is to study the transmission of a disease in a population where each individual is initially susceptible (S), may become infective (I) and then removed or recovered (R), and plays no further epidemiological role. Much of the recent work gives explicit consideration to the network of social interactions or disease-transmitting contacts and the attendant probability of transmission for each interacting pair. The state of such a network is an assignment of the values {S, I, R} to its members. Given such a network, an initial state, and a particular susceptible individual, we would like to compute the probability of that individual becoming infected in the course of an epidemic. It turns out that this and related problems are NP-hard. In particular, they belong to a class of problems for which no efficient solution algorithms are known; moreover, finding an efficient algorithm for the solution of any problem in this class would entail a major breakthrough in theoretical computer science. PMID:22824138
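The cost of exactness can be seen in a brute-force sketch: in a percolation-style SIR model where each contact independently transmits with a fixed probability, the exact infection probability is obtained by enumerating all 2^|edges| transmission outcomes. This illustrates the approach (and its exponential cost), not the paper's proof; the four-node network and probabilities are invented.

```python
# Exact infection probability by enumerating which contacts transmit.
# Exponential in the number of edges, consistent with NP-hardness.
from itertools import product

edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # hypothetical contact network
p = {e: 0.5 for e in edges}                # per-contact transmission prob.
seed, target = 0, 3

def reachable(open_edges, start):
    """Nodes reachable from start via the transmitting (open) contacts."""
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for a, b in open_edges:
            if a == u and b not in seen:
                seen.add(b)
                stack.append(b)
            elif b == u and a not in seen:
                seen.add(a)
                stack.append(a)
    return seen

prob = 0.0
for outcome in product([True, False], repeat=len(edges)):
    open_edges = [e for e, keep in zip(edges, outcome) if keep]
    weight = 1.0
    for e, keep in zip(edges, outcome):
        weight *= p[e] if keep else 1 - p[e]
    if target in reachable(open_edges, seed):
        prob += weight

print(prob)   # probability that node 3 is ever infected -> 0.3125
```

With four edges this is 16 outcomes; doubling the edge count squares the work, which is why exact computation on realistic networks is infeasible.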
Kinetics of the formation of a protein corona around nanoparticles.
Zhdanov, Vladimir P; Cho, Nam-Joon
2016-12-01
Interaction of metal or oxide nanoparticles (NPs) with biological soft matter is one of the central phenomena in basic and applied biology-oriented nanoscience. Often, this interaction includes adsorption of suspended proteins on the NP surface, resulting in the formation of the protein corona around NPs. Structurally, the corona contains a "hard" monolayer shell directly contacting a NP and a more distant weakly associated "soft" shell. Chemically, the corona is typically composed of a mixture of distinct proteins. The corresponding experimental and theoretical studies have already clarified many aspects of the corona formation. The process is, however, complex, and its understanding is still incomplete. Herein, we present a kinetic mean-field model of the formation of the "hard" corona with emphasis on the role of (i) protein-diffusion limitations and (ii) interplay between competitive adsorption of distinct proteins and irreversible reconfiguration of their native structure. The former factor is demonstrated to be significant only in the very beginning of the corona formation. The latter factor is predicted to be more important. It may determine the composition of the corona on the time scales comparable or longer than a few hours. Copyright © 2016 Elsevier Inc. All rights reserved.
Carbon Nanotubes by CVD and Applications
NASA Technical Reports Server (NTRS)
Cassell, Alan; Delzeit, Lance; Nguyen, Cattien; Stevens, Ramsey; Han, Jie; Meyyappan, M.; Arnold, James O. (Technical Monitor)
2001-01-01
Carbon nanotube (CNT) exhibits extraordinary mechanical and unique electronic properties and offers significant potential for structural, sensor, and nanoelectronics applications. An overview of CNT, growth methods, properties and applications is provided. Single-wall, and multi-wall CNTs have been grown by chemical vapor deposition. Catalyst development and optimization has been accomplished using combinatorial optimization methods. CNT has also been grown from the tips of silicon cantilevers for use in atomic force microscopy.
NASA Astrophysics Data System (ADS)
Youl Jung, Kyeong
2010-08-01
Conventional solution-based combinatorial chemistry was combined with spray pyrolysis and applied to optimize the luminescence properties of (Yx, Gdy, Alz)BO3:Eu3+ red phosphor under vacuum ultraviolet (VUV) excitation. For the Y-Gd-Al ternary system, a compositional library was established to seek the optimal composition at which the highest luminescence under VUV (147 nm) excitation could be achieved. The Al content was found to mainly control the relative peak ratio (R/O) of red and orange colors due to the 5D0→7F2 and 5D0→7F1 transitions of Eu3+. The substitution of Gd atoms at Y sites did not change the R/O ratio, but helped to enhance the emission intensity. As a result, the 613 nm emission peak due to the 5D0→7F2 transition of Eu3+ was intensified by increasing the Al/Gd ratio at a fixed Y content, improving the color coordinate. Finally, the optimized host composition was (Y0.11, Gd0.10, Al0.79)BO3 in terms of the emission intensity at 613 nm and the color coordinate.
Legrand, Yves-Marie; van der Lee, Arie; Barboiu, Mihail
2007-11-12
In this paper we report an extended series of 2,6-(iminoarene)pyridine-type ZnII complexes [(Lii)2Zn]II, which were surveyed for their ability to self-exchange both their ligands and their aromatic arms and to form different homoduplex and heteroduplex complexes in solution. The self-sorting of heteroduplex complexes is likely the result of geometric constraints. Whereas the imine-exchange process occurs quantitatively in 1:1 mixtures of [(Lii)2Zn]II complexes, the octahedral coordination process around the metal ion defines spatially frustrated exchanges that involve the selective formation of heterocomplexes, two by two, of different substituents; the bulkiest ones (pyrene in particular) specifically interact with the pseudoterpyridine core, sterically hindering the least bulky ones, which are intermolecularly stacked with similar ligands of neighboring molecules. Such a self-sorting process, defined by the specific self-constitution of the ligands exchanging their aromatic substituents, is self-optimized by specific control over their spatial orientation around a metal center within the complex. The complexes ultimately show an improved charge-transfer energy function by virtue of the dynamic amplification of self-optimized heteroduplex architectures. These systems therefore illustrate the convergence of the combinatorial self-sorting of the dynamic combinatorial libraries (DCLs) strategy and constitutional self-optimization of function.
A Systems Analysis View of the Vietnam War: 1965-1972. Volume 2. Forces and Manpower
1975-02-18
since it will form the basis for much comment and discussion about how well the war is going and how hard the VC are being pushed. The study was ... infiltrators into the South rapidly. Such a move would be a departure from past infiltration patterns and would signal no change in Hanoi's hard line ... [OCR-garbled table of force and manpower totals; figures unrecoverable] ... as of August 1, 1968, less civilianization completed as of June 30
Vasiliu, Tudor; Cojocaru, Corneliu; Rotaru, Alexandru; Pricope, Gabriela; Pinteala, Mariana; Clima, Lilia
2017-06-17
The polyplexes formed by nucleic acids and polycations have received a great attention owing to their potential application in gene therapy. In our study, we report experimental results and modeling outcomes regarding the optimization of polyplex formation between the double-stranded DNA (dsDNA) and poly(ʟ-Lysine) (PLL). The quantification of the binding efficiency during polyplex formation was performed by processing of the images captured from the gel electrophoresis assays. The design of experiments (DoE) and response surface methodology (RSM) were employed to investigate the coupling effect of key factors (pH and N/P ratio) affecting the binding efficiency. According to the experimental observations and response surface analysis, the N/P ratio showed a major influence on binding efficiency compared to pH. Model-based optimization calculations along with the experimental confirmation runs unveiled the maximal binding efficiency (99.4%) achieved at pH 5.4 and N/P ratio 125. To support the experimental data and reveal insights of molecular mechanism responsible for the polyplex formation between dsDNA and PLL, molecular dynamics simulations were performed at pH 5.4 and 7.4.
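The response-surface step described above can be sketched as a quadratic model in the two factors (pH and N/P ratio) fitted by least squares. The design points and efficiency values below are invented placeholders for illustration, not the paper's measurements.

```python
# Sketch of a response-surface fit: eff ~ b0 + b1*x1 + b2*x2
#                                        + b3*x1^2 + b4*x2^2 + b5*x1*x2
# fitted by least squares over a small design. Data is hypothetical.
import numpy as np

# (pH, N/P ratio) design points and made-up binding efficiencies (%):
X = np.array([[5.4, 50], [5.4, 125], [7.4, 50], [7.4, 125], [6.4, 90]])
y = np.array([80.0, 99.0, 60.0, 85.0, 82.0])

x1, x2 = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(ph, np_ratio):
    """Evaluate the fitted quadratic response surface."""
    v = np.array([1.0, ph, np_ratio, ph**2, np_ratio**2, ph * np_ratio])
    return float(v @ coef)
```

A real design-of-experiments study would use replicated runs and significance tests on the coefficients; the point here is only the shape of the model that RSM optimizes over.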
Exact Algorithms for Duplication-Transfer-Loss Reconciliation with Non-Binary Gene Trees.
Kordi, Misagh; Bansal, Mukul S
2017-06-01
Duplication-Transfer-Loss (DTL) reconciliation is a powerful method for studying gene family evolution in the presence of horizontal gene transfer. DTL reconciliation seeks to reconcile gene trees with species trees by postulating speciation, duplication, transfer, and loss events. Efficient algorithms exist for finding optimal DTL reconciliations when the gene tree is binary. In practice, however, gene trees are often non-binary due to uncertainty in the gene tree topologies, and DTL reconciliation with non-binary gene trees is known to be NP-hard. In this paper, we present the first exact algorithms for DTL reconciliation with non-binary gene trees. Specifically, we (i) show that the DTL reconciliation problem for non-binary gene trees is fixed-parameter tractable in the maximum degree of the gene tree, (ii) present an exponential-time, but in-practice efficient, algorithm to track and enumerate all optimal binary resolutions of a non-binary input gene tree, and (iii) apply our algorithms to a large empirical data set of over 4700 gene trees from 100 species to study the impact of gene tree uncertainty on DTL-reconciliation and to demonstrate the applicability and utility of our algorithms. The new techniques and algorithms introduced in this paper will help biologists avoid incorrect evolutionary inferences caused by gene tree uncertainty.
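The combinatorial blow-up that makes brute-force resolution of non-binary gene trees expensive is easy to quantify: a single polytomy with k children admits (2k-3)!! binary resolutions, a standard counting fact. The sketch below computes this count; it is an illustration, not code from the paper.

```python
# Number of binary resolutions of a polytomy with k children:
# (2k-3)!! = 1 * 3 * 5 * ... * (2k-3), growing super-exponentially.

def double_factorial(n):
    """n!! for odd n (product of n, n-2, ..., 1)."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def num_binary_resolutions(k):
    """Rooted binary trees on k labeled leaves, k >= 2."""
    return double_factorial(2 * k - 3)

for k in range(2, 8):
    print(k, num_binary_resolutions(k))   # 1, 3, 15, 105, 945, 10395
```

Already at k = 7 there are over ten thousand resolutions of one polytomy, which is why enumeration needs the pruning and fixed-parameter structure the paper develops.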
MANGO: a new approach to multiple sequence alignment.
Zhang, Zefeng; Lin, Hao; Li, Ming
2007-01-01
Multiple sequence alignment is a classical and challenging task in biological sequence analysis. The problem is NP-hard. Full dynamic programming takes too much time, and the progressive alignment heuristics adopted by most state-of-the-art multiple sequence alignment programs suffer from the 'once a gap, always a gap' phenomenon. Is there a radically new way to do multiple sequence alignment? This paper introduces a novel and orthogonal multiple sequence alignment method, using multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information from all sequences as a whole, avoiding problems caused by the popular progressive approaches. Because optimized spaced seeds are provably significantly more sensitive than consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments carried out on large 16S RNA benchmarks show that MANGO compares favorably, in both accuracy and speed, with state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0, and Kalign 2.0.
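The spaced-seed idea underlying the method can be sketched in a few lines: a seed such as '1101' requires matches only at the '1' positions, tolerating a mismatch at the '0', which is what makes spaced seeds more sensitive than contiguous k-mers of the same weight. The seed pattern and sequences below are invented for illustration and are not MANGO's actual seed set.

```python
# Sketch of spaced-seed hit detection between two sequences.
# A hit at (i, j) means s and t agree at every '1' position of the seed.

def seed_hits(seed, s, t):
    """All (i, j) offset pairs where the spaced seed matches s against t."""
    care = [k for k, c in enumerate(seed) if c == "1"]
    span = len(seed)
    hits = []
    for i in range(len(s) - span + 1):
        for j in range(len(t) - span + 1):
            if all(s[i + k] == t[j + k] for k in care):
                hits.append((i, j))
    return hits

# '1101' ignores position 2, so a single mid-seed mismatch still hits:
print(seed_hits("1101", "ACGTAC", "ACCTGG"))   # -> [(0, 0)]
```

A production aligner would hash the cared-for positions for linear-time lookup rather than scanning all offset pairs; the quadratic loop here is purely didactic.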
Applying graph partitioning methods in measurement-based dynamic load balancing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhatele, Abhinav; Fourestier, Sebastien; Menon, Harshitha
Load imbalance leads to an increasing waste of resources as an application is scaled to more and more processors. Achieving the best parallel efficiency for a program requires optimal load balancing, which is an NP-hard problem. However, finding near-optimal solutions to this problem for complex computational science and engineering applications is becoming increasingly important. Charm++, a migratable-objects-based programming model, provides a measurement-based dynamic load balancing framework. This framework instruments and then migrates over-decomposed objects to balance computational load and communication at runtime. This paper explores the use of graph partitioning algorithms, traditionally used for partitioning physical domains/meshes, for measurement-based dynamic load balancing of parallel applications. In particular, we present repartitioning methods developed in a graph partitioning toolbox called SCOTCH that consider the previous mapping to minimize migration costs. We also discuss a new imbalance reduction algorithm for graphs with irregular load distributions. We compare several load balancing algorithms using microbenchmarks on Intrepid and Ranger and evaluate the effect of communication, number of cores, and number of objects on the benefit achieved from load balancing. New algorithms developed in SCOTCH lead to better performance than the METIS partitioners in several cases, both in terms of application execution time and the number of objects migrated.
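The measurement-based rebalancing idea can be sketched as a toy greedy pass: given measured per-object loads and the previous object-to-processor mapping, move heavy objects off overloaded processors only when the move strictly helps, so objects tend to stay where they are and migration cost stays bounded. This is a minimal illustration of the concept, not the Charm++ or SCOTCH code; the loads and mapping below are invented.

```python
# Greedy migration-aware rebalancing sketch (hypothetical data).

def greedy_rebalance(loads, mapping, n_procs):
    """loads: object -> measured load; mapping: object -> current proc."""
    proc_load = [0.0] * n_procs
    for obj, p in mapping.items():
        proc_load[p] += loads[obj]
    target = sum(loads.values()) / n_procs
    new_mapping, migrations = dict(mapping), 0
    # Consider heaviest objects first; migrate only off overloaded procs
    # and only when the move does not overshoot the donor's load.
    for obj in sorted(loads, key=loads.get, reverse=True):
        p = new_mapping[obj]
        if proc_load[p] > target:
            q = min(range(n_procs), key=lambda r: proc_load[r])
            if q != p and proc_load[q] + loads[obj] <= proc_load[p]:
                proc_load[p] -= loads[obj]
                proc_load[q] += loads[obj]
                new_mapping[obj] = q
                migrations += 1
    return new_mapping, migrations
```

Real frameworks also weigh communication between objects (hence the graph partitioning view); this sketch balances computational load only.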
NASA Astrophysics Data System (ADS)
Cheng, Xiao; Feng, Lei; Zhou, Fanqin; Wei, Lei; Yu, Peng; Li, Wenjing
2018-02-01
With the rapid development of the smart grid, the data aggregation point (AP) in the neighborhood area network (NAN) is becoming increasingly important for forwarding information between the home area network and the wide area network. Due to a limited budget, no single access technology can meet the ongoing requirements on AP coverage. This paper first introduces a wired and wireless hybrid access network integrating long-term evolution (LTE) and passive optical network (PON) systems for the NAN, which allows a good trade-off among cost, flexibility, and reliability. Then, based on the already existing wireless LTE network, an AP association optimization model is proposed to make the PON serve as many APs as possible, considering both economic efficiency and network reliability. Moreover, given the features of the constraints and variables of this NP-hard problem, a hybrid intelligent optimization algorithm is proposed that combines genetic, ant colony, and dynamic greedy algorithms. Comparisons with other published methods show that the proposed method improves AP coverage and that the proposed algorithm converges well.
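As a hedged illustration of the greedy component alone (the paper's method also runs genetic and ant-colony phases; `greedy_ap_association` and its simple fiber-cost model are invented here), serving as many APs as possible under a budget reduces, in its simplest form, to picking the cheapest attachments first:

```python
# Hypothetical sketch: attach as many APs as possible to the PON under a
# fiber-budget constraint by greedily selecting the cheapest remaining AP.

def greedy_ap_association(ap_costs, budget):
    """ap_costs: {ap_id: fiber_cost}. Return the set of APs served by the PON."""
    served, spent = set(), 0.0
    for ap, cost in sorted(ap_costs.items(), key=lambda kv: kv[1]):
        if spent + cost <= budget:
            served.add(ap)
            spent += cost
    return served

print(greedy_ap_association({"a": 3, "b": 1, "c": 2, "d": 5}, budget=6))
```

With unit objective (count of served APs) this greedy rule is optimal; once reliability and coverage constraints enter, the problem becomes the NP-hard model the paper attacks with its hybrid metaheuristic.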
NASA Astrophysics Data System (ADS)
Liu, Jingfa; Song, Beibei; Liu, Zhaoxia; Huang, Weibo; Sun, Yuanyuan; Liu, Wenjie
2013-11-01
Protein structure prediction (PSP) is a classical NP-hard problem in computational biology. The energy-landscape paving (ELP) method is a heuristic global optimization algorithm that has been successfully applied to many optimization problems with complex energy landscapes in continuous space. By introducing a new update mechanism for the histogram function in ELP, and by incorporating greedy generation of the initial conformation and a neighborhood search strategy based on pull moves, an improved energy-landscape paving (ELP+) method is obtained. Twelve general benchmark instances are first tested on both two-dimensional and three-dimensional (3D) face-centered-cubic (fcc) hydrophobic-hydrophilic (HP) lattice models. The lowest energies found by ELP+ are as good as or better than those of other methods in the literature for all instances. Then, five sets of larger-scale instances, denoted S, R, F90, F180, and CASP target instances, are tested on the 3D fcc HP lattice model. The proposed algorithm finds lower energies than the five other methods in the literature. Not unexpectedly, this is particularly pronounced for the longer sequences considered. Computational results show that ELP+ is an effective method for PSP on the fcc HP lattice model.
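The fitness being optimized can be made concrete with the standard HP lattice energy on a square lattice (the textbook model, not the ELP+ source code; `hp_energy` is an invented name): the energy decreases by one for every pair of hydrophobic (H) residues that are lattice neighbours but not adjacent along the chain.

```python
# Minimal sketch of the 2D HP lattice energy for a conformation given as a
# self-avoiding walk of lattice coordinates.

def hp_energy(sequence, coords):
    """sequence: e.g. "HPPH"; coords: list of (x, y) on a square lattice."""
    pos = {c: i for i, c in enumerate(coords)}
    energy = 0
    for i, (aa, (x, y)) in enumerate(zip(sequence, coords)):
        if aa != "H":
            continue
        # Scan only the +x and +y neighbours so each contact is counted once.
        for nb in ((x + 1, y), (x, y + 1)):
            j = pos.get(nb)
            if j is not None and sequence[j] == "H" and abs(i - j) > 1:
                energy -= 1
    return energy

# A 4-residue chain folded into a unit square: one non-bonded H-H contact.
print(hp_energy("HHHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> -1
```

ELP-style methods search the space of such conformations for the minimum of this energy, which is what makes the problem NP-hard even on the lattice.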
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsui, Hiroshi
Work is reported in these areas: Large-scale & reconfigurable 3D structures of precise nanoparticle assemblies in self-assembled collagen peptide grids; Binary QD-Au NP 3D superlattices assembled with collagen-like peptides and energy transfer between QD and Au NP in 3D peptide frameworks; Catalytic peptides discovered by new hydrogel-based combinatorial phage display approach and their enzyme-mimicking 2D assembly; New autonomous motors of metal-organic frameworks (MOFs) powered by reorganization of self-assembled peptides at interfaces; Biomimetic assembly of proteins into microcapsules on oil-in-water droplets with structural reinforcement via biomolecular recognition-based cross-linking of surface peptides; and Biomimetic fabrication of strong freestanding genetically-engineered collagen peptide films reinforced by quantum dot joints. We gained broad knowledge about biomimetic material assembly from nanoscale to microscale ranges by coassembling peptides and NPs via biomolecular recognition. We discovered: Genetically-engineered collagen-like peptides can be self-assembled with Au NPs to generate 3D superlattices in large volumes (> μm³); The assembly of the 3D peptide-Au NP superstructures is dynamic and the interparticle distance changes with assembly time as the reconfiguration of structure is triggered by pH change; QDs/NPs can be assembled with the peptide frameworks to generate 3D superlattices and these QDs/NPs can be electronically coupled for efficient energy transfer; The controlled assembly of catalytic peptides mimicking the catalytic pocket of enzymes can catalyze chemical reactions with high selectivity; and, For the bacteria-mimicking swimmer fabrication, peptide-MOF superlattices can power translational and propellant motions by the reconfiguration of peptide assembly at the MOF-liquid interface.
Kamaly, Nazila; Fredman, Gabrielle; Fojas, Jhalique Jane R; Subramanian, Manikandan; Choi, Won Ii; Zepeda, Katherine; Vilos, Cristian; Yu, Mikyung; Gadde, Suresh; Wu, Jun; Milton, Jaclyn; Carvalho Leitao, Renata; Rosa Fernandes, Livia; Hasan, Moaraj; Gao, Huayi; Nguyen, Vance; Harris, Jordan; Tabas, Ira; Farokhzad, Omid C
2016-05-24
Inflammation is an essential protective biological response involving a coordinated cascade of signals between cytokines and immune signaling molecules that facilitate return to tissue homeostasis after acute injury or infection. However, inflammation is not effectively resolved in chronic inflammatory diseases such as atherosclerosis and can lead to tissue damage and exacerbation of the underlying condition. Therapeutics that dampen inflammation and enhance resolution are currently of considerable interest, in particular those that temper inflammation with minimal host collateral damage. Here we present the development and efficacy investigations of controlled-release polymeric nanoparticles incorporating the anti-inflammatory cytokine interleukin 10 (IL-10) for targeted delivery to atherosclerotic plaques. Nanoparticles were nanoengineered via self-assembly of biodegradable polyester polymers by nanoprecipitation using a rapid micromixer chip capable of producing nanoparticles with retained IL-10 bioactivity post-exposure to organic solvent. A systematic combinatorial approach was taken to screen nanoparticles, resulting in an optimal bioactive formulation from in vitro and ex vivo studies. The most potent nanoparticle termed Col-IV IL-10 NP22 significantly tempered acute inflammation in a self-limited peritonitis model and was shown to be more potent than native IL-10. Furthermore, the Col-IV IL-10 nanoparticles prevented vulnerable plaque formation by increasing fibrous cap thickness and decreasing necrotic cores in advanced lesions of high fat-fed LDLr(-/-) mice. These results demonstrate the efficacy and pro-resolving potential of this engineered nanoparticle for controlled delivery of the potent IL-10 cytokine for the treatment of atherosclerosis.
Luo, Shusheng; Fang, Ling; Wang, Xiaowei; Liu, Hongtao; Ouyang, Gangfeng; Lan, Chongyu; Luan, Tiangang
2010-10-22
A simple and fast sample preparation method for the determination of nonylphenol (NP) and octylphenol (OP) in aqueous samples by simultaneous derivatization and dispersive liquid-liquid microextraction (DLLME) was investigated using gas chromatography-mass spectrometry (GC/MS). In this method, a combined dispersant/derivatization catalyst (methanol/pyridine mixture) was first added to an aqueous sample, following which a derivatization reagent/extraction solvent (methyl chloroformate/chloroform) was rapidly injected to combine in situ derivatization and extraction in a single step. After centrifuging, the sedimented phase containing the analytes was injected into the GC port by autosampler for analysis. Several parameters, such as extraction solvent, dispersant solvent, amount of derivatization reagent, derivatization and extraction time, pH, and ionic strength, were optimized to obtain higher sensitivity for the detection of NP and OP. Under the optimized conditions, good linearity was observed in the ranges of 0.1-1000 μg L⁻¹ and 0.01-100 μg L⁻¹, with limits of detection (LOD) of 0.03 μg L⁻¹ and 0.002 μg L⁻¹ for NP and OP, respectively. Water samples collected from the Pearl River were analyzed with the proposed method; the concentrations of NP and OP were found to be 2.40 ± 0.16 μg L⁻¹ and 0.037 ± 0.001 μg L⁻¹, respectively. The relative recoveries of the water samples spiked with different concentrations of NP and OP were in the range of 88.3-106.7%. Compared with SPME and SPE, the proposed method can be successfully applied to the rapid and convenient determination of NP and OP in aqueous samples.
Surface-enhanced Raman scattering from AgNP-graphene-AgNP sandwiched nanostructures
NASA Astrophysics Data System (ADS)
Wu, Jian; Xu, Yijun; Xu, Pengyu; Pan, Zhenghui; Chen, Sheng; Shen, Qishen; Zhan, Li; Zhang, Yuegang; Ni, Weihai
2015-10-01
We developed a facile approach toward hybrid AgNP-graphene-AgNP sandwiched structures using self-organized monolayered AgNPs from wet chemical synthesis for the optimized enhancement of the Raman response of monolayer graphene. We demonstrate that the Raman scattering of graphene can be enhanced 530-fold in the hybrid structure. The Raman enhancement is sensitively dependent on the hybrid structure, incident angle, and excitation wavelength. A systematic simulation is performed, which well explains the enhancement mechanism. Our study indicates that the enhancement resulted from the plasmonic coupling between the AgNPs on the opposite sides of graphene. Our approach towards ideal substrates offers great potential to produce a "hot surface" for enhancing the Raman response of two-dimensional materials. Electronic supplementary information (ESI) available: Additional SEM images, electric field enhancement profiles, Raman scattering spectra, and structure-dependent peak ratios. See DOI: 10.1039/c5nr04500b
Global gene expression analysis by combinatorial optimization.
Ameur, Adam; Aurell, Erik; Carlsson, Mats; Westholm, Jakub Orzechowski
2004-01-01
Generally, there is a trade-off between methods of gene expression analysis that are precise but labor-intensive, e.g. RT-PCR, and methods that scale up to global coverage but are not quite as quantitative, e.g. microarrays. In the present paper, we show how a known method of gene expression profiling (K. Kato, Nucleic Acids Res. 23, 3685-3690 (1995)), which relies on a fairly small number of steps, can be turned into a global gene expression measurement by advanced data post-processing, with potentially little loss of accuracy. Post-processing here entails solving an ancillary combinatorial optimization problem. Validation is performed on in silico experiments generated from the FANTOM database of full-length mouse cDNA. We present two variants of the method. One uses state-of-the-art commercial software for solving problems of this kind; the other uses a code developed by us specifically for this purpose, released into the public domain under a GPL license.
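The flavor of such an ancillary combinatorial problem can be conveyed with a toy deconvolution (a hypothetical sketch, not the paper's actual formulation; `deconvolve` and the integer-level assumption are invented for illustration): when each measured signal is the summed expression of the genes sharing one signature, gene-level values must be recovered by searching for an assignment consistent with all totals.

```python
# Toy combinatorial post-processing: brute-force search for integer
# expression levels whose per-signature sums match the measurements.
from itertools import product

def deconvolve(signatures, measured, max_level=5):
    """signatures: per-gene signature ids, e.g. ["a", "a", "b"];
    measured: {signature: total}. Return the first consistent tuple of
    integer expression levels, or None."""
    for levels in product(range(max_level + 1), repeat=len(signatures)):
        totals = {}
        for level, sig in zip(levels, signatures):
            totals[sig] = totals.get(sig, 0) + level
        if totals == measured:
            return levels
    return None

# Two genes share signature "a"; their individual levels must sum to 3.
print(deconvolve(["a", "a", "b"], {"a": 3, "b": 2}))  # -> (0, 3, 2)
```

The exponential blow-up of this brute force is exactly why the real problem calls for serious combinatorial optimization machinery rather than enumeration.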