Sample records for classical optimization problem

  1. Efficiency of quantum vs. classical annealing in nonconvex learning problems

    PubMed Central

    Zecchina, Riccardo

    2018-01-01

    Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers, while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. PMID:29382764

  2. Optimization in First Semester Calculus: A Look at a Classic Problem

    ERIC Educational Resources Information Center

    LaRue, Renee; Infante, Nicole Engelke

    2015-01-01

    Optimization problems in first semester calculus have historically been a challenge for students. Focusing on the classic optimization problem of finding the minimum amount of fencing required to enclose a fixed area, we examine students' activity through the lens of Tall and Vinner's concept image and Carlson and Bloom's multidimensional…
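
    For reference, the worked solution to this classic problem is short. The following sketch assumes the simplest variant (a rectangular pen with all four sides fenced and fixed area A):

      \min_{x,y>0} P = 2x + 2y \quad \text{subject to} \quad xy = A
      \;\Rightarrow\; P(x) = 2x + \frac{2A}{x}, \qquad
      P'(x) = 2 - \frac{2A}{x^2} = 0 \;\Rightarrow\; x = y = \sqrt{A},
      \qquad P''(x) = \frac{4A}{x^3} > 0,

    so the optimal pen is a square with minimum perimeter 4·sqrt(A).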

  3. Unraveling Quantum Annealers using Classical Hardness

    PubMed Central

    Martin-Mayor, Victor; Hen, Itay

    2015-01-01

    Recent advances in quantum technology have led to the development and manufacturing of experimental programmable quantum annealing optimizers that contain hundreds of quantum bits. These optimizers, commonly referred to as ‘D-Wave’ chips, promise to solve practical optimization problems potentially faster than conventional ‘classical’ computers. Attempts to quantify the quantum nature of these chips have been met with both excitement and skepticism but have also brought up numerous fundamental questions pertaining to the distinguishability of experimental quantum annealers from their classical thermal counterparts. Inspired by recent results in spin-glass theory that recognize ‘temperature chaos’ as the underlying mechanism responsible for the computational intractability of hard optimization problems, we devise a general method to quantify the performance of quantum annealers on optimization problems suffering from varying degrees of temperature chaos: A superior performance of quantum annealers over classical algorithms on these may allude to the role that quantum effects play in providing speedup. We utilize our method to experimentally study the D-Wave Two chip on different temperature-chaotic problems and find, surprisingly, that its performance scales unfavorably as compared to several analogous classical algorithms. We detect, quantify and discuss several purely classical effects that possibly mask the quantum behavior of the chip. PMID:26483257

  4. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    PubMed

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.

  5. Dynamic optimization and its relation to classical and quantum constrained systems

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo

    2017-08-01

    We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac brackets of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a nonlinear partial differential equation is obtained for the function S. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.

  6. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    This paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than a classical optimization approach, with a relative mean error of 4% on cost function evaluation.
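
    The interpolate-then-correct structure of such a method can be sketched in a few lines. Everything below (the torque-indexed database layout, linear interpolation, and a simple rescaling as the dynamics correction) is an illustrative assumption, not the authors' implementation:

      import numpy as np

      # Hypothetical offline database: for each required joint torque on a grid,
      # store the muscle forces found by solving the classical optimization problem.
      torque_grid = np.linspace(-50.0, 50.0, 101)        # N*m
      force_db = np.random.rand(101, 6)                  # placeholder entries, 6 muscles

      def music_estimate(tau, moment_arms):
          # Step 1: interpolate a first estimate from the offline database.
          guess = np.array([np.interp(tau, torque_grid, force_db[:, m])
                            for m in range(force_db.shape[1])])
          # Step 2: correct it so the forces reproduce the required torque.
          produced = float(moment_arms @ guess)
          if produced != 0.0:
              guess = guess * (tau / produced)
          return np.clip(guess, 0.0, None)               # muscle forces are non-negative

      print(music_estimate(12.0, np.array([0.04, 0.05, 0.03, -0.04, -0.05, -0.03])))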

  7. Constrained Optimal Transport

    NASA Astrophysics Data System (ADS)

    Ekren, Ibrahim; Soner, H. Mete

    2018-03-01

    The classical duality theory of Kantorovich (C R (Doklady) Acad Sci URSS (NS) 37:199-201, 1942) and Kellerer (Z Wahrsch Verw Gebiete 67(4):399-432, 1984) for classical optimal transport is generalized to an abstract framework and a characterization of the dual elements is provided. This abstract generalization is set in a Banach lattice X with an order unit. The problem is given as the supremum over a convex subset of the positive unit sphere of the topological dual of X and the dual problem is defined on the bi-dual of X. These results are then applied to several extensions of the classical optimal transport.

  8. First-order design of geodetic networks using the simulated annealing method

    NASA Astrophysics Data System (ADS)

    Berné, J. L.; Baselga, S.

    2004-09-01

    The general problem of the optimal design of a geodetic network subject to any extrinsic factors, namely the first-order design problem, can be dealt with as a numeric optimization problem. The classical theory of this problem and the relevant optimization methods are reviewed. Then the innovative use of the simulated annealing method, which has been successfully applied in other fields, is presented for this classical geodetic problem. This method, which belongs to the iterative heuristic techniques of operational research, uses a thermodynamic analogy with crystalline networks to offer a solution that converges probabilistically to the global optimum. Basic formulation and some examples are studied.
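
    The annealing loop itself is compact. A minimal, generic sketch follows; the cost and neighbour functions are placeholders (for the first-order design problem, the cost would be the chosen network-precision criterion and a neighbour move a small displacement of one station):

      import math, random

      def simulated_annealing(cost, neighbour, x0, t0=1.0, cooling=0.995, steps=20000):
          # Accept uphill moves with Boltzmann probability exp(-delta/T) so the
          # search can escape local minima; T follows a geometric cooling schedule.
          x, fx, t = x0, cost(x0), t0
          best, fbest = x, fx
          for _ in range(steps):
              y = neighbour(x)
              fy = cost(y)
              if fy < fx or random.random() < math.exp(-(fy - fx) / t):
                  x, fx = y, fy
                  if fx < fbest:
                      best, fbest = x, fx
              t *= cooling
          return best, fbest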

  9. A device-oriented optimizer for solving ground state problems on an approximate quantum computer, Part II: Experiments for interacting spin and molecular systems

    NASA Astrophysics Data System (ADS)

    Kandala, Abhinav; Mezzacapo, Antonio; Temme, Kristan; Bravyi, Sergey; Takita, Maika; Chavez-Garcia, Jose; Córcoles, Antonio; Smolin, John; Chow, Jerry; Gambetta, Jay

    Hybrid quantum-classical algorithms can be used to find variational solutions to generic quantum problems. Here, we present an experimental implementation of a device-oriented optimizer that uses superconducting quantum hardware. The experiment relies on feedback between the quantum device and classical optimization software which is robust to measurement noise. Our device-oriented approach uses naturally available interactions for the preparation of trial states. We demonstrate the application of this technique for solving interacting spin and molecular structure problems.

  10. Quantum algorithm for energy matching in hard optimization problems

    NASA Astrophysics Data System (ADS)

    Baldwin, C. L.; Laumann, C. R.

    2018-06-01

    We consider the ability of local quantum dynamics to solve the "energy-matching" problem: given an instance of a classical optimization problem and a low-energy state, find another macroscopically distinct low-energy state. Energy matching is difficult in rugged optimization landscapes, as the given state provides little information about the distant topography. Here, we show that the introduction of quantum dynamics can provide a speedup over classical algorithms in a large class of hard optimization problems. Tunneling allows the system to explore the optimization landscape while approximately conserving the classical energy, even in the presence of large barriers. Specifically, we study energy matching in the random p-spin model of spin-glass theory. Using perturbation theory and exact diagonalization, we show that introducing a transverse field leads to three sharp dynamical phases, only one of which solves the matching problem: (1) a small-field "trapped" phase, in which tunneling is too weak for the system to escape the vicinity of the initial state; (2) a large-field "excited" phase, in which the field excites the system into high-energy states, effectively forgetting the initial energy; and (3) the intermediate "tunneling" phase, in which the system succeeds at energy matching. The rate at which distant states are found in the tunneling phase, although exponentially slow in system size, is exponentially faster than that of classical search algorithms.

  11. Benchmarking the D-Wave Two

    NASA Astrophysics Data System (ADS)

    Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel

    2014-03-01

    We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.
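
    The core subproblem in Selby's approach, exactly optimizing a subset of spins while all others stay frozen, is easy to illustrate on a chain (his code uses maximal induced trees; restricting to an induced path keeps the sketch short, and the Ising conventions below are assumptions):

      def optimize_chain(h, J, spins, chain):
          # Exactly minimize E = sum_i h_i s_i + sum_(i,j) J_ij s_i s_j over the
          # spins on 'chain', with all spins outside the chain held fixed.
          onchain = set(chain)
          eff = {i: h.get(i, 0.0) for i in chain}
          for (a, b), c in J.items():               # fold frozen neighbours into fields
              if a in onchain and b not in onchain:
                  eff[a] += c * spins[b]
              elif b in onchain and a not in onchain:
                  eff[b] += c * spins[a]
          E = {s: eff[chain[0]] * s for s in (+1, -1)}
          back = []
          for prev, cur in zip(chain, chain[1:]):   # dynamic program along the chain
              c = J.get((prev, cur), 0.0) + J.get((cur, prev), 0.0)
              newE, choice = {}, {}
              for s in (+1, -1):
                  cands = {sp: E[sp] + c * sp * s for sp in (+1, -1)}
                  sp_best = min(cands, key=cands.get)
                  newE[s], choice[s] = cands[sp_best] + eff[cur] * s, sp_best
              E = newE
              back.append(choice)
          s = min(E, key=E.get)                     # trace back the optimal assignment
          assign = [s]
          for choice in reversed(back):
              s = choice[s]
              assign.append(s)
          for node, s in zip(chain, reversed(assign)):
              spins[node] = s

    Repeating this over randomly chosen chains (or, as in Selby's algorithm, maximal induced trees) from a random initial state gives a large-neighbourhood local search of the kind used in the comparison.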

  12. A Problem on Optimal Transportation

    ERIC Educational Resources Information Center

    Cechlarova, Katarina

    2005-01-01

    Mathematical optimization problems are not typical in the classical curriculum of mathematics. In this paper we show how several generalizations of an easy problem on optimal transportation were solved by gifted secondary school pupils in a correspondence mathematical seminar, how they can be used in university courses of linear programming and…

  13. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory study of a potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. Quantum computers open a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. To explore whether we can exploit the quantum computing machine architecture, we formulate a satellite data assimilation experimental case as a quadratic programming optimization problem and find a transformation that maps it into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) is applied to the data assimilation variables for its invertible decomposition, and all calculations in BWT are performed by Boolean operations. The transformed problem is then to be solved as QUBO instances defined on the Chimera graph of the quantum computer.
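
    The mechanical part of such a mapping is standard: a quadratic objective over binary variables folds its linear terms onto the QUBO diagonal, because x_i^2 = x_i for x_i in {0, 1}. A toy sketch (the matrices below stand in for the binary-wavelet-encoded assimilation problem, which is an assumption here):

      import numpy as np

      # Toy least-squares objective min ||A x - b||^2 over binary x.
      A = np.array([[1.0, 2.0, 0.5],
                    [0.0, 1.0, 1.5]])
      b = np.array([2.0, 1.0])

      # ||Ax - b||^2 = x^T (A^T A) x - 2 (A^T b)^T x + const; with x_i^2 = x_i,
      # the linear term moves onto the diagonal of the QUBO matrix Q.
      Q = A.T @ A
      Q[np.diag_indices_from(Q)] -= 2.0 * (A.T @ b)

      # Brute-force check on 3 variables (an annealer would sample this instead).
      xs = (np.array(bits) for bits in np.ndindex(2, 2, 2))
      best = min(((x, float(x @ Q @ x)) for x in xs), key=lambda t: t[1])
      print(best)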

  14. A Biogeography-Based Optimization Algorithm Hybridized with Tabu Search for the Quadratic Assignment Problem

    PubMed Central

    Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah

    2016-01-01

    The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of the migration strategy of species to derive algorithms for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique that overcomes this weakness of the classical BBO algorithm on QAP by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions within reasonable computational times. Out of 61 benchmark instances tested, the proposed method obtains the best known solutions for 57 of them. PMID:26819585
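
    The two ingredients are easy to state compactly: the QAP objective, and a swap-based tabu move applied where classical BBO would mutate. The sketch below illustrates the idea only; the tenure, neighbourhood, and aspiration details of the paper are not reproduced:

      import itertools
      import numpy as np

      def qap_cost(perm, F, D):
          # Quadratic assignment cost: sum_ij F[i, j] * D[perm[i], perm[j]].
          p = np.asarray(perm)
          return float((F * D[np.ix_(p, p)]).sum())

      def swapped(p, i, j):
          q = list(p)
          q[i], q[j] = q[j], q[i]
          return q

      def tabu_improve(perm, F, D, iters=200, tenure=10):
          # Best admissible pairwise swap each iteration; recent swaps are tabu.
          cur, best = list(perm), list(perm)
          fbest = qap_cost(best, F, D)
          tabu = {}
          for it in range(iters):
              moves = [m for m in itertools.combinations(range(len(cur)), 2)
                       if tabu.get(m, -1) < it]
              if not moves:
                  tabu.clear()
                  moves = list(itertools.combinations(range(len(cur)), 2))
              i, j = min(moves, key=lambda m: qap_cost(swapped(cur, *m), F, D))
              cur = swapped(cur, i, j)
              tabu[(i, j)] = it + tenure
              f = qap_cost(cur, F, D)
              if f < fbest:
                  best, fbest = list(cur), f
          return best, fbest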

  15. Quantum-enhanced reinforcement learning for finite-episode games with discrete state spaces

    NASA Astrophysics Data System (ADS)

    Neukart, Florian; Von Dollen, David; Seidel, Christian; Compostella, Gabriele

    2017-12-01

    Quantum annealing algorithms belong to the class of metaheuristic tools, applicable for solving binary optimization problems. Hardware implementations of quantum annealing, such as the quantum annealing machines produced by D-Wave Systems, have been subject to multiple analyses in research, with the aim of characterizing the technology's usefulness for optimization and sampling tasks. Here, we present a way to partially embed both Monte Carlo policy iteration for finding an optimal policy on random observations, and n sub-optimal state-value functions for approximating an improved state-value function given a policy, for finite horizon games with discrete state spaces, on a D-Wave 2000Q quantum processing unit (QPU). We explain how both problems can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that quantum-enhanced Monte Carlo policy evaluation allows for finding equivalent or better state-value functions for a given policy with the same number of episodes compared to a purely classical Monte Carlo algorithm. Additionally, we describe a quantum-classical policy learning algorithm. Our first and foremost aim is to explain how to represent and solve parts of these problems with the help of the QPU, not to prove supremacy over every existing classical policy evaluation algorithm.

  16. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    NASA Astrophysics Data System (ADS)

    Wen, Yuanhua

    2018-04-01

    To address the source configuration problem in power grids, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are introduced. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated. Comparison of the test results demonstrates the improved algorithm's superiority in convergence and optimization performance, which lays the foundation for the subsequent micro-grid power configuration solution.
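
    The canonical particle swarm update behind both the classical and improved variants is only a few lines; this is the textbook form, not the paper's improved algorithm:

      import numpy as np

      def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
          # Each particle is pulled toward its personal best and the swarm best.
          x = np.random.uniform(lo, hi, (n, dim))
          v = np.zeros((n, dim))
          pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
          g = pbest[pval.argmin()].copy()
          for _ in range(iters):
              r1, r2 = np.random.rand(n, dim), np.random.rand(n, dim)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              val = np.apply_along_axis(f, 1, x)
              improved = val < pval
              pbest[improved], pval[improved] = x[improved], val[improved]
              g = pbest[pval.argmin()].copy()
          return g, float(pval.min())

      print(pso(lambda z: float((z ** 2).sum()), dim=4))   # sphere test function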

  17. Ultimate open pit stochastic optimization

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Caron, Josiane

    2013-02-01

    Classical open pit optimization (the maximum closure problem) is made on block estimates, without directly considering the uncertainty of the block grades. We propose an alternative approach of stochastic optimization. The stochastic optimization is taken as the optimal pit computed on the block expected profits, rather than expected grades, computed from a series of conditional simulations. The stochastic optimization generates, by construction, larger ore and waste tonnages than the classical optimization. Contrary to the classical approach, the stochastic optimization is conditionally unbiased for the realized profit given the predicted profit. A series of simulated deposits with different variograms are used to compare the stochastic approach, the classical approach and the simulated approach that maximizes expected profit among simulated designs. Profits obtained with the stochastic optimization are generally larger than with the classical or simulated pit. The main factor controlling the relative gain of stochastic optimization over the classical approach and the simulated pit is shown to be the information level, as measured by the borehole spacing/range ratio. The relative gains of the stochastic approach over the classical approach increase with the treatment costs but decrease with mining costs. The relative gains of the stochastic approach over the simulated pit approach increase with both the treatment and mining costs. At early stages of an open pit project, when uncertainty is large, the stochastic optimization approach appears preferable to the classical approach or the simulated pit approach for fair comparison of the values of alternative projects and for the initial design and planning of the open pit.
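
    The distinction between optimizing on expected grades and on expected profits is a Jensen-gap effect, easy to see with a nonlinear (cutoff-type) profit function. The numbers below are toy values, not mining data:

      import numpy as np

      rng = np.random.default_rng(0)
      grades = rng.lognormal(mean=0.0, sigma=0.8, size=1000)   # simulated block grades

      def profit(g, price=10.0, cutoff=1.0, mining_cost=2.0):
          # Treat the block as ore only above the cutoff: nonlinear in grade.
          return np.where(g > cutoff, price * g - mining_cost, -mining_cost)

      print("profit at expected grade:", float(profit(grades.mean())))
      print("expected profit:         ", float(profit(grades).mean()))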

  18. Derived heuristics-based consistent optimization of material flow in a gold processing plant

    NASA Astrophysics Data System (ADS)

    Myburgh, Christie; Deb, Kalyanmoy

    2018-01-01

    Material flow in a chemical processing plant often follows complicated control laws and involves plant capacity constraints. Importantly, the process involves discrete scenarios which, when modelled programmatically, require if-then-else statements. The formulation of an optimization problem for such processes therefore becomes complicated, with nonlinear and non-differentiable objective and constraint functions. In handling such problems using classical point-based approaches, users often have to resort to modifications and indirect ways of representing the problem to suit the restrictions associated with classical methods. In a particular gold processing plant optimization problem, these facts are demonstrated by showing results from MATLAB®'s well-known fmincon routine. Thereafter, a customized evolutionary optimization procedure capable of handling all the complexities posed by the problem is developed. Although the evolutionary approach already produced results with comparatively lower variance over multiple runs, its performance has been further enhanced by introducing derived heuristics associated with the problem. In this article, the development and usage of derived heuristics in a practical problem are presented and their importance for quick convergence of the overall algorithm is demonstrated.

  19. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems mix continuous and discrete optimization. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, including gas/water network optimization, traffic optimization and micro-chip cooling optimization. Currently, no efficient classical algorithm exists which guarantees a global minimum for PDECCO problems. A new mapping has been developed that transforms PDECCO problems with only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.

  20. Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClean, Jarrod R.; Kimchi-Schwartz, Mollie E.; Carter, Jonathan

    Using quantum devices supported by classical computational resources is a promising approach to quantum-enabled computation. One powerful example of such a hybrid quantum-classical approach, optimized for classically intractable eigenvalue problems, is the variational quantum eigensolver, built to utilize quantum resources for the solution of eigenvalue problems and optimizations with minimal coherence time requirements by leveraging classical computational resources. These algorithms have been placed among the leading candidates to achieve supremacy over classical computation. Here, we provide evidence for the conjecture that variational approaches can automatically suppress even nonsystematic decoherence errors by introducing an exactly solvable channel model of variational state preparation. Moreover, we develop a more general hierarchy of measurement and classical computation that allows one to obtain increasingly accurate solutions by leveraging additional measurements and classical resources. In conclusion, we demonstrate numerically on a sample electronic system that this method both allows for the accurate determination of excited electronic states and reduces the impact of decoherence, without using any additional quantum coherence time or formal error-correction codes.
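
    The feedback loop is simple to emulate classically: a gradient-free optimizer queries a "device" for the energies of trial states. In this single-qubit sketch the device is a NumPy simulation, and the Hamiltonian, ansatz, and optimizer choice are all assumptions for illustration:

      import numpy as np
      from scipy.optimize import minimize

      # Toy Hamiltonian H = X + 0.5 Z and hardware-style ansatz |psi> = Ry(theta)|0>.
      X = np.array([[0.0, 1.0], [1.0, 0.0]])
      Z = np.array([[1.0, 0.0], [0.0, -1.0]])
      H = X + 0.5 * Z

      def device_energy(theta):
          # Stand-in for prepare-and-measure on quantum hardware.
          psi = np.array([np.cos(theta[0] / 2.0), np.sin(theta[0] / 2.0)])
          return float(psi @ H @ psi)

      res = minimize(device_energy, x0=[0.1], method="COBYLA")   # classical outer loop
      print(res.fun, "vs exact ground energy", np.linalg.eigvalsh(H)[0])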

  21. Stable sequential Kuhn-Tucker theorem in iterative form or a regularized Uzawa algorithm in a regular nonlinear programming problem

    NASA Astrophysics Data System (ADS)

    Sumin, M. I.

    2015-06-01

    A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied, assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference between the proven theorem and its classical namesake is that the former takes into account the possible instability of the problem under perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. The theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, it is applied to the "simplest" nonlinear optimal control problem, namely, a time-optimal control problem.

  22. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring resulting from a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted to the classical flowshop scheduling problem and to the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.

  23. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.

  24. QSPIN: A High Level Java API for Quantum Computing Experimentation

    NASA Technical Reports Server (NTRS)

    Barth, Tim

    2017-01-01

    QSPIN is a high-level Java API for experimentation with QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high-level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical optimization methods such as GPU-accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling is provided to demonstrate current capabilities.

  25. The Optimal Location of GEODSS Sensors in Canada

    DTIC Science & Technology

    1991-02-01

    Interactive procedures for solving multiobjective transportation problems. A transportation problem is a classical linear programming problem where a...product must be transported from each of m sources to any of n destinations such that one or more objectives are optimized (36:96). The first algorithm...0, k = 1,...,L, where z_l is the lth element of z_k. The function z'(x) can now be optimized using any efficient, single-objective transportation

  26. Experimental quantum annealing: case study involving the graph isomorphism problem.

    PubMed

    Zick, Kenneth M; Shehab, Omar; French, Matthew

    2015-06-08

    Quantum annealing is a proposed combinatorial optimization technique meant to exploit quantum mechanical effects such as tunneling and entanglement. Real-world quantum annealing-based solvers require a combination of annealing and classical pre- and post-processing; at this early stage, little is known about how to partition and optimize the processing. This article presents an experimental case study of quantum annealing and some of the factors involved in real-world solvers, using a 504-qubit D-Wave Two machine and the graph isomorphism problem. To illustrate the role of classical pre-processing, a compact Hamiltonian is presented that enables a reduced Ising model for each problem instance. On random N-vertex graphs, the median number of variables is reduced from N² to fewer than N log₂ N and solvable graph sizes increase from N = 5 to N = 13. Additionally, error correction via classical post-processing majority voting is evaluated. While the solution times are not competitive with classical approaches to graph isomorphism, the enhanced solver ultimately classified correctly every problem that was mapped to the processor and demonstrated clear advantages over the baseline approach. The results shed some light on the nature of real-world quantum annealing and the associated hybrid classical-quantum solvers.

  27. Dynamic programming and graph algorithms in computer vision.

    PubMed

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
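
    For the stereo case mentioned above, the classic dynamic program runs independently per scanline: a data term for matching plus a smoothness penalty between neighbouring pixels. A minimal sketch (the absolute-difference data cost and linear smoothness penalty are simplifying assumptions):

      import numpy as np

      def scanline_disparity(left, right, max_d=16, lam=0.1):
          # C[d]: best cost of pixels 0..i with disparity d at pixel i.
          n = len(left)
          def data(i, d):
              return abs(float(left[i]) - float(right[i - d])) if i - d >= 0 else 1e9
          d_axis = np.arange(max_d)
          C = np.array([data(0, d) for d in range(max_d)])
          back = np.zeros((n, max_d), dtype=int)
          for i in range(1, n):
              nxt = np.empty(max_d)
              for d in range(max_d):
                  trans = C + lam * np.abs(d_axis - d)   # smoothness penalty
                  back[i, d] = int(trans.argmin())
                  nxt[d] = trans.min() + data(i, d)
              C = nxt
          disp = np.zeros(n, dtype=int)                  # backtrack the optimal path
          disp[-1] = int(C.argmin())
          for i in range(n - 1, 0, -1):
              disp[i - 1] = back[i, disp[i]]
          return disp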

  28. Necessary and sufficient optimality conditions for classical simulations of quantum communication processes

    NASA Astrophysics Data System (ADS)

    Montina, Alberto; Wolf, Stefan

    2014-07-01

    We consider the process consisting of preparation, transmission through a quantum channel, and subsequent measurement of quantum states. The communication complexity of the channel is the minimal amount of classical communication required for classically simulating it. Recently, we reduced the computation of this quantity to a convex minimization problem with linear constraints. Every solution of the constraints provides an upper bound on the communication complexity. In this paper, we derive the dual maximization problem of the original one. The feasible points of the dual constraints, which are inequalities, give lower bounds on the communication complexity, as illustrated with an example. The optimal values of the two problems turn out to be equal (zero duality gap). By this property, we provide necessary and sufficient conditions for optimality in terms of a set of equalities and inequalities. We use these conditions and two reasonable but unproven hypotheses to derive the lower bound n·2^(n-1) for a noiseless quantum channel with capacity equal to n qubits. This lower bound can have interesting consequences in the context of the recent debate on the reality of the quantum state.

  29. Performance evaluation of coherent Ising machines against classical neural networks

    NASA Astrophysics Data System (ADS)

    Haribara, Yoshitaka; Ishikawa, Hitoshi; Utsunomiya, Shoko; Aihara, Kazuyuki; Yamamoto, Yoshihisa

    2017-12-01

    The coherent Ising machine is expected to find near-optimal solutions to various combinatorial optimization problems, as has been experimentally confirmed with optical parametric oscillators and a field programmable gate array circuit. Similar mathematical models were proposed three decades ago by Hopfield et al. in the context of classical neural networks. In this article, we compare the computational performance of the two models.

  30. INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN

    EPA Science Inventory

    The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...

  31. The optimal location of piezoelectric actuators and sensors for vibration control of plates

    NASA Astrophysics Data System (ADS)

    Kumar, K. Ramesh; Narayanan, S.

    2007-12-01

    This paper considers the optimal placement of collocated piezoelectric actuator-sensor pairs on a thin plate using a model-based linear quadratic regulator (LQR) controller. LQR performance is taken as objective for finding the optimal location of sensor-actuator pairs. The problem is formulated using the finite element method (FEM) as multi-input-multi-output (MIMO) model control. The discrete optimal sensor and actuator location problem is formulated in the framework of a zero-one optimization problem. A genetic algorithm (GA) is used to solve the zero-one optimization problem. Different classical control strategies like direct proportional feedback, constant-gain negative velocity feedback and the LQR optimal control scheme are applied to study the control effectiveness.

  32. Generalized bipartite quantum state discrimination problems with sequential measurements

    NASA Astrophysics Data System (ADS)

    Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki

    2018-02-01

    We investigate the optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems under the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem involving only Alice's measurement, and that it is a convex program, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of finding an optimal global measurement, the dual problem and the necessary and sufficient optimality conditions have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful for obtaining analytical and numerical expressions for optimal sequential measurements. Examples in which our results yield an analytical expression for an optimal sequential measurement are provided.

  33. Multitrace/singletrace formulations and Domain Decomposition Methods for the solution of Helmholtz transmission problems for bounded composite scatterers

    NASA Astrophysics Data System (ADS)

    Jerez-Hanckes, Carlos; Pérez-Arancibia, Carlos; Turc, Catalin

    2017-12-01

    We present Nyström discretizations of multitrace/singletrace formulations and non-overlapping Domain Decomposition Methods (DDM) for the solution of Helmholtz transmission problems for bounded composite scatterers with piecewise constant material properties. We investigate the performance of DDM with both classical Robin and optimized transmission boundary conditions. The optimized transmission boundary conditions incorporate square root Fourier multiplier approximations of Dirichlet to Neumann operators. While the multitrace/singletrace formulations as well as the DDM that use classical Robin transmission conditions are not particularly well suited for Krylov subspace iterative solutions of high-contrast high-frequency Helmholtz transmission problems, we provide ample numerical evidence that DDM with optimized transmission conditions constitute efficient computational alternatives for these type of applications. In the case of large numbers of subdomains with different material properties, we show that the associated DDM linear system can be efficiently solved via hierarchical Schur complements elimination.

  34. Layout optimization with algebraic multigrid methods

    NASA Technical Reports Server (NTRS)

    Regler, Hans; Ruede, Ulrich

    1993-01-01

    Finding the optimal position for the individual cells (also called functional modules) on the chip surface is an important and difficult step in the design of integrated circuits. This paper deals with the problem of relative placement, that is, the minimization of a quadratic functional with a large, sparse, positive definite system matrix. The basic optimization problem must be augmented by constraints to inhibit solutions where cells overlap. Besides classical iterative methods based on conjugate gradients (CG), we show that algebraic multigrid methods (AMG) provide an interesting alternative. For moderately sized examples with about 10000 cells, AMG is already competitive with CG and is expected to be superior for larger problems. Besides the classical 'multiplicative' AMG algorithm, where the levels are visited sequentially, we propose an 'additive' variant of AMG where levels may be treated in parallel and that is suitable as a preconditioner in the CG algorithm.

  35. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best plan of the firm's activities. The solution of a particular problem of this type is presented.

  36. Refined genetic algorithm -- Economic dispatch example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheble, G.B.; Brittig, K.

    1995-02-01

    A genetic-based algorithm is used to solve an economic dispatch (ED) problem. The algorithm utilizes payoff information of prospective solutions to evaluate optimality. Thus, the constraints of classical Lagrangian techniques on unit curves are eliminated. Using an economic dispatch problem as a basis for comparison, several different techniques which enhance program efficiency and accuracy, such as mutation prediction, elitism, interval approximation and penalty factors, are explored. Two unique genetic algorithms are also compared. The results are verified for a sample problem using a classical technique.
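
    A penalty-factor genetic algorithm for a toy three-unit dispatch fits in a few lines; the fuel-cost coefficients, limits, and GA settings here are illustrative, not the paper's:

      import numpy as np

      rng = np.random.default_rng(1)
      a = np.array([0.008, 0.010, 0.012])   # toy quadratic fuel-cost coefficients
      b = np.array([7.0, 6.5, 7.2])
      demand, penalty = 300.0, 100.0        # MW demand; penalty factor for balance
      lo, hi = 50.0, 200.0                  # generator limits (MW)

      def fitness(P):
          cost = (a * P ** 2 + b * P).sum(axis=1)
          return cost + penalty * np.abs(P.sum(axis=1) - demand)

      pop = rng.uniform(lo, hi, (40, 3))
      for _ in range(300):
          order = np.argsort(fitness(pop))
          parents = pop[order[:20]]                       # elitism: keep the best half
          children = (parents[rng.integers(20, size=20)]
                      + rng.normal(0.0, 2.0, (20, 3)))    # Gaussian mutation
          pop = np.vstack([parents, np.clip(children, lo, hi)])
      best = pop[fitness(pop).argmin()]
      print(best, float(best.sum()))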

  37. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  38. Control optimization of a lifting body entry problem by an improved and a modified method of perturbation function. Ph.D. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.

    1974-01-01

    The solution of a complex entry optimization problem was studied. The problem was transformed into a two-point boundary value problem using classical calculus of variations methods. Two perturbation methods were devised, which attempt to desensitize the solution of this type of problem to the required initial co-state estimates. Numerical results are also presented for the optimal solutions obtained from a number of different initial co-state estimates. The perturbation methods were compared, and they are found to be an improvement over existing methods.

  39. Configuration optimization of space structures

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos; Crivelli, Luis A.; Vandenbelt, David

    1991-01-01

    The objective is to develop a computer aid for the conceptual/initial design of aerospace structures, allowing configurations and shape to be a priori design variables. The topics are presented in viewgraph form and include the following: Kikuchi's homogenization method; a classical shape design problem; homogenization method steps; a 3D mechanical component design example; forming a homogenized finite element; a 2D optimization problem; treatment of the volume inequality constraint; algorithms for the volume inequality constraint; objective function derivatives--taking advantage of design locality; stiffness variations; variations of potential; and schematics of the optimization problem.

  40. Solving quantum optimal control problems using Clebsch variables and Lin constraints

    NASA Astrophysics Data System (ADS)

    Delgado-Téllez, M.; Ibort, A.; Rodríguez de la Peña, T.

    2018-01-01

    Clebsch variables (and Lin constraints) are applied to the study of a class of optimal control problems for affine-controlled quantum systems. The optimal control problem is modelled with controls defined on an auxiliary space where the dynamical group of the system acts freely. The reciprocity between the two theories, the classical theory defined by the objective functional and the quantum system, is established by using a suitable version of Lagrange's multipliers theorem and a geometrical interpretation of the constraints of the system as defining a subspace of horizontal curves in an associated bundle. It is shown how the solutions of the variational problem defined by the objective functional determine solutions of the quantum problem. This yields a new way of obtaining explicit solutions for a family of optimal control problems for affine-controlled quantum systems (finite or infinite dimensional). One of its main advantages is that the use of Clebsch variables allows such solutions to be computed from solutions of invariant problems, which can often be computed explicitly. The procedure can be presented as an algorithm applicable to a large class of systems. Finally, some simple examples illustrating the main features of the theory are discussed: spin control, a simple quantum Hamiltonian with an 'Elroy beanie' type classical model, and a controlled one-dimensional quantum harmonic oscillator.

  41. Analogue of the Kelley condition for optimal systems with retarded control

    NASA Astrophysics Data System (ADS)

    Mardanov, Misir J.; Melikov, Telman K.

    2017-07-01

    In this paper, we consider an optimal control problem with retarded control and study a larger class of singular (in the classical sense) controls. Kelley-type and equality-type optimality conditions are obtained. To prove our main results, we use the Legendre polynomials as variations of the control.

  42. The Soda Can Optimization Problem: Getting Close to the Real Thing

    ERIC Educational Resources Information Center

    Premadasa, Kirthi; Martin, Paul; Sprecher, Bryce; Yang, Lai; Dodge, Noah-Helen

    2016-01-01

    Optimizing the dimensions of a soda can is a classic problem that is frequently posed to freshman calculus students. However, if we only minimize the surface area subject to a fixed volume, the result is a can with a square edge-on profile, and this differs significantly from actual cans. By considering a more realistic model for the can that…
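
    The square-profile result referred to above follows from the standard model (closed cylinder, fixed volume V, uniform wall: assumptions the article goes on to relax):

      \min_{r,h>0} A = 2\pi r^2 + 2\pi r h \quad \text{subject to} \quad \pi r^2 h = V
      \;\Rightarrow\; A(r) = 2\pi r^2 + \frac{2V}{r}, \qquad
      A'(r) = 4\pi r - \frac{2V}{r^2} = 0 \;\Rightarrow\;
      r = \left(\frac{V}{2\pi}\right)^{1/3}, \quad h = \frac{V}{\pi r^2} = 2r,

    i.e. height equals diameter, which is the square edge-on profile that real cans do not have.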

  43. Quantum approach to classical statistical mechanics.

    PubMed

    Somma, R D; Batista, C D; Ortiz, G

    2007-07-20

    We present a new approach to study the thermodynamic properties of d-dimensional classical systems by reducing the problem to the computation of ground state properties of a d-dimensional quantum model. This classical-to-quantum mapping allows us to extend the scope of standard optimization methods by unifying them under a general framework. The quantum annealing method is naturally extended to simulate classical systems at finite temperatures. We derive the rates to assure convergence to the optimal thermodynamic state using the adiabatic theorem of quantum mechanics. For simulated and quantum annealing, we obtain the asymptotic rates T(t) ≈ pN/(k_B log t) and γ(t) ≈ (Nt)^(-c/N), for the temperature and magnetic field, respectively. Other annealing strategies are also discussed.

  44. Practical optimal flight control system design for helicopter aircraft. Volume 2: Software user's guide

    NASA Technical Reports Server (NTRS)

    Riedel, S. A.

    1979-01-01

    A method by which modern and classical control theory techniques may be integrated in a synergistic fashion and used in the design of practical flight control systems is presented. A general procedure is developed, and several illustrative examples are included. Emphasis is placed not only on the synthesis of the design, but on the assessment of the results as well. The first step is to establish the differences, distinguishing characteristics and connections between the modern and classical control theory approaches. Ultimately, this uncovers a relationship between bandwidth goals familiar in classical control and cost function weights in the equivalent optimal system. In order to obtain a practical optimal solution, it is also necessary to formulate the problem very carefully, and each choice of state, measurement and output variable must be judiciously considered. Once design goals are established and problem formulation completed, the control system is synthesized in a straightforward manner. Three steps are involved: filter-observer solution, regulator solution, and the combination of those two into the controller. Assessment of the controller permits an examination and expansion of the synthesis results.

  45. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
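
    The classical barrier idea, and the ill-conditioning that motivates the paper's alternatives, can be seen in a tiny log-barrier loop. The toy objective and the derivative-free inner solver are both assumptions chosen for brevity; note the barrier Hessian term mu/x_i^2, which blows up near an active bound as mu shrinks:

      import numpy as np
      from scipy.optimize import minimize

      def f(x):
          # Unconstrained minimizer is (1, -0.5), so the bound x >= 0 is active at x[1].
          return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

      def barrier(x, mu):
          if np.any(x <= 0.0):
              return np.inf          # keep iterates strictly feasible
          return f(x) - mu * np.log(x).sum()

      x = np.array([1.0, 1.0])
      for mu in [1.0, 0.1, 0.01, 0.001]:
          x = minimize(lambda z: barrier(z, mu), x, method="Nelder-Mead").x
          print(mu, x)               # x[1] -> 0 from above as mu -> 0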

  11. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world problem formulated as a weighted k-clique problem, and through this experiment we provide insight into the state of quantum machine learning.

  12. Solving Optimization Problems with Spreadsheets

    ERIC Educational Resources Information Center

    Beigie, Darin

    2017-01-01

    Spreadsheets provide a rich setting for first-year algebra students to solve problems. Individual spreadsheet cells play the role of variables, and creating algebraic expressions for a spreadsheet to perform a task allows students to achieve a glimpse of how mathematics is used to program a computer and solve problems. Classic optimization…

  13. Non linear predictive control of a LEGO mobile robot

    NASA Astrophysics Data System (ADS)

    Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.

    2014-10-01

    Metaheuristics are general-purpose heuristics which have shown great potential for the solution of difficult optimization problems. In this work, we apply one such metaheuristic, particle swarm optimization (PSO), to the optimization problem arising in nonlinear model predictive control (NLMPC). This algorithm is easy to code and may be considered an alternative to more classical solution procedures. The PSO-NLMPC scheme is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
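
    As a rough sketch of the approach (ours, not the authors' implementation), the following bare-bones PSO loop minimizes a placeholder NMPC-style tracking cost over a control horizon; the swarm parameters, bounds, and cost are illustrative assumptions.

        import random

        def pso_minimize(cost, dim, n_particles=30, iters=100, lo=-1.0, hi=1.0,
                         w=0.7, c1=1.5, c2=1.5):
            """Particle swarm optimization over the box [lo, hi]^dim."""
            X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            V = [[0.0] * dim for _ in range(n_particles)]
            P = [x[:] for x in X]                       # personal bests
            pbest = [cost(x) for x in X]
            g = min(range(n_particles), key=lambda i: pbest[i])
            G, gbest = P[g][:], pbest[g]                # global best
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        V[i][d] = (w * V[i][d]
                                   + c1 * random.random() * (P[i][d] - X[i][d])
                                   + c2 * random.random() * (G[d] - X[i][d]))
                        X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
                    c = cost(X[i])
                    if c < pbest[i]:
                        P[i], pbest[i] = X[i][:], c
                        if c < gbest:
                            G, gbest = X[i][:], c
            return G, gbest

        # Placeholder NMPC-style cost: track a reference value of 0.5 over a 5-step horizon
        horizon_cost = lambda u: sum((0.5 - ui) ** 2 for ui in u)
        print(pso_minimize(horizon_cost, dim=5))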

  14. Sensitivity analysis, approximate analysis, and design optimization for internal and external viscous flows

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.

    1991-01-01

    A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.

  15. Simultaneous optimization of loading pattern and burnable poison placement for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alim, F.; Ivanov, K.; Yilmaz, S.

    2006-07-01

    To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) is developed. This code is applicable to all types and geometries of PWR core structures, with an unlimited number of fuel assembly (FA) types in the inventory. For this purpose an innovative genetic algorithm is developed by modifying the classical representation of the genotype, and in-core fuel management heuristic rules are introduced into GARCO. The core re-load design optimization has two parts, loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem due to its large size. Separating the problem into two parts provides a practical way to solve it; however, the result of this method does not reflect the real optimal solution. GARCO-PSU solves LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)

  16. Asset Allocation and Optimal Contract for Delegated Portfolio Management

    NASA Astrophysics Data System (ADS)

    Liu, Jingjun; Liang, Jianfeng

    This article studies the portfolio selection and contracting problems between an individual investor and a professional portfolio manager in a discrete-time principal-agent framework. Portfolio selection and optimal contracts are obtained in closed form. The optimal contract is composed of a fixed fee, the cost, and a fraction of the excess expected return. The optimal portfolio is similar to the classical two-fund separation theorem.

  17. Robust quantum optimizer with full connectivity.

    PubMed

    Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P

    2017-04-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.

  18. Extremal Optimization: Methods Derived from Co-Evolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boettcher, S.; Percus, A.G.

    1999-07-13

    We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene-pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
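
    A minimal sketch (ours, not the authors' code) of the tau-EO variant for balanced graph bipartitioning: each vertex's fitness is the fraction of its edges kept inside its own part, and at each step a poorly fit vertex, chosen by a power-law rank rule with the single parameter tau, is swapped across the cut. The toy graph is an assumption.

        import random

        def tau_eo_partition(edges, n, iters=2000, tau=1.4):
            """tau-EO for balanced bipartitioning: minimize edges crossing the cut."""
            side = [i % 2 for i in range(n)]             # balanced initial partition
            adj = [[] for _ in range(n)]
            for u, v in edges:
                adj[u].append(v); adj[v].append(u)
            def cut():
                return sum(side[u] != side[v] for u, v in edges)
            best, best_side = cut(), side[:]
            for _ in range(iters):
                # fitness of a vertex = fraction of its edges kept inside its part
                fit = [(sum(side[u] == side[w] for w in adj[u]) / len(adj[u])
                        if adj[u] else 1.0, u) for u in range(n)]
                fit.sort()                               # worst fitness first
                # power-law rank selection P(rank k) ~ k^(-tau)
                rank = min(int(random.paretovariate(tau - 1)), n) - 1
                u = fit[rank][1]
                v = random.choice([w for w in range(n) if side[w] != side[u]])
                side[u], side[v] = side[v], side[u]      # swap keeps the halves balanced
                c = cut()
                if c < best:
                    best, best_side = c, side[:]
            return best, best_side

        # Toy graph: two 4-cliques joined by a single edge (optimal cut = 1)
        K = [(i, j) for i in range(4) for j in range(i + 1, 4)]
        edges = K + [(i + 4, j + 4) for i, j in K] + [(0, 4)]
        print(tau_eo_partition(edges, n=8)[0])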

  19. Solving Optimization Problems with Dynamic Geometry Software: The Airport Problem

    ERIC Educational Resources Information Center

    Contreras, José

    2014-01-01

    This paper describes how the author's students (in-service and pre-service secondary mathematics teachers) enrolled in college geometry courses use the Geometers' Sketchpad (GSP) to gain insight to formulate, confirm, test, and refine conjectures to solve the classical airport problem for triangles. The students are then provided with strategic…

  20. Functionality limit of classical simulated annealing

    NASA Astrophysics Data System (ADS)

    Hasegawa, M.

    2015-09-01

    By analyzing the system dynamics in the landscape paradigm, the optimization function of classical simulated annealing is reviewed on random traveling salesman problems. The properly functioning region of the algorithm is experimentally determined in the size-time plane, and the influence of its boundary on the scalability test is examined in the standard framework of this method. From both results, an empirical choice of temperature schedule length is plausibly explained as a minimum requirement for the algorithm to maintain its scalability within its functionality limit. The study exemplifies the applicability of computational-physics analysis to optimization algorithm research.

  1. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
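
    The following numpy sketch (our toy illustration, not the paper's construction) conveys the flavor of such a mapping: a quadratic cost in continuous variables x and binary controls d with a linear constraint A x = B d + c is reduced, by eliminating x = A^{-1}(B d + c), to a quadratic binary problem whose size depends only on the number of binaries.

        import numpy as np

        rng = np.random.default_rng(0)
        n_x, n_d = 4, 3                      # continuous variables, binary controls

        # Quadratic cost 0.5 x^T Q x (Q positive definite), constraint A x = B d + c
        M = rng.standard_normal((n_x, n_x)); Q = M @ M.T + n_x * np.eye(n_x)
        A = rng.standard_normal((n_x, n_x)) + n_x * np.eye(n_x)
        B = rng.standard_normal((n_x, n_d)); c = rng.standard_normal(n_x)

        # Eliminate x = A^{-1}(B d + c); the cost becomes quadratic in d alone:
        # 0.5 (S d + s)^T Q (S d + s)  with  S = A^{-1} B,  s = A^{-1} c
        S = np.linalg.solve(A, B); s = np.linalg.solve(A, c)
        J = S.T @ Q @ S                      # quadratic (QUBO-style) couplings
        h = S.T @ Q @ s                      # linear terms (d_i^2 = d_i for binaries)
        const = 0.5 * s @ Q @ s

        def qubo_energy(d):
            return 0.5 * d @ J @ d + h @ d + const

        # Brute-force over the 2^n_d controls (an AQO would sample this QUBO instead)
        best = min((qubo_energy(np.array(bits)), bits)
                   for bits in np.ndindex(*(2,) * n_d))
        print(best)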

  2. An alternative approach for neural network evolution with a genetic algorithm: crossover by combinatorial optimization.

    PubMed

    García-Pedrajas, Nicolás; Ortiz-Boyer, Domingo; Hervás-Martínez, César

    2006-05-01

    In this work we present a new approach to the crossover operator in the genetic evolution of neural networks. The most widely used evolutionary computation paradigm for neural network evolution is evolutionary programming, usually preferred due to the problems caused by applying crossover to neural network evolution. However, crossover is the most innovative operator within the field of evolutionary computation. One of the most notorious problems with the application of crossover to neural networks is known as the permutation problem: the same network can be represented in a genetic coding by many different codifications. Our approach modifies the standard crossover operator to take into account the special features of the individuals to be mated. We present a new model for mating individuals that considers the structure of the hidden layer and redefines the crossover operator. As each hidden node represents a non-linear projection of the input variables, we approach crossover as a combinatorial optimization problem: the extraction of a subset of near-optimal projections to create the hidden layer of the new network. This new approach is compared to classical crossover on 25 real-world problems, with excellent performance. Moreover, the networks obtained are much smaller than those obtained with the classical crossover operator.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guta, Madalin; Matsumoto, Keiji

    We construct the optimal one-to-two cloning transformation for the family of displaced thermal equilibrium states of a harmonic oscillator with a fixed and known temperature. The transformation is Gaussian and it is optimal with respect to the figure of merit based on the joint output state and norm distance. The proof of the result is based on the equivalence between the optimal cloning problem and that of optimal amplification of Gaussian states, which is then reduced to an optimization problem for diagonal states of a quantum oscillator. A key concept in finding the optimum is that of stochastic ordering, which plays a similar role in the purely classical problem of Gaussian cloning. The result is then extended to the case of n-to-m cloning of mixed Gaussian states.

  4. Robust optimization modelling with applications to industry and environmental problems

    NASA Astrophysics Data System (ADS)

    Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman

    2017-10-01

    Robust Optimization (RO) modeling is an established methodology for handling data uncertainty in optimization problems. The main challenge in the RO methodology is how and when we can reformulate the robust counterpart of an uncertain problem as a computationally tractable optimization problem, or at least approximate the robust counterpart by a tractable problem. By definition, the robust counterpart highly depends on how we choose the uncertainty set; as a consequence, we can meet this challenge only if this set is chosen in a suitable way. The development of RO has been fast: in 2004, a new approach called Adjustable Robust Optimization (ARO) was introduced to handle uncertain problems in which some decision variables must be decided as "wait and see" variables, in contrast with classic RO, which models all decision variables as "here and now". In ARO, the uncertain problem can be considered a multistage decision problem whose decision variables include such wait-and-see variables. In this paper we present applications of both RO and ARO, and we briefly present all results to underline the importance of RO and ARO in many real-life problems.

  5. A numerical method for solving a nonlinear 2-D optimal control problem with the classical diffusion equation

    NASA Astrophysics Data System (ADS)

    Mamehrashi, K.; Yousefi, S. A.

    2017-02-01

    This paper presents a numerical solution for a nonlinear 2-D optimal control problem (2DOP). The performance index of the nonlinear 2DOP is described by a state and a control function, and the dynamic constraint of the system is given by a classical diffusion equation. The Ritz method, based on a Legendre polynomial basis, is used for finding the numerical solution of the problem. By using this method, the given nonlinear 2DOP reduces to the problem of solving a system of algebraic equations. The benefit of the method is that it provides great flexibility in imposing the given initial and boundary conditions of the problem. Moreover, compared with the eigenfunction method, satisfactory results are obtained with only a small polynomial order. This numerical approach is applicable and effective for this kind of nonlinear 2DOP. The convergence of the method is extensively discussed, and finally two illustrative examples are included to demonstrate the validity and applicability of the new technique developed in the current work.

  6. Tunneling and speedup in quantum optimization for permutation-symmetric problems

    DOE PAGES

    Muthukrishnan, Siddharth; Albash, Tameem; Lidar, Daniel A.

    2016-07-21

    Tunneling is often claimed to be the key mechanism underlying possible speedups in quantum optimization via quantum annealing (QA), especially for problems featuring a cost function with tall and thin barriers. We present and analyze several counterexamples from the class of perturbed Hamming weight optimization problems with qubit permutation symmetry. We first show that, for these problems, the adiabatic dynamics that make tunneling possible should be understood not in terms of the cost function but rather the semiclassical potential arising from the spin-coherent path-integral formalism. We then provide an example where the shape of the barrier in the final cost function is short and wide, which might suggest no quantum advantage for QA, yet where tunneling renders QA superior to simulated annealing in the adiabatic regime. However, the adiabatic dynamics turn out not to be optimal. Instead, an evolution involving a sequence of diabatic transitions through many avoided-level crossings, involving no tunneling, is optimal and outperforms adiabatic QA. We show that this phenomenon of speedup by diabatic transitions is not unique to this example, and we provide an example where it provides an exponential speedup over adiabatic QA. In yet another twist, we show that a classical algorithm, spin-vector dynamics, is at least as efficient as diabatic QA. Lastly, in a different example with a convex cost function, the diabatic transitions result in a speedup relative to both adiabatic QA with tunneling and classical spin-vector dynamics.

  8. Quantum speedup in solving the maximal-clique problem

    NASA Astrophysics Data System (ADS)

    Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang

    2018-03-01

    The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices, with quadratic speedup over its classical counterparts: the time and spatial complexities are reduced to O(√(2^n)) and O(n^2), respectively. With respect to oracle-related quantum algorithms for NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.

  9. Fireworks algorithm for mean-VaR/CVaR models

    NASA Astrophysics Data System (ADS)

    Zhang, Tingting; Liu, Zhifeng

    2017-10-01

    Intelligent algorithms have been widely applied to portfolio optimization problems. In this paper, we introduce a novel intelligent algorithm, the fireworks algorithm, to solve the mean-VaR/CVaR model for the first time. The results show that, compared with the classical genetic algorithm, the fireworks algorithm not only improves the optimization accuracy and speed, but also makes the optimal solution more stable. We repeat our experiments at different confidence levels and different degrees of risk aversion, and the results are robust. This suggests that the fireworks algorithm has more advantages than the genetic algorithm in solving the portfolio optimization problem, and that applying it in this field is feasible and promising.

  10. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
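
    To ground the FIM-based criteria, here is a small numpy sketch (ours, not the paper's code) that builds the Fisher Information Matrix for the Verhulst-Pearl logistic model x(t) = K/(1 + (K/x0 - 1)e^{-rt}) at candidate sampling times and scores designs with the D-optimality criterion det F; the parameter values and candidate designs are illustrative.

        import numpy as np

        K, r, x0 = 17.5, 0.7, 0.1            # illustrative logistic parameters

        def sensitivities(t):
            """Partial derivatives of x(t; K, r) for the logistic growth model."""
            a = K / x0 - 1.0
            den = 1.0 + a * np.exp(-r * t)
            dx_dK = 1.0 / den - (K * np.exp(-r * t) / x0) / den ** 2
            dx_dr = K * a * t * np.exp(-r * t) / den ** 2
            return np.stack([dx_dK, dx_dr], axis=-1)

        def fim(times, sigma=1.0):
            """Fisher Information Matrix under i.i.d. Gaussian observation noise."""
            S = sensitivities(np.asarray(times, float))
            return S.T @ S / sigma ** 2

        # Compare a clustered design with a spread-out one via D-optimality (det FIM)
        for design in ([1, 2, 3, 4, 5], [1, 5, 10, 15, 20]):
            print(design, "det FIM =", np.linalg.det(fim(design)))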

  11. A robust component mode synthesis method for stochastic damped vibroacoustics

    NASA Astrophysics Data System (ADS)

    Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine

    2010-01-01

    In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach consists of introducing visco-elastic and poro-elastic materials either on the structure or on the cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low-frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient in predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or in the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response of residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.

  12. Numerical Leak Detection in a Pipeline Network of Complex Structure with Unsteady Flow

    NASA Astrophysics Data System (ADS)

    Aida-zade, K. R.; Ashrafova, E. R.

    2017-12-01

    An inverse problem for a pipeline network of complex loopback structure is solved numerically. The problem is to determine the locations and amounts of leaks from unsteady flow characteristics measured at some pipeline points. The features of the problem include impulse functions involved in a system of hyperbolic differential equations, the absence of classical initial conditions, and boundary conditions specified as nonseparated relations between the states at the endpoints of adjacent pipeline segments. The problem is reduced to a parametric optimal control problem without initial conditions, but with nonseparated boundary conditions. The latter problem is solved by applying first-order optimization methods. Results of numerical experiments are presented.

  13. Quasivelocities and Optimal Control for underactuated Mechanical Systems

    NASA Astrophysics Data System (ADS)

    Colombo, L.; de Diego, D. Martín

    2010-07-01

    This paper is concerned with the application of the theory of quasivelocities to optimal control for underactuated mechanical systems. Using this theory, we convert the original problem into a variational second-order Lagrangian system subject to constraints. The equations of motion are geometrically derived using an adaptation of the classical Skinner and Rusk formalism.

  16. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by Shifted Hammersley Method (SHM) and calculated by finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated through changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.

  17. A framework for simultaneous aerodynamic design optimization in the presence of chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Günther, Stefanie, E-mail: stefanie.guenther@scicomp.uni-kl.de; Gauger, Nicolas R.; Wang, Qiqi

    Integrating existing solvers for unsteady partial differential equations into a simultaneous optimization method is challenging due to the forward-in-time information propagation of classical time-stepping methods. This paper applies the simultaneous single-step one-shot optimization method to a reformulated unsteady constraint that allows for both forward- and backward-in-time information propagation. Especially in the presence of chaotic and turbulent flow, solving the initial value problem simultaneously with the optimization problem often scales poorly with the time domain length. The new formulation relaxes the initial condition and instead solves a least squares problem for the discrete partial differential equations. This enables efficient one-shot optimization that is independent of the time domain length, even in the presence of chaos.

  18. Generalized Pattern Search methods for a class of nonsmooth optimization problems with structure

    NASA Astrophysics Data System (ADS)

    Bogani, C.; Gasparo, M. G.; Papini, A.

    2009-07-01

    We propose a Generalized Pattern Search (GPS) method to solve a class of nonsmooth minimization problems, where the set of nondifferentiability is included in the union of known hyperplanes and, therefore, is highly structured. Both unconstrained and linearly constrained problems are considered. At each iteration the set of poll directions is enforced to conform to the geometry of both the nondifferentiability set and the boundary of the feasible region, near the current iterate. This is the key issue to guarantee the convergence of certain subsequences of iterates to points which satisfy first-order optimality conditions. Numerical experiments on some classical problems validate the method.

  19. On the theory of singular optimal controls in dynamic systems with control delay

    NASA Astrophysics Data System (ADS)

    Mardanov, M. J.; Melikov, T. K.

    2017-05-01

    An optimal control problem with a control delay is considered, and a broader class of singular (in the classical sense) controls is investigated. Various sequences of necessary conditions for the optimality of singular controls are obtained in recurrent form. These optimality conditions include analogues of the Kelley, Kopp-Moyer, and R. Gabasov conditions, as well as equality-type conditions. In the proof of the main results, the variation of the control is defined using Legendre polynomials.

  20. Optimally stopped variational quantum algorithms

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure by benchmarking VQA as a solver for quadratic unconstrained binary optimization problems. Moreover, we show that a better choice of cost function for the classical routine can significantly improve the performance of the VQA algorithm and even improve its scaling properties.

  1. Solving NP-Hard Problems with Physarum-Based Ant Colony System.

    PubMed

    Liu, Yuxin; Gao, Chao; Zhang, Zili; Lu, Yuxiao; Chen, Shi; Liang, Mingxin; Tao, Li

    2017-01-01

    NP-hard problems exist in many real world applications. Ant colony optimization (ACO) algorithms can provide approximate solutions for those NP-hard problems, but the performance of ACO algorithms is significantly reduced due to premature convergence and weak robustness, etc. With these observations in mind, this paper proposes a Physarum-based pheromone matrix optimization strategy in ant colony system (ACS) for solving NP-hard problems such as traveling salesman problem (TSP) and 0/1 knapsack problem (0/1 KP). In the Physarum-inspired mathematical model, one of the unique characteristics is that critical tubes can be reserved in the process of network evolution. The optimized updating strategy employs the unique feature and accelerates the positive feedback process in ACS, which contributes to the quick convergence of the optimal solution. Some experiments were conducted using both benchmark and real datasets. The experimental results show that the optimized ACS outperforms other meta-heuristic algorithms in accuracy and robustness for solving TSPs. Meanwhile, the convergence rate and robustness for solving 0/1 KPs are better than those of classical ACS.
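
    For reference, a compact sketch (ours, plain ant colony machinery without the Physarum-based pheromone-matrix optimization) of tour construction and pheromone update on a small random TSP; all parameter values are conventional defaults, not the paper's settings.

        import math, random

        def ant_colony_tsp(pts, n_ants=10, iters=100, alpha=1.0, beta=3.0, rho=0.1):
            """Basic ant colony loop for TSP: tours are built city by city with
            probability ~ pheromone^alpha * (1/distance)^beta, then the best tour
            is reinforced after evaporation (positive feedback)."""
            n = len(pts)
            d = [[math.dist(pts[i], pts[j]) for j in range(n)] for i in range(n)]
            tau = [[1.0] * n for _ in range(n)]              # pheromone matrix
            best_len, best_tour = float("inf"), None
            for _ in range(iters):
                for _ in range(n_ants):
                    tour, unvisited = [0], set(range(1, n))
                    while unvisited:
                        i, cand = tour[-1], list(unvisited)
                        w = [tau[i][j] ** alpha / d[i][j] ** beta for j in cand]
                        nxt = random.choices(cand, weights=w)[0]
                        tour.append(nxt)
                        unvisited.remove(nxt)
                    length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
                    if length < best_len:
                        best_len, best_tour = length, tour
                tau = [[(1 - rho) * t for t in row] for row in tau]   # evaporation
                for k in range(n):                                    # reinforce best tour
                    i, j = best_tour[k], best_tour[(k + 1) % n]
                    tau[i][j] += 1.0 / best_len
                    tau[j][i] += 1.0 / best_len
            return best_len, best_tour

        cities = [(random.random(), random.random()) for _ in range(12)]
        print(ant_colony_tsp(cities)[0])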

  2. Dynamic Grover search: applications in recommendation systems and optimization problems

    NASA Astrophysics Data System (ADS)

    Chakrabarty, Indranil; Khan, Shahzor; Singh, Vanshdeep

    2017-06-01

    In recent years, we have seen that the Grover search algorithm (Proceedings, 28th annual ACM symposium on the theory of computing, pp. 212-219, 1996), by using quantum parallelism, has revolutionized the solution of a huge class of NP problems in comparison to classical systems. In this work, we explore the idea of extending the Grover search algorithm to approximate algorithms. We analyze the applicability of Grover search to processing an unstructured database with a dynamic selection function, in contrast to the static selection function used in the original work. We show that this alteration allows us to extend the application of Grover search to the field of randomized search algorithms. Further, we use the dynamic Grover search algorithm to define the goals for a recommendation system, based on which we propose a recommendation algorithm that uses a binomial similarity distribution space, giving us a quadratic speedup over traditional classical unstructured recommendation systems. Finally, we see how dynamic Grover search can be used to tackle a wide range of optimization problems where we improve complexity over existing optimization algorithms.

  3. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.

    2016-10-01

    Meta-heuristic algorithms are problem-solving methods which try to find good-enough solutions to very hard optimization problems, at a reasonable computation time, where classical approaches fail or cannot even be applied. Many existing meta-heuristic approaches are nature-inspired techniques, which work by simulating or modeling different natural processes in a computer. Historically, many of the most successful meta-heuristic approaches have had a biological inspiration, such as evolutionary computation or swarm intelligence paradigms, but in the last few years new approaches based on modeling nonlinear physics processes have been proposed and applied with success. Nonlinear physics processes, modeled as optimization algorithms, are able to produce completely new search procedures, with extremely effective exploration capabilities in many cases, which are able to outperform existing optimization approaches. In this paper we review the most important optimization algorithms based on nonlinear physics, how they have been constructed from specific modeling of real phenomena, and their novelty in comparison with alternative existing optimization algorithms. We first review important concepts on optimization problems, search spaces and problem difficulty. Then, the usefulness of heuristic and meta-heuristic approaches for facing hard optimization problems is introduced, and some of the main existing classical versions of these algorithms are reviewed. The mathematical framework of different nonlinear physics processes is then introduced as a preparatory step to reviewing in detail the most important meta-heuristics based on them. A discussion of the novelty of these approaches, their main computational implementation and design issues, and the evaluation of a novel meta-heuristic based on Strange Attractors mutation completes the review of these techniques. We also describe some of the most important application areas, in a broad sense, of meta-heuristics, and describe freely accessible software frameworks which can be used to ease the implementation of these algorithms.

  4. One- and two-objective approaches to an area-constrained habitat reserve site selection problem

    Treesearch

    Stephanie Snyder; Charles ReVelle; Robert Haight

    2004-01-01

    We compare several ways to model a habitat reserve site selection problem in which an upper bound on the total area of the selected sites is included. The models are cast as optimization coverage models drawn from the location science literature. Classic covering problems typically include a constraint on the number of sites that can be selected. If potential reserve...

  5. The Bayesian Learning Automaton — Empirical Evaluation with Two-Armed Bernoulli Bandit Problems

    NASA Astrophysics Data System (ADS)

    Granmo, Ole-Christoffer

    The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem where an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of each arm are unknown, and thus one must balance between exploiting existing knowledge about the arms, and obtaining new information.
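
    The Bayesian Learning Automaton keeps a Beta posterior over each arm's reward probability and pulls the arm whose posterior sample is larger, which in modern terminology is Thompson sampling; below is a minimal sketch under that reading (ours, with illustrative arm probabilities).

        import random

        def bayesian_automaton(p_arms, pulls=10000):
            """Two-armed Bernoulli bandit: pull the arm whose Beta posterior sample
            is larger; the posterior itself balances exploration and exploitation."""
            wins = [1, 1]; losses = [1, 1]              # Beta(1, 1) priors
            reward = 0
            for _ in range(pulls):
                samples = [random.betavariate(wins[a], losses[a]) for a in (0, 1)]
                a = samples.index(max(samples))          # posterior sampling
                if random.random() < p_arms[a]:          # pull arm a
                    wins[a] += 1; reward += 1
                else:
                    losses[a] += 1
            return reward / pulls

        print(bayesian_automaton([0.9, 0.6]))            # should approach 0.9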

  6. Airfoil Design and Optimization by the One-Shot Method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1995-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  7. Airfoil optimization by the one-shot method

    NASA Technical Reports Server (NTRS)

    Kuruvila, G.; Taasan, Shlomo; Salas, M. D.

    1994-01-01

    An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (Governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Language multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.

  8. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    Project Scheduling Problem (RCPSP) is a kind of important scheduling problem. To achieve a certain optimal goal such as the shortest duration, the smallest cost, the resource balance and so on, it is required to arrange the start and finish of all tasks under the condition of satisfying project timing constraints and resource constraints. In theory, the problem belongs to the NP-hard problem, and the model is abundant. Many combinatorial optimization problems are special cases of RCPSP, such as job shop scheduling, flow shop scheduling and so on. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP problem and achieved remarkable results. Vast scholars have also studied the improved genetic algorithm for the RCPSP problem, which makes it to solve the RCPSP problem more efficiently and accurately. However, for the selection of the main parameters of the genetic algorithm, there is no parameter optimization in these studies. Generally, we used the empirical method, but it cannot ensure to meet the optimal parameters. In this paper, the problem was carried out, which is the blind selection of parameters in the process of solving the RCPSP problem. We made sampling analysis, the establishment of proxy model and ultimately solved the optimal parameters.

  9. Preliminary Development of an Object-Oriented Optimization Tool

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi

    2011-01-01

    The National Aeronautics and Space Administration Dryden Flight Research Center has developed a FORTRAN-based object-oriented optimization (O3) tool that leverages existing tools and practices and allows easy integration and adoption of new state-of-the-art software. The object-oriented framework can integrate the analysis codes for multiple disciplines, as opposed to relying on one code to perform analysis for all disciplines. Optimization can thus take place within each discipline module, or in a loop between the central executive module and the discipline modules, or both. Six sample optimization problems are presented. The first four sample problems are based on simple mathematical equations; the fifth and sixth problems consider a three-bar truss, which is a classical example in structural synthesis. Instructions for preparing input data for the O3 tool are presented.

  10. Computational study of engine external aerodynamics as a part of multidisciplinary optimization procedure

    NASA Astrophysics Data System (ADS)

    Savelyev, Andrey; Anisimov, Kirill; Kazhan, Egor; Kursakov, Innocentiy; Lysenkov, Alexandr

    2016-10-01

    The paper is devoted to the development of methodology to optimize external aerodynamics of the engine. Optimization procedure is based on numerical solution of the Reynolds-averaged Navier-Stokes equations. As a method of optimization the surrogate based method is used. As a test problem optimal shape design of turbofan nacelle is considered. The results of the first stage, which investigates classic airplane configuration with engine located under the wing, are presented. Described optimization procedure is considered in the context of multidisciplinary optimization of the 3rd generation, developed in the project AGILE.

  11. Optimal trajectories for aeroassisted orbital transfer

    NASA Technical Reports Server (NTRS)

    Miele, A.; Venkataraman, P.

    1983-01-01

    Consideration is given to classical and minimax problems involved in aeroassisted transfer from high earth orbit (HEO) to low earth orbit (LEO). The transfer is restricted to coplanar operation, with trajectory control effected by means of lift modulation. The performance of the maneuver is indexed to the energy expenditure or, alternatively, the time integral of the heating rate. Firist-order optimality conditions are defined for the classical approach, as are a sequential gradient-restoration algorithm and a combined gradient-restoration algorithm. Minimization techniques are presented for the aeroassisted transfer energy consumption and time-delay integral of the heating rate, as well as minimization of the pressure. It is shown that the eigenvalues of the Jacobian matrix of the differential system is both stiff and unstable, implying that the sequential gradient restoration algorithm in its present version is unsuitable. A new method, involving a multipoint approach to the two-poing boundary value problem, is recommended.

  12. Gradient-based Optimization for Poroelastic and Viscoelastic MR Elastography

    PubMed Central

    Tan, Likun; McGarry, Matthew D.J.; Van Houten, Elijah E.W.; Ji, Ming; Solamen, Ligin; Weaver, John B.

    2017-01-01

    We describe an efficient gradient computation for solving inverse problems arising in magnetic resonance elastography (MRE). The algorithm can be considered as a generalized ‘adjoint method’ based on a Lagrangian formulation. One requirement for the classic adjoint method is assurance of the self-adjoint property of the stiffness matrix in the elasticity problem. In this paper, we show this property is no longer a necessary condition in our algorithm, but the computational performance can be as efficient as the classic method, which involves only two forward solutions and is independent of the number of parameters to be estimated. The algorithm is developed and implemented in material property reconstructions using poroelastic and viscoelastic modeling. Various gradient- and Hessian-based optimization techniques have been tested on simulation, phantom and in vivo brain data. The numerical results show the feasibility and the efficiency of the proposed scheme for gradient calculation. PMID:27608454

  13. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid, undermining the performance of the algorithms. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms the SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.

  14. Guidance and flight control law development for hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Markopoulos, N.

    1993-01-01

    During the third reporting period our efforts were focused on a reformulation of the optimal control problem involving active state-variable inequality constraints. In the reformulated problem the optimization is carried out not with respect to all controllers, but only with respect to asymptotic controllers leading to the state constraint boundary. Intimately connected with the traditional formulation is the fact that when the reduced solution for such problems lies on a state constraint boundary, the corresponding boundary layer transitions are of finite time in the stretched time scale. Thus, it has been impossible so far to apply the classical asymptotic boundary layer theory to such problems. Moreover, the traditional formulation leads to optimal controllers that are one-sided, that is, they break down when a disturbance throws the system on the prohibited side of the state constraint boundary.

  15. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    NASA Astrophysics Data System (ADS)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
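
    For orientation, the classical double-loop Monte Carlo estimator of the expected information gain is sketched below for a toy scalar linear-Gaussian model (our illustration; the paper's contribution is to replace the inner loop with Laplace-based importance sampling):

        import numpy as np

        rng = np.random.default_rng(1)

        def eig_dlmc(design, n_outer=500, n_inner=500, sigma=0.5):
            """Double-loop MC estimate of expected information gain for the model
            y = design*theta + noise, theta ~ N(0, 1). The inner loop estimates the
            evidence p(y); underflow looms when n_inner is small because most inner
            samples give tiny likelihoods."""
            def loglik(y, theta):
                return (-0.5 * ((y - design * theta) / sigma) ** 2
                        - np.log(sigma * np.sqrt(2 * np.pi)))
            total = 0.0
            for _ in range(n_outer):
                theta = rng.standard_normal()
                y = design * theta + sigma * rng.standard_normal()
                inner = loglik(y, rng.standard_normal(n_inner))   # vectorized inner loop
                log_evidence = np.logaddexp.reduce(inner) - np.log(n_inner)
                total += loglik(y, theta) - log_evidence
            return total / n_outer

        # A larger |design| makes the experiment more informative about theta
        for xi in (0.1, 1.0, 3.0):
            print(f"design={xi}: EIG ~ {eig_dlmc(xi):.3f}")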

  16. Optimal solar sail planetocentric trajectories

    NASA Technical Reports Server (NTRS)

    Sackett, L. L.

    1977-01-01

    The analysis of the solar sail planetocentric optimal trajectory problem is described. A computer program was produced to calculate optimal trajectories for a limited performance analysis. A square-sail model is included, and some consideration is given to a heliogyro sail model. Transfers from orbit to a subescape point and from orbit to orbit are considered. Trajectories about the four inner planets can be calculated, and shadowing, oblateness, and solar motion may be included. Equinoctial orbital elements are used to avoid the classical singularities, and the method of averaging is applied to increase computational speed. Solution of the two-point boundary value problem which arises from the application of optimization theory is accomplished with a Newton procedure. Time-optimal trajectories are emphasized, but a penalty function has been considered to prevent trajectories which intersect a planet's surface.

  17. Experimental demonstration of a quantum annealing algorithm for the traveling salesman problem in a nuclear-magnetic-resonance quantum simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hongwei; Kong, Xi

    The method of quantum annealing (QA) is a promising way of solving many optimization problems in both classical and quantum information theory. The main advantage of this approach, compared with the gate model, is the robustness of the operations against errors originating from both external controls and the environment. In this work, we succeed in experimentally demonstrating an application of QA to a simplified version of the traveling salesman problem by simulating the corresponding Schroedinger evolution with an NMR quantum simulator. The experimental results unambiguously yielded the optimal traveling route, in good agreement with the theoretical prediction.

  18. Optimal sequential measurements for bipartite state discrimination

    NASA Astrophysics Data System (ADS)

    Croke, Sarah; Barnett, Stephen M.; Weir, Graeme

    2017-05-01

    State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.

  19. Classical Optimal Control for Energy Minimization Based On Diffeomorphic Modulation under Observable-Response-Preserving Homotopy.

    PubMed

    Soley, Micheline B; Markmann, Andreas; Batista, Victor S

    2018-06-12

    We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.

  20. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

    The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
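
    Posed this way, the two-bar truss reduces to a small nonlinear program that any constrained optimizer can handle. The sketch below uses the classic tubular two-bar-truss formulas with illustrative constants, and SciPy's SLSQP standing in for CONMIN, ADS, or NPSOL; it illustrates the optimization problem, not the SOL language itself.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative constants (consistent units assumed; not taken from the report).
P, B, t = 33e3, 30.0, 0.1            # apex load, half-span, tube wall thickness
rho, E, sigma_max = 0.3, 3.0e7, 1.0e5

def weight(x):                        # objective: total truss weight
    d, H = x
    return 2.0 * rho * np.pi * d * t * np.sqrt(B**2 + H**2)

def stress(x):                        # axial stress in each bar
    d, H = x
    return P * np.sqrt(B**2 + H**2) / (np.pi * d * t * H)

def buckling_margin(x):               # Euler buckling stress minus actual stress
    d, H = x
    sigma_cr = np.pi**2 * E * (d**2 + t**2) / (8.0 * (B**2 + H**2))
    return sigma_cr - stress(x)

cons = [{"type": "ineq", "fun": lambda x: sigma_max - stress(x)},
        {"type": "ineq", "fun": buckling_margin}]
res = minimize(weight, x0=[2.0, 30.0], bounds=[(0.5, 5.0), (5.0, 60.0)],
               constraints=cons, method="SLSQP")
print("optimal (diameter, height):", res.x, "weight:", res.fun)
```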

  1. Optimal processor assignment for pipeline computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Simha, Rahul; Choudhury, Alok N.; Narahari, Bhagirath

    1991-01-01

    The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem for pipelined computations. Given a set of tasks and their precedence constraints, along with their experimentally determined individual response times for different processor sizes, find an assignment of processors to tasks. Two objectives are of interest: minimal response time given a throughput requirement, and maximal throughput given a response time requirement. These assignment problems differ considerably from the classical mapping problem, in which several tasks share a processor; instead, it is assumed that a large number of processors are to be assigned to a relatively small number of tasks. Efficient assignment algorithms were developed for different classes of task structures. For a p-processor system and a series-parallel precedence graph with n constituent tasks, an O(np²) algorithm is provided that finds the optimal assignment for the response time optimization problem; the assignment optimizing the constrained throughput can be found in O(np² log p) time. Special cases of linear, independent, and tree graphs are also considered.
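
    For the special case of a linear (chain) pipeline, the response-time problem is exactly the kind of O(np²) dynamic program described. A sketch under that assumption, with hypothetical measured response times; the paper's series-parallel algorithm generalizes this recursion.

```python
import math

def assign(f, p):
    """f[i][k-1] = measured response time of task i on k processors.
    Returns the minimal pipeline response time (sum of stage times) using
    exactly p processors, plus the allocation; O(n p^2) dynamic program."""
    n = len(f)
    dp = [[math.inf] * (p + 1) for _ in range(n + 1)]
    choice = [[0] * (p + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for q in range(i, p + 1):
            # Task i may take k processors, leaving >= i-1 for earlier tasks.
            for k in range(1, min(q - (i - 1), len(f[i - 1])) + 1):
                cand = dp[i - 1][q - k] + f[i - 1][k - 1]
                if cand < dp[i][q]:
                    dp[i][q], choice[i][q] = cand, k
    alloc, q = [], p
    for i in range(n, 0, -1):          # backtrack the optimal allocation
        alloc.append(choice[i][q])
        q -= choice[i][q]
    return dp[n][p], alloc[::-1]

# Hypothetical profile: 3 tasks, each measured on 1..3 processors.
f = [[9, 5, 4], [8, 4, 3], [6, 5, 5]]
print(assign(f, 5))   # -> (15.0, [2, 2, 1])
```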

  2. Reaching Agreement in Quantum Hybrid Networks.

    PubMed

    Shi, Guodong; Li, Bo; Miao, Zibo; Dower, Peter M; James, Matthew R

    2017-07-20

    We consider a basic quantum hybrid network model consisting of a number of nodes each holding a qubit, for which the aim is to drive the network to a consensus in the sense that all qubits reach a common state. Projective measurements are applied serving as control means, and the measurement results are exchanged among the nodes via classical communication channels. In this way the quantum-operation/classical-communication nature of hybrid quantum networks is captured, although coherent states and joint operations are not taken into consideration in order to facilitate a clear and explicit analysis. We show how to carry out centralized optimal path planning for this network with all-to-all classical communications, in which case the problem becomes a stochastic optimal control problem with a continuous action space. To overcome the computation and communication obstacles facing the centralized solutions, we also develop a distributed Pairwise Qubit Projection (PQP) algorithm, where pairs of nodes meet at a given time and respectively perform measurements at their geometric average. We show that the qubit states are driven to a consensus almost surely along the proposed PQP algorithm, and that the expected qubit density operators converge to the average of the network's initial values.
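
    As a purely classical caricature (not the paper's measurement-based dynamics), gossip-style pairwise averaging on the Bloch sphere already illustrates how repeated pairwise "meetings at the midpoint" drive randomly initialized qubit directions to a common state:

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.normal(size=(8, 3))                      # 8 random pure-qubit directions
v /= np.linalg.norm(v, axis=1, keepdims=True)    # as unit Bloch vectors

for _ in range(2000):
    i, j = rng.choice(len(v), size=2, replace=False)
    mid = v[i] + v[j]                 # (near-antipodal pairs would need care)
    mid /= np.linalg.norm(mid)        # "meet" at the geodesic midpoint
    v[i] = v[j] = mid

print("max spread:", np.linalg.norm(v - v.mean(axis=0), axis=1).max())  # ~0
```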

  3. Aerodynamic design of axisymmetric hypersonic wind-tunnel nozzles using least-squares/parabolized Navier-Stokes procedure

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1992-01-01

    A new procedure unifying the best of present classical design practices, CFD, and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind tunnel nozzles. This procedure can be employed to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been demonstrated to break down. The advantages of this procedure are that it allows full utilization of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, may be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure.

  4. Exponential instability in the fractional Calderón problem

    NASA Astrophysics Data System (ADS)

    Rüland, Angkana; Salo, Mikko

    2018-04-01

    In this paper we prove the exponential instability of the fractional Calderón problem and thus prove the optimality of the logarithmic stability estimate from Rüland and Salo (2017 arXiv:1708.06294). In order to infer this result, we follow the strategy introduced by Mandache in (2001 Inverse Problems 17 1435) for the standard Calderón problem. Here we exploit a close relation between the fractional Calderón problem and the classical Poisson operator. Moreover, using the construction of a suitable orthonormal basis, we also prove (almost) optimality of the Runge approximation result for the fractional Laplacian, which was derived in Rüland and Salo (2017 arXiv:1708.06294). Finally, in one dimension, we show a close relation between the fractional Calderón problem and the truncated Hilbert transform.

  5. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    NASA Astrophysics Data System (ADS)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    General strategic bidding procedure has been formulated in the literature as a bi-level searching problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14-bus as well as the IEEE 30-bus system, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  6. Approximation of Nash equilibria and the network community structure detection problem

    PubMed Central

    2017-01-01

    Game theory based methods designed to solve the problem of community structure detection in complex networks have emerged in recent years as an alternative to classical and optimization based approaches. The Mixed Nash Extremal Optimization uses a generative relation for the characterization of Nash equilibria to identify the community structure of a network by converting the problem into a non-cooperative game. This paper proposes a method to enhance this algorithm by reducing the number of payoff function evaluations. Numerical experiments performed on synthetic and real-world networks show that this approach is efficient, with results better or just as good as other state-of-the-art methods. PMID:28467496

  7. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE PAGES

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg; ...

    2018-05-03

    This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.
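
    The QUBO encoding referred to is standard: reward each selected vertex and penalize each selected non-adjacent pair, so that minimum-energy states are maximum cliques. A sketch on an invented toy graph, with brute force standing in for the annealer:

```python
import itertools
import numpy as np

# Toy graph on 6 vertices; the largest clique is {0, 1, 2, 3}.
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)}
n = 6
B = 2.0   # penalty weight; any B > 1 makes every QUBO minimum a clique

# QUBO matrix Q: minimize x^T Q x over x in {0,1}^n.
Q = np.zeros((n, n))
for i in range(n):
    Q[i, i] = -1.0                        # reward selecting a vertex
for i, j in itertools.combinations(range(n), 2):
    if (i, j) not in edges and (j, i) not in edges:
        Q[i, j] = Q[j, i] = B / 2.0       # penalize a selected non-adjacent pair

best = min((np.array(x) for x in itertools.product([0, 1], repeat=n)),
           key=lambda x: x @ Q @ x)       # brute force stands in for the annealer
print("max clique:", [i for i in range(n) if best[i]])
```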

  8. Finding Maximum Cliques on the D-Wave Quantum Annealer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chapuis, Guillaume; Djidjev, Hristo; Hahn, Georg

    This work assesses the performance of the D-Wave 2X (DW) quantum annealer for finding a maximum clique in a graph, one of the most fundamental and important NP-hard problems. Because the size of the largest graphs DW can directly solve is quite small (usually around 45 vertices), we also consider decomposition algorithms intended for larger graphs and analyze their performance. For smaller graphs that fit DW, we provide formulations of the maximum clique problem as a quadratic unconstrained binary optimization (QUBO) problem, which is one of the two input types (together with the Ising model) acceptable by the machine, and compare several quantum implementations to current classical algorithms such as simulated annealing, Gurobi, and third-party clique finding heuristics. We further estimate the contributions of the quantum phase of the quantum annealer and the classical post-processing phase typically used to enhance each solution returned by DW. We demonstrate that on random graphs that fit DW, no quantum speedup can be observed compared with the classical algorithms. On the other hand, for instances specifically designed to fit well the DW qubit interconnection network, we observe substantial speed-ups in computing time over classical approaches.

  9. Quantum-Inspired Maximizer

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: the quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the quantum-inspired maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).

  10. Compiling Planning into Quantum Optimization Problems: A Comparative Study

    DTIC Science & Technology

    2015-06-07

    and Sipser, M. 2000. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106. Fikes, R. E., and Nilsson, N. J. 1972. STRIPS: A new...become available: quantum annealing. Quantum annealing is one of the most accessible quantum algorithms for a computer science audience not versed...in quantum computing because of its close ties to classical optimization algorithms such as simulated annealing. While large-scale universal quantum

  11. Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.

    PubMed

    Lennartsson, Jan; Lindberg, Carl

    2015-01-01

    To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. In this paper we seek to provide a theoretical justification for this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.
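
    The one-period problem that the continuous strategy reduces to has a closed form: with excess-return vector mu, covariance Sigma, and risk aversion gamma, the optimal active weights are w = Sigma^{-1} mu / gamma. A minimal sketch with illustrative numbers:

```python
import numpy as np

# Illustrative inputs: expected returns in excess of the benchmark, covariance.
mu = np.array([0.04, 0.06, 0.02])
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.08]])
gamma = 3.0   # risk aversion of the exponential utility

# One-period mean-variance solution: w = Sigma^{-1} mu / gamma.
w = np.linalg.solve(Sigma, mu) / gamma
print("active weights relative to the benchmark:", w)
```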

  12. Complexity of the Quantum Adiabatic Algorithm

    NASA Astrophysics Data System (ADS)

    Hen, Itay

    2013-03-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms. Here, we discuss several aspects of the quantum adiabatic algorithm: We analyze the efficiency of the algorithm on several "hard" (NP) computational problems. Studying the size dependence of the typical minimum energy gap of the Hamiltonians of these problems using quantum Monte Carlo methods, we find that while for most problems the minimum gap decreases exponentially with the size of the problem, indicating that the QAA is not more efficient than existing classical search algorithms, for other problems there is evidence to suggest that the gap may be polynomial near the phase transition. We also discuss applications of the QAA to "real life" problems and how they can be implemented on currently available (albeit prototypical) quantum hardware such as "D-Wave One", which imposes serious restrictions as to which type of problems may be tested. Finally, we discuss different approaches to find improved implementations of the algorithm such as local adiabatic evolution, adaptive methods, local search in Hamiltonian space and others.

  13. Solving an inverse eigenvalue problem with triple constraints on eigenvalues, singular values, and diagonal elements

    NASA Astrophysics Data System (ADS)

    Wu, Sheng-Jhih; Chu, Moody T.

    2017-08-01

    An inverse eigenvalue problem usually entails two constraints, one conditioned upon the spectrum and the other on the structure. This paper investigates the problem where triple constraints of eigenvalues, singular values, and diagonal entries are imposed simultaneously. An approach combining an eclectic mix of skills from differential geometry, optimization theory, and analytic gradient flow is employed to prove the solvability of such a problem. The result generalizes the classical Mirsky, Sing-Thompson, and Weyl-Horn theorems concerning the respective majorization relationships between any two of the arrays of main diagonal entries, eigenvalues, and singular values. The existence theory fills a gap in the classical matrix theory. The problem might find applications in wireless communication and quantum information science. The technique employed can be implemented as a first-step numerical method for constructing the matrix. With slight modification, the approach might be used to explore similar types of inverse problems where the prescribed entries are at general locations.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz-Dobrzanski, Rafal; Lewenstein, Maciej; Institut fuer Theoretische Physik, Universitaet Hannover, D-30167 Hannover

    We solve the problem of the optimal cloning of pure entangled two-qubit states with a fixed degree of entanglement using local operations and classical communication. We show that, amazingly, classical communication between the parties can improve the fidelity of local cloning if and only if the initial entanglement is higher than a certain critical value. It is completely useless for weakly entangled states. We also show that bound entangled states with positive partial transpose are not useful as a resource to improve the best local cloning fidelity.

  15. Solving multi-objective optimization problems in conservation with the reference point method

    PubMed Central

    Dujardin, Yann; Chadès, Iadine

    2018-01-01

    Managing the biodiversity extinction crisis requires wise decision-making processes able to account for the limited resources available. In most decision problems in conservation biology, several conflicting objectives have to be taken into account. Most methods used in conservation either provide suboptimal solutions or use strong assumptions about the decision-maker’s preferences. Our paper reviews some of the existing approaches to solve multi-objective decision problems and presents new multi-objective linear programming formulations of two multi-objective optimization problems in conservation, allowing the use of a reference point approach. Reference point approaches solve multi-objective optimization problems by interactively representing the preferences of the decision-maker with a point in the criteria (objectives) space, called the reference point. We modelled and solved the following two problems in conservation: a dynamic multi-species management problem under uncertainty and a spatial allocation resource management problem. Results show that the reference point method outperforms classic methods while illustrating the use of an interactive methodology for solving combinatorial problems with multiple objectives. The method is general and can be adapted to a wide range of ecological combinatorial problems. PMID:29293650
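
    Reference point approaches typically rank candidate solutions with an achievement scalarizing function: the weighted worst shortfall from the decision-maker's aspiration levels, plus a small augmentation term. A toy sketch below; the candidate plans, reference point, and weights are invented for illustration and are not the paper's case studies.

```python
import numpy as np

# Hypothetical candidate plans scored on two objectives to MAXIMIZE:
# (expected species persistence, hectares protected).
candidates = np.array([[0.70, 120.0],
                       [0.55, 300.0],
                       [0.62, 210.0],
                       [0.75,  80.0]])

def achievement(f, ref, weights, rho=1e-3):
    """Augmented Chebyshev achievement function (smaller is better):
    worst weighted shortfall from the reference point, plus a tie-breaker."""
    shortfall = weights * (ref - f)
    return shortfall.max() + rho * shortfall.sum()

ref = np.array([0.65, 200.0])              # decision-maker's aspiration levels
weights = 1.0 / np.array([0.75, 300.0])    # normalize the objective scales
best = min(candidates, key=lambda f: achievement(f, ref, weights))
print("plan closest to the reference point:", best)
```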

  16. Combinatorial optimization problem solution based on improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Peng

    2017-08-01

    Traveling salesman problem (TSP) is a classic combinatorial optimization problem and a simplified form of many complex problems. The parameters that affect the performance of a genetic algorithm mainly include the quality of the initial population, the population size, and the crossover and mutation probabilities. Accordingly, an improved genetic algorithm for solving TSP is put forward. The population is graded according to individual similarity, and different operations are performed on different levels of individuals. In addition, an elitist retention strategy is adopted at each level, and the crossover and mutation operators are improved. Several experiments are designed to verify the feasibility of the algorithm. Analysis of the experimental results shows that the improved algorithm improves the accuracy and efficiency of the solution.
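
    A minimal genetic algorithm for TSP with the ingredients mentioned, elitist retention plus crossover and mutation, is sketched below; ordered crossover and swap mutation are generic stand-ins for the paper's improved operators, and the similarity-based population grading is omitted.

```python
import random

random.seed(1)
CITIES = [(random.random(), random.random()) for _ in range(12)]

def tour_len(t):
    return sum(((CITIES[t[i]][0] - CITIES[t[(i + 1) % len(t)]][0]) ** 2 +
                (CITIES[t[i]][1] - CITIES[t[(i + 1) % len(t)]][1]) ** 2) ** 0.5
               for i in range(len(t)))

def ordered_crossover(a, b):
    """Copy a slice of parent a; fill remaining slots in parent b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = [c for c in b if c not in child]
    for k in range(len(a)):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def mutate(t, p=0.2):
    if random.random() < p:          # swap mutation
        i, j = random.sample(range(len(t)), 2)
        t[i], t[j] = t[j], t[i]
    return t

pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(100)]
for gen in range(300):
    pop.sort(key=tour_len)
    elite = pop[:10]                       # elitist retention
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = random.sample(pop[:50], 2)  # truncation selection of parents
        children.append(mutate(ordered_crossover(a, b)))
    pop = elite + children
print("best tour length:", tour_len(min(pop, key=tour_len)))
```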

  17. Diffusion Monte Carlo approach versus adiabatic computation for local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Bringewatt, Jacob; Dorland, William; Jordan, Stephen P.; Mink, Alan

    2018-02-01

    Most research regarding quantum adiabatic optimization has focused on stoquastic Hamiltonians, whose ground states can be expressed with only real non-negative amplitudes and for which destructive interference is therefore not manifest. This raises the question of whether classical Monte Carlo algorithms can efficiently simulate quantum adiabatic optimization with stoquastic Hamiltonians. Recent results have given counterexamples in which path-integral and diffusion Monte Carlo fail to do so. However, most adiabatic optimization algorithms, such as for solving MAX-k-SAT problems, use k-local Hamiltonians, whereas our previous counterexample for diffusion Monte Carlo involved n-body interactions. Here we present a 6-local counterexample which demonstrates that even for these local Hamiltonians there are cases where diffusion Monte Carlo cannot efficiently simulate quantum adiabatic optimization. Furthermore, we perform empirical testing of diffusion Monte Carlo on a standard well-studied class of permutation-symmetric tunneling problems and similarly find large advantages for quantum optimization over diffusion Monte Carlo.

  18. Aerodynamic design and optimization in one shot

    NASA Technical Reports Server (NTRS)

    Ta'asan, Shlomo; Kuruvila, G.; Salas, M. D.

    1992-01-01

    This paper describes an efficient numerical approach for the design and optimization of aerodynamic bodies. As in classical optimal control methods, the present approach introduces a cost function and a costate variable (Lagrange multiplier) in order to achieve a minimum. High efficiency is achieved by using a multigrid technique to solve for all the unknowns simultaneously, but restricting work on design variables only to grids on which their changes produce nonsmooth perturbations. Thus, the effort required to evaluate design variables that have nonlocal effects on the solution is confined to the coarse grids. However, if a variable has a nonsmooth local effect on the solution in some neighborhood, it is relaxed in that neighborhood on finer grids. The cost of solving the optimal control problem is shown to be approximately two to three times the cost of the equivalent analysis problem. Examples are presented to illustrate the application of the method to aerodynamic design and constraint optimization.

  19. Classical statistical mechanics approach to multipartite entanglement

    NASA Astrophysics Data System (ADS)

    Facchi, P.; Florio, G.; Marzolino, U.; Parisi, G.; Pascazio, S.

    2010-06-01

    We characterize the multipartite entanglement of a system of n qubits in terms of the distribution function of the bipartite purity over balanced bipartitions. We search for maximally multipartite entangled states, whose average purity is minimal, and recast this optimization problem into a problem of statistical mechanics, by introducing a cost function, a fictitious temperature and a partition function. By investigating the high-temperature expansion, we obtain the first three moments of the distribution. We find that the problem exhibits frustration.

  20. The Optimal Convergence Rate of the p-Version of the Finite Element Method.

    DTIC Science & Technology

    1985-10-01

    commercial and research codes. The p-version and h-p versions are new developments. There is only one commercial code, the system PROBE (Noetic Tech, St. Louis). The theoretical aspects have been studied only recently. The first theoretical paper appeared in 1981 (see [7]). See also [1], [7], [8], [9]...model problem (2.2) (2.3) is a classical case of the elliptic equation problem on a nonsmooth domain. The structure of this problem is well studied

  1. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.

    1993-01-01

    A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers, where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. A LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15 nozzles, a Mach 12 nozzle, and a Mach 18 helium nozzle. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles using different constraints: the first nozzle for a fixed length and exit diameter, and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least-squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.

  2. Active subspace: toward scalable low-rank learning.

    PubMed

    Liu, Guangcan; Yan, Shuicheng

    2012-12-01

    We address the scalability issues in low-rank matrix learning problems. Usually these problems resort to solving nuclear norm regularized optimization problems (NNROPs), which often suffer from high computational complexities if based on existing solvers, especially in large-scale settings. Based on the fact that the optimal solution matrix to an NNROP is often low rank, we revisit the classic mechanism of low-rank matrix factorization, based on which we present an active subspace algorithm for efficiently solving NNROPs by transforming large-scale NNROPs into small-scale problems. The transformation is achieved by factorizing the large solution matrix into the product of a small orthonormal matrix (active subspace) and another small matrix. Although such a transformation generally leads to nonconvex problems, we show that a suboptimal solution can be found by the augmented Lagrange alternating direction method. For the robust PCA (RPCA) (Candès, Li, Ma, & Wright, 2009 ) problem, a typical example of NNROPs, theoretical results verify the suboptimality of the solution produced by our algorithm. For the general NNROPs, we empirically show that our algorithm significantly reduces the computational complexity without loss of optimality.
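
    The expensive step in standard NNROP solvers is the nuclear-norm proximal operator, a full SVD followed by soft-thresholding of the singular values; this is the per-iteration cost that the active-subspace factorization is designed to avoid at scale. A short sketch of that step:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||X||_*,
    i.e. the closed-form minimizer of 0.5 * ||X - M||_F^2 + tau * ||X||_*."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)       # soft-threshold singular values
    return (U * s_shrunk) @ Vt

rng = np.random.default_rng(0)
L = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 60))  # rank-4 ground truth
M = L + 0.01 * rng.normal(size=L.shape)                  # noisy observation
X = svt(M, tau=1.0)
print("rank after thresholding:", np.linalg.matrix_rank(X, tol=1e-6))  # ~4
```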

  3. Model-Based Design of Tree WSNs for Decentralized Detection.

    PubMed

    Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam

    2015-08-20

    The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments have been studied extensively in the literature. The deployment of WSNs in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches.

  4. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  5. Does asymmetric correlation affect portfolio optimization?

    NASA Astrophysics Data System (ADS)

    Fryd, Lukas

    2017-07-01

    The classical portfolio optimization problem does not assume asymmetric behavior of the relationship among asset returns. The existence of an asymmetric response in correlation to bad news could be important information in portfolio optimization. The paper applies the Dynamic Conditional Correlation model (DCC) and its asymmetric version (ADCC) to model asymmetric behavior of conditional correlation. We analyse asymmetric correlation among the S&P index, a bond index, and the spot gold price before the 2008 mortgage crisis. We evaluate the forecast ability of the models during and after the mortgage crisis and demonstrate the impact of asymmetric correlation on the reduction of portfolio variance.

  6. A Sustainable City Planning Algorithm Based on TLBO and Local Search

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Lin, Li; Huang, Xuanxuan; Liu, Yiming; Zhang, Yonggang

    2017-09-01

    Nowadays, how to design a city with more sustainable features has become a central problem in the field of social development, and it has provided a broad stage for the application of artificial intelligence theories and methods. Because the design of a sustainable city is essentially a constraint optimization problem, extensively studied swarm intelligence algorithms are natural candidates for solving it. TLBO (Teaching-Learning-Based Optimization) is a new swarm intelligence algorithm. Its inspiration comes from the "teaching" and "learning" behavior of a class in real life: the evolution of the population is realized by simulating the "teaching" of the teacher and the students "learning" from each other. It features few parameters, efficiency, conceptual simplicity, and ease of implementation. It has been successfully applied to scheduling, planning, configuration, and other fields, where it achieved good results, and it has received increasing attention from artificial intelligence researchers. Based on the classical TLBO algorithm, we propose a TLBO_LS algorithm combined with local search. We design and implement the random generation algorithm and evaluation model for the urban planning problem. Experiments on small and medium-sized randomly generated problems show that the proposed algorithm has clear advantages over the DE algorithm and the classical TLBO algorithm in terms of convergence speed and solution quality.
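
    The two TLBO phases are compact enough to state directly. A minimal sketch of the classical algorithm on a stand-in objective; the local-search extension and the urban-planning evaluation model of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                      # toy objective to minimize (stand-in for a
    return np.sum(x**2)        # city-planning constraint-violation score)

dim, pop_size, iters = 5, 20, 200
lo, hi = -10.0, 10.0
X = rng.uniform(lo, hi, size=(pop_size, dim))

for _ in range(iters):
    scores = np.array([f(x) for x in X])
    teacher = X[scores.argmin()]
    mean = X.mean(axis=0)
    # Teacher phase: pull everyone toward the teacher, away from the mean.
    Tf = rng.integers(1, 3)                       # teaching factor in {1, 2}
    for i in range(pop_size):
        new = np.clip(X[i] + rng.random(dim) * (teacher - Tf * mean), lo, hi)
        if f(new) < f(X[i]):
            X[i] = new                            # greedy acceptance
    # Learner phase: each student learns from a random peer.
    for i in range(pop_size):
        j = rng.integers(pop_size)
        if j == i:
            continue
        direction = X[i] - X[j] if f(X[i]) < f(X[j]) else X[j] - X[i]
        new = np.clip(X[i] + rng.random(dim) * direction, lo, hi)
        if f(new) < f(X[i]):
            X[i] = new

print("best score:", min(f(x) for x in X))
```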

  7. Optimal control, optimization and asymptotic analysis of Purcell's microswimmer model

    NASA Astrophysics Data System (ADS)

    Wiezel, Oren; Or, Yizhar

    2016-11-01

    Purcell's swimmer (1977) is a classic model of a three-link microswimmer that moves by performing periodic shape changes. Becker et al. (2003) showed that the swimmer's direction of net motion is reversed upon increasing the stroke amplitude of joint angles. Tam and Hosoi (2007) used numerical optimization in order to find optimal gaits for maximizing either net displacement or Lighthill's energetic efficiency. In our work, we analytically derive leading-order expressions as well as next-order corrections for both net displacement and energetic efficiency of Purcell's microswimmer. Using these expressions enables us to explicitly show the reversal in direction of motion, as well as obtaining an estimate for the optimal stroke amplitude. We also find the optimal swimmer's geometry for maximizing either displacement or energetic efficiency. Additionally, the gait optimization problem is revisited and analytically formulated as an optimal control system with only two state variables, which can be solved using Pontryagin's maximum principle. It can be shown that the optimal solution must follow a "singular arc". Numerical solution of the boundary value problem is obtained, which exactly reproduces Tam and Hosoi's optimal gait.

  8. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
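
    Every criterion compared here is a functional of the Fisher information matrix, which for unit-variance i.i.d. noise is F = S^T S, with S the sensitivity matrix of the model output with respect to the parameters. A sketch comparing two candidate sampling meshes for the logistic model by D-optimality; the nominal parameter values and meshes are illustrative, not the paper's.

```python
import numpy as np

def logistic(t, r, K, x0=1.0):
    """Verhulst-Pearl logistic growth curve."""
    return K / (1.0 + ((K - x0) / x0) * np.exp(-r * t))

def fisher_information(times, theta, eps=1e-6):
    """F = S^T S for unit-variance noise; sensitivities d x(t_i)/d theta_j
    approximated by central finite differences."""
    S = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[j] += eps
        dn[j] -= eps
        S[:, j] = (logistic(times, *up) - logistic(times, *dn)) / (2 * eps)
    return S.T @ S

theta = (0.7, 17.5)                      # nominal (r, K)
uniform = np.linspace(0.5, 10.0, 8)      # candidate design 1: uniform mesh
clustered = np.array([3.0, 3.5, 4.0, 4.5, 5.0, 9.0, 9.5, 10.0])  # design 2

for name, times in [("uniform", uniform), ("clustered", clustered)]:
    F = fisher_information(times, theta)
    print(name, "det(FIM):", np.linalg.det(F))  # D-optimality: larger is better
```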

  9. Rotation in vibration, optimization, and aeroelastic stability problems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Kaza, K. R. V.

    1974-01-01

    The effects of rotation in the areas of vibrations, dynamic stability, optimization, and aeroelasticity were studied. The governing equations of motion for the study of vibration and dynamic stability of a rapidly rotating deformable body were developed starting from the nonlinear theory of elasticity. Some common features such as the limitations of the classical theory of elasticity, the choice of axis system, the property of self-adjointness, the phenomenon of frequency splitting, shortcomings of stability methods as applied to gyroscopic systems, and the effect of internal and external damping on stability in gyroscopic systems are identified and discussed, and are then applied to three specific problems.

  10. Genetic Algorithm for Initial Orbit Determination with Too Short Arc

    NASA Astrophysics Data System (ADS)

    Li, Xin-ran; Wang, Xin

    2017-01-01

    A huge quantity of too-short-arc (TSA) observational data have been obtained in sky surveys of space objects. However, reasonable results for the TSAs can hardly be obtained with the classical methods of initial orbit determination (IOD). In this paper, the IOD is reduced to a two-stage hierarchical optimization problem containing three variables for each stage. Using the genetic algorithm, a new method of the IOD for TSAs is established, through the selections of the optimized variables and the corresponding genetic operators for specific problems. Numerical experiments based on the real measurements show that the method can provide valid initial values for the follow-up work.

  11. Genetic Algorithm for Initial Orbit Determination with Too Short Arc

    NASA Astrophysics Data System (ADS)

    Li, X. R.; Wang, X.

    2016-01-01

    The sky surveys of space objects have obtained a huge quantity of too-short-arc (TSA) observation data. However, the classical method of initial orbit determination (IOD) can hardly get reasonable results for the TSAs. The IOD is reduced to a two-stage hierarchical optimization problem containing three variables for each stage. Using the genetic algorithm, a new method of the IOD for TSAs is established, through the selection of optimizing variables as well as the corresponding genetic operator for specific problems. Numerical experiments based on the real measurements show that the method can provide valid initial values for the follow-up work.

  12. A study of the influence of the sun on optimal two-impulse Earth-to-Moon trajectories with moderate time of flight in the three-body and four-body models

    NASA Astrophysics Data System (ADS)

    Filho, Luiz Arthur Gagg; da Silva Fernandes, Sandro

    2017-05-01

    In this work, a study about the influence of the Sun on optimal two-impulse Earth-to-Moon trajectories for interior transfers with moderate time of flight is presented, considering the three-body and the four-body models. The optimization criterion is the total characteristic velocity, which represents the fuel consumption of an infinite thrust propulsion system. The optimization problem has been formulated using the classic planar circular restricted three-body problem (PCR3BP) and the planar bi-circular restricted four-body problem (PBR4BP), and it consists of transferring a spacecraft from a circular low Earth orbit (LEO) to a circular low Moon orbit (LMO) with minimum fuel consumption. The Sequential Gradient Restoration Algorithm (SGRA) is applied to determine the optimal solutions. Numerical results are presented for several final altitudes of a clockwise or a counterclockwise circular low Moon orbit considering a specified altitude of a counterclockwise circular low Earth orbit. Two types of analysis are performed: in the first one, the initial position of the Sun is taken as a parameter, and the major parameters describing the optimal trajectories are obtained by solving an optimization problem of one degree of freedom. In the second analysis, an optimization problem with two degrees of freedom is considered, and the initial position of the Sun is taken as an additional unknown.

  13. Design optimization of steel frames using an enhanced firefly algorithm

    NASA Astrophysics Data System (ADS)

    Carbas, Serdar

    2016-12-01

    Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such design optimization problems using classical optimization techniques is difficult. Metaheuristic algorithms provide an alternative way of solving such problems. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of being caught up in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA and its performance is compared with standard FFA as well as with particle swarm and cuckoo search algorithms.
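
    In the baseline FFA that the study modifies, each firefly moves toward every brighter one with attractiveness beta(r) = beta0 * exp(-gamma * r^2) plus an alpha-scaled random step; the paper's enhancement replaces the attractiveness and randomness schedules. A generic sketch of the standard update on a toy objective (not the steel-frame model):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                        # toy objective standing in for frame weight
    return np.sum(x**2)

n, dim, iters = 15, 4, 150
beta0, gamma, alpha = 1.0, 1.0, 0.2      # the parameters the study re-tunes
X = rng.uniform(-5, 5, size=(n, dim))

for _ in range(iters):
    # Brightness is refreshed once per sweep for simplicity.
    brightness = -np.array([f(x) for x in X])    # brighter = lower objective
    for i in range(n):
        for j in range(n):
            if brightness[j] > brightness[i]:
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                X[i] += beta * (X[j] - X[i]) + alpha * rng.uniform(-0.5, 0.5, dim)
    alpha *= 0.98                                    # shrink the random step over time

print("best:", min(f(x) for x in X))
```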

  14. Quantum chi-squared and goodness of fit testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Temme, Kristan; Verstraete, Frank

    2015-01-15

    A quantum mechanical hypothesis test is presented for the hypothesis that a certain setup produces a given quantum state. Although the classical and the quantum problems are very much related to each other, the quantum problem is much richer due to the additional optimization over the measurement basis. A goodness of fit test for i.i.d. quantum states is developed and a max-min characterization for the optimal measurement is introduced. We find the quantum measurement which leads both to the maximal Pitman and Bahadur efficiencies, and determine the associated divergence rates. We discuss the relationship of the quantum goodness of fit test to the problem of estimating multiple parameters from a density matrix. These problems are found to be closely related and we show that the largest error of an optimal strategy, determined by the smallest eigenvalue of the Fisher information matrix, is given by the divergence rate of the goodness of fit test.

  15. Determination of optimal self-drive tourism route using the orienteering problem method

    NASA Astrophysics Data System (ADS)

    Hashim, Zakiah; Ismail, Wan Rosmanira; Ahmad, Norfaieqah

    2013-04-01

    This study was conducted to determine optimal travel routes for self-drive tourism based on allocations of time and expense, by maximizing the total attraction score assigned to the cities involved. Self-drive tourism is a type of tourism where tourists hire or travel in their own vehicle; it only involves tourist destinations that can be linked by a network of roads. Traditionally, the traveling salesman problem (TSP) and multiple traveling salesman problem (MTSP) methods have been used for minimization objectives, such as determining the shortest travel time or distance. This paper takes the alternative, maximization approach of maximizing attraction scores, tested on tourism data for ten cities in Kedah. A set of priority scores is used to set the attraction score of each city. The classical orienteering problem is used to determine the optimal travel route. This approach is then extended to the team orienteering problem, and the two methods are compared. The two models were solved using LINGO 12.0 software. The results indicate that the team orienteering problem model provides a more appropriate solution than the orienteering problem model.
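
    The orienteering problem is easy to state: choose which cities to visit, and in what order, so that the collected score is maximal and the route fits the travel budget. A brute-force sketch on an invented five-city instance (realistic instances require integer programming, as with the LINGO models used here):

```python
import itertools

# Hypothetical city data: attraction scores and symmetric travel times (hours).
scores = [0, 5, 8, 4, 6, 7]        # index 0 is the start/end city
T = [[0, 2, 4, 3, 5, 4],
     [2, 0, 3, 2, 4, 5],
     [4, 3, 0, 2, 3, 3],
     [3, 2, 2, 0, 4, 2],
     [5, 4, 3, 4, 0, 2],
     [4, 5, 3, 2, 2, 0]]
budget = 12                        # total driving time available for the tour

best_score, best_route = 0, ()
cities = range(1, len(scores))
for r in range(1, len(scores)):
    for subset in itertools.combinations(cities, r):
        for perm in itertools.permutations(subset):
            route = (0,) + perm + (0,)
            time = sum(T[route[k]][route[k + 1]] for k in range(len(route) - 1))
            if time <= budget:
                score = sum(scores[c] for c in perm)
                if score > best_score:
                    best_score, best_route = score, route
print("score:", best_score, "route:", best_route)
```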

  16. Improved understanding of the searching behavior of ant colony optimization algorithms applied to the water distribution design problem

    NASA Astrophysics Data System (ADS)

    Zecchin, A. C.; Simpson, A. R.; Maier, H. R.; Marchi, A.; Nixon, J. B.

    2012-09-01

    Evolutionary algorithms (EAs) have been applied successfully to many water resource problems, such as system design, management decision formulation, and model calibration. The performance of an EA with respect to a particular problem type is dependent on how effectively its internal operators balance the exploitation/exploration trade-off to iteratively find solutions of an increasing quality. For a given problem, different algorithms are observed to produce a variety of different final performances, but there have been surprisingly few investigations into characterizing how the different internal mechanisms alter the algorithm's searching behavior, in both the objective and decision space, to arrive at this final performance. This paper presents metrics for analyzing the searching behavior of ant colony optimization algorithms, a particular type of EA, for the optimal water distribution system design problem, which is a classical NP-hard problem in civil engineering. Using the proposed metrics, behavior is characterized in terms of three different attributes: (1) the effectiveness of the search in improving its solution quality and entering into optimal or near-optimal regions of the search space, (2) the extent to which the algorithm explores as it converges to solutions, and (3) the searching behavior with respect to the feasible and infeasible regions. A range of case studies is considered, where a number of ant colony optimization variants are applied to a selection of water distribution system optimization problems. The results demonstrate the utility of the proposed metrics to give greater insight into how the internal operators affect each algorithm's searching behavior.

  17. Markerless human motion tracking using hierarchical multi-swarm cooperative particle swarm optimization.

    PubMed

    Saini, Sanjay; Zakaria, Nordin; Rambli, Dayang Rohaya Awang; Sulaiman, Suziah

    2015-01-01

    The high-dimensional search space involved in markerless full-body articulated human motion tracking from multiple-view video sequences has led to a number of solutions based on metaheuristics, the most recent form of which is Particle Swarm Optimization (PSO). However, the classical PSO suffers from premature convergence and is easily trapped in local optima, significantly affecting the tracking accuracy. To overcome these drawbacks, we have developed a method for the problem based on Hierarchical Multi-Swarm Cooperative Particle Swarm Optimization (H-MCPSO). The tracking problem is formulated as a non-linear 34-dimensional function optimization problem where the fitness function quantifies the difference between the observed image and a projection of the model configuration. Both the silhouette and edge likelihoods are used in the fitness function. Experiments using the Brown and HumanEva-II datasets demonstrated that H-MCPSO performs better than two leading alternative approaches: the Annealed Particle Filter (APF) and Hierarchical Particle Swarm Optimization (HPSO). Further, the proposed tracking method is capable of automatic initialization and self-recovery from temporary tracking failures. Comprehensive experimental results are presented to support the claims.

  18. Exploiting Quantum Resonance to Solve Combinatorial Problems

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Fijany, Amir

    2006-01-01

    Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.

  19. An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

    DTIC Science & Technology

    2012-08-17

    Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not...significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].
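
    The classic method of multipliers alternates approximate minimization of the augmented Lagrangian with a multiplier update and a penalty increase. A rough sketch on an invented equality-constrained non-smooth problem (an l1 objective minimized by a crude subgradient inner loop); this is a generic illustration, not the TVAL3 algorithm:

```python
import numpy as np

# Minimize ||x||_1 subject to Ax = b via the method of multipliers.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 12))
x_true = np.zeros(12)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true

x, lam, mu = np.zeros(12), np.zeros(5), 1.0
for outer in range(60):
    # Inner loop: rough subgradient descent on
    # L(x) = ||x||_1 + lam^T (Ax - b) + (mu/2) ||Ax - b||^2.
    for _ in range(400):
        r = A @ x - b
        grad = np.sign(x) + A.T @ (lam + mu * r)
        x -= 1e-2 / (1 + mu) * grad
    lam += mu * (A @ x - b)       # multiplier update
    mu *= 1.1                     # gradually tighten the penalty
print("constraint residual:", np.linalg.norm(A @ x - b))
print("x:", np.round(x, 2))
```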

  20. Quantum simulator review

    NASA Astrophysics Data System (ADS)

    Bednar, Earl; Drager, Steven L.

    2007-04-01

    Quantum information processing's objective is to utilize revolutionary computing capability based on harnessing the paradigm shift offered by quantum computing to solve classically hard and computationally challenging problems. Some of our computationally challenging problems of interest include: the capability for rapid image processing, rapid optimization of logistics, protecting information, secure distributed simulation, and massively parallel computation. Currently, one important problem with quantum information processing is that the implementation of quantum computers is difficult to realize due to poor scalability and the significant presence of errors. Therefore, we have supported the development of Quantum eXpress and QuIDD Pro, two quantum computer simulators running on classical computers for the development and testing of new quantum algorithms and processes. This paper examines the different methods used by these two quantum computing simulators. It reviews both simulators, highlighting each simulator's background, interface, and special features. It also demonstrates the implementation of current quantum algorithms on each simulator. It concludes with summary comments on both simulators.

  1. Field development planning using simulated annealing - optimal economic well scheduling and placement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckner, B.L.; Xong, X.

    1995-12-31

    A method for optimizing the net present value of a full field development by varying the placement and sequence of production wells is presented. This approach is automated and combines an economics package and Mobil's in-house simulator, PEGASUS, within a simulated annealing optimization engine. A novel framing of the well placement and scheduling problem as a classic "travelling salesman problem" is required before optimization via simulated annealing can be applied practically. An example of a full field development using this technique shows that non-uniform well spacings are optimal (from an NPV standpoint) when the effects of well interference and variable reservoir properties are considered. Examples of optimizing field NPV with variable well costs also show that non-uniform well spacings are optimal. Project NPV increases of 25 to 30 million dollars were shown using the optimal, non-uniform development versus reasonable, uniform developments. The ability of this technology to deduce these non-uniform well spacings opens up many potential applications that should materially impact the economic performance of field developments.
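
    The "travelling salesman" framing makes the scheduling part amenable to textbook simulated annealing over sequences. A toy sketch in which the drilling order matters only through discounting; the revenue numbers are invented and no reservoir physics is modeled:

```python
import math
import random

random.seed(0)
# Hypothetical per-well revenues; one well comes online per year, and its
# revenue is discounted by its position in the sequence, so ordering matters.
revenue = [random.uniform(1.0, 10.0) for _ in range(15)]
DISCOUNT = 0.10

def npv(order):
    return sum(revenue[w] / (1 + DISCOUNT) ** year
               for year, w in enumerate(order))

order = list(range(len(revenue)))
random.shuffle(order)
current, T = npv(order), 1.0
for step in range(20000):
    i, j = random.sample(range(len(order)), 2)
    order[i], order[j] = order[j], order[i]          # propose swapping two wells
    new = npv(order)
    # Maximizing NPV: always accept improvements, sometimes accept worse moves.
    if new >= current or random.random() < math.exp((new - current) / T):
        current = new
    else:
        order[i], order[j] = order[j], order[i]      # reject: undo the swap
    T *= 0.9995                                      # cool the temperature
print("NPV:", round(current, 3), "sequence:", order)
```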

  2. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model the stochastic information exchange among decentralized controllers, to name just a few examples. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
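
    For reference, once the linear control structure is fixed, the deterministic problem the thesis arrives at is of LQR type, solvable by the backward Riccati recursion. A minimal sketch with illustrative system matrices:

```python
import numpy as np

# Discrete-time LQR: minimize sum of x^T Q x + u^T R u for x_{k+1} = A x + B u.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.5]])

P = Q.copy()
for _ in range(500):   # iterate the Riccati recursion to its fixed point
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal feedback gain
    P = Q + A.T @ P @ (A - B @ K)
print("steady-state gain K:", K)   # control law u = -K x
```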

  3. Neural dynamic optimization for control systems. I. Background.

    PubMed

    Seong, C Y; Widrow, B

    2001-01-01

    The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the background and motivations for the development of NDO, while the two other subsequent papers of this topic present the theory of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.

  4. Neural dynamic optimization for control systems.III. Applications.

    PubMed

    Seong, C Y; Widrow, B

    2001-01-01

    For pt. II see ibid., p. 490-501. The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper demonstrates NDO with several applications including control of autonomous vehicles and of a robot arm, while the two other companion papers of this topic describe the background for the development of NDO and present the theory of the method, respectively.

  5. Neural dynamic optimization for control systems.II. Theory.

    PubMed

    Seong, C Y; Widrow, B

    2001-01-01

    The paper presents neural dynamic optimization (NDO) as a method of optimal feedback control for nonlinear multi-input-multi-output (MIMO) systems. The main feature of NDO is that it enables neural networks to approximate the optimal feedback solution whose existence dynamic programming (DP) justifies, thereby reducing the complexities of computation and storage problems of the classical methods such as DP. This paper mainly describes the theory of NDO, while the two other companion papers of this topic explain the background for the development of NDO and demonstrate the method with several applications including control of autonomous vehicles and of a robot arm, respectively.

  6. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel A.

    2016-11-01

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark simulated annealing on a class of maximum-2-satisfiability (MAX2SAT) problems. We also compare the performance of a D-Wave 2X quantum annealer to the Hamze-Freitas-Selby (HFS) solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N =1098 variables, the D-Wave device is 2 orders of magnitude faster than the HFS solver, and, modulo known caveats related to suboptimal annealing times, exhibits identical scaling with problem size.
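
    The proposed figure of merit can be estimated directly from benchmark runs: add a cost per call to the objective value and choose the number of restarts minimizing the expected total. A simplified fixed-horizon sketch, with a toy run distribution standing in for real solver data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Empirical sample of objective values returned by a randomized solver
# (a heavy-tailed toy distribution stands in for real benchmark runs).
runs = rng.gamma(shape=2.0, scale=1.0, size=100_000)
cost_per_call = 0.05

def expected_cost(k, n_trials=20_000):
    """Expected total cost of making k calls and keeping the best solution."""
    draws = rng.choice(runs, size=(n_trials, k))
    return k * cost_per_call + draws.min(axis=1).mean()

ks = range(1, 40)
costs = [expected_cost(k) for k in ks]
k_star = list(ks)[int(np.argmin(costs))]
print("optimal number of calls:", k_star, "expected cost:", min(costs))
```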

  7. Model-Based Design of Tree WSNs for Decentralized Detection †

    PubMed Central

    Tantawy, Ashraf; Koutsoukos, Xenofon; Biswas, Gautam

    2015-01-01

    The classical decentralized detection problem of finding the optimal decision rules at the sensor and fusion center, as well as variants that introduce physical channel impairments, has been studied extensively in the literature. The deployment of wireless sensor networks (WSNs) in decentralized detection applications brings new challenges to the field. Protocols for different communication layers have to be co-designed to optimize the detection performance. In this paper, we consider the communication network design problem for a tree WSN. We pursue a system-level approach where a complete model for the system is developed that captures the interactions between different layers, as well as different sensor quality measures. For network optimization, we propose a hierarchical optimization algorithm that lends itself to the tree structure, requiring only local network information. The proposed design approach shows superior performance over several contentionless and contention-based network design approaches. PMID:26307989

  8. Optimal Shakedown of the Thin-Wall Metal Structures Under Strength and Stiffness Constraints

    NASA Astrophysics Data System (ADS)

    Alawdin, Piotr; Liepa, Liudas

    2017-06-01

    Classical optimization problems for metal structures are confined mainly to class 1 cross-sections, but in practice it is common to use cross-sections of higher classes. In this paper, a new mathematical model is presented for the shakedown optimization problem for metal structures whose elements are designed from class 1 to class 4 cross-sections, under variable quasi-static loads. The features of limited plastic redistribution of forces in structures with thin-walled elements are taken into account. The authors assume elastic-plastic flexural buckling in one plane, without lateral-torsional buckling behavior of the members. Design formulae for Methods 1 and 2 for members are analyzed. Structural stiffness constraints are also incorporated in order to satisfy the serviceability limit state requirements. With the help of mathematical programming theory and extreme principles, the structure optimization algorithm is developed and justified with a numerical experiment for plane metal frames.

  9. An opinion formation based binary optimization approach for feature selection

    NASA Astrophysics Data System (ADS)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed technique mimics human-human interaction based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact over an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.
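
    A minimal sketch of the search mechanism in Python (illustrative only: the toy fitness below stands in for a real cross-validated classifier score, and the copy-bits-from-a-fitter-neighbor rule on a ring network is a simplification of the paper's consensus dynamics):

      import numpy as np

      rng = np.random.default_rng(0)
      n_features, n_agents, n_steps = 30, 20, 200
      relevant = rng.random(n_features) < 0.3          # hidden "useful" features (toy)

      def fitness(mask):
          # Toy surrogate for classifier accuracy: reward relevant features,
          # penalize irrelevant ones.
          return (mask & relevant).sum() - 0.5 * (mask & ~relevant).sum()

      opinions = rng.random((n_agents, n_features)) < 0.5      # one feature mask per agent
      ring = [((i - 1) % n_agents, (i + 1) % n_agents) for i in range(n_agents)]

      for _ in range(n_steps):
          scores = np.array([fitness(o) for o in opinions])
          for i in range(n_agents):
              j = ring[i][rng.integers(2)]             # pick one neighbor on the network
              if scores[j] > scores[i]:                # imitate a fitter neighbor bitwise
                  copy = rng.random(n_features) < 0.3
                  opinions[i, copy] = opinions[j, copy]
              flip = rng.random(n_features) < 0.01     # small mutation avoids local minima
              opinions[i, flip] = ~opinions[i, flip]

      best = opinions[np.argmax([fitness(o) for o in opinions])]
      print("selected features:", np.flatnonzero(best))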

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klimsiak, Tomasz, E-mail: tomas@mat.umk.pl; Rozkosz, Andrzej, E-mail: rozkosz@mat.umk.pl

    In the paper we consider the problem of valuation of American options written on dividend-paying assets whose price dynamics follow the classical multidimensional Black and Scholes model. We provide a general early exercise premium representation formula for options with payoff functions which are convex or satisfy mild regularity assumptions. Examples include index options, spread options, call on max options, put on min options, multiple strike options and power-product options. In the proof of the formula we exploit close connections between the optimal stopping problems associated with valuation of American options, obstacle problems and reflected backward stochastic differential equations.

  11. An Overview of Importance Splitting for Rare Event Simulation

    ERIC Educational Resources Information Center

    Morio, Jerome; Pastel, Rudy; Le Gland, Francois

    2010-01-01

    Monte Carlo simulations are a classical tool to analyse physical systems. When unlikely events are to be simulated, the importance sampling technique is often used instead of Monte Carlo. Importance sampling has some drawbacks when the problem dimensionality is high or when the optimal importance sampling density is complex to obtain. In this…

  12. A random walk approach to quantum algorithms.

    PubMed

    Kendon, Vivien M

    2006-12-15

    The development of quantum algorithms based on quantum versions of random walks is placed in the context of the emerging field of quantum computing. Constructing a suitable quantum version of a random walk is not trivial; pure quantum dynamics is deterministic, so randomness only enters during the measurement phase, i.e. when converting the quantum information into classical information. The outcome of a quantum random walk is very different from the corresponding classical random walk owing to the interference between the different possible paths. The upshot is that quantum walkers find themselves further from their starting point than a classical walker on average, and this forms the basis of a quantum speed up, which can be exploited to solve problems faster. Surprisingly, the effect of making the walk slightly less than perfectly quantum can optimize the properties of the quantum walk for algorithmic applications. Looking to the future, even with a small quantum computer available, the development of quantum walk algorithms might proceed more rapidly than it has, especially for solving real problems.

  13. Hybrid annealing: Coupling a quantum simulator to a classical computer

    NASA Astrophysics Data System (ADS)

    Graß, Tobias; Lewenstein, Maciej

    2017-05-01

    Finding the global minimum in a rugged potential landscape is a computationally hard task, often equivalent to relevant optimization problems. Annealing strategies, either classical or quantum, explore the configuration space by evolving the system under the influence of thermal or quantum fluctuations. The thermal annealing dynamics can rapidly freeze the system into a low-energy configuration, and it can be simulated well on a classical computer, but it easily gets stuck in local minima. Quantum annealing, on the other hand, can be guaranteed to find the true ground state and can be implemented in modern quantum simulators; however, quantum adiabatic schemes become prohibitively slow in the presence of quasidegeneracies. Here, we propose a strategy which combines ideas from simulated annealing and quantum annealing. In such a hybrid algorithm, the outcome of a quantum simulator is processed on a classical device. While the quantum simulator explores the configuration space by repeatedly applying quantum fluctuations and performing projective measurements, the classical computer evaluates each configuration and enforces a lowering of the energy. We have simulated this algorithm for small instances of the random energy model, showing that it potentially outperforms both simulated thermal annealing and adiabatic quantum annealing. It becomes most efficient for problems involving many quasidegenerate ground states.
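
    A minimal classical simulation of the hybrid loop in Python (illustrative only): random multi-spin flips stand in for the quantum simulator's fluctuation-and-measurement step, and the classical computer keeps a measured configuration only if its energy does not increase.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 64
      J = rng.normal(size=(n, n))
      J = np.triu(J, 1); J = J + J.T         # random symmetric Ising couplings

      def energy(s):
          return -0.5 * s @ J @ s

      s = rng.choice([-1, 1], size=n)
      for step in range(20000):
          proposal = s.copy()
          flips = rng.random(n) < 0.05       # "quantum fluctuation": flip ~5% of spins
          proposal[flips] *= -1              # projective measurement yields this configuration
          if energy(proposal) <= energy(s):  # classical post-selection enforces descent
              s = proposal

      print("final energy:", energy(s))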

  14. Beyond Moore's law: towards competitive quantum devices

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2015-05-01

    A century after the invention of quantum theory and fifty years after Bell's inequality, we see the first quantum devices emerge as products that aim to be competitive with the best classical computing devices. While a universal quantum computer of non-trivial size is still out of reach, a number of commercial and experimental devices exist: quantum random number generators, quantum simulators and quantum annealers. In this colloquium I will present some of these devices and the validation tests we performed on them. Quantum random number generators use the inherent randomness in quantum measurements to produce true random numbers, unlike classical pseudorandom number generators, which are inherently deterministic. Optical lattice emulators use ultracold atomic gases in optical lattices to mimic typical models of condensed matter physics. In my talk I will focus especially on the devices built by the Canadian company D-Wave Systems, which are special-purpose quantum simulators for solving hard classical optimization problems. I will review the controversy around the quantum nature of these devices and compare them to state-of-the-art classical algorithms. I will close with an outlook towards universal quantum computing and with the question: which important problems that are intractable even for post-exa-scale classical computers could we expect to solve once we have a universal quantum computer?

  15. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.
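
    A minimal sketch of the hybrid viewpoint in Python (all priors, costs and revenues below are illustrative assumptions, not the paper's figures): the company draws the true effect from its prior, the regulator applies a fixed-level frequentist z-test, and the sample size n is chosen to maximize expected profit.

      import numpy as np

      rng = np.random.default_rng(0)
      prior_mean, prior_sd, sigma = 0.3, 0.2, 1.0   # effect-size prior and outcome SD (assumed)
      revenue, cost_per_patient = 500.0, 0.05       # illustrative figures, e.g. in $M
      z_crit = 1.96                                 # two-sided 5% significance level

      def expected_profit(n, n_sim=20000):
          theta = rng.normal(prior_mean, prior_sd, n_sim)   # company's Bayesian prior
          z = rng.normal(theta * np.sqrt(n) / sigma, 1.0)   # trial z-statistic
          success = z > z_crit                              # regulator's frequentist test
          return revenue * success.mean() - cost_per_patient * n

      grid = np.arange(50, 2001, 50)
      profits = [expected_profit(n) for n in grid]
      print("profit-maximizing n:", grid[int(np.argmax(profits))])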

  16. A novel approach for inventory problem in the pharmaceutical supply chain.

    PubMed

    Candan, Gökçe; Yazgan, Harun Reşit

    2016-02-24

    In pharmaceutical enterprises, keeping up with global market conditions is possible with properly selected supply chain management policies. Generally, a demand-driven classical supply chain model is used in the pharmaceutical industry. In this study, a new mathematical model is developed to solve an inventory problem in the pharmaceutical supply chain. Unlike previous studies in the literature, the shelf life and product transition time constraints are considered simultaneously, for the first time, in the pharmaceutical production inventory problem. The problem is formulated as a mixed-integer linear programming (MILP) model with a hybrid time representation. The objective is to maximize total net profit. The effectiveness of the proposed model is illustrated for a classical and a vendor managed inventory (VMI) supply chain in an experimental study, which covers two supply chain policies (classical and VMI), planning horizons of 24 and 30 months, and 10 and 15 different cephalosporin products. Finally, the mathematical model is compared to another model in the literature, and the results show that the proposed model is superior. This study suggests a novel approach for solving the pharmaceutical inventory problem. The developed model maximizes total net profit while determining the optimal production plan under shelf life and product transition constraints in the pharmaceutical industry, and we believe it is much closer to real life than other studies in the literature.

  17. Active vibration mitigation of distributed parameter, smart-type structures using Pseudo-Feedback Optimal Control (PFOC)

    NASA Technical Reports Server (NTRS)

    Patten, W. N.; Robertshaw, H. H.; Pierpont, D.; Wynn, R. H.

    1989-01-01

    A new, near-optimal feedback control technique is introduced that is shown to provide excellent vibration attenuation for those distributed parameter systems that are often encountered in the areas of aeroservoelasticity and large space systems. The technique relies on a novel solution methodology for the classical optimal control problem. Specifically, the quadratic regulator control problem for a flexible vibrating structure is first cast in a weak functional form that admits an approximate solution. The necessary conditions (first-order) are then solved via a time finite-element method. The procedure produces a low dimensional, algebraic parameterization of the optimal control problem that provides a rigorous basis for a discrete controller with a first-order like hold output. Simulation has shown that the algorithm can successfully control a wide variety of plant forms including multi-input/multi-output systems and systems exhibiting significant nonlinearities. In order to firmly establish the efficacy of the algorithm, a laboratory control experiment was implemented to provide planar (bending) vibration attenuation of a highly flexible beam (with a first clamped-free mode of approximately 0.5 Hz).

  18. The Shortlist Method for fast computation of the Earth Mover's Distance and finding optimal solutions to transportation problems.

    PubMed

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. The Earth Mover's Distance in particular is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method.

  19. The Shortlist Method for Fast Computation of the Earth Mover's Distance and Finding Optimal Solutions to Transportation Problems

    PubMed Central

    Gottschlich, Carsten; Schuhmacher, Dominic

    2014-01-01

    Finding solutions to the classical transportation problem is of great importance, since this optimization problem arises in many engineering and computer science applications. The Earth Mover's Distance in particular is used in a plethora of applications ranging from content-based image retrieval, shape matching, fingerprint recognition, object tracking and phishing web page detection to computing color differences in linguistics and biology. Our starting point is the well-known revised simplex algorithm, which iteratively improves a feasible solution to optimality. The Shortlist Method that we propose substantially reduces the number of candidates inspected for improving the solution, while at the same time balancing the number of pivots required. Tests on simulated benchmarks demonstrate a considerable reduction in computation time for the new method as compared to the usual revised simplex algorithm implemented with state-of-the-art initialization and pivot strategies. As a consequence, the Shortlist Method facilitates the computation of large scale transportation problems in viable time. In addition we describe a novel method for finding an initial feasible solution which we coin Modified Russell's Method. PMID:25310106
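
    Not the authors' Shortlist Method: the sketch below solves the same classical transportation problem with SciPy's general-purpose LP solver, which is useful as a correctness baseline when experimenting with faster specialized methods.

      import numpy as np
      from scipy.optimize import linprog

      supply = np.array([20.0, 30.0, 25.0])
      demand = np.array([10.0, 35.0, 30.0])      # totals must match sum(supply)
      cost = np.array([[ 8.0,  6.0, 10.0],
                       [ 9.0, 12.0, 13.0],
                       [14.0,  9.0, 16.0]])

      m, n = cost.shape
      A_eq, b_eq = [], []
      for i in range(m):                          # each source ships all its supply
          row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1
          A_eq.append(row); b_eq.append(supply[i])
      for j in range(n):                          # each sink receives exactly its demand
          row = np.zeros(m * n); row[j::n] = 1
          A_eq.append(row); b_eq.append(demand[j])

      res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
      print("optimal transport cost:", res.fun)
      print("plan:\n", res.x.reshape(m, n).round(2))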

  20. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems, and experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591

  1. Solution to automatic generation control problem using firefly algorithm optimized I^λD^µ controller.

    PubMed

    Debbarma, Sanjoy; Saikia, Lalit Chandra; Sinha, Nidul

    2014-03-01

    The present work focuses on automatic generation control (AGC) of a three-area thermal system with unequal areas, considering reheat turbines and appropriate generation rate constraints (GRC). A fractional-order (FO) controller, named the I^λD^µ controller and based on the CRONE approximation, is proposed for the first time as an appropriate technique to solve the multi-area AGC problem in power systems. A recently developed metaheuristic known as the firefly algorithm (FA) is used for the simultaneous optimization of the gains and other parameters, namely the order of the integrator (λ) and differentiator (μ) of the I^λD^µ controller and the governor speed regulation parameters (R). The dynamic responses corresponding to the optimized I^λD^µ controller gains, λ, μ, and R are compared with those of classical integer-order (IO) controllers such as I, PI and PID controllers. Simulation results show that the proposed I^λD^µ controller provides improved dynamic responses and outperforms the IO-based classical controllers. Further, sensitivity analysis confirms the robustness of the optimized I^λD^µ controller to wide changes in system loading conditions and in the size and position of the SLP. The proposed controller also performs well compared to IO-based controllers when the SLP takes place simultaneously in any two areas or in all areas. Robustness of the proposed I^λD^µ controller is also tested against system parameter variations. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Optimal Control of the Parametric Oscillator

    ERIC Educational Resources Information Center

    Andresen, B.; Hoffmann, K. H.; Nulton, J.; Tsirlin, A.; Salamon, P.

    2011-01-01

    We present a solution to the minimum-time control problem for a classical harmonic oscillator to reach a target energy E_T from a given initial state (q_i, p_i) by controlling its frequency ω, ω_min ≤ ω ≤ ω_max. A brief synopsis…

  3. The choice of sample size: a mixed Bayesian / frequentist approach.

    PubMed

    Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Gittins, John

    2009-04-01

    Sample size computations are largely based on frequentist or classical methods. In the Bayesian approach the prior information on the unknown parameters is taken into account. In this work we consider a fully Bayesian approach to the sample size determination problem which was introduced by Grundy et al. and developed by Lindley. This approach treats the problem as a decision problem and employs a utility function to find the optimal sample size of a trial. Furthermore, we assume that a regulatory authority, which is deciding on whether or not to grant a licence to a new treatment, uses a frequentist approach. We then find the optimal sample size for the trial by maximising the expected net benefit, which is the expected benefit of subsequent use of the new treatment minus the cost of the trial.

  4. Probing for quantum speedup in spin-glass problems with planted solutions

    NASA Astrophysics Data System (ADS)

    Hen, Itay; Job, Joshua; Albash, Tameem; Rønnow, Troels F.; Troyer, Matthias; Lidar, Daniel A.

    2015-10-01

    The availability of quantum annealing devices with hundreds of qubits has made the experimental demonstration of a quantum speedup for optimization problems a coveted, albeit elusive goal. Going beyond earlier studies of random Ising problems, here we introduce a method to construct a set of frustrated Ising-model optimization problems with tunable hardness. We study the performance of a D-Wave Two device (DW2) with up to 503 qubits on these problems and compare it to a suite of classical algorithms, including a highly optimized algorithm designed to compete directly with the DW2. The problems are generated around predetermined ground-state configurations, called planted solutions, which makes them particularly suitable for benchmarking purposes. The problem set exhibits properties familiar from constraint satisfaction (SAT) problems, such as a peak in the typical hardness of the problems, determined by a tunable clause density parameter. We bound the hardness regime where the DW2 device either does not or might exhibit a quantum speedup for our problem set. While we do not find evidence for a speedup for the hardest and most frustrated problems in our problem set, we cannot rule out that a speedup might exist for some of the easier, less frustrated problems. Our empirical findings pertain to the specific D-Wave processor and problem set we studied and leave open the possibility that future processors might exhibit a quantum speedup on the same problem set.
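
    A schematic generator for planted frustrated-loop instances in Python (illustrative and simplified: loops are sampled on a complete graph rather than the annealer's hardware graph, and all sizes are toy-scale). Each loop gets J = -1 on all its edges except one with J = +1, so the planted configuration minimizes every loop term while the loop itself stays frustrated; a random gauge transformation then hides the planted state.

      import numpy as np

      rng = np.random.default_rng(0)
      n, n_loops, loop_len = 32, 24, 6
      J = np.zeros((n, n))

      for _ in range(n_loops):
          nodes = rng.choice(n, size=loop_len, replace=False)   # a random simple loop
          edges = list(zip(nodes, np.roll(nodes, -1)))
          frustrated = rng.integers(loop_len)                   # one antiferromagnetic edge
          for k, (i, j) in enumerate(edges):
              delta = 1.0 if k == frustrated else -1.0
              J[i, j] += delta
              J[j, i] += delta

      gauge = rng.choice([-1, 1], size=n)                       # hide the planted state
      J = J * np.outer(gauge, gauge)
      planted = gauge.astype(float)                             # ground state after gauging

      # Under H(s) = sum_{i<j} J_ij s_i s_j (lower is better) the planted
      # configuration attains the minimum of every loop term.
      print("planted energy:", 0.5 * planted @ J @ planted)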

  5. The Artificial Neural Networks Based on Scalarization Method for a Class of Bilevel Biobjective Programming Problem

    PubMed Central

    Chen, Zhong; Liu, June; Li, Xiong

    2017-01-01

    A two-stage artificial neural network (ANN) based on scalarization method is proposed for bilevel biobjective programming problem (BLBOP). The induced set of the BLBOP is firstly expressed as the set of minimal solutions of a biobjective optimization problem by using scalar approach, and then the whole efficient set of the BLBOP is derived by the proposed two-stage ANN for exploring the induced set. In order to illustrate the proposed method, seven numerical examples are tested and compared with results in the classical literature. Finally, a practical problem is solved by the proposed algorithm. PMID:29312446

  6. Boosting quantum annealer performance via sample persistence

    NASA Astrophysics Data System (ADS)

    Karimi, Hamed; Rosenberg, Gili

    2017-07-01

    We propose a novel method for reducing the number of variables in quadratic unconstrained binary optimization problems, using a quantum annealer (or any sampler) to fix the values of a large portion of the variables to values that have a high probability of being optimal. The resulting problems are usually much easier for the quantum annealer to solve, being smaller and consisting of disconnected components. This approach significantly increases the success rate and the number of observations of the best known energy value in samples obtained from the quantum annealer, when compared with calling the quantum annealer directly, even when fewer annealing cycles are used. The method yields a considerable improvement in success metrics even for problems with high-precision couplers and biases, which are more challenging for the quantum annealer to solve. The results are further enhanced by applying the method iteratively and combining it with classical pre-processing. We present results for both Chimera graph-structured problems and embedded problems from a real-world application.
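
    A minimal sketch of the variable-fixing step in Python (illustrative only; `samples` would be spin configurations returned by the quantum annealer or any other sampler): variables that take the same value in nearly all low-energy samples are fixed, and the reduced problem is what gets resubmitted.

      import numpy as np

      def fix_persistent(samples, energies, keep_frac=0.2, threshold=0.9):
          # Return {index: fixed_value} for variables persistent across elite samples.
          elite = samples[np.argsort(energies)[:max(1, int(keep_frac * len(samples)))]]
          mean = elite.mean(axis=0)                  # lies in [-1, 1] for +/-1 spins
          return {i: int(np.sign(mean[i]))
                  for i in range(samples.shape[1]) if abs(mean[i]) >= threshold}

      # Toy demo: 100 samples over 8 spins; the first 3 spins are strongly persistent.
      rng = np.random.default_rng(0)
      samples = rng.choice([-1, 1], size=(100, 8))
      samples[:, :3] = 1
      energies = rng.random(100)
      print(fix_persistent(samples, energies))       # expect {0: 1, 1: 1, 2: 1}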

  7. Optimal Routing and Control of Multiple Agents Moving in a Transportation Network and Subject to an Arrival Schedule and Separation Constraints

    NASA Technical Reports Server (NTRS)

    Sadovsky, A. V.; Davis, D.; Isaacson, D. R.

    2012-01-01

    We address the problem of navigating a set of moving agents, e.g. automated guided vehicles, through a transportation network so as to bring each agent to its destination at a specified time. Each pair of agents is required to be separated by a minimal distance, generally agent-dependent, at all times. The speed range, initial position, required destination, and required time of arrival at destination for each agent are assumed to be provided. The movement of each agent is governed by a controlled differential equation (state equation). The problem consists in choosing for each agent a path and a control strategy so as to meet the constraints and reach the destination at the required time. This problem arises in various fields of transportation, including Air Traffic Management and train coordination, and in robotics. The main contribution of the paper is a model that allows this problem to be recast as a decoupled collection of problems in classical optimal control and is easily generalized to the case in which inertia cannot be neglected. Some qualitative insight into solution behavior is obtained using the Pontryagin Maximum Principle. Sample numerical solutions are computed using a numerical optimal control solver.

  8. Price sensitive demand with random sales price - a newsboy problem

    NASA Astrophysics Data System (ADS)

    Sankar Sana, Shib

    2012-03-01

    Many newsboy problems have been considered in the stochastic inventory literature. Some assume that stochastic demand is independent of the selling price (p), and others consider demand as a function of a stochastic shock factor and a deterministic sales price. This article introduces price-dependent demand with a stochastic selling price into the classical newsboy problem. The proposed model analyses the expected average profit for a general distribution function of p and obtains an optimal order size. Finally, the model is discussed for various appropriate distribution functions of p and illustrated with numerical examples.
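
    For reference, the fixed-price special case has the classical critical-fractile solution (a standard newsvendor result, not the paper's stochastic-price model), with unit cost c, selling price p, salvage value s, demand D and demand CDF F:

      \[
        \max_{q}\ \mathbb{E}\big[\,p\min(D,q) + s\,(q-D)^{+} - c\,q\,\big]
        \quad\Longrightarrow\quad
        q^{*} = F^{-1}\!\Big(\frac{p-c}{p-s}\Big).
      \]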

  9. Performance Models for Split-execution Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Schrock, Jonathan

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.

  10. Complexity of the Quantum Adiabatic Algorithm

    NASA Technical Reports Server (NTRS)

    Hen, Itay

    2013-01-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms.

  11. Efficient fractal-based mutation in evolutionary algorithms from iterated function systems

    NASA Astrophysics Data System (ADS)

    Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.

    2018-03-01

    In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure considers a set of IFSs able to generate fractal structures in a two-dimensional phase space and uses them to modify a current individual of the EP algorithm, instead of using random numbers drawn from different probability density functions. We test this proposal on a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
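
    A minimal sketch of an IFS-based mutation operator in Python (illustrative only: the two contractive affine maps below are arbitrary choices, not the paper's tuned system). A short "chaos game" run over the maps produces a structured two-dimensional offset that replaces the usual Gaussian or Cauchy perturbation for a pair of coordinates.

      import numpy as np

      rng = np.random.default_rng(0)
      # Two contractive affine maps x -> A x + b whose random iteration
      # generates points on a fractal attractor.
      MAPS = [(np.array([[0.5, -0.3], [0.3, 0.5]]), np.array([ 0.2, 0.0])),
              (np.array([[0.4,  0.4], [-0.4, 0.4]]), np.array([-0.2, 0.3]))]

      def ifs_offset(n_iter=20):
          x = rng.uniform(-1, 1, size=2)
          for _ in range(n_iter):
              A, b = MAPS[rng.integers(len(MAPS))]
              x = A @ x + b
          return x

      def ifs_mutate(individual, step=0.5):
          child = individual.copy()
          i, j = rng.choice(len(child), size=2, replace=False)  # mutate a coordinate pair
          dx, dy = step * ifs_offset()
          child[i] += dx; child[j] += dy
          return child

      parent = rng.uniform(-5, 5, size=10)
      print(ifs_mutate(parent))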

  12. Quantum and classical dynamics in adiabatic computation

    NASA Astrophysics Data System (ADS)

    Crowley, P. J. D.; Ďurić, T.; Vinci, W.; Warburton, P. A.; Green, A. G.

    2014-10-01

    Adiabatic transport provides a powerful way to manipulate quantum states. By preparing a system in a readily initialized state and then slowly changing its Hamiltonian, one may achieve quantum states that would otherwise be inaccessible. Moreover, a judicious choice of final Hamiltonian whose ground state encodes the solution to a problem allows adiabatic transport to be used for universal quantum computation. However, the dephasing effects of the environment limit the quantum correlations that an open system can support and degrade the power of such adiabatic computation. We quantify this effect by allowing the system to evolve over a restricted set of quantum states, providing a link between physically inspired classical optimization algorithms and quantum adiabatic optimization. This perspective allows us to develop benchmarks to bound the quantum correlations harnessed by an adiabatic computation. We apply these to the D-Wave Vesuvius machine with revealing—though inconclusive—results.

  13. Estimating parameters with pre-specified accuracies in distributed parameter systems using optimal experiment design

    NASA Astrophysics Data System (ADS)

    Potters, M. G.; Bombois, X.; Mansoori, M.; Van den Hof, Paul M. J.

    2016-08-01

    Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.

  14. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
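
    The core iteration is the classical multiplicative EM (Richardson-Lucy) update for Poisson data y ~ Poisson(Ax), which preserves positivity automatically. In the sketch below (Python, illustrative) the random response matrix A and the fixed iteration count are stand-ins for RHESSI's modulation patterns and the C-statistic stopping rule.

      import numpy as np

      rng = np.random.default_rng(0)
      n_counts, n_pix = 200, 100
      A = rng.random((n_counts, n_pix))          # stand-in modulation/response matrix
      x_true = np.zeros(n_pix); x_true[40:45] = 50.0
      y = rng.poisson(A @ x_true)                # simulated count modulation profile

      x = np.ones(n_pix)                         # positive initial guess
      norm = A.sum(axis=0)                       # A^T 1
      for it in range(200):
          # EM update: x <- x * A^T(y / Ax) / A^T 1, nonnegative by construction.
          x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / norm
          # In practice one stops early, e.g. when the C-statistic stops improving.

      print("reconstruction near the true source:", x[38:47].round(1))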

  15. Hybrid switched time-optimal control of underactuated spacecraft

    NASA Astrophysics Data System (ADS)

    Olivares, Alberto; Staffetti, Ernesto

    2018-04-01

    This paper studies the time-optimal control problem for an underactuated rigid spacecraft equipped with both reaction wheels and gas jet thrusters that generate control torques about two of the principal axes of the spacecraft. Since a spacecraft equipped with two reaction wheels is not controllable, whereas a spacecraft equipped with two gas jet thrusters is controllable, this mixed actuation ensures controllability in the case in which one of the control axes is unactuated. A novel control logic is proposed for this hybrid actuation in which the reaction wheels are the main actuators and the gas jet thrusters act only after saturation or anticipating future saturation of the reaction wheels. The presence of both reaction wheels and gas jet thrusters gives rise to two operating modes for each actuated axis and therefore the spacecraft can be regarded as a switched dynamical system. The time-optimal control problem for this system is reformulated using the so-called embedding technique and the resulting problem is a classical optimal control problem. The main advantages of this technique are that integer or binary variables do not have to be introduced to model switching decisions between modes and that assumptions about the number of switches are not necessary. It is shown in this paper that this general method for the solution of optimal control problems for switched dynamical systems can efficiently deal with time-optimal control of an underactuated rigid spacecraft in which bound constraints on the torque of the actuators and on the angular momentum of the reaction wheels are taken into account.

  16. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite of variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
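
    A minimal sketch of the two-phase scheme in Python (illustrative only: scalar product states, a made-up transformation function and a quadratic quality target stand in for the real process models). Phase 1 tabulates offline, by backward dynamic programming, the optimal control of each process for every possible end state of its predecessor; phase 2 is a cheap table lookup at runtime.

      import numpy as np

      states = np.linspace(0.0, 1.0, 101)        # discretized product state
      controls = np.linspace(0.0, 1.0, 51)
      target = 0.8

      def step(x, u):                            # illustrative process transformation
          return np.clip(x + 0.5 * u - 0.1, 0.0, 1.0)

      # Phase 1 (offline): optimal cost of process 2 for each intermediate state,
      # then the best process-1 control for each possible initial state.
      cost2 = np.array([min((step(x, u) - target) ** 2 + 0.01 * u for u in controls)
                        for x in states])
      def cost_to_go2(x):
          return cost2[np.abs(states - x).argmin()]

      u1_table = np.array([controls[np.argmin([cost_to_go2(step(x, u)) + 0.01 * u
                                               for u in controls])] for x in states])

      # Phase 2 (online): look up the optimal control for the measured initial state.
      x0 = 0.35
      print("optimal first-stage control for x0=0.35:", u1_table[np.abs(states - x0).argmin()])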

  17. Learning With Mixed Hard/Soft Pointwise Constraints.

    PubMed

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function) play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.

  18. Texture mapping via optimal mass transport.

    PubMed

    Dominitz, Ayelet; Tannenbaum, Allen

    2010-01-01

    In this paper, we present a novel method for texture mapping of closed surfaces. Our method is based on the technique of optimal mass transport (also known as the "earth-mover's metric"). This is a classical problem that concerns determining the optimal way, in the sense of minimal transportation cost, of moving a pile of soil from one site to another. In our context, the resulting mapping is area preserving and minimizes angle distortion in the optimal mass sense. Indeed, we first begin with an angle-preserving mapping (which may greatly distort area) and then correct it using the mass transport procedure derived via a certain gradient flow. In order to obtain fast convergence to the optimal mapping, we incorporate a multiresolution scheme into our flow. We also use ideas from discrete exterior calculus in our computations.

  19. Optimization of the Bridgman crystal growth process

    NASA Astrophysics Data System (ADS)

    Margulies, M.; Witomski, P.; Duffar, T.

    2004-05-01

    A numerical optimization method for the vertical Bridgman growth configuration is presented and developed. It makes it possible to optimize the furnace temperature field and the pulling rate versus time in order to decrease the radial thermal gradients in the sample. Some constraints are also included in order to ensure physically realistic results. The model includes the two classical non-linearities associated with crystal growth processes: the radiative thermal exchange and the release of latent heat at the solid-liquid interface. The mathematical analysis and development of the problem is briefly described. On some examples, it is shown that the method works in a satisfactory way; however, the results depend on the numerical parameters. Improvements of the optimization model, from both the physical and numerical points of view, are suggested.

  20. Contraction Options and Optimal Multiple-Stopping in Spectrally Negative Lévy Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamazaki, Kazutoshi, E-mail: kyamazak@kansai-u.ac.jp

    This paper studies the optimal multiple-stopping problem arising in the context of the timing option to withdraw from a project in stages. The profits are driven by a general spectrally negative Lévy process. This allows the model to incorporate sudden declines of the project values, generalizing greatly the classical geometric Brownian motion model. We solve the one-stage case as well as the extension to the multiple-stage case. The optimal stopping times are of threshold-type and the value function admits an expression in terms of the scale function. A series of numerical experiments are conducted to verify the optimality and to evaluate the efficiency of the algorithm.

  1. Upper Limits for Power Yield in Thermal, Chemical, and Electrochemical Systems

    NASA Astrophysics Data System (ADS)

    Sieniutycz, Stanislaw

    2010-03-01

    We consider modeling and power optimization of energy converters, such as thermal, solar and chemical engines and fuel cells. Thermodynamic principles lead to expressions for a converter's efficiency and generated power. Efficiency equations serve to solve the problems of upgrading or downgrading a resource. Power yield is a cumulative effect in a system consisting of a resource, engines, and an infinite bath. While optimization of steady-state systems requires the differential calculus and Lagrange multipliers, dynamic optimization involves variational calculus and dynamic programming. The primary result of static optimization is the upper limit of power, whereas that of dynamic optimization is a finite-rate counterpart of the classical reversible work (exergy). The latter quantity depends on the end-state coordinates and a dissipation index, h, which is the Hamiltonian of the problem of minimum entropy production. In reacting systems, an active part of chemical affinity constitutes a major component of the overall efficiency. The theory is also applied to fuel cells regarded as electrochemical flow engines. Enhanced bounds on power yield follow, which are stronger than those predicted by the reversible work potential.

  2. Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization

    NASA Astrophysics Data System (ADS)

    Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar

    2017-04-01

    Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present-day restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem performed on-line in the central load dispatch centre under changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at a sub-optimal result. The performance of the PSO variants is validated against the traditional solver GAMS for single-area as well as multi-area economic dispatch on three test cases of a large 140-unit standard test system with complex constraints.

  3. 3D gravity inversion and uncertainty assessment of basement relief via Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pallero, J. L. G.; Fernández-Martínez, J. L.; Bonvalot, S.; Fudym, O.

    2017-04-01

    Nonlinear gravity inversion in sedimentary basins is a classical problem in applied geophysics. Although a 2D approximation is widely used, 3D models have also been proposed to better take into account the basin geometry. A common nonlinear approach to this 3D problem consists in modeling the basin as a set of right rectangular prisms with prescribed density contrast, whose depths are the unknowns. The problem is then solved iteratively via local optimization techniques from an initial model computed using some simplification or estimated from prior geophysical models. Nevertheless, this kind of approach is highly dependent on the prior information that is used and lacks a correct solution appraisal (nonlinear uncertainty analysis). In this paper, we use the family of global Particle Swarm Optimization (PSO) optimizers for the 3D gravity inversion and model appraisal of the solution adopted for basement relief estimation in sedimentary basins. Synthetic and real cases are illustrated, showing that robust results are obtained. PSO therefore seems to be a very good alternative for 3D gravity inversion and uncertainty assessment of basement relief when used in a sampling-while-optimizing approach. In this way, important geological questions can be answered probabilistically in order to perform risk assessment for the decisions that are made.
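
    A minimal global-best PSO sketch in Python with common parameter settings (illustrative; in the application above each particle would encode the vector of prism depths and the objective would be the gravity-data misfit, for which a toy quadratic stands in here):

      import numpy as np

      rng = np.random.default_rng(0)

      def f(x):                                  # stand-in misfit function
          return np.sum((x - 1.5) ** 2, axis=-1)

      n_particles, dim, iters = 30, 10, 300
      w, c1, c2 = 0.72, 1.49, 1.49               # common inertia/cognitive/social choice
      x = rng.uniform(-5, 5, (n_particles, dim))
      v = np.zeros_like(x)
      pbest, pbest_val = x.copy(), f(x)
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(iters):
          r1, r2 = rng.random((2, n_particles, dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = x + v
          val = f(x)
          better = val < pbest_val
          pbest[better], pbest_val[better] = x[better], val[better]
          gbest = pbest[np.argmin(pbest_val)].copy()
          # Keeping all visited samples here would support the posterior-style
          # "sampling while optimizing" uncertainty appraisal described above.

      print("best misfit:", f(gbest))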

  4. New Hardness Results for Diophantine Approximation

    NASA Astrophysics Data System (ADS)

    Eisenbrand, Friedrich; Rothvoß, Thomas

    We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector α ∈ ℚ^n, an error bound ɛ and a denominator bound N ∈ ℕ_+. One has to decide whether there exists an integer Q, called the denominator, with 1 ≤ Q ≤ N such that the distance of each number Q·α_i to its nearest integer is bounded by ɛ. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor n^(c/log log n) for some constant c > 0. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator Q ∈ ℕ_+ such that the distances of Q·α_i to the nearest integer are bounded by ɛ is hard to approximate within a factor 2^n unless P = NP.

  5. Neural networks for feedback feedforward nonlinear control systems.

    PubMed

    Parisini, T; Zoppoli, R

    1994-01-01

    This paper deals with the problem of designing feedback feedforward control strategies to drive the state of a dynamic system (in general, nonlinear) so as to track any desired trajectory joining the points of given compact sets, while minimizing a certain cost function (in general, nonquadratic). Due to the generality of the problem, conventional methods are difficult to apply. Thus, an approximate solution is sought by constraining control strategies to take on the structure of multilayer feedforward neural networks. After discussing the approximation properties of neural control strategies, a particular neural architecture is presented, which is based on what has been called the "linear-structure preserving principle". The original functional problem is then reduced to a nonlinear programming one, and backpropagation is applied to derive the optimal values of the synaptic weights. Recursive equations to compute the gradient components are presented, which generalize the classical adjoint system equations of N-stage optimal control theory. Simulation results related to nonlinear nonquadratic problems show the effectiveness of the proposed method.

  6. An adaptive finite element method for the inequality-constrained Reynolds equation

    NASA Astrophysics Data System (ADS)

    Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha

    2018-07-01

    We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.

  7. Faster than classical quantum algorithm for dense formulas of exact satisfiability and occupation problems

    NASA Astrophysics Data System (ADS)

    Mandrà, Salvatore; Giacomo Guerreschi, Gian; Aspuru-Guzik, Alán

    2016-07-01

    We present an exact quantum algorithm for solving the Exact Satisfiability problem, which belongs to the important NP-complete complexity class. The algorithm is based on an intuitive approach that can be divided into two parts: the first step consists in the identification and efficient characterization of a restricted subspace that contains all the valid assignments of the Exact Satisfiability instance, while the second part performs a quantum search in this restricted subspace. The quantum algorithm can be used either to find a valid assignment (or to certify that no solution exists) or to count the total number of valid assignments. The worst-case query complexities are bounded by O(√(2^(n−M′))) and O(2^(n−M′)), respectively, where n is the number of variables and M′ the number of linearly independent clauses. Remarkably, the proposed quantum algorithm turns out to be faster than any known exact classical algorithm for solving dense formulas of Exact Satisfiability. As a concrete application, we provide the worst-case complexity for the Hamiltonian cycle problem obtained after mapping it to a suitable Occupation problem. Specifically, we show that the time complexity of the proposed quantum algorithm is bounded by O(2^(n/4)) for 3-regular undirected graphs, where n is the number of nodes. The same worst-case complexity holds for (3,3)-regular bipartite graphs. As a reference, the current best classical algorithm has a (worst-case) running time bounded by O(2^(31n/96)). Finally, when compared to heuristic techniques for Exact Satisfiability problems, the proposed quantum algorithm is faster than classical WalkSAT and Adiabatic Quantum Optimization for random instances with a density of constraints close to the satisfiability threshold, the regime in which instances are typically hardest to solve. The proposed quantum algorithm can be straightforwardly extended to the generalized version of Exact Satisfiability known as the Occupation problem; the general version of the algorithm is presented and analyzed.

  8. Can hydro-economic river basin models simulate water shadow prices under asymmetric access?

    PubMed

    Kuhn, A; Britz, W

    2012-01-01

    Hydro-economic river basin models (HERBMs) based on mathematical programming are conventionally formulated as explicit 'aggregate optimization' problems with a single, aggregate objective function. Often unintendedly, this format implicitly assumes that decisions on water allocation are made via central planning or functioning markets so as to maximize social welfare. In the absence of perfect water markets, however, individually optimal decisions by water users will differ from the social optimum. Classical aggregate HERBMs cannot simulate that situation and thus might be unable to describe existing institutions governing access to water, and might produce biased results for alternative ones. We propose a new solution format for HERBMs, based on the mixed complementarity problem (MCP) format, in which modified shadow-price relations express spatial externalities resulting from asymmetric access to water use. This new problem format, as opposed to the commonly used linear (LP) or non-linear programming (NLP) approaches, enables the simultaneous simulation of numerous 'independent optimization' decisions by multiple water users while maintaining the physical interdependences based on water use and flow in the river basin. We show that the alternative problem format allows the formulation of HERBMs that yield more realistic results when comparing different water management institutions.

  9. An algorithm for the optimal collection of wet waste.

    PubMed

    Laureri, Federica; Minciardi, Riccardo; Robba, Michela

    2016-02-01

    This work develops an approach for planning the collection of wet waste (food waste and similar) at a metropolitan scale. Some specific modeling features distinguish this waste collection problem from others; for instance, there may be significant differences in the values of the parameters (such as weight and volume) characterizing the various collection points. As in classical waste collection planning, one has to deal with difficult combinatorial problems, where determining an optimal solution may require a very large computational effort for problem instances of considerable dimensionality. For this reason, a heuristic procedure for the optimal planning of wet waste collection is developed and applied to problem instances drawn from a real case study. The performance of the procedure is evaluated by comparison with a general-purpose mathematical programming software package, as well as with very simple decision rules commonly used in practice. The case study considered is the area corresponding to the historical center of the Municipality of Genoa. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Coherent state coding approaches the capacity of non-Gaussian bosonic channels

    NASA Astrophysics Data System (ADS)

    Huber, Stefan; König, Robert

    2018-05-01

    The additivity problem asks if the use of entanglement can boost the information-carrying capacity of a given channel beyond what is achievable by coding with simple product states only. This has recently been shown not to be the case for phase-insensitive one-mode Gaussian channels, but remains unresolved in general. Here we consider two general classes of bosonic noise channels, which include phase-insensitive Gaussian channels as special cases: these are attenuators with general, potentially non-Gaussian environment states and classical noise channels with general probabilistic noise. We show that additivity violations, if existent, are rather minor for all these channels: the maximal gain in classical capacity is bounded by a constant independent of the input energy. Our proof shows that coding by simple classical modulation of coherent states is close to optimal.

  11. A neural network strategy for end-point optimization of batch processes.

    PubMed

    Krothapally, M; Palanki, S

    1999-01-01

    The traditional way of operating batch processes has been to utilize an open-loop "golden recipe". However, there can be substantial batch-to-batch variation in process conditions, and this open-loop strategy can lead to non-optimal operation. In this paper, a new approach is presented for end-point optimization of batch processes utilizing neural networks. The strategy involves training two neural networks: one to predict switching times and the other to predict the input profile in the singular region. This approach alleviates the computational problems associated with the classical Pontryagin approach and the nonlinear programming approach. The efficacy of the scheme is illustrated via simulation of a fed-batch fermentation.

  12. Entanglement sharing via qudit channels: Nonmaximally entangled states may be necessary for one-shot optimal singlet fraction and negativity

    NASA Astrophysics Data System (ADS)

    Pal, Rajarshi; Bandyopadhyay, Somshubhro

    2018-03-01

    We consider the problem of establishing entangled states of optimal singlet fraction and negativity between two remote parties for every use of a noisy quantum channel and trace-preserving local operations and classical communication (LOCC) under the assumption that the parties do not share prior correlations. We show that for a family of quantum channels in every finite dimension d ≥ 3, one-shot optimal singlet fraction and entanglement negativity are attained only with appropriate nonmaximally entangled states. A consequence of our results is that the ordering of entangled states in all finite dimensions may not be preserved under trace-preserving LOCC.

  13. Intelligent Distributed Systems

    DTIC Science & Technology

    2015-10-23

    periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem... consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations, which are also known as...

  14. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic for complex multiobjective optimization problems, the tabu search evolutionary algorithm (TSEA), which combines the metaheuristic tabu search algorithm with an evolutionary algorithm as embodied in genetic algorithms. TSEA is successfully applied to the bicriteria (structural reliability and retrofit cost) optimization of aircraft tail structure fatigue life, increasing reliability by prolonging fatigue life. For this application, a comparison of TSEA with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms them, providing retrofit solutions of greater reliability for the same cost (i.e., closer to the Pareto-optimal front) when all algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.
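
    The following sketch is only an illustration of the hybrid's core mechanics as described above, not the authors' TSEA: a steady-state genetic loop (crossover plus mutation) whose candidate moves are filtered through a short-term tabu memory, feeding a Pareto archive. The bi-criteria toy objectives (retrofit cost versus residual risk) and all coefficients are invented.

```python
import random

# Toy bi-criteria retrofit model: x_i = 1 means structural member i is
# retrofitted. Both objectives are minimized; all weights are invented.
COST = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]   # cost of retrofitting member i
RISK = [2, 7, 1, 8, 2, 8, 1, 8, 2, 8, 4, 5]   # risk if member i is left alone

def objectives(x):
    cost = sum(c for c, xi in zip(COST, x) if xi)
    risk = sum(r for r, xi in zip(RISK, x) if not xi)
    return (cost, risk)

def dominates(a, b):
    return all(u <= v for u, v in zip(a, b)) and a != b

def tsea(n=12, pop_size=20, generations=500, tabu_len=50):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    tabu, archive = [], []                           # tabu memory + archive
    for _ in range(generations):
        p1, p2 = random.sample(pop, 2)
        cut = random.randrange(1, n)                 # one-point crossover
        child = p1[:cut] + p2[cut:]
        child[random.randrange(n)] ^= 1              # bit-flip mutation
        if child in tabu:                            # tabu filter: no revisits
            continue
        tabu = (tabu + [list(child)])[-tabu_len:]
        fc = objectives(child)
        if not any(dominates(objectives(a), fc) for a in archive):
            archive = [a for a in archive if not dominates(fc, objectives(a))]
            archive.append(child)                    # keep non-dominated set
        pop[random.randrange(pop_size)] = child      # steady-state replacement
    return sorted(objectives(a) for a in archive)

print(tsea())   # approximate Pareto front of (cost, risk) pairs
```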

  15. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems of minimizing the maximum lateness, on one machine or many, and the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, the instance at minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.
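
    As a toy illustration of the idea (relying on the standard facts that 1||Lmax is solved exactly by the earliest-due-date rule, while 1|r_j|Lmax is NP-hard), the sketch below solves a 'nearby' instance with the release dates removed and applies the resulting order to the original instance; the job data are invented, and the gap between the two values hints at how the distance between instances controls the absolute error.

```python
# Nearby-solvable-instance idea on 1|r_j|Lmax (NP-hard): drop the release
# dates to get a 1||Lmax instance, which the earliest-due-date (EDD) rule
# solves exactly, then apply the EDD order to the original instance.
jobs = [  # (processing time p, release date r, due date d) -- invented
    (3, 0, 6), (2, 1, 4), (4, 3, 11), (1, 2, 5),
]

def max_lateness(order, use_release=True):
    t, lmax = 0, float("-inf")
    for p, r, d in order:
        if use_release:
            t = max(t, r)          # wait until the job becomes available
        t += p
        lmax = max(lmax, t - d)    # lateness of this job
    return lmax

edd = sorted(jobs, key=lambda j: j[2])   # optimal for the relaxed instance
print("Lmax of relaxed instance:", max_lateness(edd, use_release=False))
print("Lmax applied to original:", max_lateness(edd, use_release=True))
# The gap is controlled by the distance (here, the size of the release
# dates) between the original and the approximating instance.
```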

  16. Computational Role of Tunneling in a Programmable Quantum Annealer

    NASA Technical Reports Server (NTRS)

    Boixo, Sergio; Smelyanskiy, Vadim; Shabani, Alireza; Isakov, Sergei V.; Dykman, Mark; Amin, Mohammad; Mohseni, Masoud; Denchev, Vasil S.; Neven, Hartmut

    2016-01-01

    Quantum tunneling is a phenomenon in which a quantum state tunnels through energy barriers above the energy of the state itself. Tunneling has been hypothesized as an advantageous physical resource for optimization. Here we present the first experimental evidence of a computational role of multiqubit quantum tunneling in the evolution of a programmable quantum annealer. We developed a theoretical model based on a NIBA quantum master equation to describe the multiqubit dissipative cotunneling effects under the complex noise characteristics of such quantum devices. We start by considering a computational primitive, the simplest non-convex optimization problem, consisting of just one global and one local minimum. The quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. In our study the non-convex potentials are realized by frustrated networks of qubit clusters with strong intra-cluster coupling. We show that the collective effect of the quantum environment is suppressed in the critical phase during the evolution where quantum tunneling decides the right path to the solution. In a later stage dissipation facilitates the multiqubit cotunneling leading to the solution state. The predictions of the model accurately describe the experimental data from the D-Wave II quantum annealer at NASA Ames. In our computational primitive the temperature dependence of the probability of success in the quantum model is opposite to that of the classical paths with thermal hopping. Specifically, we provide an analysis of an optimization problem with sixteen qubits, demonstrating eight-qubit cotunneling that increases success probabilities. Furthermore, we report results for larger problems with up to 200 qubits that contain the primitive as subproblems.

  17. Computing quantum discord is NP-complete

    NASA Astrophysics Data System (ADS)

    Huang, Yichen

    2014-03-01

    We study the computational complexity of quantum discord (a measure of quantum correlation beyond entanglement), and prove that computing quantum discord is NP-complete. Therefore, quantum discord is computationally intractable: the running time of any algorithm for computing quantum discord is believed to grow exponentially with the dimension of the Hilbert space so that computing quantum discord in a quantum system of moderate size is not possible in practice. As by-products, some entanglement measures (namely entanglement cost, entanglement of formation, relative entropy of entanglement, squashed entanglement, classical squashed entanglement, conditional entanglement of mutual information, and broadcast regularization of mutual information) and constrained Holevo capacity are NP-hard/NP-complete to compute. These complexity-theoretic results are directly applicable in common randomness distillation, quantum state merging, entanglement distillation, superdense coding, and quantum teleportation; they may offer significant insights into quantum information processing. Moreover, we prove the NP-completeness of two typical problems: linear optimization over classical states and detecting classical states in a convex set, providing evidence that working with classical states is generically computationally intractable.

  18. Computational studies of thermal and quantum phase transitions approached through non-equilibrium quenching

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Wei

    Phase transitions and their associated critical phenomena are of fundamental importance and play a crucial role in the development of statistical physics for both classical and quantum systems. Phase transitions embody diverse aspects of physics and also have numerous applications outside physics, e.g., in chemistry, biology, and combinatorial optimization problems in computer science. Many problems can be reduced to a system consisting of a large number of interacting agents, which under some circumstances (e.g., changes of external parameters) exhibit collective behavior; this type of scenario also underlies phase transitions. The theoretical understanding of equilibrium phase transitions was put on a solid footing with the establishment of the renormalization group. In contrast, non-equilibrium phase transitions are less well understood and are currently a very active research topic. One important milestone here is the Kibble-Zurek (KZ) mechanism, which provides a useful framework for describing a system whose transition point is approached through a non-equilibrium quench process. I developed two efficient Monte Carlo techniques for studying phase transitions, one for classical phase transitions and the other for quantum phase transitions, both within the framework of KZ scaling. For classical phase transitions, I develop a non-equilibrium quench (NEQ) simulation that completely avoids the critical-slowing-down problem. For quantum phase transitions, I develop a new algorithm, named the quasi-adiabatic quantum Monte Carlo (QAQMC) algorithm, for studying quantum quenches. I demonstrate the utility of QAQMC on the quantum Ising model and obtain high-precision results at the transition point, in particular showing generalized dynamic scaling in the quantum system. To further extend the methods, I study more complex systems such as spin glasses and random graphs. The techniques allow us to investigate these problems efficiently. From the classical perspective, using the NEQ approach I verify the universality class of 3D Ising spin glasses. I also investigate random 3-regular graphs in terms of both classical and quantum phase transitions. I demonstrate that under this simulation scheme one can extract information associated with the classical and quantum spin-glass transitions without any knowledge prior to the simulation.

  19. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, one can optimize the objective function of DLPP in a reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a large computational burden; 2) the lack of explicit mapping functions, which entails further computational cost when projecting a new sample into the low-dimensional subspace; and 3) the inability to obtain the optimal discriminant vectors that best optimize the objective of DLPP. To overcome these weaknesses, in this paper a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in a high-dimensional second-order Hammerstein polynomial space without matrix inversion, extracting the optimal discriminant vectors for DLPP without a large computational burden. Compared with some other related classical methods, experimental results on face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  20. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm.

    PubMed

    Ghafouri, H R; Mosharaf-Dehkordi, M; Afzalan, B

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Like countries in the classical ICA, these provinces are optimized through the assimilation, competition, and revolution steps of the ICA. To increase the diversity of populations, a new approach, named the knock-the-base method, is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as grid size, rock heterogeneity, and the designated monitoring networks. The obtained numerical results indicate that this simulation-optimization model provides accurate results in fewer iterations than the model employing the classical one-level ICA. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Optimal control of a coupled partial and ordinary differential equations system for the assimilation of polarimetry Stokes vector measurements in tokamak free-boundary equilibrium reconstruction with application to ITER

    NASA Astrophysics Data System (ADS)

    Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric

    2017-08-01

    The modeling of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios, where high-current, high-electron-density plasma regimes are expected. In this work a method is provided that enables the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.

  2. A chaos wolf optimization algorithm with self-adaptive variable step-size

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Jiang, Wanlu; Kong, Xiangdong; Quan, Lingxiao; Zhang, Yongshun

    2017-10-01

    To explore the problem of parameter optimization for complex nonlinear functions, a chaos wolf optimization algorithm (CWOA) with self-adaptive variable step size is proposed. The algorithm is based on the swarm intelligence of the wolf pack and simulates the predation behavior and prey distribution of wolves. It possesses three intelligent behaviors: migration, summons, and siege. A "winner-take-all" competition rule and a "survival of the fittest" update mechanism are further characteristics of the algorithm. Moreover, it combines the strategies of self-adaptive variable step-size search and chaos optimization. The CWOA is applied to the parameter optimization of twelve typical complex nonlinear functions, and the obtained results are compared with those of many existing algorithms, including the classical genetic algorithm, the particle swarm optimization algorithm, and the leader wolf pack search algorithm. The results indicate that the CWOA possesses superior optimization ability, with advantages in optimization accuracy and convergence rate; furthermore, it demonstrates high robustness and global search ability.
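
    The sketch below is not the published CWOA; it only illustrates, under stated assumptions, two of the ingredients named above: logistic-map chaos for generating candidates, and a simple geometric shrink standing in for the self-adaptive step-size rule, applied to the Rastrigin test function. The pack behaviors (migration, summons, siege) are omitted.

```python
import random, math

def rastrigin(x):
    # Standard multimodal test function; global minimum 0 at the origin.
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def chaos_wolf_search(dim=2, lo=-5.12, hi=5.12, iters=2000):
    z = [random.random() for _ in range(dim)]   # chaotic state in (0, 1)
    best, fbest = None, float("inf")
    step = (hi - lo) / 2
    for _ in range(iters):
        z = [4 * zi * (1 - zi) for zi in z]     # logistic map, r = 4 (chaos)
        if best is None:
            cand = [lo + (hi - lo) * zi for zi in z]
        else:                                   # perturb around the lead wolf
            cand = [min(hi, max(lo, bi + step * (2 * zi - 1)))
                    for bi, zi in zip(best, z)]
        f = rastrigin(cand)
        if f < fbest:
            best, fbest = cand, f
        step *= 0.999          # fixed shrink in place of the adaptive rule
    return best, fbest

print(chaos_wolf_search())
```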

  3. Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han Yuecai; Hu Yaozhong; Song Jian, E-mail: jsong2@math.rutgers.edu

    2013-04-15

    We obtain a maximum principle for the stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (a necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both the fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to the classical system driven by standard Brownian motions while the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.

  4. A Fast and Scalable Method for A-Optimal Design of Experiments for Infinite-dimensional Bayesian Nonlinear Inverse Problems with Application to Porous Medium Flow

    NASA Astrophysics Data System (ADS)

    Petra, N.; Alexanderian, A.; Stadler, G.; Ghattas, O.

    2015-12-01

    We address the problem of optimal experimental design (OED) for Bayesian nonlinear inverse problems governed by partial differential equations (PDEs). The inverse problem seeks to infer a parameter field (e.g., the log permeability field in a porous medium flow model problem) from synthetic observations at a set of sensor locations and from the governing PDEs. The goal of the OED problem is to find an optimal placement of sensors so as to minimize the uncertainty in the inferred parameter field. We formulate the OED objective function by generalizing the classical A-optimal experimental design criterion using the expected value of the trace of the posterior covariance. This expected value is computed through sample averaging over the set of likely experimental data. Due to the infinite-dimensional character of the parameter field, we seek an optimization method that solves the OED problem at a cost (measured in the number of forward PDE solves) that is independent of both the parameter and the sensor dimension. To facilitate this goal, we construct a Gaussian approximation to the posterior at the maximum a posteriori probability (MAP) point, and use the resulting covariance operator to define the OED objective function. We use randomized trace estimation to compute the trace of this covariance operator. The resulting OED problem includes as constraints the system of PDEs characterizing the MAP point, and the PDEs describing the action of the covariance (of the Gaussian approximation to the posterior) to vectors. We control the sparsity of the sensor configurations using sparsifying penalty functions, and solve the resulting penalized bilevel optimization problem via an interior-point quasi-Newton method, where gradient information is computed via adjoints. We elaborate our OED method for the problem of determining the optimal sensor configuration to best infer the log permeability field in a porous medium flow problem. Numerical results show that the number of PDE solves required for the evaluation of the OED objective function and its gradient is essentially independent of both the parameter dimension and the sensor dimension (i.e., the number of candidate sensor locations). The number of quasi-Newton iterations for computing an OED also exhibits the same dimension invariance properties.
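
    Randomized trace estimation, mentioned above, can be illustrated with Hutchinson's estimator, one standard choice: it needs only matrix-vector products with the operator, matching the PDE setting where applying the covariance means forward and adjoint solves rather than forming a matrix. The dense symmetric positive definite stand-in below is, of course, illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(rng.uniform(0.1, 2.0, n)) @ Q.T   # SPD stand-in for the covariance

def hutchinson_trace(matvec, n, n_samples=50):
    # E[z^T A z] = trace(A) for Rademacher probes z; average over samples.
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)   # random +/-1 probe vector
        est += z @ matvec(z)                  # one operator application
    return est / n_samples

print("estimated:", hutchinson_trace(lambda v: A @ v, n))
print("exact:    ", np.trace(A))
```

    The cost is one operator application per probe, independent of how the operator is represented, which is why the OED objective evaluation stays independent of the parameter dimension.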

  5. Stochastic correlative firing for figure-ground segregation.

    PubMed

    Chen, Zhe

    2005-03-01

    Segregation of sensory inputs into separate objects is a central aspect of perception and arises in all sensory modalities. The figure-ground segregation problem requires identifying an object of interest in a complex scene, in many cases given binaural auditory or binocular visual observations. The computations required for visual and auditory figure-ground segregation share many common features and can be cast within a unified framework. Sensory perception can be viewed as a problem of optimizing information transmission. Here we suggest a stochastic correlative firing mechanism and an associative learning rule for figure-ground segregation in several classic sensory perception tasks, including the cocktail party problem in binaural hearing, binocular fusion of stereo images, and Gestalt grouping in motion perception.

  6. Optimal Golomb Ruler Sequences Generation for Optical WDM Systems: A Novel Parallel Hybrid Multi-objective Bat Algorithm

    NASA Astrophysics Data System (ADS)

    Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena

    2017-02-01

    Real-life multi-objective engineering design problems are tough, time-consuming optimization problems, owing to their high degree of nonlinearity, complexity, and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving multi-objective engineering design problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, in reasonable computation time. OGRs find application in optical wavelength-division multiplexing (WDM) systems as a channel-allocation scheme to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with that of other existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO) and big bang-big crunch (BB-BC) optimization algorithms. Simulations show that the proposed parallel hybrid multi-objective Bat algorithm works efficiently compared with the original multi-objective Bat algorithm and the other existing algorithms in generating OGRs for optical WDM systems. PHMOBA has a higher convergence and success rate than the original MOBA. For rulers of up to 20 marks, the efficiency improvement of the proposed PHMOBA, in terms of ruler length and total optical channel bandwidth (TBW), is 100 %, compared with 85 % for the original MOBA. Finally, implications for further research are discussed.
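
    For readers unfamiliar with Golomb rulers, the defining property is easy to check in code: all pairwise differences between marks must be distinct. The sketch below verifies this on a classic 4-mark example; it is a definition check, not the proposed algorithm.

```python
from itertools import combinations

def is_golomb(marks):
    # A ruler is Golomb iff all pairwise mark differences are distinct.
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# A known optimal 4-mark ruler of length 6.
print(is_golomb([0, 1, 4, 6]))   # True
print(is_golomb([0, 1, 2, 5]))   # False: difference 1 occurs twice

# In the WDM application, marks act as channel positions: distinct pairwise
# spacings keep four-wave mixing products off the allocated channels.
```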

  7. Optimal Control of Hybrid Systems in Air Traffic Applications

    NASA Astrophysics Data System (ADS)

    Kamgarpour, Maryam

    Growing concerns over the scalability of air traffic operations, air transportation fuel emissions and prices, as well as the advent of communication and sensing technologies motivate improvements to the air traffic management system. To address such improvements, in this thesis a hybrid dynamical model as an abstraction of the air traffic system is considered. Wind and hazardous weather impacts are included using a stochastic model. This thesis focuses on the design of algorithms for verification and control of hybrid and stochastic dynamical systems and the application of these algorithms to air traffic management problems. In the deterministic setting, a numerically efficient algorithm for optimal control of hybrid systems is proposed based on extensions of classical optimal control techniques. This algorithm is applied to optimize the trajectory of an Airbus 320 aircraft in the presence of wind and storms. In the stochastic setting, the verification problem of reaching a target set while avoiding obstacles (reach-avoid) is formulated as a two-player game to account for external agents' influence on system dynamics. The solution approach is applied to air traffic conflict prediction in the presence of stochastic wind. Due to the uncertainty in forecasts of the hazardous weather, and hence the unsafe regions of airspace for aircraft flight, the reach-avoid framework is extended to account for stochastic target and safe sets. This methodology is used to maximize the probability of the safety of aircraft paths through hazardous weather. Finally, the problem of modeling and optimization of arrival air traffic and runway configuration in dense airspace subject to stochastic weather data is addressed. This problem is formulated as a hybrid optimal control problem and is solved with a hierarchical approach that decouples safety and performance. As illustrated with this problem, the large scale of air traffic operations motivates future work on the efficient implementation of the proposed algorithms.

  8. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, which has stood for over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely, it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
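
    A minimal sketch of the one-step LLA recipe on synthetic data may help: starting from a lasso estimate, the weights are the SCAD derivative evaluated at the initial coefficients, and one weighted lasso is then solved by cyclic coordinate descent. The data, lambda, and a = 3.7 are illustrative choices, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0) / np.sqrt(n)     # standardize columns
beta_true = np.array([2.0, -1.5] + [0.0] * 8)
y = X @ beta_true + 0.3 * rng.standard_normal(n)
lam, a = 0.2, 3.7

def scad_deriv(t):
    # SCAD penalty derivative: lam on [0, lam], linear decay to 0 at a*lam.
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0) / (a - 1))

def weighted_lasso(X, y, w, iters=200):
    # Cyclic coordinate descent for (1/2n)||y - Xb||^2 + sum_j w_j |b_j|.
    beta = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(0)
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r / n
            beta[j] = np.sign(z) * max(abs(z) - w[j], 0) / (col_sq[j] / n)
    return beta

beta0 = weighted_lasso(X, y, np.full(p, lam))    # initial (lasso) estimate
beta1 = weighted_lasso(X, y, scad_deriv(beta0))  # one-step LLA estimate
print(np.round(beta1, 2))
```

    Note how large initial coefficients receive zero weight (no shrinkage) while small ones keep the full penalty, which is the mechanism behind the oracle behavior.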

  9. The Subset Sum game.

    PubMed

    Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim

    2014-03-16

    In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem.
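
    The greedy strategy analyzed in the paper is easy to state in code; the sketch below (item weights and capacity invented) has each agent, in turn, insert its largest remaining item that still fits, stopping when neither agent can move.

```python
def play_greedy(items_a, items_b, capacity):
    # Both agents play greedily: largest remaining item that still fits.
    items = {"A": sorted(items_a, reverse=True),
             "B": sorted(items_b, reverse=True)}
    packed = {"A": 0, "B": 0}
    turn, stuck = "A", 0
    while stuck < 2:                 # stop once neither agent can move
        fitting = [w for w in items[turn] if w <= capacity]
        if fitting:
            w = fitting[0]           # greedy choice
            items[turn].remove(w)
            packed[turn] += w
            capacity -= w
            stuck = 0
        else:
            stuck += 1
        turn = "B" if turn == "A" else "A"
    return packed, capacity          # weights packed, capacity left over

print(play_greedy([7, 5, 3, 1], [6, 4, 2], capacity=15))
```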

  10. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-01-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap, which has stood for over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely, it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560

  11. The Subset Sum game☆

    PubMed Central

    Darmann, Andreas; Nicosia, Gaia; Pferschy, Ulrich; Schauer, Joachim

    2014-01-01

    In this work we address a game theoretic variant of the Subset Sum problem, in which two decision makers (agents/players) compete for the usage of a common resource represented by a knapsack capacity. Each agent owns a set of integer weighted items and wants to maximize the total weight of its own items included in the knapsack. The solution is built as follows: Each agent, in turn, selects one of its items (not previously selected) and includes it in the knapsack if there is enough capacity. The process ends when the remaining capacity is too small for including any item left. We look at the problem from a single agent point of view and show that finding an optimal sequence of items to select is an NP-hard problem. Therefore we propose two natural heuristic strategies and analyze their worst-case performance when (1) the opponent is able to play optimally and (2) the opponent adopts a greedy strategy. From a centralized perspective we observe that some known results on the approximation of the classical Subset Sum can be effectively adapted to the multi-agent version of the problem. PMID:25844012

  12. An Exploration Of Fuel Optimal Two-impulse Transfers To Cyclers in the Earth-Moon System

    NASA Astrophysics Data System (ADS)

    Hosseinisianaki, Saghar

    2011-12-01

    This research explores optimal two-impulse transfers between a low Earth orbit and cycler orbits in the Earth-Moon circular restricted three-body framework, with emphasis on the optimization strategy. Cyclers are periodic orbits that encounter both the Earth and the Moon periodically. A spacecraft on such a trajectory is under the influence of both the Earth's and the Moon's gravitational fields. Cyclers have gained recent interest as baseline orbits for several Earth-Moon mission concepts, notably in relation to human exploration. In this thesis I show that a direct optimization starting from the classic Lambert initial guess may not be adequate for these problems, and I propose a three-step optimization solver to improve the domain of convergence toward an optimal solution. The first step consists of finding feasible trajectories with a given transfer time. I employ Lambert's problem to provide an initial guess for optimizing the error in arrival position; this includes an analysis of the viability of Lambert's solution as an initial guess. Once a feasible trajectory is found, the velocity impulse is a function only of the transfer time and the departure and arrival points' phases. The second step consists of the optimization of the impulse over the transfer time, which yields the minimum-impulse transfer for fixed end points. Finally, the third step is mapping the optimal solutions as the end points are varied.

  14. Trajectory Optimization for Missions to Small Bodies with a Focus on Scientific Merit.

    PubMed

    Englander, Jacob A; Vavrina, Matthew A; Lim, Lucy F; McFadden, Lucy A; Rhoden, Alyssa R; Noll, Keith S

    2017-01-01

    Trajectory design for missions to small bodies is tightly coupled both with the selection of targets for a mission and with the choice of spacecraft power, propulsion, and other hardware. Traditional methods of trajectory optimization have focused on finding the optimal trajectory for an a priori selection of destinations and spacecraft parameters. Recent research has expanded the field of trajectory optimization to multidisciplinary systems optimization that includes spacecraft parameters. The logical next step is to extend the optimization process to include target selection based not only on engineering figures of merit but also scientific value. This paper presents a new technique to solve the multidisciplinary mission optimization problem for small-bodies missions, including classical trajectory design, the choice of spacecraft power and propulsion systems, and also the scientific value of the targets. This technique, when combined with modern parallel computers, enables a holistic view of the small body mission design process that previously required iteration among several different design processes.

  15. Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle

    DOE PAGES

    Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza; ...

    2017-05-18

    We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.

  16. Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza

    We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.

  17. Microseed matrix screening for optimization in protein crystallization: what have we learned?

    PubMed

    D'Arcy, Allan; Bergfors, Terese; Cowan-Jacob, Sandra W; Marsh, May

    2014-09-01

    Protein crystals obtained in initial screens typically require optimization before they are of X-ray diffraction quality. Seeding is one such optimization method. In classical seeding experiments, the seed crystals are put into new, albeit similar, conditions. The past decade has seen the emergence of an alternative seeding strategy: microseed matrix screening (MMS). In this strategy, the seed crystals are transferred into conditions unrelated to the seed source. Examples of MMS applications from in-house projects and the literature include the generation of multiple crystal forms and different space groups, better diffracting crystals and crystallization of previously uncrystallizable targets. MMS can be implemented robotically, making it a viable option for drug-discovery programs. In conclusion, MMS is a simple, time- and cost-efficient optimization method that is applicable to many recalcitrant crystallization problems.

  18. Microseed matrix screening for optimization in protein crystallization: what have we learned?

    PubMed Central

    D’Arcy, Allan; Bergfors, Terese; Cowan-Jacob, Sandra W.; Marsh, May

    2014-01-01

    Protein crystals obtained in initial screens typically require optimization before they are of X-ray diffraction quality. Seeding is one such optimization method. In classical seeding experiments, the seed crystals are put into new, albeit similar, conditions. The past decade has seen the emergence of an alternative seeding strategy: microseed matrix screening (MMS). In this strategy, the seed crystals are transferred into conditions unrelated to the seed source. Examples of MMS applications from in-house projects and the literature include the generation of multiple crystal forms and different space groups, better diffracting crystals and crystallization of previously uncrystallizable targets. MMS can be implemented robotically, making it a viable option for drug-discovery programs. In conclusion, MMS is a simple, time- and cost-efficient optimization method that is applicable to many recalcitrant crystallization problems. PMID:25195878

  19. Optimal resource states for local state discrimination

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Somshubhro; Halder, Saronath; Nathanson, Michael

    2018-02-01

    We study the problem of locally distinguishing pure quantum states using shared entanglement as a resource. For a given set of locally indistinguishable states, we define a resource state to be useful if it can enhance local distinguishability and optimal if it can distinguish the states as well as global measurements and is also minimal with respect to a partial ordering defined by entanglement and dimension. We present examples of useful resources and show that an entangled state need not be useful for distinguishing a given set of states. We obtain optimal resources with explicit local protocols to distinguish multipartite Greenberger-Horne-Zeilinger and graph states and also show that a maximally entangled state is an optimal resource under one-way local operations and classical communication to distinguish any bipartite orthonormal basis which contains at least one entangled state of full Schmidt rank.

  20. Separation-Compliant, Optimal Routing and Control of Scheduled Arrivals in a Terminal Airspace

    NASA Technical Reports Server (NTRS)

    Sadovsky, Alexander V.; Davis, Damek; Isaacson, Douglas R.

    2013-01-01

    We address the problem of navigating a set (fleet) of aircraft in an aerial route network so as to bring each aircraft to its destination at a specified time, with minimal distance separation assured between all aircraft at all times. The speed range, initial position, required destination, and required time of arrival at the destination for each aircraft are assumed given. Each aircraft's movement is governed by a controlled differential equation (state equation). The problem consists in choosing for each aircraft a path in the route network and a control strategy so as to meet the constraints and reach the destination at the required time. The main contribution of the paper is a model that allows this problem to be recast as a decoupled collection of problems in classical optimal control and is easily generalized to the case when inertia cannot be neglected. Some qualitative insight into solution behavior is obtained using the Pontryagin Maximum Principle. Sample numerical solutions are computed using a numerical optimal control solver. The proposed model is a first step toward increasing the fidelity of continuous-time control models of air traffic in a terminal airspace. The Pontryagin Maximum Principle implies the polygonal shape of those portions of the state trajectories away from states in which one or more aircraft pairs are at minimal separation. The model also confirms the intuition that the narrower the allowed speed ranges of the aircraft, the smaller the space of optimal solutions, and that an instance of the optimal control problem may not have a solution at all (i.e., no control strategy that meets the separation requirement and other constraints).

  1. Integrating NOE and RDC using sum-of-squares relaxation for protein structure determination.

    PubMed

    Khoo, Y; Singer, A; Cowburn, D

    2017-07-01

    We revisit the problem of protein structure determination from geometrical restraints from NMR, using convex optimization. It is well-known that the NP-hard distance geometry problem of determining atomic positions from pairwise distance restraints can be relaxed into a convex semidefinite program (SDP). However, often the NOE distance restraints are too imprecise and sparse for accurate structure determination. Residual dipolar coupling (RDC) measurements provide additional geometric information on the angles between atom-pair directions and axes of the principal-axis-frame. The optimization problem involving RDC is highly non-convex and requires a good initialization even within the simulated annealing framework. In this paper, we model the protein backbone as an articulated structure composed of rigid units. Determining the rotation of each rigid unit gives the full protein structure. We propose solving the non-convex optimization problems using the sum-of-squares (SOS) hierarchy, a hierarchy of convex relaxations with increasing complexity and approximation power. Unlike classical global optimization approaches, SOS optimization returns a certificate of optimality if the global optimum is found. Based on the SOS method, we propose two algorithms, RDC-SOS and RDC-NOE-SOS, which have polynomial time complexity in the number of amino-acid residues and run efficiently on a standard desktop. In many instances, the proposed methods exactly recover the solution to the original non-convex optimization problem. To the best of our knowledge, this is the first time the SOS relaxation has been introduced to solve non-convex optimization problems in structural biology. We further introduce a statistical tool, the Cramér-Rao bound (CRB), to provide an information theoretic bound on the highest resolution one can hope to achieve when determining protein structure from noisy measurements using any unbiased estimator. Our simulation results show that when the RDC measurements are corrupted by Gaussian noise of realistic variance, both SOS-based algorithms attain the CRB. We successfully apply our method in a divide-and-conquer fashion to determine the structure of ubiquitin from experimental NOE and RDC measurements obtained in two alignment media, achieving more accurate and faster reconstructions compared to the current state of the art.
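
    The convex SDP relaxation of distance geometry mentioned above (the NOE-only building block, not the SOS hierarchy itself) can be sketched as follows: recover a Gram matrix from squared pairwise distances and read coordinates off its top eigenvectors. This assumes the cvxpy package; the four-point toy data are exact, whereas real NOE restraints are noisy bounds.

```python
import numpy as np
import cvxpy as cp
from itertools import combinations

# Toy distance-geometry SDP: four "atoms" on a unit square.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
n = len(pts)

G = cp.Variable((n, n), PSD=True)          # centered Gram matrix, G >= 0
constraints = [cp.sum(G) == 0]             # fix the translation ambiguity
for i, j in combinations(range(n), 2):
    d2 = float(np.sum((pts[i] - pts[j]) ** 2))
    # ||x_i - x_j||^2 = G_ii + G_jj - 2 G_ij
    constraints.append(G[i, i] + G[j, j] - 2 * G[i, j] == d2)
cp.Problem(cp.Minimize(cp.trace(G)), constraints).solve()

w, V = np.linalg.eigh(G.value)             # eigenvalues in ascending order
X = V[:, -2:] * np.sqrt(np.maximum(w[-2:], 0.0))
print(np.round(X, 3))                      # the square, up to a rigid motion
```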

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Arno, Michele; ICFO-Institut de Ciencies Fotoniques, E-08860 Castelldefels; Quit Group, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia

    We address the problem of quantum reading of optical memories, namely the retrieval of classical information stored in the optical properties of a medium with minimum energy. We present optimal strategies for ambiguous and unambiguous quantum reading of unitary optical memories, namely when the task is to minimize the probability of errors in the retrieved information and when perfect retrieval of information is achieved probabilistically, respectively. A comparison of the optimal strategy with coherent probes and homodyne detection shows that the former saves orders of magnitude of energy when achieving the same performance. Experimental proposals for quantum reading which are feasible with present quantum optical technology are reported.

  3. Practical advantages of evolutionary computation

    NASA Astrophysics Data System (ADS)

    Fogel, David B.

    1997-10-01

    Evolutionary computation is becoming a common technique for solving difficult, real-world problems in industry, medicine, and defense. This paper reviews some of the practical advantages to using evolutionary algorithms as compared with classic methods of optimization or artificial intelligence. Specific advantages include the flexibility of the procedures, as well as their ability to self-adapt the search for optimum solutions on the fly. As desktop computers increase in speed, the application of evolutionary algorithms will become routine.

  4. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques and struggle to achieve good time performance, especially for large datasets; thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method that solves a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter, and the objective programming problem can then be converted to an SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and well solvable. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure from different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
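
    The kernel-target alignment criterion itself is simple to compute; the sketch below evaluates it for a Gaussian kernel across a grid of widths on synthetic two-class data. The grid scan is for illustration only; the paper instead optimizes the criterion globally via the d.c. reformulation.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

def gaussian_kernel(X, sigma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def alignment(K, y):
    # Kernel-target alignment: <K, yy^T>_F / (||K||_F * ||yy^T||_F).
    Y = np.outer(y, y)                       # ideal target kernel
    return (K * Y).sum() / (np.linalg.norm(K, "fro") * np.linalg.norm(Y, "fro"))

for sigma in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(sigma, round(alignment(gaussian_kernel(X, sigma), y), 3))
```

    Higher alignment means the kernel-induced similarity agrees more closely with the class labels, which is the quantity the SSGO formulation maximizes over the kernel parameter.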

  5. Solving a Higgs optimization problem with quantum annealing for machine learning.

    PubMed

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-18

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  6. Solving a Higgs optimization problem with quantum annealing for machine learning

    NASA Astrophysics Data System (ADS)

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-01

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  7. Sequential decision making in computational sustainability via adaptive submodularity

    USGS Publications Warehouse

    Krause, Andreas; Golovin, Daniel; Converse, Sarah J.

    2015-01-01

    Many problems in computational sustainability require making a sequence of decisions in complex, uncertain environments. Such problems are notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing-returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: first, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios; second, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the US Geological Survey and the US Fish and Wildlife Service.
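
    For the non-adaptive special case, the near-optimal myopic policy is the classic greedy rule; the sketch below applies it to an invented sensor-coverage function, where submodularity gives the familiar (1 - 1/e) approximation guarantee.

```python
# Invented sensor sites and the species/locations each one covers.
sites = {
    "s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6, 7}, "s4": {1, 7},
}

def coverage(selected):
    # Coverage is monotone and submodular (diminishing returns).
    return len(set().union(*(sites[s] for s in selected)) if selected else set())

def greedy(budget):
    chosen = []
    for _ in range(budget):
        gains = {s: coverage(chosen + [s]) - coverage(chosen)
                 for s in sites if s not in chosen}
        best = max(gains, key=gains.get)     # myopic: largest marginal gain
        if gains[best] == 0:
            break
        chosen.append(best)
    return chosen, coverage(chosen)

print(greedy(2))   # e.g. two sensors chosen by the greedy rule
```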

  8. Fair Inference on Outcomes

    PubMed Central

    Nabi, Razieh; Shpitser, Ilya

    2017-01-01

    In this paper, we consider the problem of fair statistical inference involving outcome variables. Examples include classification and regression problems, and estimating treatment effects in randomized trials or observational data. The issue of fairness arises in such problems where some covariates or treatments are “sensitive,” in the sense of having potential of creating discrimination. In this paper, we argue that the presence of discrimination can be formalized in a sensible way as the presence of an effect of a sensitive covariate on the outcome along certain causal pathways, a view which generalizes (Pearl 2009). A fair outcome model can then be learned by solving a constrained optimization problem. We discuss a number of complications that arise in classical statistical inference due to this view and provide workarounds based on recent work in causal and semi-parametric inference.

  9. Application of quasi-distributions for solving inverse problems of neutron and γ-ray transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogosbekyan, L.R.; Lysov, D.A.

    The considered inverse problems deal with the calculation of unknown values of nuclear installations by means of known (goal) functionals of neutron/γ-ray distributions. Examples of these problems include the calculation of the automatic control-rod position as a function of neutron sensor readings, or the calculation of experimentally corrected values of cross-sections, isotope concentrations, and fuel enrichment via the measured functionals. The authors have developed a new method to solve the inverse problem. It finds the flux density as a quasi-solution of the particle-conservation linear system adjoined to the equalities for the functionals. The method is more effective than one based on classical perturbation theory. It is suitable for vectorization and can be used successfully in optimization codes.

  10. Scenario generation for stochastic optimization problems via the sparse grid method

    DOE PAGES

    Chen, Michael; Mehrotra, Sanjay; Papp, David

    2015-04-19

    We study the use of sparse grids in the scenario generation (or discretization) problem in stochastic programming problems where the uncertainty is modeled using a continuous multivariate distribution. We show that, under a regularity assumption on the random function involved, the sequence of optimal objective function values of the sparse grid approximations converges to the true optimal objective function values as the number of scenarios increases. The rate of convergence is also established. We treat separately the special case when the underlying distribution is an affine transform of a product of univariate distributions, and show how the sparse grid method can be adapted to the distribution by the use of quadrature formulas tailored to the distribution. We numerically compare the performance of the sparse grid method using different quadrature rules with classic quasi-Monte Carlo (QMC) methods, optimal rank-one lattice rules, and Monte Carlo (MC) scenario generation, using a series of utility maximization problems with up to 160 random variables. The results show that the sparse grid method is very efficient, especially if the integrand is sufficiently smooth. In such problems the sparse grid scenario generation method is found to need several orders of magnitude fewer scenarios than MC and QMC scenario generation to achieve the same accuracy. As a result, it is indicated that the method scales well with the dimension of the distribution, especially when the underlying distribution is an affine transform of a product of univariate distributions, in which case the method appears scalable to thousands of random variables.

  11. Least-squares dual characterization for ROI assessment in emission tomography

    NASA Astrophysics Data System (ADS)

    Ben Bouallègue, F.; Crouzet, J. F.; Dubois, A.; Buvat, I.; Mariano-Goulart, D.

    2013-06-01

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data, without any image reconstruction. The method requires the definition of an ROI characteristic function, which can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex reduces the estimation bias by up to 14%, resulting in a reduction of the RMS error of up to 8.5% compared with the optimal classical estimation. For the large non-specific region, LSD with appropriate smoothing handles the resolution-variance tradeoff intuitively and efficiently.

  12. Hidden Statistics Approach to Quantum Simulations

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2010-01-01

    Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations on these data. Three basic non-classical properties of quantum mechanics (superposition, entanglement, and direct-product decomposability) were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large masses of highly correlated data. Unfortunately, these advantages of quantum mechanics came at a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world causes the system to decohere. That is why the hardware implementation of a quantum computer remains unsolved. The basic idea of this work is to create a new kind of dynamical system that preserves the three main properties of quantum physics (superposition, entanglement, and direct-product decomposability) while allowing its state variables to be measured using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of the quantum potential (which has been overlooked in previous treatments of the Madelung equation). The role of the transitional potential is to provide a jump from a deterministic state to a random state with a prescribed probability density. This jump is triggered by blow-up instability due to violation of the Lipschitz condition generated by the quantum potential. As a result, the dynamics attains quantum properties on a classical scale. The model can be implemented physically as an analog VLSI-based (very-large-scale integration-based) computer, or numerically on a digital computer. This work opens a way of developing fundamentally new algorithms for quantum simulations of exponentially complex problems that expand NASA's capabilities in conducting space activities. It is illustrated that the complexity of simulations of particle interactions can be reduced from exponential to polynomial.

  13. A comparison of portfolio selection models via application on ISE 100 index data

    NASA Astrophysics Data System (ADS)

    Altun, Emrah; Tatlidil, Hüseyin

    2013-10-01

    The Markowitz model, a classical approach to the portfolio optimization problem, relies on two important assumptions: the expected returns are multivariate normally distributed and the investor is risk-averse. This model has, however, not been used extensively in finance; empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], removes most of the difficulties of the Markowitz Mean-Variance model: it does not assume normally distributed rates of return and is based on linear programming. Another alternative portfolio model is the Mean-Lower Semi-Absolute Deviation (M-LSAD) model proposed by Speranza [3]. We compare these models to determine which gives the more appropriate solutions to investors.
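
    For concreteness, here is a sketch of the MAD model as a linear program on synthetic return data (not ISE 100 data): auxiliary variables d_t linearize the absolute deviations, so the whole problem is solvable with an off-the-shelf LP solver. All parameter values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)

    # Synthetic return history: T periods for n assets (illustrative, not ISE 100).
    T, n = 60, 5
    R = 0.01 + 0.05 * rng.standard_normal((T, n))   # r_{t,j}
    mu = R.mean(axis=0)
    A = R - mu                                      # deviations r_{t,j} - mu_j
    rho = float(mu.mean())   # target return; the uniform portfolio achieves it,
                             # so the LP is guaranteed feasible

    # Variables z = (x_1..x_n, d_1..d_T); minimize (1/T) * sum_t d_t.
    c = np.concatenate([np.zeros(n), np.ones(T) / T])

    # d_t >= |sum_j A_{t,j} x_j|  becomes  +/- A x - d <= 0.
    A_ub = np.block([[A, -np.eye(T)],
                     [-A, -np.eye(T)]])
    b_ub = np.zeros(2 * T)
    # Expected-return constraint mu.x >= rho, written as -mu.x <= -rho.
    A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
    b_ub = np.append(b_ub, -rho)
    # Budget constraint: weights sum to one.
    A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]
    b_eq = [1.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + T))
    print("portfolio weights:", np.round(res.x[:n], 3))
    ```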

  14. Reconstruction of local perturbations in periodic surfaces

    NASA Astrophysics Data System (ADS)

    Lechleiter, Armin; Zhang, Ruming

    2018-03-01

    This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike purely periodic problems, the scattered field is no longer periodic, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which opens the possibility of designing an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization: locating the support of the perturbation by a simple method, which reduces the inverse problem from an infinite domain to one periodic cell. The second step is to apply the Newton-CG method to solve the associated optimization problem, with the perturbation approximated in a finite spline basis. Numerical examples are given at the end of this paper, showing the efficiency of the numerical method.

  15. Improved teaching-learning-based and JAYA optimization algorithms for solving flexible flow shop scheduling problems

    NASA Astrophysics Data System (ADS)

    Buddala, Raviteja; Mahapatra, Siba Sankar

    2017-11-01

    A flexible flow shop (or hybrid flow shop) scheduling problem is an extension of the classical flow shop scheduling problem. In a simple flow shop configuration, a job having g operations is performed in g operation centres (stages), with each stage having only one machine. If any stage contains more than one machine providing alternate processing facilities, the problem becomes a flexible flow shop problem (FFSP). The FFSP, which contains all the complexities of simple flow shop and parallel machine scheduling problems, is a well-known NP-hard (non-deterministic polynomial time) problem. Owing to the high computational complexity involved in solving these problems, it is not always possible to obtain an optimal solution in a reasonable computation time. To obtain near-optimal solutions in a reasonable computation time, a large variety of meta-heuristics have been proposed in the past. However, tuning algorithm-specific parameters for solving the FFSP is tricky and time consuming. To address this limitation, teaching-learning-based optimization (TLBO) and the JAYA algorithm are chosen for this study, because they are not only recent meta-heuristics but also require no tuning of algorithm-specific parameters. Although these algorithms are elegant, they lose solution diversity after a few iterations and get trapped in local optima. To alleviate this drawback, a new local search procedure is proposed in this paper to improve the solution quality. Further, a mutation strategy (inspired by genetic algorithms) is incorporated in the basic algorithms to maintain solution diversity in the population. Computational experiments have been conducted on standard benchmark problems to calculate makespan and computational time. It is found that the rate of convergence of TLBO is superior to that of JAYA. The results show that TLBO and JAYA outperform many algorithms reported in the literature and can be treated as efficient methods for solving the FFSP.
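
    A minimal sketch of the basic (unmodified) JAYA update rule on a stand-in continuous objective, illustrating its parameter-free character; the paper applies it to makespan minimization and adds local search and mutation, which are not shown here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):
        # Stand-in continuous objective; the paper minimizes makespan instead.
        return float(np.sum(x ** 2))

    # Basic JAYA: move toward the best solution and away from the worst, with
    # no algorithm-specific parameters to tune (its main selling point).
    pop, dim, iters = 20, 5, 200
    X = rng.uniform(-5, 5, (pop, dim))
    f = np.array([sphere(x) for x in X])

    for _ in range(iters):
        best, worst = X[f.argmin()], X[f.argmax()]
        r1, r2 = rng.random((2, pop, dim))
        X_new = X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X))
        f_new = np.array([sphere(x) for x in X_new])
        improved = f_new < f                  # greedy replacement
        X[improved], f[improved] = X_new[improved], f_new[improved]

    print("best objective found:", f.min())
    ```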

  16. Coherifying quantum channels

    NASA Astrophysics Data System (ADS)

    Korzekwa, Kamil; Czachórski, Stanisław; Puchała, Zbigniew; Życzkowski, Karol

    2018-04-01

    Is it always possible to explain random stochastic transitions between states of a finite-dimensional system as arising from the deterministic quantum evolution of the system? If not, then what is the minimal amount of randomness required by quantum theory to explain a given stochastic process? Here, we address this problem by studying possible coherifications of a quantum channel Φ, i.e., we look for channels Φ_C that induce the same classical transitions T, but are ‘more coherent’. To quantify the coherence of a channel Φ we measure the coherence of the corresponding Jamiołkowski state J_Φ. We show that the classical transition matrix T can be coherified to reversible unitary dynamics if and only if T is unistochastic. Otherwise the Jamiołkowski state J_{Φ_C} of the optimally coherified channel is mixed, and the dynamics must necessarily be irreversible. To assess the extent to which an optimal process Φ_C is indeterministic, we find explicit bounds on the entropy and purity of J_{Φ_C}, and relate the latter to the unitarity of Φ_C. We also find optimal coherifications for several classes of channels, including all one-qubit channels. Finally, we provide a non-optimal coherification procedure that works for an arbitrary channel Φ and reduces its rank (the minimal number of required Kraus operators) from d² to d.

  17. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    NASA Astrophysics Data System (ADS)

    Tofighi, Elham; Mahdizadeh, Amin

    2016-09-01

    This paper addresses the problem of automatically tuning the weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical because of their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. Such empirical methods, however, may not yield optimal solutions, especially as the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining the weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which an upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller that drives the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller, and the comparison verifies the improved performance of the controller when the weights are computed with the PSO-based technique.

  18. History matching through dynamic decision-making

    PubMed Central

    Maschio, Célio; Santos, Antonio Alberto; Schiozer, Denis; Rocha, Anderson

    2017-01-01

    History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management, since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy. This work introduces a dynamic decision-making optimization framework for history matching problems in which new models are generated based on, and guided by, the dynamic analysis of the data of available solutions. The optimization framework follows a ‘learning-from-data’ approach and includes two optimizer components that use machine learning techniques, such as unsupervised learning and statistical analysis, to uncover patterns of input attributes that lead to good output responses. These patterns are used to support the decision-making process while generating new, and better, history-matched solutions. The proposed framework is applied to a benchmark model (UNISIM-I-H) based on the Namorado field in Brazil. Results show the potential of the dynamic decision-making optimization framework for improving the quality of history matching solutions using a substantially smaller number of simulations than a previous work on the same benchmark. PMID:28582413

  19. Absorbable energy monitoring scheme: new design protocol to test vehicle structural crashworthiness.

    PubMed

    Ofochebe, Sunday M; Enibe, Samuel O; Ozoegwu, Chigbogu G

    2016-05-01

    In vehicle crashworthiness design optimization, detailed system evaluations capable of producing reliable results are typically achieved through high-order numerical computational (HNC) models such as the dynamic finite element model, the mesh-free model, etc. However, the application of these models, especially during optimization studies, is challenged by their inherently high demand on computational resources, the conditional stability of the solution process, and the lack of knowledge of a viable parameter range for detailed optimization studies. The absorbable energy monitoring scheme (AEMS) presented in this paper suggests a new design protocol that attempts to overcome such problems in evaluating vehicle structures for crashworthiness. The implementation of the AEMS involves studying the crash performance of vehicle components at various absorbable energy ratios based on a 2DOF lumped-mass-spring (LMS) vehicle impact model. This allows for prompt prediction of useful parameter values in a given design problem. The application of the classical one-dimensional LMS model in vehicle crash analysis is further improved in the present work by developing a critical load matching criterion, which allows for quantitative interpretation of the results of the abstract model in a typical vehicle crash design. The adequacy of the proposed AEMS for preliminary vehicle crashworthiness design is demonstrated in this paper; however, its extension to a full-scale design-optimization problem involving a full vehicle model with greater structural detail requires more theoretical development.

  20. Modified Discrete Grey Wolf Optimizer Algorithm for Multilevel Image Thresholding

    PubMed Central

    Sun, Lijuan; Guo, Jian; Xu, Bin; Li, Shujing

    2017-01-01

    The computation of image segmentation has become more complicated with the increasing number of thresholds, and the selection and application of thresholds in multilevel image thresholding has itself become an NP-hard problem. This paper puts forward the modified discrete grey wolf optimizer algorithm (MDGWO), which improves the search agents' optimal-solution updating mechanism by means of weights. Taking Kapur's entropy as the optimized function and exploiting the discreteness of thresholds in image segmentation, the paper first discretizes the grey wolf optimizer (GWO) and then proposes a new attack strategy that uses a weight coefficient to replace the search formula for the optimal solution used in the original algorithm. The experimental results show that MDGWO can find the optimal thresholds efficiently and precisely; they are very close to the results of exhaustive search. In comparison with electromagnetism optimization (EMO), differential evolution (DE), the Artificial Bee Colony (ABC), and the classical GWO, it is concluded that MDGWO has advantages over the latter four in terms of image segmentation quality, objective function values, and their stability. PMID:28127305
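
    To make the objective concrete, the sketch below implements Kapur's entropy for two thresholds and maximizes it by brute force on a synthetic bimodal histogram; MDGWO's role is to replace this exhaustive search with grey-wolf search agents when the number of thresholds makes brute force infeasible.

    ```python
    import numpy as np

    def kapur_entropy(hist, thresholds):
        """Sum of class entropies for a normalized histogram split at `thresholds`."""
        edges = [0, *sorted(thresholds), len(hist)]
        total = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            p = hist[lo:hi]
            w = p.sum()
            if w > 0:
                q = p[p > 0] / w
                total -= np.sum(q * np.log(q))   # class entropy, -sum q ln q
        return total

    # Synthetic bimodal grey-level histogram standing in for a real image.
    rng = np.random.default_rng(0)
    img = np.concatenate([rng.normal(60, 10, 4000), rng.normal(180, 15, 6000)])
    hist = np.histogram(np.clip(img, 0, 255), bins=256, range=(0, 256))[0] / 10000

    # Brute-force search over two thresholds; a metaheuristic such as MDGWO
    # replaces this loop when more thresholds make it infeasible.
    best = max(((t1, t2) for t1 in range(1, 255) for t2 in range(t1 + 1, 256)),
               key=lambda ts: kapur_entropy(hist, ts))
    print("best thresholds:", best)
    ```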

  1. Exact solutions for species tree inference from discordant gene trees.

    PubMed

    Chang, Wen-Chieh; Górecki, Paweł; Eulenstein, Oliver

    2013-10-01

    Phylogenetic analysis has to overcome the grand challenge of inferring accurate species trees from evolutionary histories of gene families (gene trees) that are discordant with the species tree along whose branches they have evolved. Two well-studied approaches to cope with this challenge are to solve either biologically informed gene tree parsimony (GTP) problems under gene duplication, gene loss, and deep coalescence, or the classic RF supertree problem, which does not rely on any biological model. Despite the potential of these problems to infer credible species trees, they are NP-hard. Therefore, they are addressed by heuristics that typically lack any provable accuracy and precision. We describe fast dynamic programming algorithms that solve the GTP problems and the RF supertree problem exactly, and demonstrate that our algorithms can solve instances with data sets consisting of as many as 22 taxa. Extensions of our algorithms can also report the number of all optimal species trees, as well as the trees themselves. To better assess the quality of the resulting species trees that best fit the given gene trees, we also compute the worst-case species trees, their numbers, and the optimization score for each of the computational problems. Finally, we demonstrate the performance of our exact algorithms using empirical and simulated data sets, and analyze the quality of heuristic solutions for the studied problems by contrasting them with our exact solutions.

  2. Ant system: optimization by a colony of cooperating agents.

    PubMed

    Dorigo, M; Maniezzo, V; Colorni, A

    1996-01-01

    An analogy with the way ant colonies function has suggested the definition of a new computational paradigm, which we call the ant system (AS). We propose it as a viable new approach to stochastic combinatorial optimization. The main characteristics of this model are positive feedback, distributed computation, and the use of a constructive greedy heuristic. Positive feedback accounts for rapid discovery of good solutions, distributed computation avoids premature convergence, and the greedy heuristic helps find acceptable solutions in the early stages of the search process. We apply the proposed methodology to the classical traveling salesman problem (TSP) and report simulation results. We also discuss parameter selection and the early setups of the model, and compare it with tabu search and simulated annealing on the TSP. To demonstrate the robustness of the approach, we show how the ant system can be applied to other optimization problems such as the asymmetric traveling salesman problem, the quadratic assignment problem, and job-shop scheduling. Finally, we discuss the salient characteristics of the AS: global data-structure revision, distributed communication, and probabilistic transitions.
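
    A minimal ant system sketch for a small random TSP, showing the two ingredients named above: probabilistic transitions weighted by pheromone and visibility (inverse distance), and a global pheromone update with evaporation. Parameter values are illustrative, not the paper's tuned settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n, ants, iters = 10, 10, 100
    alpha, beta, evap, Q = 1.0, 2.0, 0.5, 1.0
    pts = rng.random((n, 2))
    dist = np.hypot(*(pts[:, None] - pts[None]).transpose(2, 0, 1)) + np.eye(n)
    tau = np.ones((n, n))                      # pheromone levels

    def tour_length(tour):
        return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

    best_tour, best_len = None, np.inf
    for _ in range(iters):
        delta = np.zeros((n, n))
        for _ in range(ants):
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                unvisited = [j for j in range(n) if j not in tour]
                # Transition weights: pheromone^alpha * visibility^beta.
                w = np.array([tau[i, j] ** alpha * (1 / dist[i, j]) ** beta
                              for j in unvisited])
                tour.append(unvisited[rng.choice(len(unvisited), p=w / w.sum())])
            L = tour_length(tour)
            if L < best_len:
                best_tour, best_len = tour, L
            for i in range(n):                 # deposit Q/L on used edges
                a, b = tour[i], tour[(i + 1) % n]
                delta[a, b] += Q / L
                delta[b, a] += Q / L
        tau = (1 - evap) * tau + delta         # evaporation + deposit
    print("best tour length:", round(best_len, 3))
    ```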

  3. Recourse-based facility-location problems in hybrid uncertain environment.

    PubMed

    Wang, Shuming; Watada, Junzo; Pedrycz, Witold

    2010-08-01

    The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and the costs are assumed to be fuzzy-random variables. The bounds of the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving infinitely many second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the techniques of classical mathematical programming. In order to solve location problems of this nature, we first develop a fuzzy-random simulation technique to compute the recourse function. The convergence of such simulations is discussed. In the sequel, we propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which comprises the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than approaches using other discrete metaheuristic algorithms, such as binary particle-swarm optimization, the genetic algorithm, and tabu search.

  4. Quantum systems as embarrassed colleagues: what do tax evasion and state tomography have in common?

    NASA Astrophysics Data System (ADS)

    Ferrie, Chris; Blume-Kohout, Robin

    2011-03-01

    Quantum state estimation (a.k.a. "tomography") plays a key role in designing quantum information processors. As a problem, it resembles probability estimation, e.g. for classical coins or dice, but with some subtle and important discrepancies. We demonstrate an improved classical analogue that captures many of these differences: the "noisy coin." Observations on noisy coins are unreliable, much like soliciting sensitive information such as one's tax-preparation habits. So, like a quantum system, a noisy coin cannot be sampled directly. Unlike standard coins or dice, whose worst-case estimation risk scales as 1/N for all states, noisy coins (and quantum states) have a worst-case risk that scales as 1/√N and is overwhelmingly dominated by nearly-pure states. The resulting optimal estimation strategies for noisy coins are surprising and counterintuitive. We demonstrate some important consequences for quantum state estimation, in particular that adaptive tomography can recover the 1/N risk scaling of classical probability estimation.

  5. Communication theory of quantum systems. Ph.D. Thesis, 1970

    NASA Technical Reports Server (NTRS)

    Yuen, H. P. H.

    1971-01-01

    Communication theory problems incorporating quantum effects for optical-frequency applications are discussed. Under suitable conditions, a unique quantum channel model corresponding to a given classical space-time varying linear random channel is established. A procedure is described by which a proper density-operator representation applicable to any receiver configuration can be constructed directly from the channel output field. Some examples illustrating the application of our methods to the development of optical quantum channel representations are given. Optimizations of communication system performance under different criteria are considered. In particular, certain necessary and sufficient conditions on the optimal detector in M-ary quantum signal detection are derived. Some examples are presented. Parameter estimation and channel capacity are discussed briefly.

  6. Kalman filter based control for Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Petit, Cyril; Quiros-Pacheco, Fernando; Conan, Jean-Marc; Kulcsár, Caroline; Raynaud, Henri-François; Fusco, Thierry

    2004-12-01

    Classical Adaptive Optics suffers from a limited corrected Field Of View. This drawback has led to the development of Multi-Conjugate Adaptive Optics (MCAO). While the first MCAO experimental set-ups are presently under construction, little attention has been paid to the control loop. This is, however, a key element in the optimization process, especially for MCAO systems. Different approaches have been proposed in recent articles for astronomical applications: the simple integrator, the Optimized Modal Gain Integrator, and Kalman filtering. We study here Kalman filtering, which seems a very promising solution. Following the work of Brice Leroux, we focus on a frequential characterization of Kalman filters, computing a transfer matrix. The result brings much information about their behaviour and allows comparisons with classical controllers. It also appears that straightforward improvements of the system models can provide filtering of static aberrations and vibrations. Simulation results are proposed and analysed thanks to our frequential characterization. Related problems, such as model errors, aliasing-effect reduction, and the experimental implementation and testing of a Kalman filter control loop on a simplified MCAO experimental set-up, are then discussed.
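
    For readers unfamiliar with the recursion underlying this approach, here is a generic scalar Kalman filter predict/update loop; the state-space model is an illustrative stand-in, not the paper's adaptive-optics turbulence model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Scalar state-space model (illustrative stand-in for a turbulent mode):
    # x_{k+1} = A x_k + w_k,  y_k = C x_k + v_k.
    A, C = 0.99, 1.0
    Q, R = 0.01, 0.25          # process / measurement noise variances

    x_true, x_hat, P = 0.0, 0.0, 1.0
    for _ in range(200):
        x_true = A * x_true + rng.normal(0, Q ** 0.5)   # true mode evolves
        y = C * x_true + rng.normal(0, R ** 0.5)        # noisy sensor measurement
        # Predict
        x_hat, P = A * x_hat, A * P * A + Q
        # Update
        K = P * C / (C * P * C + R)                     # Kalman gain
        x_hat, P = x_hat + K * (y - C * x_hat), (1 - K * C) * P

    print(f"estimate {x_hat:+.3f} vs truth {x_true:+.3f}")
    ```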

  7. Applications of Derandomization Theory in Coding

    NASA Astrophysics Data System (ADS)

    Cheraghchi, Mahdi

    2011-07-01

    Randomized techniques play a fundamental role in theoretical computer science and discrete mathematics, in particular for the design of efficient algorithms and the construction of combinatorial objects. The basic goal in derandomization theory is to eliminate or reduce the need for randomness in such randomized constructions. In this thesis, we explore some applications of the fundamental notions in derandomization theory to problems outside the core of theoretical computer science, in particular certain problems related to coding theory. First, we consider the wiretap channel problem, which involves a communication system in which an intruder can eavesdrop on a limited portion of the transmissions, and construct efficient and information-theoretically optimal communication protocols for this model. Then we consider the combinatorial group testing problem. In this classical problem, one aims to determine a set of defective items within a large population by asking a number of queries, where each query reveals whether a defective item is present within a specified group of items. We use randomness condensers to explicitly construct optimal, or nearly optimal, group testing schemes for a setting where the query outcomes can be highly unreliable, as well as for the threshold model, where a query returns positive if the number of defectives passes a certain threshold. Finally, we design ensembles of error-correcting codes that achieve the information-theoretic capacity of a large class of communication channels, and then use the obtained ensembles for the construction of explicit capacity-achieving codes. [This is a shortened version of the actual abstract in the thesis.]

  8. Distribution-dependent robust linear optimization with applications to inventory control

    PubMed Central

    Kang, Seong-Cheol; Brisimi, Theodora S.

    2014-01-01

    This paper tackles linear programming problems with data uncertainty and applies the methodology to an important inventory control problem. Each element of the constraint matrix is subject to uncertainty and is modeled as a random variable with a bounded support. The classical robust optimization approach to this problem yields a solution with guaranteed feasibility. As this approach tends to be too conservative when applications can tolerate a small chance of infeasibility, one would be interested in obtaining a less conservative solution with a certain probabilistic guarantee of feasibility. A robust formulation in the literature produces such a solution, but it does not use any distributional information on the uncertain data. In this work, we show that the use of distributional information leads to an equally robust solution (i.e., under the same probabilistic guarantee of feasibility) but with a better objective value. In particular, by exploiting distributional information, we establish stronger upper bounds on the constraint violation probability of a solution. These bounds enable us to “inject” less conservatism into the formulation, which in turn yields a more cost-effective solution (by 50% or more in some numerical instances). To illustrate the effectiveness of our methodology, we consider a discrete-time stochastic inventory control problem with certain quality-of-service constraints. Numerical tests demonstrate that the use of distributional information in the robust optimization of the inventory control problem results in 36%-54% cost savings, compared to the case where such information is not used. PMID:26347579

  9. Performance of Quantum Annealers on Hard Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Pokharel, Bibek; Venturelli, Davide; Rieffel, Eleanor

    Quantum annealers have been employed to attack a variety of optimization problems. We compared the performance of the current D-Wave 2X quantum annealer to that of the previous-generation D-Wave Two quantum annealer on scheduling-type planning problems. Further, we compared the effects of different anneal times, embeddings of the logical problem, and settings of the ferromagnetic coupling J_F across the logical vertex model on the performance of the D-Wave 2X quantum annealer. Our results show that, at the best settings, the scaling of expected anneal time to solution for the D-Wave 2X is better than that of the D-Wave Two, but still inferior to that of state-of-the-art classical solvers on these problems. We discuss the implications of our results for the design and programming of future quantum annealers. Supported by NASA Ames Research Center.

  10. Optimally Stopped Optimization

    NASA Astrophysics Data System (ADS)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low-tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one and two orders of magnitude faster than the HFS solver.

  11. Stochastic optimization of GeantV code by use of genetic algorithms

    DOE PAGES

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; ...

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. Here, the goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.

  14. Quantum metrology of spatial deformation using arrays of classical and quantum light emitters

    NASA Astrophysics Data System (ADS)

    Sidhu, Jasminder S.; Kok, Pieter

    2017-06-01

    We introduce spatial deformations to an array of light sources and study how the estimation precision of the interspacing distance d changes with the sources of light used. The quantum Fisher information (QFI) is used as the figure of merit in this work to quantify the amount of information we have on the estimation parameter. We derive the generator of translations Ĝ in d due to an arbitrary homogeneous deformation applied to the array. We show how the variance of the generator can be used to easily consider how different deformations and light sources affect the estimation precision. The single-parameter estimation problem is applied to the array, and we report on the optimal state that maximizes the QFI for d. Contrary to what may have been expected, classical states with higher average mode occupancies perform better in estimating d than single-photon emitters (SPEs). The optimal entangled state is constructed from the eigenvectors of the generator and is found to outperform all these states. We also find that multiple optimal estimators exist for the measurement of d. Our results find applications in evaluating stresses and strains, preventing fracture in materials highly sensitive to deformations, and selecting frequency-distinguished quantum sources from an array of reference sources.

  15. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
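
    A minimal sketch of the majorization step in the feasibility special case (f = 0): each squared distance dist(x, C_i)² is majorized by the squared distance to the current projection P_i(x_k), and minimizing the surrogate simply averages the projections. The two convex sets below are invented for illustration.

    ```python
    import numpy as np

    # Two convex sets that are easy to project onto individually (a halfspace
    # a.x <= b and a Euclidean ball), but whose intersection has no simple
    # closed-form projection: the setting the abstract describes.
    def proj_halfspace(x, a, b):
        return x - max(0.0, (a @ x - b) / (a @ a)) * a

    def proj_ball(x, center, r):
        d = np.linalg.norm(x - center)
        return x if d <= r else center + r * (x - center) / d

    a, b = np.array([1.0, 1.0]), 1.0
    center, r = np.array([2.0, 0.0]), 1.5

    # MM iteration: each squared distance is majorized at x_k by the distance
    # to the fixed projection point, so the surrogate minimizer is the average.
    x = np.array([5.0, 5.0])
    for _ in range(100):
        x = 0.5 * (proj_halfspace(x, a, b) + proj_ball(x, center, r))
    print("point near the intersection:", np.round(x, 4))
    ```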

  16. Solving the vehicle routing problem by a hybrid meta-heuristic algorithm

    NASA Astrophysics Data System (ADS)

    Yousefikhoshbakht, Majid; Khorram, Esmaile

    2012-08-01

    The vehicle routing problem (VRP) is one of the most important combinatorial optimization problems and has received much attention because of its real applications in industrial and service settings. The VRP involves routing a fleet of vehicles, each visiting a set of nodes, such that every node is visited exactly once by exactly one vehicle. The objective is to minimize the total distance traveled by all the vehicles. This paper presents a hybrid two-phase algorithm, the sweep algorithm (SW) + ant colony system (ACS), for the classical VRP. In the first phase, the VRP is solved by the SW; in the second phase, the ACS and 3-opt local search are used to improve the solutions. Extensive computational tests on standard instances from the literature confirm the effectiveness of the presented approach.
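
    A sketch of the first (sweep) phase on invented customer data: customers are ordered by polar angle around the depot and grouped greedily into capacity-feasible routes; the second phase (ACS plus 3-opt inside each route) is not shown.

    ```python
    import math
    import random

    random.seed(0)

    # Invented instance: depot at the center, customers as (x, y, demand).
    depot = (50.0, 50.0)
    customers = [(random.uniform(0, 100), random.uniform(0, 100),
                  random.randint(1, 10)) for _ in range(15)]
    capacity = 30

    # Sweep: order customers by polar angle around the depot, then open a new
    # route whenever the vehicle capacity would be exceeded.
    by_angle = sorted(customers,
                      key=lambda c: math.atan2(c[1] - depot[1], c[0] - depot[0]))
    routes, route, load = [], [], 0
    for c in by_angle:
        if load + c[2] > capacity:        # vehicle full: close route, start another
            routes.append(route)
            route, load = [], 0
        route.append(c)
        load += c[2]
    routes.append(route)

    for i, r in enumerate(routes, 1):
        print(f"route {i}: demand {sum(c[2] for c in r)}, stops {len(r)}")
    ```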

  17. Perspective: Memcomputing: Leveraging memory and physics to compute efficiently

    NASA Astrophysics Data System (ADS)

    Di Ventra, Massimiliano; Traversa, Fabio L.

    2018-05-01

    It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs' equations of motion on classical computers have already demonstrated substantial advantages over traditional approaches. We conclude this article by outlining further directions of study.

  18. Reflections on Quantum Data Hiding

    NASA Astrophysics Data System (ADS)

    Winter, Andreas

    Quantum data hiding, originally invented as a limitation on local operations and classical communication (LOCC) in distinguishing globally orthogonal states, is actually a phenomenon arising generically in statistics whenever comparing a ‘strong’ set of measurements (i.e., decision rules) with a ‘weak’ one. The classical statistical analogue of this would be secret sharing, in which two perfectly distinguishable multi-partite hypotheses appear to be indistinguishable when accessing only a marginal. The quantum versions are richer in that, for example, LOCC allows for state tomography, so the states cannot become perfectly indistinguishable but only nearly so, and hence the question is one of efficiency. I will discuss two concrete examples and associated sets of problems: 1. Gaussian operations and classical computation (GOCC): Not very surprisingly, GOCC cannot distinguish optimally even two coherent states of a single mode. Here we find states, each a mixture of multi-mode coherent states, which are almost perfectly distinguishable by suitable measurements but which, when restricted to GOCC, i.e. linear optics and post-processing, appear almost identical. The construction is random and relies on coding arguments. Open questions include whether one can give a constructive version of the argument, whether for instance even thermal states can be used, and how efficient the hiding is. 2. Local operations and classical communication (LOCC): It is well known that in a bipartite d×d system, asymptotically log d bits can be hidden. Here we show for the first time, using the calculus of min-entropies, that this is asymptotically optimal. In fact, we get bounds on the data hiding capacity of any preparation system; these are, however, not always tight. While it is known that data hiding by separable states is possible (i.e. the state preparation can be done by LOCC), it is open whether the optimal information efficiency of (asymptotically) log d bits can be achieved by separable states.

  19. Element distinctness revisited

    NASA Astrophysics Data System (ADS)

    Portugal, Renato

    2018-07-01

    The element distinctness problem is the problem of determining whether the elements of a list are distinct; that is, if x = (x_1, …, x_N) is a list with N elements, we ask whether the elements of x are distinct or not. The solution on a classical computer requires N queries because it uses sorting to check whether there are equal elements. In the quantum case, it is possible to solve the problem in O(N^{2/3}) queries. There is an extension that asks whether there are k colliding elements, known as the element k-distinctness problem. This work obtains optimal values of two critical parameters of Ambainis' seminal quantum algorithm (SIAM J Comput 37(1):210-239, 2007). The first critical parameter is the number of repetitions of the algorithm's main block, which inverts the phase of the marked elements and calls a subroutine. The second parameter is the number of quantum walk steps interlaced by oracle queries. We show that, when the optimal values of the parameters are used, the algorithm's success probability is 1 - O(N^{-1/(k+1)}), quickly approaching 1. The specification of the exact running time and success probability is important in practical applications of this algorithm.
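
    The classical upper bound mentioned above is just sort-and-scan; a minimal version, noting that in the query model every element must still be read:

    ```python
    def distinct(xs):
        """Classical element-distinctness check: sort, then scan adjacent pairs.

        Sorting dominates, so the total running time is O(N log N); in the
        query model discussed above, every element must be read, giving the
        stated N queries, versus O(N^(2/3)) for Ambainis' quantum walk.
        """
        ys = sorted(xs)
        return all(a != b for a, b in zip(ys, ys[1:]))

    print(distinct([3, 1, 4, 1, 5]))  # False: the element 1 collides
    print(distinct([3, 1, 4, 2, 5]))  # True: all elements distinct
    ```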

  20. Quantum algorithms on Walsh transform and Hamming distance for Boolean functions

    NASA Astrophysics Data System (ADS)

    Xie, Zhengwei; Qiu, Daowen; Cai, Guangya

    2018-06-01

    The Walsh spectrum, or Walsh transform, is an alternative description of Boolean functions. In this paper, we explore quantum algorithms to approximate the absolute value |W_f(z_0)| of the Walsh transform W_f at a single point z_0 for n-variable Boolean functions, with probability at least 8/π², using O(1/(|W_f(z_0)| ε)) queries for accuracy ε, while the best known classical algorithm requires O(2^n) queries. The Hamming distance between Boolean functions is used to study linearity testing and other important problems. We take advantage of the Walsh transform to calculate the Hamming distance between two n-variable Boolean functions f and g using O(1) queries in some cases. Then, we exploit another quantum algorithm that converts computing the Hamming distance between two Boolean functions into quantum amplitude estimation (i.e., approximate counting). If Ham(f, g) = t ≠ 0, we can approximately compute Ham(f, g) with probability at least 2/3 by combining our algorithm and the Approx-Count(f, ε) algorithm, using an expected number of Θ(√(N/(⌊εt⌋+1)) + √(t(N-t))/(⌊εt⌋+1)) queries for accuracy ε. Moreover, our algorithm is optimal, while the exact classical query complexity for the above problem is Θ(N) and the classical query complexity with accuracy ε is O(N/(ε²(t+1))), where N = 2^n. Finally, we present three exact quantum query algorithms for two promise problems on Hamming distance using O(1) queries, while any classical deterministic algorithm solving the problems uses Ω(2^n) queries.
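
    As a classical point of reference, the fast Walsh-Hadamard transform below computes the entire spectrum W_f in O(N log N) arithmetic operations, but it still reads all N = 2^n function values, which is the query cost the quantum algorithm improves on. The (-1)^f encoding is one common convention.

    ```python
    import numpy as np

    def walsh_transform(f_values):
        """In-place fast Walsh-Hadamard transform of a Boolean function.

        f_values: array of (-1)^f(x) over all 2^n inputs x in lexicographic
        order. Returns W_f(z) for every z.
        """
        w = np.array(f_values, dtype=float)
        h = 1
        while h < len(w):
            for i in range(0, len(w), 2 * h):
                a, b = w[i:i + h].copy(), w[i + h:i + 2 * h].copy()
                w[i:i + h], w[i + h:i + 2 * h] = a + b, a - b
            h *= 2
        return w

    # Example: f(x1, x2) = x1 AND x2, encoded as (-1)^f over inputs 00,01,10,11.
    signs = np.array([1, 1, 1, -1])
    print(walsh_transform(signs))   # spectrum [2, 2, 2, -2]
    ```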

  1. Evolutionary Bi-objective Optimization for Bulldozer and Its Blade in Soil Cutting

    NASA Astrophysics Data System (ADS)

    Sharma, Deepak; Barakat, Nada

    2018-02-01

    An evolutionary optimization approach is adopted in this paper to achieve economical and productive soil cutting simultaneously. The economic aspect is addressed by minimizing the power required from the bulldozer, and productivity by minimizing the soil-cutting time. To determine the power requirement, two force models are adopted from the literature to quantify the cutting force on the blade. Three domain-specific constraints are also proposed: limiting the power drawn from the bulldozer, limiting the maximum force on the bulldozer blade, and achieving the desired production rate. The bi-objective optimization problem is solved using five benchmark multi-objective evolutionary algorithms and one classical optimization technique based on the ε-constraint method. The Pareto-optimal solutions are obtained, together with the knee region. Further, a post-optimal analysis is performed on the obtained solutions to decipher relationships among the objectives and decision variables. These relationships are later used to formulate guidelines for selecting the optimal set of input parameters. The results are compared with experimental results from the literature and show close agreement.

  2. Matching nuts and bolts in O(n log n) time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komlos, J.; Ma, Yuan; Szemeredi, E.

    Given a set of n nuts of distinct widths and a set of n bolts such that each nut corresponds to a unique bolt of the same width, how should we match every nut with its corresponding bolt by comparing nuts with bolts (no comparison is allowed between two nuts or between two bolts)? The problem can be naturally viewed as a variant of the classic sorting problem as follows: given two lists of n numbers each, such that one list is a permutation of the other, how should we sort the lists by comparisons only between numbers in different lists? We give an O(n log n)-time deterministic algorithm for the problem. This is optimal up to a constant factor and answers an open question posed by Alon, Blum, Fiat, Kannan, Naor, and Ostrovsky. Moreover, when copies of nuts and bolts are allowed, our algorithm runs in optimal O(log n) time on n processors in Valiant's parallel comparison tree model. Our algorithm is based on the AKS sorting algorithm with substantial modifications.
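
    The deterministic algorithm of this record builds on AKS sorting and is intricate; as a runnable stand-in, here is the standard randomized quicksort-style matcher, which attains the same O(n log n) bound in expectation and uses only nut-versus-bolt comparisons.

    ```python
    import random

    random.seed(0)

    def match(nuts, bolts):
        """Match nuts to bolts by width, comparing nuts only against bolts.

        A random nut is used to find its partner bolt; that bolt then
        partitions the nuts, and the nut partitions the bolts, so the two
        recursions stay aligned (quicksort-style, O(n log n) expected).
        """
        if not nuts:
            return []
        pivot_nut = random.choice(nuts)
        pivot_bolt = next(b for b in bolts if b == pivot_nut)  # nut-bolt test
        small_n = [x for x in nuts if x < pivot_bolt]
        large_n = [x for x in nuts if x > pivot_bolt]
        small_b = [x for x in bolts if x < pivot_nut]
        large_b = [x for x in bolts if x > pivot_nut]
        return (match(small_n, small_b) + [(pivot_nut, pivot_bolt)]
                + match(large_n, large_b))

    nuts = random.sample(range(100), 10)
    bolts = random.sample(nuts, len(nuts))   # same widths, shuffled
    print(match(nuts, bolts))
    ```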

  3. New variational bounds on convective transport. II. Computations and implications

    NASA Astrophysics Data System (ADS)

    Souza, Andre; Tobasco, Ian; Doering, Charles R.

    2016-11-01

    We study the maximal rate of scalar transport between parallel walls separated by distance h, by an incompressible fluid with scalar diffusion coefficient κ. Given a velocity vector field u with intensity measured by the Péclet number Pe = h²⟨|∇u|²⟩^{1/2}/κ (where ⟨·⟩ is the space-time average), the challenge is to determine the largest enhancement of wall-to-wall scalar flux over purely diffusive transport, i.e., the Nusselt number Nu. Variational formulations of the problem are studied numerically and optimizing flow fields are computed over a range of Pe. Implications of this optimal wall-to-wall transport problem for the classical problem of Rayleigh-Bénard convection are discussed: the maximal scaling Nu ~ Pe^{2/3} corresponds, via the identity Pe² = Ra(Nu - 1) where Ra is the usual Rayleigh number, to Nu ~ Ra^{1/2} as Ra → ∞. Supported in part by National Science Foundation Graduate Research Fellowship DGE-0813964, awards OISE-0967140, PHY-1205219, DMS-1311833, and DMS-1515161, and the John Simon Guggenheim Memorial Foundation.

  4. Task-driven dictionary learning.

    PubMed

    Mairal, Julien; Bach, Francis; Ponce, Jean

    2012-04-01

    Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.

  5. Considering social and environmental concerns as reservoir operating objectives

    NASA Astrophysics Data System (ADS)

    Tilmant, A.; Georis, B.; Doulliez, P.

    2003-04-01

    Sustainability principles are now widely recognized as key criteria for water resource development schemes, such as hydroelectric and multipurpose reservoirs. Development decisions no longer rely solely on economic grounds, but also consider environmental and social concerns through the so-called environmental and social impact assessments. The objective of this paper is to show that environmental and social concerns can also be addressed in the management (operation) of existing or projected reservoir schemes. By either adequately exploiting the results of environmental and social impact assessments, or by carrying out surveys of water users, experts, and managers, efficient (Pareto optimal) reservoir operating rules can be derived using flexible mathematical programming techniques. By reformulating the problem as a multistage flexible constraint satisfaction problem, incommensurable and subjective operating objectives can contribute, along with classical economic objectives, to the determination of optimal release decisions. Employed in a simulation mode, the results can be used to assess the long-term impacts of various operating rules on the social well-being of affected populations as well as on the integrity of the environment. The methodology is illustrated with a reservoir reallocation problem in Chile.

  6. Structural optimization under overhang constraints imposed by additive manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Allaire, G.; Dapogny, C.; Estevez, R.; Faure, A.; Michailidis, G.

    2017-12-01

    This article addresses one of the major constraints imposed by additive manufacturing processes on shape optimization problems - that of overhangs, i.e. large regions hanging over void without sufficient support from the lower structure. After revisiting the 'classical' geometric criteria used in the literature, based on the angle between the structural boundary and the build direction, we propose a new mechanical constraint functional, which mimics the layer by layer construction process featured by additive manufacturing technologies, and thereby appeals to the physical origin of the difficulties caused by overhangs. This constraint, as well as some variants, is precisely defined; their shape derivatives are computed in the sense of Hadamard's method, and numerical strategies are extensively discussed, in two and three space dimensions, to efficiently deal with the appearance of overhang features in the course of shape optimization processes.

  7. Number-unconstrained quantum sensing

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  8. Completing the Physical Representation of Quantum Algorithms Provides a Quantitative Explanation of Their Computational Speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2018-03-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete. We complete it in three steps: (i) extending the representation to the process of setting the problem, (ii) relativizing the extended representation to the problem solver to whom the problem setting must be concealed, and (iii) symmetrizing the relativized representation for time reversal to represent the reversibility of the underlying physical process. The third step projects the input state of the representation, where the problem solver is completely ignorant of the setting and thus of the solution of the problem, onto one where she knows half of the solution (half of the information specifying it when the solution is an unstructured bit string). Completing the physical representation shows that the number of computation steps (oracle queries) required to solve any oracle problem in an optimal quantum way should be that of a classical algorithm endowed with advance knowledge of half of the solution.

  9. An optimizing start-up strategy for a bio-methanator.

    PubMed

    Sbarciog, Mihaela; Loccufier, Mia; Vande Wouwer, Alain

    2012-05-01

    This paper presents an optimizing start-up strategy for a bio-methanator. The goal of the control strategy is to maximize the outflow rate of methane in anaerobic digestion processes, which can be described by a two-population model. The methodology relies on a thorough analysis of the system dynamics and involves the solution of two optimization problems: steady-state optimization for determining the optimal operating point and transient optimization. The latter is a classical optimal control problem, which can be solved using the maximum principle of Pontryagin. The proposed control law is of the bang-bang type. The process is driven from an initial state to a small neighborhood of the optimal steady state by switching the manipulated variable (dilution rate) from the minimum to the maximum value at a certain time instant. Then the dilution rate is set to the optimal value and the system settles down in the optimal steady state. This control law ensures the convergence of the system to the optimal steady state and substantially increases its stability region. The region of attraction of the steady state corresponding to maximum production of methane is considerably enlarged. In some cases, which are related to the possibility of selecting the minimum dilution rate below a certain level, the stability region of the optimal steady state equals the interior of the state space. Aside from its efficiency, which is evaluated not only in terms of biogas production but also from the perspective of treatment of the organic load, the strategy is also characterized by simplicity and is thus appropriate for implementation in real-life systems. Another important advantage is its generality: this technique may be applied to any anaerobic digestion process for which the acidogenesis and methanogenesis are, respectively, characterized by Monod and Haldane kinetics.

  10. Vehicle Routing with Three-dimensional Container Loading Constraints—Comparison of Nested and Joint Algorithms

    NASA Astrophysics Data System (ADS)

    Koloch, Grzegorz; Kaminski, Bogumil

    2010-10-01

    In the paper we examine a modification of the classical Vehicle Routing Problem (VRP) in which shapes of transported cargo are accounted for. This problem, known as a three-dimensional VRP with loading constraints (3D-VRP), is appropriate when transported commodities are not perfectly divisible but have fixed and heterogeneous dimensions. In the paper restrictions on allowable cargo positionings are also considered. These restrictions are derived from business practice, and they extend the baseline 3D-VRP formulation considered by Koloch and Kaminski (2010). In particular, we investigate how additional restrictions influence the relative performance of two proposed optimization algorithms: the nested and the joint one. The performance of both methods is compared on artificial problems and on a large-scale real-life case study.

  11. Sparse and stable Markowitz portfolios.

    PubMed

    Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace

    2009-07-28

    We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but does allow for short positions in limited number. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio.
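
    A minimal sketch of the reformulation described in the record, assuming synthetic returns and a plain ISTA solver for the l1-penalized least-squares problem; rescaling the weights to sum to one is an illustrative simplification of the budget constraint, not the authors' procedure.

```python
import numpy as np

def sparse_portfolio(R, target, tau, n_steps=500):
    """Minimize ||target - R w||_2^2 + tau * ||w||_1 by ISTA, then rescale
    the weights to sum to one (illustrative simplification of the budget
    constraint)."""
    n_assets = R.shape[1]
    L = np.linalg.norm(R, 2) ** 2           # Lipschitz constant of the gradient
    w = np.zeros(n_assets)
    for _ in range(n_steps):
        g = w - R.T @ (R @ w - target) / L  # gradient step
        w = np.sign(g) * np.maximum(np.abs(g) - tau / L, 0.0)  # soft threshold
    return w / w.sum() if abs(w.sum()) > 1e-12 else w

rng = np.random.default_rng(0)
R = rng.normal(0.0005, 0.01, size=(250, 20))    # toy daily returns, 20 assets
w = sparse_portfolio(R, target=np.full(250, 0.0005), tau=1e-4)
print("active positions:", int(np.sum(np.abs(w) > 1e-8)))
```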

  12. An integrated optimum design approach for high speed prop rotors

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Mccarthy, Thomas R.

    1995-01-01

    The objective is to develop an optimization procedure for high-speed and civil tilt-rotors by coupling all of the necessary disciplines within a closed-loop optimization procedure. Both simplified and comprehensive analysis codes are used for the aerodynamic analyses. The structural properties are calculated using in-house developed algorithms for both isotropic and composite box beam sections. There are four major objectives of this study. (1) Aerodynamic optimization: The effects of blade aerodynamic characteristics on cruise and hover performance of prop-rotor aircraft are investigated using the classical blade element momentum approach with corrections for the high lift capability of rotors/propellers. (2) Coupled aerodynamic/structures optimization: A multilevel hybrid optimization technique is developed for the design of prop-rotor aircraft. The design problem is decomposed into a level for improved aerodynamics with continuous design variables and a level with discrete variables to investigate composite tailoring. The aerodynamic analysis is based on that developed in objective 1 and the structural analysis is performed using an in-house code which models a composite box beam. The results are compared to both a reference rotor and the optimum rotor found in the purely aerodynamic formulation. (3) Multipoint optimization: The multilevel optimization procedure of objective 2 is extended to a multipoint design problem. Hover, cruise, and take-off are the three flight conditions simultaneously maximized. (4) Coupled rotor/wing optimization: Using the comprehensive rotary wing code CAMRAD, an optimization procedure is developed for the coupled rotor/wing performance in high speed tilt-rotor aircraft. The developed procedure contains design variables which define the rotor and wing planforms.

  13. A formulation of a matrix sparsity approach for the quantum ordered search algorithm

    NASA Astrophysics Data System (ADS)

    Parmar, Jupinder; Rahman, Saarim; Thiara, Jaskaran

    One specific subset of quantum algorithms is Grover's Ordered Search Problem (OSP), the quantum counterpart of the classical binary search algorithm, which utilizes oracle functions to produce a specified value within an ordered database. Classically, the optimal algorithm is known to have log_2 N complexity; however, Grover's algorithm has been found to have an optimal complexity between the lower bound of (ln N - 1)/π ≈ 0.221 log_2 N and the upper bound of 0.433 log_2 N. We sought to lower the known upper bound of the OSP. With Farhi et al. [MITCTP 2815 (1999), arXiv:quant-ph/9901059], we see that the OSP can be resolved into a translationally invariant algorithm to create quantum query algorithm constraints. With these constraints, one can find Laurent polynomials for various k (numbers of queries) and N (database sizes), thus finding larger recursive sets to solve the OSP and effectively reducing the upper bound. These polynomials are found to be convex functions, allowing one to make use of convex optimization to find an improvement on the known bounds. According to Childs et al. [Phys. Rev. A 75 (2007) 032335], semidefinite programming, a subset of convex optimization, can solve the particular problem represented by the constraints. We were able to implement a program abiding by their formulation of a semidefinite program (SDP), leading us to find that it takes an immense amount of storage and time to compute. To combat this setback, we then formulated an approach to improve the results of the SDP using matrix sparsity. Through the development of this approach, along with an implementation of a rudimentary solver, we demonstrate how matrix sparsity reduces the amount of time and storage required to compute the SDP, making it likely that further improvements can be made toward the theorized lower bound.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, F.; Banks, J. W.; Henshaw, W. D.

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  15. Comparative spectral analysis of veterinary powder product by continuous wavelet and derivative transforms

    NASA Astrophysics Data System (ADS)

    Dinç, Erdal; Kanbur, Murat; Baleanu, Dumitru

    2007-10-01

    Comparative simultaneous determination of chlortetracycline and benzocaine in the commercial veterinary powder product was carried out by continuous wavelet transform (CWT) and classical derivative transform (or classical derivative spectrophotometry). In this quantitative spectral analysis, the two proposed analytical methods do not require any chemical separation process. In the first step, several wavelet families were tested to find an optimal CWT for the overlapping signal processing of the analyzed compounds. Subsequently, we observed that the coiflets (COIF-CWT) method with dilation parameter a = 400 gives suitable results for this analytical application. For comparison, the classical derivative spectrophotometry (CDS) approach was also applied to the simultaneous quantitative resolution of the same analytical problem. Calibration functions were obtained by measuring the transform amplitudes corresponding to zero-crossing points for both CWT and CDS methods. The utility of these two analytical approaches was verified by analyzing various synthetic mixtures consisting of chlortetracycline and benzocaine, and they were then applied to real samples of the veterinary powder formulation. The experimental results obtained from the COIF-CWT approach were statistically compared with those obtained by classical derivative spectrophotometry, and successful results were reported.

  16. Augmented neural networks and problem structure-based heuristics for the bin-packing problem

    NASA Astrophysics Data System (ADS)

    Kasap, Nihat; Agarwal, Anurag

    2012-08-01

    In this article, we report on a research project where we applied augmented-neural-networks (AugNNs) approach for solving the classical bin-packing problem (BPP). AugNN is a metaheuristic that combines a priority rule heuristic with the iterative search approach of neural networks to generate good solutions fast. This is the first time this approach has been applied to the BPP. We also propose a decomposition approach for solving harder BPP, in which subproblems are solved using a combination of AugNN approach and heuristics that exploit the problem structure. We discuss the characteristics of problems on which such problem structure-based heuristics could be applied. We empirically show the effectiveness of the AugNN and the decomposition approach on many benchmark problems in the literature. For the 1210 benchmark problems tested, 917 problems were solved to optimality and the average gap between the obtained solution and the upper bound for all the problems was reduced to under 0.66% and computation time averaged below 33 s per problem. We also discuss the computational complexity of our approach.
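
    For context, the kind of priority-rule heuristic that AugNN augments can be as simple as first-fit decreasing; the sketch below shows that baseline rule on a toy instance (it is not the AugNN method itself).

```python
def first_fit_decreasing(items, capacity):
    """Classical first-fit-decreasing heuristic for bin packing:
    sort items by size (largest first) and place each one in the first
    open bin with enough remaining capacity."""
    bins = []                     # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])   # no open bin fits: open a new bin
    return bins

items = [7, 5, 4, 4, 3, 3, 2, 2, 1]
print(first_fit_decreasing(items, capacity=10))
```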

  17. Primal-dual techniques for online algorithms and mechanisms

    NASA Astrophysics Data System (ADS)

    Liaghat, Vahid

    An offline algorithm is one that knows the entire input in advance. An online algorithm, however, processes its input in a serial fashion. In contrast to offline algorithms, an online algorithm works in a local fashion and has to make irrevocable decisions without having the entire input. Online algorithms are often not optimal since their irrevocable decisions may turn out to be inefficient after receiving the rest of the input. For a given online problem, the goal is to design algorithms which are competitive against the offline optimal solutions. In a classical offline scenario, it is common to see a dual analysis of problems that can be formulated as a linear or convex program. Primal-dual and dual-fitting techniques have been successfully applied to many such problems. Unfortunately, the usual tricks come short in an online setting since an online algorithm should make decisions without even knowing the whole program. In this thesis, we study the competitive analysis of fundamental problems in the literature, such as different variants of online matching and online Steiner connectivity, via online dual techniques. Although there are many generic tools for solving an optimization problem in the offline paradigm, in comparison, much less is known for tackling online problems. The main focus of this work is to design generic techniques for solving integral linear optimization problems where the solution space is restricted via a set of linear constraints. A general family of these problems are online packing/covering problems. Our work shows that for several seemingly unrelated problems, primal-dual techniques can be successfully applied as a unifying approach for analyzing these problems. We believe this leads to generic algorithmic frameworks for solving online problems. In the first part of the thesis, we show the effectiveness of our techniques in stochastic settings and their applications in Bayesian mechanism design. In particular, we introduce new techniques for solving a fundamental linear optimization problem, namely, the stochastic generalized assignment problem (GAP). This packing problem generalizes various problems such as online matching, ad allocation, bin packing, etc. We furthermore show applications of such results in mechanism design by introducing Prophet Secretary, a novel Bayesian model for online auctions. In the second part of the thesis, we focus on covering problems. We develop the framework of "Disk Painting" for a general class of network design problems that can be characterized by proper functions. This class generalizes the node-weighted and edge-weighted variants of several well-known Steiner connectivity problems. We furthermore design a generic technique for solving the prize-collecting variants of these problems when there exists a dual analysis for the non-prize-collecting counterparts. Hence, we solve the online prize-collecting variants of several network design problems for the first time. Finally, we focus on designing techniques for online problems with mixed packing/covering constraints. We initiate the study of degree-bounded graph optimization problems in the online setting by designing an online algorithm with a tight competitive ratio for the degree-bounded Steiner forest problem. We hope these techniques establish a starting point for the analysis of the important class of online degree-bounded optimization on graphs.

  18. Risk-aware multi-armed bandit problem with application to portfolio selection

    PubMed Central

    Huo, Xiaoguang

    2017-01-01

    Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct portfolios. Through filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return. PMID:29291122
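
    The record does not spell out its policy here; as an illustrative (assumed) risk-aware variant, the sketch below penalizes a standard UCB score by the empirical variance of each arm on synthetic Gaussian rewards. It is a toy stand-in, not the authors' algorithm.

```python
import numpy as np

def risk_aware_ucb(arm_means, arm_stds, horizon=5000, risk_aversion=1.0, seed=0):
    """UCB-style bandit where each arm's score is penalized by its
    empirical variance (an illustrative risk-aware variant)."""
    rng = np.random.default_rng(seed)
    n_arms = len(arm_means)
    rewards = [[] for _ in range(n_arms)]
    # Pull each arm twice so empirical variances are defined.
    for a in range(n_arms):
        for _ in range(2):
            rewards[a].append(rng.normal(arm_means[a], arm_stds[a]))
    for t in range(2 * n_arms, horizon):
        scores = []
        for a in range(n_arms):
            r = np.asarray(rewards[a])
            bonus = np.sqrt(2 * np.log(t) / len(r))          # exploration bonus
            scores.append(r.mean() - risk_aversion * r.var() + bonus)
        a = int(np.argmax(scores))
        rewards[a].append(rng.normal(arm_means[a], arm_stds[a]))
    return [len(r) for r in rewards]                          # pull counts

# Arm 1 has the highest mean but also the highest risk.
print(risk_aware_ucb(arm_means=[0.05, 0.06, 0.04], arm_stds=[0.1, 0.4, 0.05]))
```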

  19. Risk-aware multi-armed bandit problem with application to portfolio selection.

    PubMed

    Huo, Xiaoguang; Fu, Feng

    2017-11-01

    Sequential portfolio selection has attracted increasing interest in the machine learning and quantitative finance communities in recent years. As a mathematical framework for reinforcement learning policies, the stochastic multi-armed bandit problem addresses the primary difficulty in sequential decision-making under uncertainty, namely the exploration versus exploitation dilemma, and therefore provides a natural connection to portfolio selection. In this paper, we incorporate risk awareness into the classic multi-armed bandit setting and introduce an algorithm to construct portfolios. Through filtering assets based on the topological structure of the financial market and combining the optimal multi-armed bandit policy with the minimization of a coherent risk measure, we achieve a balance between risk and return.

  20. On the Impact of Local Taxes in a Set Cover Game

    NASA Astrophysics Data System (ADS)

    Escoffier, Bruno; Gourvès, Laurent; Monnot, Jérôme

    Given a collection C of weighted subsets of a ground set E, the SET cover problem is to find a minimum weight subset of C which covers all elements of E. We study a strategic game defined upon this classical optimization problem. Every element of E is a player which chooses one set of C where it appears. Following a public tax function, every player is charged a fraction of the weight of the set that it has selected. Our motivation is to design a tax function having the following features: it can be implemented in a distributed manner, existence of an equilibrium is guaranteed and the social cost for these equilibria is minimized.

  1. Identification of immiscible NAPL contaminant sources in aquifers by a modified two-level saturation based imperialist competitive algorithm

    NASA Astrophysics Data System (ADS)

    Ghafouri, H. R.; Mosharaf-Dehkordi, M.; Afzalan, B.

    2017-07-01

    A simulation-optimization model is proposed for identifying the characteristics of local immiscible NAPL contaminant sources inside aquifers. This model employs the UTCHEM 9.0 software as its simulator for solving the governing equations associated with the multi-phase flow in porous media. As the optimization model, a novel two-level saturation-based Imperialist Competitive Algorithm (ICA) is proposed to estimate the parameters of contaminant sources. The first level consists of three parallel independent ICAs and acts as a pre-conditioner for the second level, which is a single modified ICA. The ICA in the second level is modified by dividing each country into a number of provinces (smaller parts). Similar to countries in the classical ICA, these provinces are optimized by the assimilation, competition, and revolution steps in the ICA. To increase the diversity of populations, a new approach named the knock-the-base method is proposed. The performance and accuracy of the simulation-optimization model are assessed by solving a set of two- and three-dimensional problems considering the effects of different parameters such as the grid size, rock heterogeneity and designated monitoring networks. The obtained numerical results indicate that using this simulation-optimization model provides accurate results in fewer iterations when compared with the model employing the classical one-level ICA. A model is proposed to identify characteristics of immiscible NAPL contaminant sources. The contaminant is immiscible in water and multi-phase flow is simulated. The model is a multi-level saturation-based optimization algorithm based on ICA. Each answer string in the second level is divided into a set of provinces. Each ICA is modified by incorporating the new knock-the-base method.

  2. Optimal Force Control of Vibro-Impact Systems for Autonomous Drilling Applications

    NASA Technical Reports Server (NTRS)

    Aldrich, Jack B.; Okon, Avi B.

    2012-01-01

    The need to maintain optimal energy efficiency is critical during the drilling operations performed on future and current planetary rover missions. Specifically, this innovation seeks to solve the following problem. Given a spring-loaded percussive drill driven by a voice-coil motor, one needs to determine the optimal input voltage waveform (periodic function) and the optimal hammering period that minimizes the dissipated energy, while ensuring that the hammer-to-rock impacts are made with sufficient (user-defined) impact velocity (or impact energy). To solve this problem, it was first observed that when voice-coil-actuated percussive drills are driven at high power, it is of paramount importance to ensure that the electrical current of the device remains in phase with the velocity of the hammer. Otherwise, negative work is performed and the drill experiences a loss of performance (i.e., reduced impact energy) and an increase in Joule heating (i.e., reduction in energy efficiency). This observation has motivated many drilling products to incorporate the standard bang-bang control approach for driving their percussive drills. However, the bang-bang control approach is significantly less efficient than the optimal energy-efficient control approach solved herein. To obtain this solution, the standard tools of classical optimal control theory were applied. It is worth noting that these tools inherently require the solution of a two-point boundary value problem (TPBVP), i.e., a system of differential equations where half the equations have unknown boundary conditions. Typically, the TPBVP is impossible to solve analytically for high-dimensional dynamic systems. However, for the case of the spring-loaded vibro-impactor, this approach yields the exact optimal control solution as the sum of four analytic functions whose coefficients are determined using a simple, easy-to-implement algorithm. Once the optimal control waveform is determined, it can be used optimally in the context of both open-loop and closed-loop control modes (using standard real-time control hardware).

  3. Quantum Hamilton equations of motion for bound states of one-dimensional quantum systems

    NASA Astrophysics Data System (ADS)

    Köppe, J.; Patzold, M.; Grecksch, W.; Paul, W.

    2018-06-01

    On the basis of Nelson's stochastic mechanics derivation of the Schrödinger equation, a formal mathematical structure of non-relativistic quantum mechanics equivalent to the one in classical analytical mechanics has been established in the literature. We recently were able to augment this structure by deriving quantum Hamilton equations of motion by finding the Nash equilibrium of a stochastic optimal control problem, which is the generalization of Hamilton's principle of classical mechanics to quantum systems. We showed that these equations allow a description and numerical determination of the ground state of quantum problems without using the Schrödinger equation. We extend this approach here to deliver the complete discrete energy spectrum and related eigenfunctions for bound states of one-dimensional stationary quantum systems. We exemplify this analytically for the one-dimensional harmonic oscillator and numerically by analyzing a quartic double-well potential, a model of broad importance in many areas of physics. We furthermore point out a relation between the tunnel splitting of such models and mean first passage time concepts applied to Nelson's diffusion paths in the ground state.

  4. Application of an Evolution Strategy in Planetary Ephemeris Optimization

    NASA Astrophysics Data System (ADS)

    Mai, E.

    2016-12-01

    Classical planetary ephemeris construction comprises three major steps, which are performed iteratively: simultaneous numerical integration of coupled equations of motion of a multi-body system (propagator step), reduction of thousands of observations (reduction step), and optimization of various selected model parameters (adjustment step). This traditional approach is challenged by ongoing refinements in force modeling, e.g. inclusion of much more significant minor bodies, an ever-growing number of planetary observations, e.g. vast amount of spacecraft tracking data, etc. To master the high computational burden and in order to circumvent the need for inversion of huge normal equation matrices, we propose an alternative ephemeris construction method. The main idea is to solve the overall optimization problem by a straightforward direct evaluation of the whole set of mathematical formulas involved, rather than to solve it as an inverse problem with all its tacit mathematical assumptions and numerical difficulties. We replace the usual gradient search by a stochastic search, namely an evolution strategy, the latter of which is also perfect for the exploitation of parallel computing capabilities. Furthermore, this new approach enables multi-criteria optimization and time-varying optima. This issue will become important in future once ephemeris construction is just one part of even larger optimization problems, e.g. the combined and consistent determination of the physical state (orbit, size, shape, rotation, gravity,…) of celestial bodies (planets, satellites, asteroids, or comets), and if one seeks near real-time solutions. Here we outline the general idea and discuss first results. As an example, we present a simultaneous optimization of high-correlated asteroidal ring model parameters (total mass and heliocentric radius), based on simulations.

  5. Numerical methods for coupled fracture problems

    NASA Astrophysics Data System (ADS)

    Viesca, Robert C.; Garagash, Dmitry I.

    2018-04-01

    We consider numerical solutions in which the linear elastic response to an opening- or sliding-mode fracture couples with one or more processes. Classic examples of such problems include traction-free cracks leading to stress singularities or cracks with cohesive-zone strength requirements leading to non-singular stress distributions. These classical problems have characteristic square-root asymptotic behavior for stress, relative displacement, or their derivatives. Prior work has shown that such asymptotics lead to a natural quadrature of the singular integrals at roots of Chebyshev polynomials of the first, second, third, or fourth kind. We show that such quadratures lead to convenient techniques for interpolation, differentiation, and integration, with the potential for spectral accuracy. We further show that these techniques, with slight amendment, may continue to be used for non-classical problems which lack the classical asymptotic behavior. We consider solutions to example problems of both the classical and non-classical variety (e.g., fluid-driven opening-mode fracture and fault shear rupture driven by thermal weakening), with comparisons to analytical solutions or asymptotes, where available.

  6. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints.

    PubMed

    van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W

    2014-12-22

    Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5 year survival), 1731 patients with traumatic brain injury (22.3% 6 month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF) and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20 fold, 10 fold and 6 fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC-curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC <0.01). We found that a stable AUC was reached by LR at approximately 20 to 50 events per variable, followed by CART, SVM, NN and RF models. Optimism decreased with increasing sample sizes and the same ranking of techniques. The RF, SVM and NN models showed instability and a high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable to achieve a stable AUC and a small optimism than classical modelling techniques such as LR. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
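
    A heavily reduced version of such a simulation (logistic regression only, synthetic covariates, scikit-learn assumed available) illustrates how the gap between apparent and validated AUC shrinks as the number of events per variable grows; the numbers below are illustrative, not the study's cohorts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_vars, beta = 10, np.linspace(0.2, 1.0, 10)

def sample(n):
    """Synthetic covariates and a dichotomous outcome (roughly 15% events)."""
    X = rng.standard_normal((n, n_vars))
    p = 1 / (1 + np.exp(-(X @ beta - 3.0)))
    return X, rng.binomial(1, p)

X_val, y_val = sample(20000)                       # large validation part
for epv in (5, 10, 20, 50, 100):                   # events per variable
    n_dev = int(epv * n_vars / 0.15)
    X_dev, y_dev = sample(n_dev)
    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    apparent = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
    validated = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"EPV={epv:4d}  apparent AUC={apparent:.3f}  validated AUC={validated:.3f}")
```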

  7. A New TS Algorithm for Solving Low-Carbon Logistics Vehicle Routing Problem with Split Deliveries by Backpack-From a Green Operation Perspective.

    PubMed

    Xia, Yangkun; Fu, Zhuo; Tsai, Sang-Bing; Wang, Jiangtao

    2018-05-10

    In order to promote the development of low-carbon logistics and economize logistics distribution costs, the vehicle routing problem with split deliveries by backpack is studied. With the help of the model of classical capacitated vehicle routing problem, in this study, a form of discrete split deliveries was designed in which the customer demand can be split only by backpack. A double-objective mathematical model and the corresponding adaptive tabu search (TS) algorithm were constructed for solving this problem. By embedding the adaptive penalty mechanism, and adopting the random neighborhood selection strategy and reinitialization principle, the global optimization ability of the new algorithm was enhanced. Comparisons with the results in the literature show the effectiveness of the proposed algorithm. The proposed method can save the costs of low-carbon logistics and reduce carbon emissions, which is conducive to the sustainable development of low-carbon logistics.

  8. Quantum adiabatic machine learning

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen L.; Lidar, Daniel A.

    2013-05-01

    We develop an approach to machine learning and anomaly detection via quantum adiabatic evolution. This approach consists of two quantum phases, with some amount of classical preprocessing to set up the quantum problems. In the training phase we identify an optimal set of weak classifiers, to form a single strong classifier. In the testing phase we adiabatically evolve one or more strong classifiers on a superposition of inputs in order to find certain anomalous elements in the classification space. Both the training and testing phases are executed via quantum adiabatic evolution. All quantum processing is strictly limited to two-qubit interactions so as to ensure physical feasibility. We apply and illustrate this approach in detail to the problem of software verification and validation, with a specific example of the learning phase applied to a problem of interest in flight control systems. Beyond this example, the algorithm can be used to attack a broad class of anomaly detection problems.

  9. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563
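
    A minimal sketch of the distance-majorization idea, assuming a toy quadratic objective and two sets that are easy to project onto individually (a box and a halfspace); the squared distance to each set is penalized, and the gradient of (1/2)dist(x, C)^2 is x minus the projection of x onto C. The penalty weight rho controls how tightly the constraints are enforced, so this is an approximation rather than an exact constrained solve.

```python
import numpy as np

def project_box(x, lo, hi):
    """Projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def project_halfspace(x, a, b):
    """Projection onto the halfspace {x : a.x <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def distance_majorization(grad_f, projections, x0, rho=10.0, step=0.01, n_steps=5000):
    """Minimize f plus (rho/2) * sum of squared distances to each convex set;
    the gradient of each penalty term is x - proj_C(x)."""
    x = x0.copy()
    for _ in range(n_steps):
        g = grad_f(x) + rho * sum(x - P(x) for P in projections)
        x -= step * g
    return x

# Toy problem: minimize ||x - c||^2 over [0, 1]^3 intersected with x1 + x2 + x3 <= 1.
c = np.array([0.9, 0.8, 0.7])
grad_f = lambda x: 2 * (x - c)
projections = [lambda x: project_box(x, 0.0, 1.0),
               lambda x: project_halfspace(x, np.ones(3), 1.0)]
print(distance_majorization(grad_f, projections, x0=np.zeros(3)))
```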

  10. Automatically Generated Algorithms for the Vertex Coloring Problem

    PubMed Central

    Contreras Bolton, Carlos; Gatica, Gustavo; Parada, Víctor

    2013-01-01

    The vertex coloring problem is a classical problem in combinatorial optimization that consists of assigning a color to each vertex of a graph such that no adjacent vertices share the same color, minimizing the number of colors used. Despite the various practical applications that exist for this problem, its NP-hardness still represents a computational challenge. Some of the best computational results obtained for this problem are consequences of hybridizing the various known heuristics. Automatically revising the space constituted by combining these techniques to find the most adequate combination has received less attention. In this paper, we propose exploring the heuristics space for the vertex coloring problem using evolutionary algorithms. We automatically generate three new algorithms by combining elementary heuristics. To evaluate the new algorithms, a computational experiment was performed that allowed comparing them numerically with existing heuristics. The obtained algorithms present an average 29.97% relative error, while four other heuristics selected from the literature present a 59.73% error, considering 29 of the more difficult instances in the DIMACS benchmark. PMID:23516506
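
    One of the elementary heuristics that such automatically generated algorithms typically combine is a greedy coloring rule; the sketch below shows a largest-degree-first greedy coloring on a toy graph (an ingredient of this kind of approach, not one of the paper's evolved algorithms).

```python
def greedy_coloring(adjacency):
    """Color vertices in decreasing-degree order, giving each vertex the
    smallest color not used by its already-colored neighbors."""
    order = sorted(adjacency, key=lambda v: len(adjacency[v]), reverse=True)
    colors = {}
    for v in order:
        used = {colors[u] for u in adjacency[v] if u in colors}
        c = 0
        while c in used:
            c += 1
        colors[v] = c
    return colors

# Toy graph: a triangle (0, 1, 2) with a pendant vertex 3 attached to 2.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(greedy_coloring(graph))   # proper coloring with 3 colors
```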

  11. Scenario-based modeling for multiple allocation hub location problem under disruption risk: multiple cuts Benders decomposition approach

    NASA Astrophysics Data System (ADS)

    Yahyaei, Mohsen; Bashiri, Mahdi

    2017-12-01

    The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first approach is to apply sample average approximation (SAA) to approximate the two-stage stochastic problem via sampling. Then, by applying the multiple cuts Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding the computational time and number of iterations.

  12. Quid pro quo: a mechanism for fair collaboration in networked systems.

    PubMed

    Santos, Agustín; Fernández Anta, Antonio; López Fernández, Luis

    2013-01-01

    Collaboration may be understood as the execution of coordinated tasks (in the most general sense) by groups of users, who cooperate to achieve a common goal. Collaboration is a fundamental assumption and requirement for the correct operation of many communication systems. The main challenge when creating collaborative systems in a decentralized manner is dealing with the fact that users may behave in selfish ways, trying to obtain the benefits of the tasks without participating in their execution. In this context, Game Theory has been instrumental to model collaborative systems and the task allocation problem, and to design mechanisms for optimal allocation of tasks. In this paper, we revise the classical assumptions of these models and propose a new approach to this problem. First, we establish a system model based on heterogeneous nodes (users, players), and propose a basic distributed mechanism so that, when a new task appears, it is assigned to the most suitable node. The classical technique for compensating a node that executes a task is the use of payments (which in most networks are hard or impossible to implement). Instead, we propose a distributed mechanism for the optimal allocation of tasks without payments. We prove this mechanism to be robust even in the presence of independent selfish or rationally limited players. Additionally, our model is based on very weak assumptions, which makes the proposed mechanisms suitable for implementation in networked systems (e.g., the Internet).

  13. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (EMD) (TVF-EMD) method was proposed recently to solve the mode mixing problem of EMD method. Compared with the classical EMD, TVF-EMD was proven to improve the frequency separation performance and be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed weighted kurtosis index is constructed by using kurtosis index and correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match with the input signal can be obtained by GWO algorithm using the maximum weighted kurtosis index as objective function. Finally, fault features can be extracted by analyzing the sensitive intrinsic mode function (IMF) owning the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of TVF-EMD method for signal decomposition, and meanwhile verify the fact that bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
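
    The exact form of the weighted kurtosis index is not given in the record; a common construction, assumed here purely for illustration, weights the kurtosis of each decomposed mode by its absolute correlation with the raw signal and selects the mode with the largest score.

```python
import numpy as np
from scipy.stats import kurtosis

def weighted_kurtosis_index(mode, raw_signal):
    """Illustrative index: kurtosis of the mode weighted by its absolute
    correlation with the raw signal (assumed form of the index)."""
    corr = np.corrcoef(mode, raw_signal)[0, 1]
    return kurtosis(mode, fisher=False) * abs(corr)

def select_sensitive_mode(modes, raw_signal):
    """Pick the decomposed mode with the largest weighted kurtosis index."""
    scores = [weighted_kurtosis_index(m, raw_signal) for m in modes]
    return int(np.argmax(scores)), scores

# Toy example: an impulsive (fault-like) component should score higher
# than a smooth low-frequency component.
t = np.linspace(0.0, 1.0, 2000)
rng = np.random.default_rng(0)
impulsive = np.sin(2 * np.pi * 120 * t) * (rng.random(2000) > 0.98)
smooth = np.sin(2 * np.pi * 5 * t)
print(select_sensitive_mode([smooth, impulsive], raw_signal=smooth + impulsive))
```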

  14. Hole size, location optimization in a plate and cylindrical shell for minimum stress points interfacing ANSYS and MATLAB

    NASA Astrophysics Data System (ADS)

    Thangavel, Soundararaj

    Discontinuities in structures are inevitable. One such discontinuity in a plate or cylindrical shell is the presence of a hole or holes. In plates they are used for mounting bolts, whereas in cylinders / pressure vessels they provide provision for mounting nozzles / instruments. The location of these holes plays a primary role in minimizing the acting stress without any external reinforcement. In this thesis work, location parameters are optimized for the presence of one or more holes in a plate and a cylindrical shell by interfacing ANSYS and MATLAB, with boundary constraints based on the geometry. Contour plots are generated for understanding the stress distribution, and analytical solutions are also discussed for some of the classical problems.

  15. Factorization and the synthesis of optimal feedback kernels for differential-delay systems

    NASA Technical Reports Server (NTRS)

    Milman, Mark M.; Scheid, Robert E.

    1987-01-01

    A combination of ideas from the theories of operator Riccati equations and Volterra factorizations leads to the derivation of a novel, relatively simple set of hyperbolic equations which characterize the optimal feedback kernel for the finite-time regulator problem for autonomous differential-delay systems. Analysis of these equations elucidates the underlying structure of the feedback kernel and leads to the development of fast and accurate numerical methods for its computation. Unlike traditional formulations based on the operator Riccati equation, the gain is characterized by means of classical solutions of the derived set of equations. This leads to the development of approximation schemes which are analogous to what has been accomplished for systems of ordinary differential equations with given initial conditions.

  16. Insight into efficient image registration techniques and the demons algorithm.

    PubMed

    Vercauteren, Tom; Pennec, Xavier; Malis, Ezio; Perchant, Aymeric; Ayache, Nicholas

    2007-01-01

    As image registration becomes more and more central to many biomedical imaging applications, the efficiency of the algorithms becomes a key issue. Image registration is classically performed by optimizing a similarity criterion over a given spatial transformation space. Even if this problem is considered as almost solved for linear registration, we show in this paper that some tools that have recently been developed in the field of vision-based robot control can outperform classical solutions. The adequacy of these tools for linear image registration leads us to revisit non-linear registration and allows us to provide interesting theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage to the symmetric forces variant of the demons algorithm. We show that, on controlled experiments, this advantage is confirmed, and yields a faster convergence.

  17. Efficient quantum transmission in multiple-source networks.

    PubMed

    Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-04-02

    A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of multiple-source network under the assumption of restricted maximum-flow, the optimal scheme is proposed for specially quantized multiple-source network. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency.

  18. A power allocation method for 2 × 2 VLC-MIMO indoor communication

    NASA Astrophysics Data System (ADS)

    Dai, Mingjun; Yuan, Jing; Feng, Renhai; Wang, Hui; Chen, Bin; Lin, Xiaohui

    2016-08-01

    Visible light communication (VLC) is a promising field of optical communications which focuses on the visible light spectrum that humans can see. Unlike existing studies, which mainly discuss point-to-point communication, in this paper we consider a VLC network, in particular a 2 × 2 system. Our focus is on dealing with interference in this network. The objective is to maximize the signal-to-interference-plus-noise ratio (SINR) of one receiver for a given SINR of the other receiver. We formulate a power allocation optimization problem to deal with such interference, and introduce dichotomy (bisection) to solve this optimization problem. The simulation results have a twofold meaning: first, SINR_1 increases with SINR_2, where SINR_1 and SINR_2 are the SINRs of the two receivers; second, our proposed scheme outperforms the classical time-division multiple access technique in terms of the transmit powers of both light sources when the data rates of the two schemes are set to be identical for each user.
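
    The dichotomy step can be illustrated by bisection on the power split between the two light sources, with made-up channel gains and noise; SINR_2 decreases as more power goes to link 1, so the largest feasible split maximizes SINR_1 subject to the SINR_2 target. This is a toy interference model, not the paper's system setup.

```python
def sinr(p_direct, p_interfering, g_direct, g_cross, noise):
    """SINR of a link with direct gain g_direct and interference gain g_cross."""
    return g_direct * p_direct / (noise + g_cross * p_interfering)

def max_sinr1_given_sinr2(total_power, sinr2_target, gains, noise, tol=1e-9):
    """Bisection (dichotomy) on the power split x in [0, 1]:
    p1 = x * total_power goes to link 1, the rest to link 2."""
    g11, g12, g21, g22 = gains
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        x = (lo + hi) / 2
        p1, p2 = x * total_power, (1 - x) * total_power
        if sinr(p2, p1, g22, g12, noise) >= sinr2_target:
            lo = x          # SINR_2 target still met: push more power to link 1
        else:
            hi = x
    p1, p2 = lo * total_power, (1 - lo) * total_power
    return p1, p2, sinr(p1, p2, g11, g21, noise)

# Hypothetical channel gains (direct: g11, g22; cross-interference: g12, g21).
print(max_sinr1_given_sinr2(total_power=1.0, sinr2_target=2.0,
                            gains=(1.0, 0.1, 0.2, 0.8), noise=0.05))
```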

  19. Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset

    NASA Astrophysics Data System (ADS)

    Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi

    2017-11-01

    Although various visual tracking algorithms have been proposed in the last 2-3 decades, it remains a challenging problem for effective tracking with fast motion, deformation, occlusion, etc. Under complex tracking conditions, most tracking models are not discriminative and adaptive enough. When the combined feature vectors are inputted to the visual models, this may lead to redundancy causing low efficiency and ambiguity causing poor performance. An effective tracking algorithm is proposed to decontaminate features for each video sequence adaptively, where the visual modeling is treated as an optimization problem from the perspective of evolution. Every feature vector is compared to a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" has been avoided while the accuracy of the visual model has been improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked with a number of state-of-the-art approaches. The comprehensive experiments have demonstrated the efficacy of the proposed methodology.

  20. Weighted interior penalty discretization of fully nonlinear and weakly dispersive free surface shallow water flows

    NASA Astrophysics Data System (ADS)

    Di Pietro, Daniele A.; Marche, Fabien

    2018-02-01

    In this paper, we further investigate the use of a fully discontinuous Finite Element discrete formulation for the study of shallow water free surface flows in the fully nonlinear and weakly dispersive flow regime. We consider a decoupling strategy in which we approximate the solutions of the classical shallow water equations supplemented with a source term globally accounting for the non-hydrostatic effects. This source term can be computed through the resolution of elliptic second-order linear sub-problems, which only involve second order partial derivatives in space. We then introduce an associated Symmetric Weighted Internal Penalty discrete bilinear form, allowing us to deal with the discontinuous nature of the elliptic problem's coefficients in a stable and consistent way. Similar discrete formulations are also introduced for several recent optimized fully nonlinear and weakly dispersive models. These formulations are validated against several benchmarks involving h-convergence, p-convergence and comparisons with experimental data, showing optimal convergence properties.

  1. A traveling salesman approach for predicting protein functions.

    PubMed

    Johnson, Olin; Liu, Jing

    2006-10-12

    Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful sample of using computer science knowledge and algorithms to study biological problems.

  2. A traveling salesman approach for predicting protein functions

    PubMed Central

    Johnson, Olin; Liu, Jing

    2006-01-01

    Background Protein-protein interaction information can be used to predict unknown protein functions and to help study biological pathways. Results Here we present a new approach utilizing the classic Traveling Salesman Problem to study the protein-protein interactions and to predict protein functions in budding yeast Saccharomyces cerevisiae. We apply the global optimization tool from combinatorial optimization algorithms to cluster the yeast proteins based on the global protein interaction information. We then use this clustering information to help us predict protein functions. We use our algorithm together with the direct neighbor algorithm [1] on characterized proteins and compare the prediction accuracy of the two methods. We show our algorithm can produce better predictions than the direct neighbor algorithm, which only considers the immediate neighbors of the query protein. Conclusion Our method is a promising one to be used as a general tool to predict functions of uncharacterized proteins and a successful sample of using computer science knowledge and algorithms to study biological problems. PMID:17147783
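
    As a toy illustration of the idea (not the authors' solver), the sketch below builds a nearest-neighbour tour over a small protein dissimilarity matrix and then cuts the tour at its longest edges to form clusters; the matrix and cluster count are made up.

```python
import numpy as np

def nearest_neighbour_tour(dissimilarity):
    """Greedy TSP tour over a dissimilarity matrix (toy stand-in for the
    combinatorial-optimization step described in the record)."""
    n = dissimilarity.shape[0]
    tour, unvisited = [0], set(range(1, n))
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: dissimilarity[last, j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def cut_tour_into_clusters(tour, dissimilarity, n_clusters):
    """Split the tour at its longest edges so that nearby items stay together."""
    edges = [(dissimilarity[tour[i], tour[i + 1]], i) for i in range(len(tour) - 1)]
    cut_points = sorted(i for _, i in sorted(edges, reverse=True)[:n_clusters - 1])
    clusters, start = [], 0
    for cp in cut_points + [len(tour) - 1]:
        clusters.append(tour[start:cp + 1])
        start = cp + 1
    return clusters

# Toy dissimilarities between 6 "proteins": two obvious groups {0,1,2} and {3,4,5}.
D = np.array([[0, 1, 1, 9, 9, 9],
              [1, 0, 1, 9, 9, 9],
              [1, 1, 0, 9, 9, 9],
              [9, 9, 9, 0, 1, 1],
              [9, 9, 9, 1, 0, 1],
              [9, 9, 9, 1, 1, 0]], dtype=float)
tour = nearest_neighbour_tour(D)
print(tour, cut_tour_into_clusters(tour, D, n_clusters=2))
```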

  3. Using the Firefly optimization method to weight an ensemble of rainfall forecasts from the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS)

    NASA Astrophysics Data System (ADS)

    dos Santos, A. F.; Freitas, S. R.; de Mattos, J. G. Z.; de Campos Velho, H. F.; Gan, M. A.; da Luz, E. F. P.; Grell, G. A.

    2013-09-01

    In this paper we consider an optimization problem applying the metaheuristic Firefly algorithm (FY) to weight an ensemble of rainfall forecasts from daily precipitation simulations with the Brazilian developments on the Regional Atmospheric Modeling System (BRAMS) over South America during January 2006. The method is addressed as a parameter estimation problem to weight the ensemble of precipitation forecasts carried out using different options of the convective parameterization scheme. Ensemble simulations were performed using different choices of closures, representing different formulations of dynamic control (the modulation of convection by the environment) in a deep convection scheme. The optimization problem is solved as an inverse problem of parameter estimation. The application and validation of the methodology is carried out using daily precipitation fields, defined over South America and obtained by merging remote sensing estimations with rain gauge observations. The quadratic difference between the model and observed data was used as the objective function to determine the best combination of the ensemble members to reproduce the observations. To reduce the model rainfall biases, the set of weights determined by the algorithm is used to weight members of an ensemble of model simulations in order to compute a new precipitation field that represents the observed precipitation as closely as possible. The validation of the methodology is carried out using classical statistical scores. The algorithm has produced the best combination of the weights, resulting in a new precipitation field closest to the observations.
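
    For orientation, the sketch below shows a minimal Firefly-style search applied to the weighting problem described above: candidate weight vectors are "fireflies", brightness is the negated quadratic model-observation mismatch, and dimmer fireflies move toward brighter ones. The function and parameter names (alpha, beta0, gamma) follow common Firefly conventions but are illustrative assumptions, not the implementation used with BRAMS; this toy version also leaves the weights unconstrained.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def firefly_weights(ensemble, obs, n_fireflies=20, n_iter=100,
                        alpha=0.1, beta0=1.0, gamma=1.0):
        """Toy Firefly search for weights w minimizing ||obs - ensemble @ w||^2.
        ensemble: (n_points, n_members) member forecasts; obs: (n_points,)."""
        n_members = ensemble.shape[1]
        X = rng.random((n_fireflies, n_members))  # candidate weight vectors
        cost = lambda w: float(np.sum((obs - ensemble @ w) ** 2))
        f = np.array([cost(x) for x in X])
        for _ in range(n_iter):
            for i in range(n_fireflies):
                for j in range(n_fireflies):
                    if f[j] < f[i]:  # firefly j is brighter: move i toward j
                        r2 = np.sum((X[i] - X[j]) ** 2)
                        beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                        X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(n_members) - 0.5)
                        f[i] = cost(X[i])
        return X[np.argmin(f)]
    ```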

  4. Quantum annealing with parametrically driven nonlinear oscillators

    NASA Astrophysics Data System (ADS)

    Puri, Shruti

    While progress has been made towards building Ising machines to solve hard combinatorial optimization problems, quantum speedups have so far been elusive. Furthermore, protecting annealers against decoherence and achieving long-range connectivity remain important outstanding challenges. With the hope of overcoming these challenges, I introduce a new paradigm for quantum annealing that relies on continuous variable states. Unlike the more conventional approach based on two-level systems, in this approach, quantum information is encoded in two coherent states that are stabilized by parametrically driving a nonlinear resonator. I will show that a fully connected Ising problem can be mapped onto a network of such resonators, and outline an annealing protocol based on adiabatic quantum computing. During the protocol, the resonators in the network evolve from vacuum to coherent states representing the ground state configuration of the encoded problem. In short, the system evolves between two classical states following non-classical dynamics. As will be supported by numerical results, this new annealing paradigm leads to superior noise resilience. Finally, I will discuss a realistic circuit QED realization of an all-to-all connected network of parametrically driven nonlinear resonators. The continuous variable nature of the states in the large Hilbert space of the resonator provides new opportunities for exploring quantum phase transitions and non-stoquastic dynamics during the annealing schedule.

  5. An Outline of Data Aggregation Security in Heterogeneous Wireless Sensor Networks.

    PubMed

    Boubiche, Sabrina; Boubiche, Djallel Eddine; Bilami, Azzedine; Toral-Cruz, Homero

    2016-04-12

    Data aggregation processes aim to reduce the amount of exchanged data in wireless sensor networks and consequently minimize the packet overhead and optimize energy efficiency. Securing the data aggregation process is a real challenge since the aggregation nodes must access the relayed data to apply the aggregation functions. The data aggregation security problem has been widely addressed in classical homogeneous wireless sensor networks; however, most of the proposed security protocols cannot guarantee a high level of security since the sensor node resources are limited. Heterogeneous wireless sensor networks have recently emerged as a new wireless sensor network category which expands the sensor nodes' resources and capabilities. These new kinds of WSNs have opened new research opportunities in which security is an especially attractive area. Indeed, robust and high-security-level algorithms can be used to secure the data aggregation at the heterogeneous aggregation nodes, which is impossible in classical homogeneous WSNs. In contrast to homogeneous sensor networks, the data aggregation security problem in heterogeneous networks is still not sufficiently covered, and few data aggregation security protocols have been proposed. To address this recent research area, this paper describes the data aggregation security problem in heterogeneous wireless sensor networks and surveys a few proposed security protocols. A classification and evaluation of the existing protocols is also introduced based on the adopted data aggregation security approach.

  6. Inclusion of tank configurations as a variable in the cost optimization of branched piped-water networks

    NASA Astrophysics Data System (ADS)

    Hooda, Nikhil; Damani, Om

    2017-06-01

    The classic problem of the capital cost optimization of branched piped networks consists of choosing pipe diameters for each pipe in the network from a discrete set of commercially available pipe diameters. Each pipe in the network can consist of multiple segments of differing diameters. Water networks also consist of intermediate tanks that act as buffers between incoming flow from the primary source and the outgoing flow to the demand nodes. The network from the primary source to the tanks is called the primary network, and the network from the tanks to the demand nodes is called the secondary network. During the design stage, the primary and secondary networks are optimized separately, with the tanks acting as demand nodes for the primary network. Typically the choice of tank locations, their elevations, and the set of demand nodes to be served by different tanks is made manually, in an ad hoc fashion, before any optimization is done. It is therefore desirable to include this tank configuration choice in the cost optimization process itself. In this work, we explain why the choice of tank configuration is important to the design of a network and describe an integer linear program model that integrates the tank configuration into the standard pipe diameter selection problem. In order to aid the designers of piped-water networks, the improved cost optimization formulation is incorporated into our existing network design system called JalTantra.
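
    For a single pipe, the segment-based diameter selection described above reduces to a small linear program: choose how many meters of each commercial diameter to install so that the pipe length is covered and the head loss stays within budget, at minimum cost. The sketch below uses scipy.optimize.linprog with made-up costs and friction losses; the actual JalTantra formulation is an integer linear program over an entire network, with the tank configuration variables added.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical data: three commercial diameters for one 1000 m pipe.
    cost_per_m = np.array([120.0, 180.0, 260.0])      # installation cost per meter
    headloss_per_m = np.array([0.020, 0.008, 0.003])  # assumed friction head loss (m/m)
    L_total, H_budget = 1000.0, 12.0                  # pipe length (m), allowable head loss (m)

    res = linprog(c=cost_per_m,
                  A_ub=[headloss_per_m], b_ub=[H_budget],  # total head loss within budget
                  A_eq=[np.ones(3)], b_eq=[L_total],       # segments cover the full length
                  bounds=[(0, None)] * 3)
    print(res.x)  # optimal segment length for each diameter
    ```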

  7. The Ship of Classics: The Ark, the Titanic, or the Good Ship Lollipop?

    ERIC Educational Resources Information Center

    Wolverton, Robert E.

    Various problems confronting teachers of the classics are explored through frequent reference to the metaphor of the classics viewed as a sailing ship in a sea of troubled waters. Several of the difficulties confronting classics teachers are seen to be related to an anti-intellectual mood prevailing in academe, scheduling problems, shifting school…

  8. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods.

    PubMed

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-04-07

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell's equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called minimum norm estimates (MNE), promote source estimates with a small ℓ₂ norm. Here, we consider a more general class of priors based on mixed norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as mixed-norm estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ₁/ℓ₂ mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ₁/ℓ₂ norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data.
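
    The fast first-order schemes mentioned for the ℓ₁/ℓ₂ prior can be read as FISTA with a row-wise soft-thresholding proximal step. The numpy sketch below is a minimal version under that reading; the names (G for the forward matrix, M for the measurements) are illustrative, not the authors' code.

    ```python
    import numpy as np

    def prox_l21(Z, tau):
        """Proximal operator of tau * (sum of row-wise l2 norms): group soft-thresholding."""
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        return Z * np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)

    def mxne_fista(G, M, alpha, n_iter=200):
        """Minimize 0.5*||M - G X||_F^2 + alpha * sum_i ||X_i||_2 with FISTA."""
        L = np.linalg.norm(G, 2) ** 2  # Lipschitz constant of the data-fit gradient
        X = np.zeros((G.shape[1], M.shape[1]))
        Y, t = X.copy(), 1.0
        for _ in range(n_iter):
            grad = G.T @ (G @ Y - M)
            X_new = prox_l21(Y - grad / L, alpha / L)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            Y = X_new + ((t - 1.0) / t_new) * (X_new - X)  # momentum step
            X, t = X_new, t_new
        return X
    ```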

  9. Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods

    PubMed Central

    Gramfort, Alexandre; Kowalski, Matthieu; Hämäläinen, Matti

    2012-01-01

    Magneto- and electroencephalography (M/EEG) measure the electromagnetic fields produced by the neural electrical currents. Given a conductor model for the head, and the distribution of source currents in the brain, Maxwell’s equations allow one to compute the ensuing M/EEG signals. Given the actual M/EEG measurements and the solution of this forward problem, one can localize, in space and in time, the brain regions that have produced the recorded data. However, due to the physics of the problem, the limited number of sensors compared to the number of possible source locations, and measurement noise, this inverse problem is ill-posed. Consequently, additional constraints are needed. Classical inverse solvers, often called Minimum Norm Estimates (MNE), promote source estimates with a small ℓ2 norm. Here, we consider a more general class of priors based on mixed-norms. Such norms have the ability to structure the prior in order to incorporate some additional assumptions about the sources. We refer to such solvers as Mixed-Norm Estimates (MxNE). In the context of M/EEG, MxNE can promote spatially focal sources with smooth temporal estimates with a two-level ℓ1/ℓ2 mixed-norm, while a three-level mixed-norm can be used to promote spatially non-overlapping sources between different experimental conditions. In order to efficiently solve the optimization problems of MxNE, we introduce fast first-order iterative schemes that for the ℓ1/ℓ2 norm give solutions in a few seconds making such a prior as convenient as the simple MNE. Furthermore, thanks to the convexity of the optimization problem, we can provide optimality conditions that guarantee global convergence. The utility of the methods is demonstrated both with simulations and experimental MEG data. PMID:22421459

  10. Classical Electrodynamics: Problems with solutions

    NASA Astrophysics Data System (ADS)

    Likharev, Konstantin K.

    2018-06-01

    Essential Advanced Physics is a series comprising four parts: Classical Mechanics, Classical Electrodynamics, Quantum Mechanics and Statistical Mechanics. Each part consists of two volumes, Lecture notes and Problems with solutions, further supplemented by an additional collection of test problems and solutions available to qualifying university instructors. This volume, Classical Electrodynamics: Lecture notes, is intended to be the basis for a two-semester graduate-level course on electricity and magnetism, including not only the interaction and dynamics of charged point particles, but also properties of dielectric, conducting, and magnetic media. The course also covers special relativity, including its kinematics and particle-dynamics aspects, and electromagnetic radiation by relativistic particles.

  11. Sparse and stable Markowitz portfolios

    PubMed Central

    Brodie, Joshua; Daubechies, Ingrid; De Mol, Christine; Giannone, Domenico; Loris, Ignace

    2009-01-01

    We consider the problem of portfolio selection within the classical Markowitz mean-variance framework, reformulated as a constrained least-squares regression problem. We propose to add to the objective function a penalty proportional to the sum of the absolute values of the portfolio weights. This penalty regularizes (stabilizes) the optimization problem, encourages sparse portfolios (i.e., portfolios with only a few active positions), and allows accounting for transaction costs. Our approach recovers as special cases the no-short-positions portfolios, but also allows for a limited number of short positions. We implement this methodology on two benchmark data sets constructed by Fama and French. Using only a modest amount of training data, we construct portfolios whose out-of-sample performance, as measured by Sharpe ratio, is consistently and significantly better than that of the naïve evenly weighted portfolio. PMID:19617537
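
    In the reformulation described above, the penalized objective takes (up to notational choices, which are assumptions here) the form

    ```latex
    \min_{w}\ \lVert y - R\,w \rVert_2^2 + \tau \sum_{i=1}^{n} \lvert w_i \rvert
    \quad \text{subject to} \quad \sum_{i=1}^{n} w_i = 1,
    ```

    where R collects the historical asset returns, y is the target return series, w holds the portfolio weights, and τ controls the trade-off between fit and sparsity; restricting to w_i ≥ 0 recovers the no-short-positions case, since the penalty is then constant on the constraint set.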

  12. Efficient classical simulation of the Deutsch-Jozsa and Simon's algorithms

    NASA Astrophysics Data System (ADS)

    Johansson, Niklas; Larsson, Jan-Åke

    2017-09-01

    A long-standing aim of quantum information research is to understand what gives quantum computers their advantage. This requires separating problems that need genuinely quantum resources from those for which classical resources are enough. Two examples of quantum speed-up are the Deutsch-Jozsa and Simon's problem, both efficiently solvable on a quantum Turing machine, and both believed to lack efficient classical solutions. Here we present a framework that can simulate both quantum algorithms efficiently, solving the Deutsch-Jozsa problem with probability 1 using only one oracle query, and Simon's problem using linearly many oracle queries, just as expected of an ideal quantum computer. The presented simulation framework is in turn efficiently simulatable in a classical probabilistic Turing machine. This shows that the Deutsch-Jozsa and Simon's problem do not require any genuinely quantum resources, and that the quantum algorithms show no speed-up when compared with their corresponding classical simulation. Finally, this gives insight into what properties are needed in the two algorithms and calls for further study of oracle separation between quantum and classical computation.

  13. A Novel Hybrid Clonal Selection Algorithm with Combinatorial Recombination and Modified Hypermutation Operators for Global Optimization

    PubMed Central

    Lin, Jingjing; Jing, Honglei

    2016-01-01

    Artificial immune systems are among the most recently introduced intelligence methods, inspired by the biological immune system. Most immune-system-inspired algorithms are based on the clonal selection principle and are known as clonal selection algorithms (CSAs). When coping with complex optimization problems with the characteristics of multimodality, high dimension, rotation, and composition, traditional CSAs often suffer from premature convergence and unsatisfactory accuracy. To address these issues, a recombination operator inspired by biological combinatorial recombination is proposed first. The recombination operator generates promising candidate solutions to enhance the search ability of the CSA by fusing information from randomly chosen parents. Furthermore, a modified hypermutation operator is introduced to construct more promising and efficient candidate solutions. A set of 16 commonly used benchmark functions is adopted to test the effectiveness and efficiency of the recombination and hypermutation operators. The comparisons with the classic CSA, CSA with the recombination operator (RCSA), and CSA with the recombination and modified hypermutation operators (RHCSA) demonstrate that the proposed algorithm significantly improves the performance of the classic CSA. Moreover, comparison with state-of-the-art algorithms shows that the proposed algorithm is quite competitive. PMID:27698662
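
    To make the clonal selection skeleton concrete, here is a minimal generic CSA loop in Python: the best antibodies are cloned, clones are hypermutated with a step that grows with the parent's rank, and clones compete with parents for survival. This sketches the classic scheme the record builds on, not the proposed RHCSA with its combinatorial recombination operator; all names are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def csa_minimize(f, dim, pop=20, n_clones=5, n_iter=200, bounds=(-5.0, 5.0)):
        """Toy clonal selection: clone the best half, hypermutate clones with a
        rank-dependent step, and keep the best `pop` of clones plus parents."""
        lo, hi = bounds
        P = rng.uniform(lo, hi, (pop, dim))
        for _ in range(n_iter):
            P = P[np.argsort([f(x) for x in P])]  # sort antibodies, best first
            clones = []
            for rank, parent in enumerate(P[:pop // 2]):
                step = (hi - lo) * 0.1 * (rank + 1) / pop  # worse rank -> larger mutation
                C = parent + step * rng.standard_normal((n_clones, dim))
                clones.append(np.clip(C, lo, hi))
            C = np.vstack(clones + [P])  # clones compete with parents
            P = C[np.argsort([f(x) for x in C])][:pop]
        return P[0]
    ```

    For example, `csa_minimize(lambda x: float(np.sum(x ** 2)), dim=5)` converges toward the origin on the sphere function.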

  14. On sequential data assimilation for scalar macroscopic traffic flow models

    NASA Astrophysics Data System (ADS)

    Blandin, Sébastien; Couque, Adrien; Bayen, Alexandre; Work, Daniel

    2012-09-01

    We consider the problem of sequential data assimilation for transportation networks using optimal filtering with a scalar macroscopic traffic flow model. Properties of the distribution of the uncertainty on the true state related to the specific nonlinearity and non-differentiability inherent to macroscopic traffic flow models are investigated, derived analytically and analyzed. We show that nonlinear dynamics, by creating discontinuities in the traffic state, affect the performance of classical filters and in particular that the distribution of the uncertainty on the traffic state at shock waves is a mixture distribution. The non-differentiability of traffic dynamics around stationary shock waves is also proved and the resulting optimality loss of the estimates is quantified numerically. The properties of the estimates are explicitly studied for the Godunov scheme (and thus the Cell-Transmission Model), leading to specific conclusions about their use in the context of filtering, which is a significant contribution of this article. Analytical proofs and numerical tests are introduced to support the results presented. A Java implementation of the classical filters used in this work is available on-line at http://traffic.berkeley.edu to facilitate further efforts on this topic and foster reproducible research.

  15. A multiobjective optimization framework for multicontaminant industrial water network design.

    PubMed

    Boix, Marianne; Montastruc, Ludovic; Pibouleau, Luc; Azzaro-Pantel, Catherine; Domenech, Serge

    2011-07-01

    The optimal design of multicontaminant industrial water networks according to several objectives is carried out in this paper. The general formulation of the water allocation problem (WAP) is given as a set of nonlinear equations with binary variables representing the presence of interconnections in the network. For optimization purposes, three antagonist objectives are considered: F(1), the freshwater flow-rate at the network entrance, F(2), the water flow-rate at the inlet of regeneration units, and F(3), the number of interconnections in the network. The multiobjective problem is solved via a lexicographic strategy, where a mixed-integer nonlinear programming (MINLP) procedure is used at each step. The approach is illustrated by a numerical example taken from the literature involving five processes, one regeneration unit and three contaminants. The set of potential network solutions is provided in the form of a Pareto front. Finally, the strategy for choosing the best network solution among those given by Pareto fronts is presented. This Multiple Criteria Decision Making (MCDM) problem is tackled by means of two approaches: a classical TOPSIS analysis is first implemented, and then an innovative strategy based on the global equivalent cost (GEC) in freshwater that turns out to be more efficient for choosing a good network from a practical point of view.
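
    The classical TOPSIS step used in the decision stage is compact enough to sketch. Given a decision matrix with one row per candidate network and one column per criterion, TOPSIS ranks candidates by relative closeness to an ideal solution; the implementation below is a standard textbook version with illustrative names, not the paper's code.

    ```python
    import numpy as np

    def topsis(D, weights, benefit):
        """Rank alternatives (rows of D) by closeness to the ideal solution.
        weights: one weight per criterion; benefit: boolean array, True where
        the criterion is to be maximized. Returns closeness scores in [0, 1]."""
        N = D / np.linalg.norm(D, axis=0)  # vector-normalize each criterion
        V = N * weights
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        return d_neg / (d_pos + d_neg)  # higher closeness is better
    ```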

  16. Practical quantum appointment scheduling

    NASA Astrophysics Data System (ADS)

    Touchette, Dave; Lovitz, Benjamin; Lütkenhaus, Norbert

    2018-04-01

    We propose a protocol based on coherent states and linear optics operations for solving the appointment-scheduling problem. Our main protocol leaks strictly less information about each party's input than the optimal classical protocol, even when considering experimental errors. Along with the ability to generate constant-amplitude coherent states over two modes, this protocol requires the ability to transfer these modes back-and-forth between the two parties multiple times with very low losses. The implementation requirements are thus still challenging. Along the way, we develop tools to study quantum information cost of interactive protocols in the finite regime.

  17. Quantum Adiabatic Algorithms and Large Spin Tunnelling

    NASA Technical Reports Server (NTRS)

    Boulatov, A.; Smelyanskiy, V. N.

    2003-01-01

    We provide a theoretical study of the quantum adiabatic evolution algorithm with different evolution paths proposed in this paper. The algorithm is applied to a random binary optimization problem (a version of the 3-Satisfiability problem) where the n-bit cost function is symmetric with respect to the permutation of individual bits. The evolution paths are produced, using the generic control Hamiltonians H(r) that preserve the bit symmetry of the underlying optimization problem. In the case where the ground state of H(0) coincides with the totally-symmetric state of an n-qubit system the algorithm dynamics is completely described in terms of the motion of a spin-n/2. We show that different control Hamiltonians can be parameterized by a set of independent parameters that are expansion coefficients of H(r) in a certain universal set of operators. Only one of these operators can be responsible for avoiding the tunnelling in the spin-n/2 system during the quantum adiabatic algorithm. We show that it is possible to select a coefficient for this operator that guarantees a polynomial complexity of the algorithm for all problem instances. We show that a successful evolution path of the algorithm always corresponds to the trajectory of a classical spin-n/2 and provide a complete characterization of such paths.

  18. Optimization of lattice surgery is NP-hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon J.

    2017-09-01

    The traditional method for computation in either the surface code or in the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits that are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work, we focus on the lattice surgery representation, which realizes transversal logic operations without destroying the intrinsic 2D nearest-neighbor properties of the braid-based surface code and achieves universality without defects and braid-based logic. For both techniques there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest resource requirements in terms of physical qubits and computational time, and prove that the complexity of optimizing a quantum circuit in the lattice surgery model is NP-hard.

  19. Optimization of topological quantum algorithms using Lattice Surgery is hard

    NASA Astrophysics Data System (ADS)

    Herr, Daniel; Nori, Franco; Devitt, Simon

    The traditional method for computation in the surface code or the Raussendorf model is the creation of holes or "defects" within the encoded lattice of qubits which are manipulated via topological braiding to enact logic gates. However, this is not the only way to achieve universal, fault-tolerant computation. In this work we turn attention to the Lattice Surgery representation, which realizes encoded logic operations without destroying the intrinsic 2D nearest-neighbor interactions sufficient for braid-based logic, and achieves universality without using defects for encoding information. In both braided and lattice surgery logic there are open questions regarding the compilation and resource optimization of quantum circuits. Optimization in braid-based logic is proving to be difficult to define and the classical complexity associated with this problem has yet to be determined. In the context of lattice-surgery-based logic, we can introduce an optimality condition, which corresponds to a circuit with the lowest physical qubit requirements, and prove that the complexity of optimizing the geometric (lattice surgery) representation of a quantum circuit is NP-hard.

  20. Solving the dynamic ambulance relocation and dispatching problem using approximate dynamic programming

    PubMed Central

    Schmid, Verena

    2012-01-01

    Emergency service providers are supposed to locate ambulances such that, in case of emergency, patients can be reached in a time-efficient manner. Two fundamental decisions need to be made in real time. First, immediately after a request emerges, an appropriate vehicle needs to be dispatched and sent to the request's site. After having served a request, the vehicle needs to be relocated to its next waiting location. We propose a model and solve the underlying optimization problem using approximate dynamic programming (ADP), an emerging and powerful tool for solving stochastic and dynamic problems typically arising in the field of operations research. Empirical tests based on real data from the city of Vienna indicate that by deviating from the classical dispatching rules the average response time can be decreased from 4.60 to 4.01 minutes, which corresponds to an improvement of 12.89%. Furthermore, we show that it is essential to explicitly consider time-dependent information such as travel times and changes in request volume. Ignoring the current time and its consequences during the modeling and optimization stage leads to suboptimal decisions. PMID:25540476

  1. Optimal allocation of trend following strategies

    NASA Astrophysics Data System (ADS)

    Grebenkov, Denis S.; Serror, Jeremy

    2015-09-01

    We consider a portfolio allocation problem for trend following (TF) strategies on multiple correlated assets. Under simplifying assumptions of a Gaussian market and linear TF strategies, we derive analytical formulas for the mean and variance of the portfolio return. We then construct the optimal portfolio that maximizes risk-adjusted return by accounting for inter-asset correlations. The dynamic allocation problem for n assets is shown to be equivalent to the classical static allocation problem for n² virtual assets that include lead-lag corrections in positions of TF strategies. The respective roles of asset auto-correlations and inter-asset correlations are investigated in depth for the two-asset case and a sector model. In contrast to the principle of diversification, which suggests treating uncorrelated assets, we show that inter-asset correlations allow one to estimate apparent trends more reliably and to adjust the TF positions more efficiently. If properly accounted for, inter-asset correlations are not detrimental but beneficial for portfolio management and can open new profit opportunities for trend followers. These concepts are illustrated using daily returns of three highly correlated futures markets: the E-mini S&P 500, Euro Stoxx 50 index, and the US 10-year T-note futures.
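
    As a minimal illustration of a linear TF strategy in the sense used above, the sketch below computes the P&L of a single-asset strategy whose position is proportional to an exponential moving average of past returns. The horizon and gain parameters are illustrative assumptions; the paper's multi-asset, correlation-adjusted allocation is not reproduced here.

    ```python
    import numpy as np

    def tf_pnl(returns, horizon=50, gain=1.0):
        """P&L of a linear trend-following strategy: the position held into each
        period is proportional to an EMA trend estimate of past returns."""
        lam = 1.0 - 1.0 / horizon  # EMA decay for the trend estimate
        trend, pos, pnl = 0.0, 0.0, []
        for r in returns:
            pnl.append(pos * r)  # yesterday's position earns today's return
            trend = lam * trend + (1.0 - lam) * r  # update trend estimate
            pos = gain * trend  # linear TF position
        return np.array(pnl)
    ```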

  2. Optimal reservoir operation policies using novel nested algorithms

    NASA Astrophysics Data System (ADS)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    Historically, the two most widely practiced methods for optimal reservoir operation have been dynamic programming (DP) and stochastic dynamic programming (SDP). These two methods suffer from the so-called "dual curse" which prevents them from being used in reasonably complex water systems. The first is the "curse of dimensionality", which denotes an exponential growth of the computational complexity with the state-decision space dimension. The second is the "curse of modelling", which requires an explicit model of each component of the water system to anticipate the effect of each system transition. We address the problem of optimal reservoir operation concerning multiple objectives that are related to 1) reservoir releases to satisfy several downstream users competing for water with dynamically varying demands, 2) deviations from the target minimum and maximum reservoir water levels and 3) hydropower production that is a combination of the reservoir water level and the reservoir releases. Addressing such a problem with classical methods (DP and SDP) requires a reasonably high level of discretization of the reservoir storage volume, which in combination with the required releases discretization for meeting the demands of downstream users leads to computationally expensive formulations and causes the curse of dimensionality. We present a novel approach, named "nested", that is implemented in DP, SDP and reinforcement learning (RL); correspondingly, three new algorithms are developed, named nested DP (nDP), nested SDP (nSDP) and nested RL (nRL). The nested algorithms are composed of two algorithms: 1) DP, SDP or RL and 2) a nested optimization algorithm. Depending on the way we formulate the objective function related to deficits in the allocation problem in the nested optimization, two methods are implemented: 1) Simplex for linear allocation problems, and 2) the quadratic Knapsack method in the case of nonlinear problems. The novel idea is to include the nested optimization algorithm in the state transition, which lowers the starting problem dimension and alleviates the curse of dimensionality. The algorithms can solve multi-objective optimization problems without significantly increasing the complexity or the computational expense. The algorithms can handle dense and irregular variable discretization, and are coded in Java as prototype applications. The three algorithms were tested at the multipurpose reservoir Knezevo of the Zletovica hydro-system located in the Republic of Macedonia, with eight objectives, including urban water supply, agriculture, ensuring ecological flow, and generation of hydropower. Because the Zletovica hydro-system is relatively complex, the novel algorithms were pushed to their limits, demonstrating their capabilities and limitations. The nSDP and nRL derived/learned the optimal reservoir policy using 45 years (1951-1995) of historical data. The nSDP and nRL optimal reservoir policy was tested on 10 years (1995-2005) of historical data, and compared with the nDP optimal reservoir operation in the same period. The nested algorithms and optimal reservoir operation results are analysed and explained.
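
    To fix ideas on the DP building block, the toy sketch below runs a backward recursion over a discretized storage grid for a single deterministic reservoir with a quadratic supply-deficit cost. It also illustrates why the required discretization becomes expensive; the nested algorithms' inner Simplex/Knapsack allocation step is not reproduced, and all names are illustrative.

    ```python
    import numpy as np

    def reservoir_dp(T, S_max, inflow, demand, n_s=51, n_r=51):
        """Backward DP on a storage grid: minimize total squared supply deficit
        over T stages for a toy single reservoir with known inflows."""
        S = np.linspace(0.0, S_max, n_s)  # storage grid
        V = np.zeros(n_s)                 # terminal cost-to-go
        policy = np.zeros((T, n_s))
        for t in range(T - 1, -1, -1):
            V_new = np.full(n_s, np.inf)
            for i, s in enumerate(S):
                for r in np.linspace(0.0, s + inflow[t], n_r):  # feasible releases
                    s_next = min(s + inflow[t] - r, S_max)      # spill above capacity
                    j = int(round(s_next / S_max * (n_s - 1)))  # nearest grid point
                    cost = (demand[t] - r) ** 2 + V[j]          # stage cost + cost-to-go
                    if cost < V_new[i]:
                        V_new[i], policy[t, i] = cost, r
            V = V_new
        return policy
    ```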

  3. Quantum approximate optimization algorithm for MaxCut: A fermionic view

    NASA Astrophysics Data System (ADS)

    Wang, Zhihui; Hadfield, Stuart; Jiang, Zhang; Rieffel, Eleanor G.

    2018-02-01

    Farhi et al. recently proposed a class of quantum algorithms, the quantum approximate optimization algorithm (QAOA), for approximately solving combinatorial optimization problems (E. Farhi et al., arXiv:1411.4028; arXiv:1412.6062; arXiv:1602.07674). A level-p QAOA circuit consists of p steps; in each step a classical Hamiltonian, derived from the cost function, is applied followed by a mixing Hamiltonian. The 2p times for which these two Hamiltonians are applied are the parameters of the algorithm, which are to be optimized classically for the best performance. As p increases, parameter optimization becomes inefficient due to the curse of dimensionality. The success of the QAOA approach will depend, in part, on finding effective parameter-setting strategies. Here we analytically and numerically study parameter setting for the QAOA applied to MaxCut. For the level-1 QAOA, we derive an analytical expression for a general graph. In principle, expressions for higher p could be derived, but the number of terms quickly becomes prohibitive. For a special case of MaxCut, the "ring of disagrees," or the one-dimensional antiferromagnetic ring, we provide an analysis for an arbitrarily high level. Using a fermionic representation, the evolution of the system under the QAOA translates into quantum control of an ensemble of independent spins. This treatment enables us to obtain analytical expressions for the performance of the QAOA for any p. It also greatly simplifies the numerical search for the optimal values of the parameters. By exploring symmetries, we identify a lower-dimensional submanifold of interest; the search effort can be accordingly reduced. This analysis also explains an observed symmetry in the optimal parameter values. Further, we numerically investigate the parameter landscape and show that it is a simple one in the sense of having no local optima.
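
    For small graphs, a level-1 QAOA for MaxCut can be simulated directly on a statevector. The sketch below does so for the four-node "ring of disagrees": a diagonal phase separator built from the cut-counting cost, followed by a single-qubit RX mixer on every qubit. The instance and names are illustrative, not the paper's code.

    ```python
    import numpy as np

    # Toy instance: MaxCut on a 4-node ring (the "ring of disagrees").
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    n = 4

    # Cost C(z) = number of cut edges for each computational basis state z.
    costs = np.array([sum((z >> i & 1) != (z >> j & 1) for i, j in edges)
                      for z in range(2 ** n)], dtype=float)

    def qaoa_expectation(gamma, beta):
        """<C> after one QAOA level: mixer(beta) . phase(gamma) . |+...+>."""
        state = np.full(2 ** n, 1.0 / np.sqrt(2 ** n), dtype=complex)
        state = state * np.exp(-1j * gamma * costs)  # phase separator exp(-i*gamma*C)
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])  # RX(2*beta) = exp(-i*beta*X)
        for q in range(n):  # apply the mixer one qubit at a time
            psi = state.reshape(2 ** (n - q - 1), 2, 2 ** q)
            state = np.einsum('ab,ibj->iaj', rx, psi).reshape(-1)
        return float(np.real(np.vdot(state, costs * state)))
    ```

    A grid search such as `max(qaoa_expectation(g, b) for g in np.linspace(0, np.pi, 50) for b in np.linspace(0, np.pi, 50))` then locates the optimal level-1 parameters for this toy instance.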

  4. An Efficient Bundle Adjustment Model Based on Parallax Parametrization for Environmental Monitoring

    NASA Astrophysics Data System (ADS)

    Chen, R.; Sun, Y. Y.; Lei, Y.

    2017-12-01

    With the rapid development of Unmanned Aircraft Systems (UAS), more and more research fields have been successfully equipped with this mature technology, among which is environmental monitoring. One difficult task is how to acquire the accurate position of a ground object in order to reconstruct the scene more accurately. To handle this problem, we combine the bundle adjustment method from Photogrammetry with parallax parametrization from Computer Vision to create a new method called APCP (aerial polar-coordinate photogrammetry). One impressive advantage of this method compared with the traditional one is that a 3-dimensional point in space is represented using three angles (elevation angle, azimuth angle and parallax angle) rather than its XYZ value. As the basis for APCP, bundle adjustment can be used to accurately optimize the pose of the UAS sensors and reconstruct 3D models of the environment, thus serving as the criterion of accurate positioning for monitoring. To verify the effectiveness of the proposed method, we test on several UAV datasets obtained by non-metric digital cameras with large attitude angles, and we find that our method achieves one to two times better efficiency than traditional ones with no loss of accuracy. The classical nonlinear optimization of the bundle adjustment model based on rectangular coordinates suffers from serious dependence on the initial values, making it unable to converge quickly or to reach a stable state. On the contrary, the APCP method can deal with quite complex UAS conditions when conducting monitoring, as it represents the points in space with angles, including the condition that sequential images focusing on one object have zero parallax angle. In brief, this paper presents the parameterization of 3D feature points based on APCP, and derives a full bundle adjustment model and the corresponding nonlinear optimization problems based on this method. In addition, we analyze the influence on convergence and the dependence on initial values through mathematical formulas. Finally, this paper conducts experiments using real aviation data, and shows that the new model can, to a certain degree, effectively resolve bottlenecks of the classical method; that is, it provides a new idea and solution for faster and more efficient environmental monitoring.

  5. A Frequency-Domain Adaptive Matched Filter for Active Sonar Detection.

    PubMed

    Zhao, Zhishan; Zhao, Anbang; Hui, Juan; Hou, Baochun; Sotudeh, Reza; Niu, Fang

    2017-07-04

    The most classical detector of active sonar and radar is the matched filter (MF), which is the optimal processor under ideal conditions. Aiming at the problem of active sonar detection, we propose a frequency-domain adaptive matched filter (FDAMF) with the use of a frequency-domain adaptive line enhancer (ALE). The FDAMF is an improved MF. In the simulations in this paper, the signal to noise ratio (SNR) gain of the FDAMF is about 18.6 dB higher than that of the classical MF when the input SNR is -10 dB. In order to improve the performance of the FDAMF with a low input SNR, we propose a pre-processing method, which is called frequency-domain time reversal convolution and interference suppression (TRC-IS). Compared with the classical MF, the FDAMF combined with the TRC-IS method obtains higher SNR gain, a lower detection threshold, and a better receiver operating characteristic (ROC) in the simulations in this paper. The simulation results show that the FDAMF has higher processing gain and better detection performance than the classical MF under ideal conditions. The experimental results indicate that the FDAMF does improve the performance of the MF, and can adapt to actual interference to some extent. In addition, the TRC-IS preprocessing method works well in an actual noisy ocean environment.
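
    For reference, the classical frequency-domain matched filter that the FDAMF builds on is FFT-based cross-correlation of the received signal with the known replica. A minimal numpy sketch follows; the adaptive line enhancer and the TRC-IS pre-processing stages are not reproduced here.

    ```python
    import numpy as np

    def matched_filter(signal, replica):
        """Classical matched filter via the frequency domain: cross-correlate
        the received signal with the known replica using zero-padded FFTs."""
        n = len(signal) + len(replica) - 1
        S = np.fft.rfft(signal, n)
        R = np.fft.rfft(replica, n)
        out = np.fft.irfft(S * np.conj(R), n)  # linear cross-correlation
        return out[:len(signal)]               # peak marks the replica's delay
    ```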

  6. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    NASA Astrophysics Data System (ADS)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years a new, highly demanding framework has been set for environmental sciences and applied mathematics as a result of the needs posed by issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable resources of energy and natural hazards can be listed among them. There are two main directions that the research community follows today in order to address the above problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, trying to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. The conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least squares methods based on classical Euclidean geometry tools. In the present work new optimization techniques are discussed making use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized in order to define the optimal distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is clarified by an application to wind speed forecasts on the island of Kefalonia, Greece.

  7. Homeostasis of exercise hyperpnea and optimal sensorimotor integration: the internal model paradigm.

    PubMed

    Poon, Chi-Sang; Tin, Chung; Yu, Yunguo

    2007-10-15

    Homeostasis is a basic tenet of biomedicine and an open problem for many physiological control systems. Among them, none has been more extensively studied and intensely debated than the dilemma of exercise hyperpnea - a paradoxical homeostatic increase of respiratory ventilation that is geared to metabolic demands instead of the normal chemoreflex mechanism. Classical control theory has led to a plethora of "feedback/feedforward control" or "set point" hypotheses for homeostatic regulation, yet so far none of them has proved satisfactory in explaining exercise hyperpnea and its interactions with other respiratory inputs. Instead, the available evidence points to a far more sophisticated respiratory controller capable of integrating multiple afferent and efferent signals in adapting the ventilatory pattern toward optimality relative to conflicting homeostatic, energetic and other objectives. This optimality principle parsimoniously mimics exercise hyperpnea, chemoreflex and a host of characteristic respiratory responses to abnormal gas exchange or mechanical loading/unloading in health and in cardiopulmonary diseases - all without resorting to a feedforward "exercise stimulus". Rather, an emergent controller signal encoding the projected metabolic level is predicted by the principle as an exercise-induced 'mental percept' or 'internal model', presumably engendered by associative learning (operant conditioning or classical conditioning) which achieves optimality through continuous identification of, and adaptation to, the causal relationship between respiratory motor output and resultant chemical-mechanical afferent feedbacks. This internal model self-tuning adaptive control paradigm opens a new challenge and exciting opportunity for experimental and theoretical elucidations of the mechanisms of respiratory control - and of homeostatic regulation and sensorimotor integration in general.

  8. Cost Optimal Elastic Auto-Scaling in Cloud Infrastructure

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, S.; Sidhanta, S.; Ganguly, S.; Nemani, R. R.

    2014-12-01

    Today, elastic scaling is a critical part of leveraging the cloud. Elastic scaling refers to adding resources only when they are needed and deleting resources when they are not in use. Elastic scaling ensures that compute/server resources are not over-provisioned. Today, Amazon and Windows Azure are the only two platform providers that allow auto-scaling of cloud resources, where servers are automatically added and deleted. However, these solutions fall short on the following key features: A) they require explicit policy definitions, such as server load, and therefore lack any predictive intelligence for making optimal decisions; B) they do not decide on the right size of resources and thereby do not result in a cost-optimal resource pool. In a typical cloud deployment model, we consider two types of application scenario: A. batch processing jobs (the Hadoop/Big Data case); B. transactional applications (any application that processes continuous request/response transactions). With reference to the classical queueing model, we are trying to model a scenario where servers have a price and a capacity (size) and the system can add and delete servers to maintain a certain queue length. Classical queueing models apply to scenarios where the number of servers is constant, so we cannot apply stationary system analysis in this case. We investigate the following questions: 1. Can we define a job queue and use its metrics to predict the resource requirement in a quasi-stationary way? Can we map that into an optimal sizing problem? 2. Do we need to get down to the level of load (CPU/data) on the server level to characterize the size requirement? How do we learn that based on job type?

  9. Efficient Quantum Transmission in Multiple-Source Networks

    PubMed Central

    Luo, Ming-Xing; Xu, Gang; Chen, Xiu-Bo; Yang, Yi-Xian; Wang, Xiaojun

    2014-01-01

    A difficult problem in quantum network communications is how to efficiently transmit quantum information over large-scale networks with common channels. We propose a solution by developing a quantum encoding approach. Different quantum states are encoded into a coherent superposition state using quantum linear optics. The transmission congestion in the common channel may be avoided by transmitting the superposition state. For further decoding and continued transmission, special phase transformations are applied to incoming quantum states using phase shifters such that decoders can distinguish outgoing quantum states. These phase shifters may be precisely controlled using classical chaos synchronization via additional classical channels. Based on this design and the reduction of the multiple-source network under the assumption of restricted maximum flow, an optimal scheme is proposed for the specially quantized multiple-source network. In comparison with previous schemes, our scheme can greatly increase the transmission efficiency. PMID:24691590

  10. Universal approximators for multi-objective direct policy search in water reservoir management problems: a comparative analysis

    NASA Astrophysics Data System (ADS)

    Giuliani, Matteo; Mason, Emanuele; Castelletti, Andrea; Pianosi, Francesca

    2014-05-01

    The optimal operation of water resources systems is a wide and challenging problem due to non-linearities in the model and the objectives, a high-dimensional state-control space, and strong uncertainties in the hydroclimatic regimes. The application of classical optimization techniques (e.g., SDP, Q-learning, gradient descent-based algorithms) is strongly limited by the dimensionality of the system and by the presence of multiple, conflicting objectives. This study presents a novel approach which combines Direct Policy Search (DPS) and Multi-Objective Evolutionary Algorithms (MOEAs) to solve high-dimensional state and control space problems involving multiple objectives. DPS, also known as parameterization-simulation-optimization in the water resources literature, is a simulation-based approach where the reservoir operating policy is first parameterized within a given family of functions and, then, the parameters are optimized with respect to the objectives of the management problem. The selection of a suitable class of functions to which the operating policy belongs is a key step, as it might restrict the search for the optimal policy to a subspace of the decision space that does not include the optimal solution. In the water reservoir literature, a number of classes have been proposed. However, many of these rules are based largely on empirical or experimental successes, and they were designed mostly via simulation and for single-purpose reservoirs. In a multi-objective context similar rules cannot easily be inferred from experience, and the use of universal function approximators is generally preferred. In this work, we comparatively analyze two of the most common universal approximators, artificial neural networks (ANN) and radial basis functions (RBF), under different problem settings, to estimate their scalability and flexibility in dealing with more and more complex problems. The multi-purpose Hoa Binh water reservoir in Vietnam, accounting for hydropower production and flood control, is used as a case study. Preliminary results show that the RBF policy parametrization is more effective than the ANN one. In particular, the approximated Pareto front obtained with RBF control policies successfully explores the full tradeoff space between the two conflicting objectives, while most of the ANN solutions turn out to be Pareto-dominated by the RBF ones.
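
    A Gaussian RBF policy of the kind compared above is straightforward to evaluate: it maps the system state (e.g., storage, inflow, time of year) to a release decision, and the centers, widths and weights are exactly the parameters a MOEA would optimize. The sketch below is illustrative; shapes and names are assumptions.

    ```python
    import numpy as np

    def rbf_policy(state, centers, widths, weights):
        """Release decision as a weighted sum of Gaussian radial basis functions.
        state: (dim,); centers, widths: (k, dim); weights: (k,)."""
        d2 = np.sum((state - centers) ** 2 / widths ** 2, axis=1)
        return float(weights @ np.exp(-d2))
    ```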

  11. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful to solve sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed in a tomographic image reconstruction application, showing good performance for the convergence speed when measured as the decrease ratio of the objective function, in comparison to classical ISM.
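
    A minimal version of the string-averaging ISM can be sketched for ℓ₁ regression, a simple non-smooth convex problem: the rows of the data are split into strings, each string is processed by incremental subgradient steps starting from the common iterate, and the string end-points are averaged. The diminishing step rule and all names are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def sa_ism_l1(A, b, n_strings=4, n_iter=100, step0=1.0):
        """String-averaging incremental subgradient method for min ||Ax - b||_1."""
        m, n = A.shape
        strings = np.array_split(np.arange(m), n_strings)
        x = np.zeros(n)
        for k in range(1, n_iter + 1):
            step = step0 / k  # diminishing step size
            ends = []
            for rows in strings:  # each string could be processed in parallel
                y = x.copy()
                for i in rows:    # incremental pass along the string
                    g = np.sign(A[i] @ y - b[i]) * A[i]  # subgradient of |a_i.y - b_i|
                    y -= step * g
                ends.append(y)
            x = np.mean(ends, axis=0)  # string averaging
        return x
    ```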

  12. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.

  13. Optimal use of modern antibiotics: emerging trends.

    PubMed

    Polk, R

    1999-08-01

    Development of new antimicrobial drugs is an essential component in the effort to remain ahead of emerging microbial resistance. However, when new antibiotics are used with unrestrained enthusiasm, a predictable consequence is the further expansion of resistance. This problem is well known to the infectious diseases specialist and is increasingly appreciated by the nonspecialist and the public. A far more sensible strategy is to identify new ways to use these drugs to increase the duration of their usefulness. New methods to optimize antibiotic selection, dose, and duration of therapy are being investigated, and application of some of these strategies has been shown to have a favorable impact on resistance. Much of the classic thinking of how to use antibiotics is changing, and these newer strategies may result in prolongation of the era of the "antibiotic miracle."

  14. An enhanced artificial bee colony algorithm (EABC) for solving dispatching of hydro-thermal system (DHTS) problem

    PubMed Central

    Yu, Yi; Hu, Binqi; Liu, Xinglong

    2018-01-01

    The dispatching of a hydro-thermal system is a nonlinear programming problem with multiple constraints and high dimensions, and the solution techniques for the model have been a research hotspot. Based on the ability of the artificial bee colony algorithm (ABC) to efficiently solve high-dimensional problems, an improved artificial bee colony algorithm is proposed in this paper to solve the DHTS problem. The improvements of the proposed algorithm include two aspects. On one hand, the local search can be guided efficiently by the information of the global optimal solution and its gradient in each generation. The global optimal solution improves the search efficiency of the algorithm but loses diversity, while the gradient can weaken the loss of diversity caused by the global optimal solution. On the other hand, inspired by the genetic algorithm, a nectar source which has not been updated within the limit number of generations is transformed into a new one by using selection, crossover and mutation, which can ensure individual diversity and make full use of prior information to improve the global search ability of the algorithm. The two improvements of the ABC algorithm are proved to be effective via a classical numerical example, among which the genetic operator contributes significantly to the promotion of the ABC algorithm's performance. The results are also compared with those of other state-of-the-art algorithms; the enhanced ABC algorithm has general advantages in minimum cost, average cost and maximum cost, which shows its usability and effectiveness. The achievements in this paper provide a new method for solving DHTS problems, and also offer a novel reference for the improvement of mechanisms and the application of algorithms. PMID:29324743

  15. An enhanced artificial bee colony algorithm (EABC) for solving dispatching of hydro-thermal system (DHTS) problem.

    PubMed

    Yu, Yi; Wu, Yonggang; Hu, Binqi; Liu, Xinglong

    2018-01-01

    The dispatching of a hydro-thermal system is a nonlinear programming problem with multiple constraints and high dimensions, and the solution techniques for the model have been a research hotspot. Based on the ability of the artificial bee colony algorithm (ABC) to efficiently solve high-dimensional problems, an improved artificial bee colony algorithm is proposed in this paper to solve the DHTS problem. The improvements of the proposed algorithm include two aspects. On one hand, the local search can be guided efficiently by the information of the global optimal solution and its gradient in each generation. The global optimal solution improves the search efficiency of the algorithm but loses diversity, while the gradient can weaken the loss of diversity caused by the global optimal solution. On the other hand, inspired by the genetic algorithm, a nectar source which has not been updated within the limit number of generations is transformed into a new one by using selection, crossover and mutation, which can ensure individual diversity and make full use of prior information to improve the global search ability of the algorithm. The two improvements of the ABC algorithm are proved to be effective via a classical numerical example, among which the genetic operator contributes significantly to the promotion of the ABC algorithm's performance. The results are also compared with those of other state-of-the-art algorithms; the enhanced ABC algorithm has general advantages in minimum cost, average cost and maximum cost, which shows its usability and effectiveness. The achievements in this paper provide a new method for solving DHTS problems, and also offer a novel reference for the improvement of mechanisms and the application of algorithms.

  16. Lie-algebraic Approach to Dynamics of Closed Quantum Systems and Quantum-to-Classical Correspondence

    NASA Astrophysics Data System (ADS)

    Galitski, Victor

    2012-02-01

    I will briefly review our recent work on a Lie-algebraic approach to various non-equilibrium quantum-mechanical problems, which has been motivated by continuous experimental advances in the field of cold atoms. First, I will discuss non-equilibrium driven dynamics of a generic closed quantum system. It will be emphasized that mathematically a non-equilibrium Hamiltonian represents a trajectory in a Lie algebra, while the evolution operator is a trajectory in a Lie group generated by the underlying algebra via exponentiation. This turns out to be a constructive statement that establishes, in particular, the fact that classical and quantum unitary evolutions are two sides of the same coin, determined uniquely by the same dynamic generators in the group. An equation for these generators - dubbed the dual Schrödinger-Bloch equation - will be derived and analyzed for a few specific examples. This non-linear equation allows one to construct new exact non-linear solutions to quantum-dynamical systems. An experimentally relevant example of a family of exact solutions to the many-body Landau-Zener problem will be presented. One practical application of the latter result includes dynamical means to optimize the molecular production rate following a quench across the Feshbach resonance.

  17. Quad-rotor flight path energy optimization

    NASA Astrophysics Data System (ADS)

    Kemper, Edward

    Quad-rotor unmanned aerial vehicles (UAVs) have been a popular area of research and development in the last decade, especially with the advent of affordable microcontrollers like the MSP430 and the Raspberry Pi. Path-energy optimization is an area that is well developed for linear systems. In this thesis, the idea of path-energy optimization is extended to the nonlinear model of the quad-rotor UAV. The classical optimization technique is adapted to the nonlinear model derived for the problem at hand, yielding a set of partial differential equations and boundary value conditions to solve. Then, different techniques to implement energy optimization algorithms are tested using simulations in Python. First, a purely nonlinear approach is used. This method is shown to be computationally intensive, with no practical solution available in a reasonable amount of time. Second, heuristic techniques to minimize the energy of the flight path are tested, using the Ziegler-Nichols proportional integral derivative (PID) controller tuning technique. Finally, a brute-force look-up-table-based PID controller is used. Simulation results of the heuristic method show that both reliable control of the system and path-energy optimization are achieved in a reasonable amount of time.
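
    The Ziegler-Nichols step mentioned above follows the classic recipe: drive the closed loop to sustained oscillation with proportional gain only, record the ultimate gain K_u and oscillation period T_u, and read off the PID gains from the classic table. A minimal sketch:

    ```python
    def ziegler_nichols_pid(K_u, T_u):
        """Classic Ziegler-Nichols PID tuning from the ultimate gain K_u and
        ultimate period T_u; returns (Kp, Ki, Kd) for a parallel-form PID."""
        Kp = 0.6 * K_u
        Ti = T_u / 2.0   # integral time
        Td = T_u / 8.0   # derivative time
        return Kp, Kp / Ti, Kp * Td
    ```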

  18. A Fuzzy Approach of the Competition on the Air Transport Market

    NASA Technical Reports Server (NTRS)

    Charfeddine, Souhir; DeColigny, Marc; Camino, Felix Mora; Cosenza, Carlos Alberto Nunes

    2003-01-01

    The aim of this communication is to study, from a new perspective, the conditions of equilibrium in an air transport market where two competitive airlines are operating. Each airline is assumed to adopt a strategy maximizing its profit while its estimation of the demand has a fuzzy nature. This leads each company to optimize a program of its proposed services (flight frequency and ticket prices) characterized by some fuzzy parameters. The case of monopoly is taken as a benchmark. Classical convex optimization can be used to solve this decision problem. This approach provides the airline with a new decision tool in which uncertainty is taken into account explicitly. The confrontation of the strategies of the companies, in the case of duopoly, leads to the definition of a fuzzy equilibrium. This concept of fuzzy equilibrium is more general and can be applied to several other domains. The formulation of the optimization problem and the methodological considerations adopted for its resolution are presented in their general theoretical aspect. In air transportation, where the conditions of operations management are critical, this approach should offer the manager the elements needed to consolidate decisions depending on the circumstances (ordinary or exceptional events) and to be prepared for all possibilities. Keywords: air transportation, competition equilibrium, convex optimization, fuzzy modeling.

  19. A stable and accurate partitioned algorithm for conjugate heat transfer

    NASA Astrophysics Data System (ADS)

    Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    2017-09-01

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  20. A stable and accurate partitioned algorithm for conjugate heat transfer

    DOE PAGES

    Meng, F.; Banks, J. W.; Henshaw, W. D.; ...

    2017-04-25

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  1. Network of time-multiplexed optical parametric oscillators as a coherent Ising machine

    NASA Astrophysics Data System (ADS)

    Marandi, Alireza; Wang, Zhe; Takata, Kenta; Byer, Robert L.; Yamamoto, Yoshihisa

    2014-12-01

    Finding the ground states of the Ising Hamiltonian maps to various combinatorial optimization problems in biology, medicine, wireless communications, artificial intelligence and social networks. So far, no efficient classical or quantum algorithm is known for these problems, and intensive research is focused on creating physical systems - Ising machines - capable of finding the absolute or approximate ground states of the Ising Hamiltonian. Here, we report an Ising machine using a network of degenerate optical parametric oscillators (OPOs). Spins are represented by the above-threshold binary phases of the OPOs, and the Ising couplings are realized by mutual injections. The network is implemented in a single OPO ring cavity with multiple trains of femtosecond pulses and configurable mutual couplings, and operates at room temperature. We programmed a small NP-hard (non-deterministic polynomial-time hard) problem on a 4-OPO Ising machine and in 1,000 runs no computational error was detected.
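
    To make the optimization target concrete, here is a toy Python enumeration (an illustration only, unrelated to the OPO hardware) that finds the ground states of a 4-spin Ising Hamiltonian H = -sum_{i<j} J_ij s_i s_j, the same size as the 4-OPO machine above; the couplings J_ij are arbitrary placeholders.

      # Sketch: brute-force ground-state search for a tiny Ising Hamiltonian.
      import itertools
      import numpy as np

      J = np.array([[0, -1,  1,  0],
                    [0,  0, -1,  1],
                    [0,  0,  0, -1],
                    [0,  0,  0,  0]])   # illustrative couplings (upper triangle, i < j)

      def energy(s):
          s = np.array(s)
          return -np.sum(J * np.outer(s, s))   # J is upper-triangular, so only i < j terms count

      states = list(itertools.product([-1, 1], repeat=4))
      emin = min(energy(s) for s in states)
      ground = [s for s in states if energy(s) == emin]
      print("ground energy:", emin, "ground states:", ground)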

  2. Gelfand-type problem for two-phase porous media

    PubMed Central

    Gordon, Peter V.; Moroz, Vitaly

    2014-01-01

    We consider a generalization of the Gelfand problem arising in the Frank-Kamenetskii theory of thermal explosion. This generalization is a natural extension of the Gelfand problem to two-phase materials, where, in contrast to the classical Gelfand problem, which uses a single-temperature approach, the state of the system is described by two different temperatures. We show that, as in the classical Gelfand problem, thermal explosion occurs exclusively owing to the absence of a stationary temperature distribution. We also show that the presence of interphase heat exchange delays thermal explosion. Moreover, we prove that in the limit of infinite heat exchange between phases the problem of thermal explosion in two-phase porous media reduces to the classical Gelfand problem with renormalized constants. PMID:24611025

  3. A Systematic Comparison between Classical Optimal Scaling and the Two-Parameter IRT Model

    ERIC Educational Resources Information Center

    Warrens, Matthijs J.; de Gruijter, Dato N. M.; Heiser, Willem J.

    2007-01-01

    In this article, the relationship between two alternative methods for the analysis of multivariate categorical data is systematically explored. It is shown that the person score of the first dimension of classical optimal scaling correlates strongly with the latent variable for the two-parameter item response theory (IRT) model. Next, under the…

  4. Optimal and adaptive methods of processing hydroacoustic signals (review)

    NASA Astrophysics Data System (ADS)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector based on classical or fast projection algorithms is analyzed, which estimates the background via median filtering or the method of bilateral spatial contrast.

  5. An Outline of Data Aggregation Security in Heterogeneous Wireless Sensor Networks

    PubMed Central

    Boubiche, Sabrina; Boubiche, Djallel Eddine; Bilami, Azzedine; Toral-Cruz, Homero

    2016-01-01

    Data aggregation processes aim to reduce the amount of exchanged data in wireless sensor networks (WSNs) and consequently minimize the packet overhead and optimize energy efficiency. Securing the data aggregation process is a real challenge since the aggregation nodes must access the relayed data to apply the aggregation functions. The data aggregation security problem has been widely addressed in classical homogeneous WSNs; however, most of the proposed security protocols cannot guarantee a high level of security since sensor node resources are limited. Heterogeneous WSNs have recently emerged as a new wireless sensor network category which expands the sensor nodes' resources and capabilities. These new kinds of WSNs have opened new research opportunities, where security represents one of the most attractive areas. Indeed, robust and high-security-level algorithms can be used to secure the data aggregation at the heterogeneous aggregation nodes, which is impossible in classical homogeneous WSNs. Contrary to homogeneous sensor networks, however, the data aggregation security problem in heterogeneous networks is still not sufficiently covered, and few data aggregation security protocols have been proposed. To address this recent research area, this paper describes the data aggregation security problem in heterogeneous wireless sensor networks and surveys a few proposed security protocols. A classification and evaluation of the existing protocols is also introduced based on the adopted data aggregation security approach. PMID:27077866

  6. A New TS Algorithm for Solving Low-Carbon Logistics Vehicle Routing Problem with Split Deliveries by Backpack—From a Green Operation Perspective

    PubMed Central

    Fu, Zhuo; Wang, Jiangtao

    2018-01-01

    In order to promote the development of low-carbon logistics and economize logistics distribution costs, the vehicle routing problem with split deliveries by backpack is studied. Building on the model of the classical capacitated vehicle routing problem, a form of discrete split delivery is designed in which customer demand can be split only by backpack. A double-objective mathematical model and a corresponding adaptive tabu search (TS) algorithm are constructed for this problem. By embedding an adaptive penalty mechanism and adopting a random neighborhood selection strategy and a reinitialization principle, the global optimization ability of the new algorithm is enhanced. Comparisons with results in the literature show the effectiveness of the proposed algorithm. The proposed method can save the costs of low-carbon logistics and reduce carbon emissions, which is conducive to the sustainable development of low-carbon logistics. PMID:29747469
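
    For readers unfamiliar with the metaheuristic, the following Python skeleton shows the core tabu-search mechanics (short-term memory of recent moves plus an aspiration test). It is a generic sketch over a toy bit-flip neighborhood, not the paper's adaptive TS with penalty mechanism and reinitialization.

      # Sketch: generic tabu search over a binary decision vector.
      import random

      def tabu_search(cost, x0, n_iter=500, tenure=10):
          x, best = list(x0), list(x0)
          tabu = {}                                   # move -> iteration until which it is tabu
          for it in range(n_iter):
              moves = random.sample(range(len(x)), k=min(8, len(x)))  # sampled neighborhood
              cand = None
              for m in moves:
                  y = list(x); y[m] = 1 - y[m]        # flip one decision bit
                  if tabu.get(m, -1) >= it and cost(y) >= cost(best):
                      continue                        # tabu move that fails the aspiration test
                  if cand is None or cost(y) < cost(cand[0]):
                      cand = (y, m)
              if cand is None:
                  continue
              x, m = cand
              tabu[m] = it + tenure                   # forbid reversing this move for a while
              if cost(x) < cost(best):
                  best = list(x)
          return best

      # Toy usage: minimize the number of ones in a 12-bit vector.
      print(tabu_search(lambda v: sum(v), [1] * 12))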

  7. Distributed multi-sensor particle filter for bearings-only tracking

    NASA Astrophysics Data System (ADS)

    Zhang, Jungen; Ji, Hongbing

    2012-02-01

    In this article, the classical bearings-only tracking (BOT) problem for a single target is addressed, which belongs to the general class of non-linear filtering problems. Because the radial-distance observability of the target is poor, sequential Monte Carlo (particle filtering, PF) methods generally show instability and filter divergence. A new stable distributed multi-sensor PF method is proposed for BOT. The sensors process their measurements at their sites using a hierarchical PF approach, which transforms the BOT problem from Cartesian coordinates to logarithmic polar coordinates and separates the observable components from the unobservable components of the target. In the fusion centre, the target state can be estimated by utilising the multi-sensor optimal information fusion rule. Furthermore, the computation of a theoretical Cramér-Rao lower bound is given for the multi-sensor BOT problem. Simulation results illustrate that the proposed tracking method can provide better performance than the traditional PF method.

  8. Towards a PTAS for the generalized TSP in grid clusters

    NASA Astrophysics Data System (ADS)

    Khachay, Michael; Neznakhina, Katherine

    2016-10-01

    The Generalized Traveling Salesman Problem (GTSP) is a combinatorial optimization problem in which one must find a minimum-cost cycle visiting exactly one point (city) from each cluster. We consider a geometric case of this problem, where n nodes are given inside the integer grid (in the Euclidean plane) and each grid cell is a unit square. Clusters are induced by the cells 'populated' by nodes of the given instance. Even in this special setting, the GTSP remains intractable, since it encloses the classic Euclidean TSP in the plane. Recently, it was shown that the problem admits a (1.5+8√2+ε)-approximation algorithm with complexity depending polynomially on n and k, where k is the number of clusters. In this paper, we propose two approximation algorithms for the Euclidean GTSP on grid clusters. For any fixed k, both algorithms are PTAS. The time complexity of the first remains polynomial for k = O(log n), while the second is a PTAS when k = n - O(log n).

  9. Leaky GFD problems

    NASA Astrophysics Data System (ADS)

    Chumakova, Lyubov; Rzeznik, Andrew; Rosales, Rodolfo R.

    2017-11-01

    In many dispersive/conservative wave problems, waves carry energy outside of the domain of interest and never return. Inside the domain of interest, this wave leakage acts as an effective dissipation mechanism, causing solutions to decay. In classical geophysical fluid dynamics problems this scenario occurs in the troposphere, if one assumes a homogeneous stratosphere. In this talk we present several classic GFD problems, where we seek the solution in the troposphere alone. Assuming that upward propagating waves that reach the stratosphere never return, we demonstrate how classic baroclinic modes become leaky, with characteristic decay time-scales that can be calculated. We also show how damping due to wave leakage changes the classic baroclinic instability problem in the presence of shear. This presentation is a part of a joint project. The mathematical approach used here relies on extending the classical concept of group velocity to leaky waves with complex wavenumber and frequency, which will be presented at this meeting by A. Rzeznik in the talk ``Group Velocity for Leaky Waves''. This research is funded by the Royal Soc. of Edinburgh, Scottish Government, and NSF.

  10. Flexible resources for quantum metrology

    NASA Astrophysics Data System (ADS)

    Friis, Nicolai; Orsucci, Davide; Skotiniotis, Michalis; Sekatski, Pavel; Dunjko, Vedran; Briegel, Hans J.; Dür, Wolfgang

    2017-06-01

    Quantum metrology offers a quadratic advantage over classical approaches to parameter estimation problems by utilising entanglement and nonclassicality. However, the hurdle of actually implementing the necessary quantum probe states and measurements, which vary drastically for different metrological scenarios, is usually not taken into account. We show that for a wide range of tasks in metrology, 2D cluster states (a particular family of states useful for measurement-based quantum computation) can serve as flexible resources that allow one to efficiently prepare any required state for sensing, and perform appropriate (entangled) measurements using only single qubit operations. Crucially, the overhead in the number of qubits is less than quadratic, thus preserving the quantum scaling advantage. This is ensured by using a compression to a logarithmically sized space that contains all relevant information for sensing. We specifically demonstrate how our method can be used to obtain optimal scaling for phase and frequency estimation in local estimation problems, as well as for the Bayesian equivalents with Gaussian priors of varying widths. Furthermore, we show that in the paradigmatic case of local phase estimation 1D cluster states are sufficient for optimal state preparation and measurement.

  11. Synthesis of a controller for stabilizing the motion of a rigid body about a fixed point

    NASA Astrophysics Data System (ADS)

    Zabolotnov, Yu. M.; Lobanov, A. A.

    2017-05-01

    A method for the approximate design of an optimal controller for stabilizing the motion of a rigid body about a fixed point is considered. It is assumed that the rigid body motion is nearly the motion in the classical Lagrange case. The method is based on the combined use of the Bellman dynamic programming principle and the averaging method. The latter is used to solve the Hamilton-Jacobi-Bellman equation approximately, which permits synthesizing the controller. The proposed method for controller design can be used in many problems close to the problem of motion of the Lagrange top (the motion of a rigid body in the atmosphere, the motion of a rigid body fastened to a cable in deployment of an orbital cable system, etc.).

  12. Two fast approximate wavelet algorithms for image processing, classification, and recognition

    NASA Astrophysics Data System (ADS)

    Wickerhauser, Mladen V.

    1994-07-01

    We use large libraries of template waveforms with remarkable orthogonality properties to recast the relatively complex principal orthogonal decomposition (POD) into an optimization problem with a fast solution algorithm. Then it becomes practical to use POD to solve two related problems: recognizing or classifying images, and inverting a complicated map from a low-dimensional configuration space to a high-dimensional measurement space. In the case where the number N of pixels or measurements exceeds 1000 or so, the classical O(N^3) POD algorithm becomes very costly, but it can be replaced with an approximate best-basis method of complexity O(N^2 log N). A variation of POD can also be used to compute an approximate Jacobian for the complicated map.

  13. Dynamic vehicle routing with time windows in theory and practice.

    PubMed

    Yang, Zhiwei; van Osta, Jan-Paul; van Veen, Barry; van Krevelen, Rick; van Klaveren, Richard; Stam, Andries; Kok, Joost; Bäck, Thomas; Emmerich, Michael

    2017-01-01

    The vehicle routing problem is a classical combinatorial optimization problem. This work is about a variant of the vehicle routing problem with dynamically changing orders and time windows. In real-world applications the demands often change during operation time: new orders occur and others are canceled. In this case new schedules need to be generated on-the-fly. Online optimization algorithms for dynamic vehicle routing address this problem, but so far they do not consider time windows. Moreover, to match the scenarios found in real-world problems, adaptations of benchmarks are required. In this paper, a practical problem is modeled based on the procedure of daily routing of a delivery company. New orders by customers are introduced dynamically during the working day and need to be integrated into the schedule. A multiple ant colony system (MACS) algorithm combined with powerful local search procedures is proposed to solve the dynamic vehicle routing problem with time windows. The performance is tested on a new benchmark based on simulations of a working day. The problems are taken from Solomon's benchmarks, but a certain percentage of the orders are only revealed to the algorithm during operation time. Different versions of the MACS algorithm are tested and a high-performing variant is identified. Finally, the algorithm is tested in situ: in a field study, the algorithm schedules a fleet of cars for a surveillance company. We compare the performance of the algorithm to that of the procedure used by the company and summarize insights gained from the implementation of the real-world study. The results show that the multiple ant colony algorithm can obtain a much better solution on the academic benchmark problem and can also be integrated in a real-world environment.
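
    The sketch below illustrates the basic ant-colony mechanics that MACS-style solvers build on (a minimal single-colony, TSP-like illustration in Python with random placeholder distances; no time windows, local search or multiple colonies): ants choose the next customer with probability proportional to pheromone^alpha times (1/distance)^beta, and pheromone evaporates and is reinforced along the iteration-best tour.

      # Sketch: basic ant-colony tour construction and pheromone update.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 8
      D = rng.uniform(1, 10, (n, n)); np.fill_diagonal(D, np.inf)   # placeholder distances
      tau = np.ones((n, n))                      # pheromone trails
      alpha, beta, rho = 1.0, 2.0, 0.1

      def build_tour():
          tour, unvisited = [0], set(range(1, n))
          while unvisited:
              i = tour[-1]
              cand = list(unvisited)
              w = (tau[i, cand] ** alpha) * ((1.0 / D[i, cand]) ** beta)
              nxt = int(rng.choice(cand, p=w / w.sum()))
              tour.append(nxt); unvisited.remove(nxt)
          return tour

      def tour_len(t):
          return sum(D[t[k], t[(k + 1) % len(t)]] for k in range(len(t)))

      for _ in range(100):                       # one colony per iteration
          tours = [build_tour() for _ in range(10)]
          best = min(tours, key=tour_len)
          tau *= (1 - rho)                       # evaporation
          for k in range(len(best)):             # reinforce the iteration-best tour
              i, j = best[k], best[(k + 1) % len(best)]
              tau[i, j] += 1.0 / tour_len(best)

      print("best length:", tour_len(best))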

  14. How Do Severe Constraints Affect the Search Ability of Multiobjective Evolutionary Algorithms in Water Resources?

    NASA Astrophysics Data System (ADS)

    Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.

    2015-12-01

    This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels will be compared with a problem with several severe constraints, and a problem with constraints that have less severe values on the constraint thresholds. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single year severe drought. But to date, it is unclear whether or not the constraints are negatively affecting MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first category uses control maps of algorithm performance to determine if the algorithm's performance is sensitive to user-defined parameters. The second category uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets. Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more metrics.

  15. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying a single appropriate segmentation fusion criterion providing the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called technique for order preference by similarity to ideal solution (TOPSIS). Results obtained on two publicly available databases with manual ground-truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
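
    Since TOPSIS itself is compact, a short Python sketch is given below (the two-criterion score matrix is purely illustrative, not data from the paper): candidates are ranked by their relative closeness to the ideal point of the normalized criteria.

      # Sketch: unweighted TOPSIS over two benefit criteria.
      import numpy as np

      # rows = candidate fused segmentations, cols = criteria (both to maximize here)
      scores = np.array([[0.80, 0.70],
                         [0.75, 0.90],
                         [0.60, 0.95]])

      norm = scores / np.linalg.norm(scores, axis=0)   # vector-normalize each criterion
      ideal, anti = norm.max(axis=0), norm.min(axis=0)
      d_plus = np.linalg.norm(norm - ideal, axis=1)    # distance to the ideal solution
      d_minus = np.linalg.norm(norm - anti, axis=1)    # distance to the anti-ideal solution
      closeness = d_minus / (d_plus + d_minus)
      print("best candidate:", int(np.argmax(closeness)), closeness.round(3))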

  16. Optimizing Natural Gas Networks through Dynamic Manifold Theory and a Decentralized Algorithm: Belgium Case Study

    NASA Astrophysics Data System (ADS)

    Koch, Caleb; Winfrey, Leigh

    2014-10-01

    Natural gas is a major energy source in Europe, yet political instabilities have the potential to disrupt access and supply. Energy resilience is an increasingly essential construct and begins with transmission network design. This study proposes a new way of thinking about modelling natural gas flow. Rather than relying on classical economic models, the problem is cast in terms of time-dependent Hamiltonian dynamics. Traditional natural gas constraints, including inelastic demand and maximum/minimum pipe flows, are portrayed as energy functions and built into the dynamics of each pipe flow. As time progresses in the model, the natural gas flow rates relax to the minimum energy, and thus to the optimal gas flow rates. The most important result of this study is using dynamical principles to ensure that the output of natural gas at demand nodes remains constant, which is important for country-to-country natural gas transmission. Another important step in this study is expressing the dynamics of each flow in a decentralized algorithm format. Decentralized regulation has solved congestion problems for internet data flow, traffic flow and epidemiology, and, as demonstrated in this study, can solve the problem of natural gas congestion. A mathematical description is provided for how decentralized regulation leads to globally optimized network flow. Furthermore, the dynamical principles and decentralized algorithm are applied to a case study of the Fluxys Belgium natural gas network.

  17. Non-convex optimization for self-calibration of direction-dependent effects in radio interferometric imaging

    NASA Astrophysics Data System (ADS)

    Repetti, Audrey; Birdi, Jasleen; Dabbech, Arwa; Wiaux, Yves

    2017-10-01

    Radio interferometric imaging aims to estimate an unknown sky intensity image from degraded observations acquired through an antenna array. In the theoretical case of a perfectly calibrated array, it has been shown that solving the corresponding imaging problem by iterative algorithms based on convex optimization and compressive sensing theory can be competitive with classical algorithms such as CLEAN. However, in practice, antenna-based gains are unknown and have to be calibrated. Future radio telescopes, such as the Square Kilometre Array, aim at improving imaging resolution and sensitivity by orders of magnitude. At this precision level, the direction-dependency of the gains must be accounted for, and radio interferometric imaging can be understood as a blind deconvolution problem. In this context, the underlying minimization problem is non-convex, and adapted techniques have to be designed. In this work, leveraging recent developments in non-convex optimization, we propose the first joint calibration and imaging method in radio interferometry with proven convergence guarantees. Our approach, based on a block-coordinate forward-backward algorithm, jointly accounts for visibilities and suitable priors on both the image and the direction-dependent effects (DDEs). As demonstrated in recent works, sparsity remains the prior of choice for the image, while DDEs are modelled as smooth functions of the sky, i.e., spatially band-limited. Finally, we show through simulations the efficiency of our method for the reconstruction of both images of point sources and complex extended sources. MATLAB code is available on GitHub.

  18. Arm Locking for the Laser Interferometer Space Antenna

    NASA Technical Reports Server (NTRS)

    Maghami, P. G.; Thorpe, J. I.; Livas, J.

    2009-01-01

    The Laser Interferometer Space Antenna (LISA) mission is a planned gravitational wave detector consisting of three spacecraft in heliocentric orbit. Laser interferometry is used to measure distance fluctuations between test masses aboard each spacecraft to the picometer level over a 5 million kilometer separation. Laser frequency fluctuations must be suppressed in order to meet the measurement requirements. Arm-locking, a technique that uses the constellation of spacecraft as a frequency reference, is a proposed method for stabilizing the laser frequency. We consider the problem of arm-locking using classical optimal control theory and find that our designs satisfy the LISA requirements.

  19. A parallel competitive Particle Swarm Optimization for non-linear first arrival traveltime tomography and uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Luu, Keurfon; Noble, Mark; Gesret, Alexandrine; Belayouni, Nidhal; Roux, Pierre-François

    2018-04-01

    Seismic traveltime tomography is an optimization problem that requires large computational efforts. Therefore, linearized techniques are commonly used for their low computational cost. These local optimization methods are likely to get trapped in a local minimum as they critically depend on the initial model. On the other hand, global optimization methods based on MCMC are insensitive to the initial model but turn out to be computationally expensive. Particle Swarm Optimization (PSO) is a rather new global optimization approach with few tuning parameters that has shown excellent convergence rates and is straightforwardly parallelizable, allowing a good distribution of the workload. However, while it can traverse several local minima of the evaluated misfit function, a classical implementation of PSO can get trapped in local minima at later iterations as the particles' inertia dims. We propose a Competitive PSO (CPSO) to help particles escape from local minima with a simple implementation that improves the swarm's diversity. The model space can be sampled by running the optimizer multiple times and keeping all the models explored by the swarms in the different runs. A traveltime tomography algorithm based on CPSO is successfully applied to a real 3D data set in the context of induced seismicity.
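
    The Python sketch below shows the classical PSO update the abstract refers to, including the linearly decaying inertia weight that can trap particles late in the run and that CPSO is designed to counteract; the competition step itself is not reproduced, and the objective and all parameters are placeholders.

      # Sketch: classical PSO with linearly decaying inertia weight.
      import numpy as np

      rng = np.random.default_rng(2)

      def sphere(x):                                 # stand-in misfit function
          return np.sum(x**2, axis=-1)

      n, dim, iters = 30, 5, 300
      X = rng.uniform(-5, 5, (n, dim))
      V = np.zeros((n, dim))
      P, fP = X.copy(), sphere(X)                    # personal bests
      g = P[np.argmin(fP)].copy()                    # global best

      for t in range(iters):
          w = 0.9 - 0.5 * t / iters                  # inertia dims over iterations
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          V = w * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
          X = X + V
          fX = sphere(X)
          better = fX < fP
          P[better], fP[better] = X[better], fX[better]
          g = P[np.argmin(fP)].copy()

      print("best misfit:", fP.min())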

  20. A fast elitism Gaussian estimation of distribution algorithm and application for PID optimization.

    PubMed

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are obtained from statistical information about the best individuals via a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results demonstrate the capability of FEGEDA, especially on higher-dimensional problems, where it exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID tuning and GA.
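
    A bare-bones Gaussian EDA loop is sketched below in Python to make the described cycle concrete (fit a Gaussian to the best individuals, sample a new population, keep the elite); FEGEDA's fast learning rule is replaced here by a plain mean/standard-deviation fit, and all parameters are placeholders.

      # Sketch: minimal Gaussian estimation-of-distribution algorithm with elitism.
      import numpy as np

      rng = np.random.default_rng(3)

      def sphere(x):                                 # stand-in benchmark objective
          return np.sum(x**2, axis=1)

      pop, dim, n_best, iters = 100, 10, 30, 100
      X = rng.uniform(-5, 5, (pop, dim))
      for _ in range(iters):
          f = sphere(X)
          elite = X[np.argsort(f)[:n_best]]          # truncation selection
          mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-12
          X = rng.normal(mu, sigma, (pop, dim))      # sample the new population
          X[0] = elite[0]                            # elitism: keep the best-so-far

      print("best cost:", sphere(X).min())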

  1. A Fast Elitism Gaussian Estimation of Distribution Algorithm and Application for PID Optimization

    PubMed Central

    Xu, Qingyang; Zhang, Chengjin; Zhang, Li

    2014-01-01

    Estimation of distribution algorithm (EDA) is an intelligent optimization algorithm based on probability statistics theory. A fast elitism Gaussian estimation of distribution algorithm (FEGEDA) is proposed in this paper. A Gaussian probability model is used to model the solution distribution, and its parameters are obtained from statistical information about the best individuals via a fast learning rule. The fast learning rule enhances the efficiency of the algorithm, and an elitism strategy is used to maintain convergence performance. The performance of the algorithm is examined on several benchmarks. In the simulations, a one-dimensional benchmark is used to visualize the optimization process and the probability-model learning process during the evolution, and several two-dimensional and higher-dimensional benchmarks are used to test the performance of FEGEDA. The experimental results demonstrate the capability of FEGEDA, especially on higher-dimensional problems, where it exhibits better performance than several other algorithms and EDAs. Finally, FEGEDA is used in PID controller optimization of a PMSM and compared with classical PID tuning and GA. PMID:24892059

  2. Optimal de novo design of MRM experiments for rapid assay development in targeted proteomics.

    PubMed

    Bertsch, Andreas; Jung, Stephan; Zerck, Alexandra; Pfeifer, Nico; Nahnsen, Sven; Henneges, Carsten; Nordheim, Alfred; Kohlbacher, Oliver

    2010-05-07

    Targeted proteomic approaches such as multiple reaction monitoring (MRM) overcome problems associated with classical shotgun mass spectrometry experiments. Developing MRM quantitation assays can be time consuming, because relevant peptide representatives of the proteins must be found and their retention times and product ions must be determined. Given the transitions, hundreds to thousands of them can be scheduled into one experiment run. However, it is difficult to select which of the transitions should be included in a measurement. We present a novel algorithm that allows the construction of MRM assays from the sequence of the targeted proteins alone. This enables the rapid development of targeted MRM experiments without large libraries of transitions or peptide spectra. The approach relies on combinatorial optimization in combination with machine learning techniques to predict proteotypicity, retention time, and fragmentation of peptides. The resulting potential transitions are scheduled optimally by solving an integer linear program. We demonstrate that fully automated construction of MRM experiments from protein sequences alone is possible and that over 80% coverage of the targeted proteins can be achieved without further optimization of the assay.
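
    To illustrate the integer-linear-programming step, here is a toy transition-selection model in Python using the PuLP library (a sketch in the spirit of the approach, not the authors' formulation; the proteins, transitions, scores and capacity are invented placeholders): maximize total predicted transition quality subject to a per-run capacity while covering every target protein.

      # Sketch: toy ILP for selecting MRM transitions with PuLP.
      from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

      proteins = {"P1": ["t1", "t2", "t3"], "P2": ["t4", "t5"], "P3": ["t6", "t7", "t8"]}
      score = {"t1": .9, "t2": .4, "t3": .7, "t4": .8,
               "t5": .6, "t6": .5, "t7": .9, "t8": .3}   # predicted suitability scores
      capacity = 5                                        # transitions schedulable per run

      x = {t: LpVariable(t, cat=LpBinary) for t in score} # 1 if transition is selected
      prob = LpProblem("mrm_schedule", LpMaximize)
      prob += lpSum(score[t] * x[t] for t in score)       # objective: total predicted quality
      prob += lpSum(x.values()) <= capacity               # per-run capacity
      for p, ts in proteins.items():                      # cover every target protein
          prob += lpSum(x[t] for t in ts) >= 1

      prob.solve()
      print([t for t in score if x[t].value() == 1])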

  3. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    PubMed

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    To address the high computational cost of the traditional Kalman filter in SINS/GPS integration, a practical optimization algorithm with offline-derivation and parallel-processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure, so that many invalid operations can be avoided by offline derivation using a block-matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring the calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed by a classical Kalman filter. Meanwhile, as a purely numerical approach, the method requires no precision-losing transformation or approximation of the system modules, and accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theory is needed, the algorithm can easily be transplanted into other modified filters as a secondary optimization method to achieve further efficiency.

  4. Using CAS to Solve Classical Mathematics Problems

    ERIC Educational Resources Information Center

    Burke, Maurice J.; Burroughs, Elizabeth A.

    2009-01-01

    Historically, calculus has displaced many algebraic methods for solving classical problems. This article illustrates an algebraic method for finding the zeros of polynomial functions that is closely related to Newton's method (devised in 1669, published in 1711), which is encountered in calculus. By exploring this problem, precalculus students…

  5. Global design of satellite constellations: a multi-criteria performance comparison of classical walker patterns and new design patterns

    NASA Astrophysics Data System (ADS)

    Lansard, Erick; Frayssinhes, Eric; Palmade, Jean-Luc

    Basically, the problem of designing a multisatellite constellation involves many parameters with many possible combinations: total number of satellites, orbital parameters of each individual satellite, number of orbital planes, number of satellites in each plane, spacings between satellites of each plane, spacings between orbital planes, and relative phasings between consecutive orbital planes. Fortunately, some authors have theoretically solved this complex problem under simplified assumptions: the permanent (or continuous) coverage of the whole Earth and of zonal areas by single and multiple satellites has been entirely solved from a purely geometrical point of view. These solutions exhibit strong symmetry properties (e.g. Walker, Ballard, Rider, Draim constellations): altitude and inclination are identical, orbital planes and satellites are regularly spaced, etc. The problem with such constellations is their oversimplified and restrictive geometrical assumptions. In fact, the evaluation function implicitly used only takes into account the point-to-point visibility between users and satellites and does not deal with very important constraints and considerations that become mandatory when designing a real satellite system (e.g. robustness to satellite failures, total system cost, common view between satellites and ground stations, service availability and satellite reliability, launch and early operations phase, production constraints, etc.). An original and global methodology relying on a powerful optimization tool based on genetic algorithms has been developed at ALCATEL ESPACE. In this approach, symmetrical constellations can be used as initial conditions of the optimization process together with specific evaluation functions. A multi-criteria performance analysis is conducted and presented here in a parametric way in order to identify and evaluate the main sensitive parameters. Quantitative results are given for three examples in the fields of navigation, telecommunication and multimedia satellite systems. In particular, a new design pattern with very efficient properties in terms of robustness to satellite failures is presented and compared with classical Walker patterns.

  6. Optimal bit allocation for hybrid scalable/multiple-description video transmission over wireless channels

    NASA Astrophysics Data System (ADS)

    Jubran, Mohammad K.; Bansal, Manu; Kondi, Lisimachos P.

    2006-01-01

    In this paper, we consider the problem of optimal bit allocation for wireless video transmission over fading channels. We use a newly developed hybrid scalable/multiple-description codec that combines the functionality of both scalable and multiple-description codecs. It produces a base layer and multiple-description enhancement layers. Any of the enhancement layers can be decoded (in a non-hierarchical manner) with the base layer to improve the reconstructed video quality. Two different channel coding schemes (Rate-Compatible Punctured Convolutional (RCPC)/Cyclic Redundancy Check (CRC) coding, and product-code Reed-Solomon (RS)+RCPC/CRC coding) are used for unequal error protection of the layered bitstream. Optimal allocation of the bitrate between source and channel coding is performed for discrete sets of source coding rates and channel coding rates. Experimental results are presented for a wide range of channel conditions. Also, comparisons with classical scalable coding show the effectiveness of using hybrid scalable/multiple-description coding for wireless transmission.

  7. Optimal Control Allocation with Load Sensor Feedback for Active Load Suppression, Experiment Development

    NASA Technical Reports Server (NTRS)

    Miller, Christopher J.; Goodrick, Dan

    2017-01-01

    The problem of control command and maneuver induced structural loads is an important aspect of any control system design. The aircraft structure and the control architecture must be designed to achieve desired piloted control responses while limiting the imparted structural loads. The classical approach is to utilize high structural margins, restrict control surface commands to a limited set of analyzed combinations, and train pilots to follow procedural maneuvering limitations. With recent advances in structural sensing and the continued desire to improve safety and vehicle fuel efficiency, it is both possible and desirable to develop control architectures that enable lighter vehicle weights while maintaining and improving protection against structural damage. An optimal control technique has been explored and shown to achieve desirable vehicle control performance while limiting sensed structural loads. The subject of this paper is the design of the optimal control architecture, and provides the reader with some techniques for tailoring the architecture, along with detailed simulation results.

  8. Do the right thing: the assumption of optimality in lay decision theory and causal judgment.

    PubMed

    Johnson, Samuel G B; Rips, Lance J

    2015-03-01

    Human decision-making is often characterized as irrational and suboptimal. Here we ask whether people nonetheless assume optimal choices from other decision-makers: Are people intuitive classical economists? In seven experiments, we show that an agent's perceived optimality in choice affects attributions of responsibility and causation for the outcomes of their actions. We use this paradigm to examine several issues in lay decision theory, including how responsibility judgments depend on the efficacy of the agent's actual and counterfactual choices (Experiments 1-3), individual differences in responsibility assignment strategies (Experiment 4), and how people conceptualize decisions involving trade-offs among multiple goals (Experiments 5-6). We also find similar results using everyday decision problems (Experiment 7). Taken together, these experiments show that attributions of responsibility depend not only on what decision-makers do, but also on the quality of the options they choose not to take. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach

    NASA Technical Reports Server (NTRS)

    Das, Santanu; Oza, Nikunj C.

    2011-01-01

    In this paper we propose an innovative learning algorithm - a variation of the one-class nu Support Vector Machines (SVMs) learning algorithm - that produces sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class nu SVMs while reducing both training time and test time by several factors.
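
    For reference, the sketch below runs the benchmark the paper compares against, scikit-learn's one-class nu-SVM, on synthetic placeholder data: nu upper-bounds the fraction of training outliers and lower-bounds the fraction of support vectors, so a sparser solution means fewer support vectors and faster test-time scoring.

      # Sketch: baseline one-class nu-SVM anomaly detector with scikit-learn.
      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(4)
      X_train = rng.normal(0, 1, (500, 3))             # nominal data only
      X_test = np.vstack([rng.normal(0, 1, (50, 3)),   # nominal samples
                          rng.normal(5, 1, (5, 3))])   # injected anomalies

      clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_train)
      pred = clf.predict(X_test)                       # +1 inlier, -1 outlier
      print("support vectors:", clf.support_vectors_.shape[0],
            "flagged anomalies:", int((pred == -1).sum()))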

  10. Monitoring and decision making by people in man machine systems

    NASA Technical Reports Server (NTRS)

    Johannsen, G.

    1979-01-01

    The analysis of human monitoring and decision-making behavior, as well as its modeling, is described. Classical and optimal-control-theoretic monitoring models are surveyed. The relationship between attention allocation and eye movements is discussed. As an example of applications, the evaluation of predictor displays by means of the optimal control model is explained. Fault detection involving continuous signals and the decision-making behavior of a human operator engaged in fault diagnosis during different operation and maintenance situations are illustrated. Computer-aided decision making is considered as a queueing problem. It is shown to what extent computer aids can be based on the state of human activity as measured by psychophysiological quantities. Finally, management information systems for different application areas are mentioned. The possibilities of mathematical modeling of human behavior in complex man-machine systems are also critically assessed.

  11. Designing, programming, and optimizing a (small) quantum computer

    NASA Astrophysics Data System (ADS)

    Svore, Krysta

    In 1982, Richard Feynman proposed to use a computer founded on the laws of quantum physics to simulate physical systems. In the more than thirty years since, quantum computers have shown promise to solve problems in number theory, chemistry, and materials science that would otherwise take longer than the lifetime of the universe to solve on an exascale classical machine. The practical realization of a quantum computer requires understanding and manipulating subtle quantum states while experimentally controlling quantum interference. It also requires an end-to-end software architecture for programming, optimizing, and implementing a quantum algorithm on the quantum device hardware. In this talk, we will introduce recent advances in connecting abstract theory to present-day real-world applications through software. We will highlight recent advancement of quantum algorithms and the challenges in ultimately performing a scalable solution on a quantum device.

  12. The capacity to transmit classical information via black holes

    NASA Astrophysics Data System (ADS)

    Adami, Christoph; Ver Steeg, Greg

    2013-03-01

    One of the most vexing problems in theoretical physics is the relationship between quantum mechanics and gravity. According to an argument originally by Hawking, a black hole must destroy any information that is incident on it because the only radiation that a black hole releases during its evaporation (the Hawking radiation) is precisely thermal. Surprisingly, this claim has never been investigated within a quantum information-theoretic framework, where the black hole is treated as a quantum channel to transmit classical information. We calculate the capacity of the quantum black hole channel to transmit classical information (the Holevo capacity) within curved-space quantum field theory, and show that the information carried by late-time particles sent into a black hole can be recovered with arbitrary accuracy, from the signature left behind by the stimulated emission of radiation that must accompany any absorption event. We also show that this stimulated emission turns the black hole into an almost-optimal quantum cloning machine, where the violation of the no-cloning theorem is ensured by the noise provided by the Hawking radiation. Thus, rather than threatening the consistency of theoretical physics, Hawking radiation manages to save it instead.

  13. A multilevel ant colony optimization algorithm for classical and isothermic DNA sequencing by hybridization with multiplicity information available.

    PubMed

    Kwarciak, Kamil; Radom, Marcin; Formanowicz, Piotr

    2016-04-01

    Classical sequencing by hybridization takes into account only binary information about sequence composition: a given element from an oligonucleotide library is or is not a part of the target sequence. However, DNA chip technology has developed to the point where it is possible to obtain partial information about the multiplicity of each oligonucleotide the analyzed sequence consists of. Currently, exact data of this type cannot be obtained, but even partial information should be very useful. Two realistic multiplicity-information models are considered in this paper. The first, called "one and many", assumes that it is possible to learn whether a given oligonucleotide occurs in a reconstructed sequence once or more than once. According to the second model, called "one, two and many", the biochemical experiment reveals whether a given oligonucleotide is present in an analyzed sequence once, twice or at least three times. An ant colony optimization algorithm has been implemented to verify the above models and to compare them with existing algorithms for sequencing by hybridization which utilize the additional information. The proposed algorithm solves the problem with any kind of hybridization errors. Computational experiment results confirm that using even partial information about multiplicity leads to increased quality of the reconstructed sequences. Moreover, they also show that the more precise model yields better solutions and that the ant colony optimization algorithm outperforms the existing ones. Test data sets and the proposed ant colony optimization algorithm are available on: http://bioserver.cs.put.poznan.pl/download/ACO4mSBH.zip. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. A strategy for quantum algorithm design assisted by machine learning

    NASA Astrophysics Data System (ADS)

    Bang, Jeongho; Ryu, Junghee; Yoo, Seokwon; Pawłowski, Marcin; Lee, Jinhyoung

    2014-07-01

    We propose a method for quantum algorithm design assisted by machine learning. The method uses a quantum-classical hybrid simulator, where a ‘quantum student’ is being taught by a ‘classical teacher’. In other words, in our method, the learning system is supposed to evolve into a quantum algorithm for a given problem, assisted by a classical main-feedback system. Our method is applicable for designing quantum oracle-based algorithms. We chose, as a case study, an oracle decision problem, called a Deutsch-Jozsa problem. We showed by using Monte Carlo simulations that our simulator can faithfully learn a quantum algorithm for solving the problem for a given oracle. Remarkably, the learning time is proportional to the square root of the total number of parameters, rather than showing the exponential dependence found in the classical machine learning-based method.

  15. Quo vadis: Hydrologic inverse analyses using high-performance computing and a D-Wave quantum annealer

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2017-12-01

    Classical microprocessors have had a dramatic impact on hydrology for decades, due largely to the exponential growth in computing power predicted by Moore's law. However, this growth is not expected to continue indefinitely and has already begun to slow. Quantum computing is an emerging alternative to classical microprocessors. Here, we demonstrated cutting edge inverse model analyses utilizing some of the best available resources in both worlds: high-performance classical computing and a D-Wave quantum annealer. The classical high-performance computing resources are utilized to build an advanced numerical model that assimilates data from O(10^5) observations, including water levels, drawdowns, and contaminant concentrations. The developed model accurately reproduces the hydrologic conditions at a Los Alamos National Laboratory contamination site, and can be leveraged to inform decision-making about site remediation. We demonstrate the use of a D-Wave 2X quantum annealer to solve hydrologic inverse problems. This work can be seen as an early step in quantum-computational hydrology. We compare and contrast our results with an early inverse approach in classical-computational hydrology that is comparable to the approach we use with quantum annealing. Our results show that quantum annealing can be useful for identifying regions of high and low permeability within an aquifer. While the problems we consider are small-scale compared to the problems that can be solved with modern classical computers, they are large compared to the problems that could be solved with early classical CPUs. Further, the binary nature of the high/low permeability problem makes it well-suited to quantum annealing, but challenging for classical computers.

  16. An extension of the directed search domain algorithm to bilevel optimization

    NASA Astrophysics Data System (ADS)

    Wang, Kaiqiang; Utyuzhnikov, Sergey V.

    2017-08-01

    A method is developed for generating a well-distributed Pareto set for the upper level in bilevel multiobjective optimization. The approach is based on the Directed Search Domain (DSD) algorithm, which is a classical approach for generation of a quasi-evenly distributed Pareto set in multiobjective optimization. The approach contains a double-layer optimizer designed in a specific way under the framework of the DSD method. The double-layer optimizer is based on bilevel single-objective optimization and aims to find a unique optimal Pareto solution rather than generate the whole Pareto frontier on the lower level in order to improve the optimization efficiency. The proposed bilevel DSD approach is verified on several test cases, and a relevant comparison against another classical approach is made. It is shown that the approach can generate a quasi-evenly distributed Pareto set for the upper level with relatively low time consumption.

  17. Two phase genetic algorithm for vehicle routing and scheduling problem with cross-docking and time windows considering customer satisfaction

    NASA Astrophysics Data System (ADS)

    Baniamerian, Ali; Bashiri, Mahdi; Zabihi, Fahime

    2018-03-01

    Cross-docking is a relatively new warehousing policy in logistics which is widely used all over the world and has attracted much research attention in the last decade. In the literature, economic aspects have often been studied, while one of the most significant factors for success in the competitive global market is improving the quality of customer service and focusing on customer satisfaction. In this paper, we introduce a vehicle routing and scheduling problem with cross-docking and time windows in a three-echelon supply chain that considers customer satisfaction. A set of homogeneous vehicles collect products from suppliers and, after a consolidation process in the cross-dock, immediately deliver them to customers. A mixed integer linear programming model is presented for this problem to minimize transportation cost and early/tardy deliveries, with scheduling of inbound and outbound vehicles to increase customer satisfaction. A two-phase genetic algorithm (GA) is developed for the problem. To investigate the performance of the algorithm, it was compared with exact solutions in small-size instances and with lower-bound solutions in large-size instances. Results show that the proposed method achieves at least 86.6% customer satisfaction, whereas customer satisfaction in the classical model is at most 33.3%. Numerical results show that the proposed two-phase algorithm achieves optimal solutions in small-size instances. Also, in large-size instances, the proposed two-phase algorithm achieves better solutions, with a smaller gap from the lower bound and less computational time, than the classic GA.

  18. Environmental Monitoring Networks Optimization Using Advanced Active Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Volpi, Michele; Copa, Loris

    2010-05-01

    The problem of environmental monitoring network optimization (MNO) is one of the basic and fundamental tasks in spatio-temporal data collection, analysis, and modeling. There are several approaches to this problem, which can be considered as the design or redesign of a monitoring network by applying some optimization criteria. The most developed and widespread methods are based on geostatistics (the family of kriging models and conditional stochastic simulations). In geostatistics the variance is mainly used as an optimization criterion, which has both advantages and drawbacks. In the present research we study the application of advanced techniques from statistical learning theory (SLT), namely support vector machines (SVM), to the optimization of monitoring networks for classification problems (data are discrete values/classes: hydrogeological units, soil types, pollution decision levels, etc.). SVM is a universal nonlinear modeling tool for classification problems in high dimensional spaces. The SVM solution maximizes the margin between classes and has good generalization properties for noisy data. The sparse SVM solution is based on support vectors - the data points that contribute to the solution with nonzero weights. Fundamentally, MNO for classification problems can be considered as the task of selecting new measurement points that increase the quality of spatial classification and reduce the testing error (the error on new independent measurements). In SLT this is a typical problem of active learning - the selection of new unlabelled points which efficiently reduce the testing error. A classical approach to active learning (margin sampling) is to sample the points closest to the classification boundary. This solution is suboptimal when points (or, generally, the dataset) are redundant for the same class. In the present research we propose and study two new advanced methods of active learning adapted to the solution of the MNO problem: 1) hierarchical top-down clustering in the input space in order to remove redundancy when data are clustered, and 2) a general method (independent of the classifier) which gives posterior probabilities that can be used to define the classifier confidence and corresponding proposals for new measurement points. The basic ideas and procedures are illustrated using simulated data sets. The real case study deals with the analysis and mapping of soil types, which is a multi-class classification problem. Maps of soil types are important for the analysis and 3D modeling of heavy metal migration in soil and for risk prediction mapping. The results obtained demonstrate the high quality of SVM mapping and the efficiency of monitoring network optimization using active learning approaches. The research was partly supported by SNSF projects No. 200021-126505 and 200020-121835.
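
    The classical margin-sampling baseline mentioned above can be sketched in a few lines with an SVM; the pool of candidate measurement points and the two spatial classes below are synthetic placeholders, not the soil-type data of the case study.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)

      # Synthetic "monitoring" data: 2-D coordinates with two spatial classes.
      X_pool = rng.uniform(0, 1, size=(500, 2))
      y_pool = (X_pool[:, 0] + 0.3 * np.sin(6 * X_pool[:, 1]) > 0.6).astype(int)

      labeled = list(rng.choice(len(X_pool), 20, replace=False))   # initial survey points
      for step in range(30):
          clf = SVC(kernel="rbf", C=10.0, gamma="scale")
          clf.fit(X_pool[labeled], y_pool[labeled])
          # Margin sampling: query the unlabeled point closest to the decision boundary.
          unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
          margins = np.abs(clf.decision_function(X_pool[unlabeled]))
          labeled.append(int(unlabeled[np.argmin(margins)]))

      print("selected measurement points:", len(labeled))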

  19. Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics

    ERIC Educational Resources Information Center

    Schlitt, D. W.

    1977-01-01

    Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
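
    As an illustration of the idea summarized above (not an example taken from the article itself), the following sketch applies the Ritz ansatz directly to the action integral of a unit-mass, unit-frequency harmonic oscillator with fixed endpoints and recovers a good polynomial approximation to the exact trajectory x = cos t.

      import sympy as sp

      # Ritz method applied directly to Hamilton's principle: trial trajectory with
      # x(0) = 1 and x(1) = cos(1), plus two free coefficients c1, c2.
      t = sp.symbols('t')
      c1, c2 = sp.symbols('c1 c2')

      x = 1 + (sp.cos(1) - 1) * t + c1 * t * (1 - t) + c2 * t**2 * (1 - t)
      L = sp.Rational(1, 2) * sp.diff(x, t)**2 - sp.Rational(1, 2) * x**2   # Lagrangian
      S = sp.integrate(L, (t, 0, 1))                                        # action integral

      # Stationarity of the action with respect to the Ritz coefficients gives
      # two linear equations for c1 and c2.
      sol = sp.solve([sp.diff(S, c1), sp.diff(S, c2)], [c1, c2], dict=True)[0]
      x_ritz = x.subs(sol)
      for tv in (0.25, 0.5, 0.75):
          print(tv, float(x_ritz.subs(t, tv)), float(sp.cos(tv)))   # Ritz vs. exact cos(t)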

  20. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, recent advances in high performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansion, obtained using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. Leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of LARS-Kriging-PC is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. LARS-Kriging-PC appears to perform better than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos, depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
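
    A rough sketch of the two-step idea follows, with an assumed toy function standing in for the expensive dosimetry code; the real LARS-Kriging-PC estimates the polynomial trend jointly inside the universal Kriging likelihood, whereas here the trend is simply fitted first and the residual is kriged.

      import numpy as np
      from sklearn.linear_model import Lars
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import Matern

      rng = np.random.default_rng(0)

      # Toy stand-in for an expensive simulation: 3 uncertain inputs -> scalar output.
      def model(x):
          return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

      X = rng.uniform(-1, 1, size=(60, 3))        # design of experiments
      y = model(X)

      # Step 1: least-angle regression selects the most influential polynomial terms
      # (a sparse polynomial-chaos-like basis built here from simple monomials).
      basis = PolynomialFeatures(degree=4, include_bias=False)
      Phi = basis.fit_transform(X)
      lars = Lars(n_nonzero_coefs=8).fit(Phi, y)
      selected = np.flatnonzero(lars.coef_)

      # Step 2: use the selected polynomials as the deterministic trend and model the
      # residual with Kriging (detrend, then fit a Gaussian process).
      trend = Phi[:, selected] @ lars.coef_[selected] + lars.intercept_
      gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y - trend)

      X_test = rng.uniform(-1, 1, size=(200, 3))
      Phi_test = basis.transform(X_test)[:, selected]
      y_pred = Phi_test @ lars.coef_[selected] + lars.intercept_ + gp.predict(X_test)
      print("RMSE:", float(np.sqrt(np.mean((y_pred - model(X_test)) ** 2))))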

  1. Stochastic modelling of turbulent combustion for design optimization of gas turbine combustors

    NASA Astrophysics Data System (ADS)

    Mehanna Ismail, Mohammed Ali

    The present work covers the development and the implementation of an efficient algorithm for the design optimization of gas turbine combustors. The purpose is to explore the possibilities and indicate constructive suggestions for optimization techniques as alternative methods for designing gas turbine combustors. The algorithm is general to the extent that no constraints are imposed on the combustion phenomena or on the combustor configuration. The optimization problem is broken down into two elementary problems: the first is the optimum search algorithm, and the second is the turbulent combustion model used to determine the combustor performance parameters. These performance parameters constitute the objective and physical constraints in the optimization problem formulation. The examination of both turbulent combustion phenomena and the gas turbine design process suggests that the turbulent combustion model represents a crucial part of the optimization algorithm. The basic requirements needed for a turbulent combustion model to be successfully used in a practical optimization algorithm are discussed. In principle, the combustion model should comply with the conflicting requirements of high fidelity, robustness, and computational efficiency. To that end, the problem of turbulent combustion is discussed and the current state of the art of turbulent combustion modelling is reviewed. According to this review, turbulent combustion models based on the composition PDF transport equation are found to be good candidates for application in the present context. However, these models are computationally expensive. To overcome this difficulty, two different models based on the composition PDF transport equation were developed: an improved Lagrangian Monte Carlo composition PDF algorithm and the generalized stochastic reactor model. Improvements in the Lagrangian Monte Carlo composition PDF model performance and its computational efficiency were achieved through the implementation of time splitting, variable stochastic fluid particle mass control, and a second-order time-accurate (predictor-corrector) scheme for solving the stochastic differential equations governing the particles' evolution. The model compared well against experimental data found in the literature for two different configurations: bluff body and swirl stabilized combustors. The generalized stochastic reactor is a newly developed model. This model relies on a generalization of the classical stochastic reactor theory in the sense that it accounts for both finite micro- and macro-mixing processes. (Abstract shortened by UMI.)
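
    The second-order predictor-corrector integration of particle stochastic differential equations can be illustrated as follows; the drift below is a simple relaxation-to-the-mean mixing term chosen for illustration, not the thesis model.

      import numpy as np

      # Predictor-corrector (Heun-type) integration of a scalar SDE
      #   d(phi) = a(phi) dt + b dW,
      # with a drift relaxing each particle's composition toward the ensemble mean,
      # a crude stand-in for the mixing term of a composition-PDF particle method.
      def drift(phi, phi_mean, c_mix=2.0):
          return -c_mix * (phi - phi_mean)

      rng = np.random.default_rng(0)
      n_particles, n_steps, dt, b = 10_000, 200, 1e-3, 0.3
      phi = rng.normal(1.0, 0.5, size=n_particles)

      for _ in range(n_steps):
          dW = rng.normal(0.0, np.sqrt(dt), size=n_particles)
          phi_mean = phi.mean()
          # predictor step (Euler-Maruyama)
          phi_star = phi + drift(phi, phi_mean) * dt + b * dW
          # corrector step: trapezoidal average of the drift, same Wiener increment
          phi = phi + 0.5 * (drift(phi, phi_mean) + drift(phi_star, phi_star.mean())) * dt + b * dW

      print(phi.mean(), phi.var())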

  2. A disturbance based control/structure design algorithm

    NASA Technical Reports Server (NTRS)

    Mclaren, Mark D.; Slater, Gary L.

    1989-01-01

    Some authors take a classical approach to the simultaneous structure/control optimization by attempting to simultaneously minimize the weighted sum of the total mass and a quadratic form, subject to all of the structural and control constraints. Here, the optimization will be based on the dynamic response of a structure to an external unknown stochastic disturbance environment. Such a response to excitation approach is common to both the structural and control design phases, and hence represents a more natural control/structure optimization strategy than relying on artificial and vague control penalties. The design objective is to find the structure and controller of minimum mass such that all the prescribed constraints are satisfied. Two alternative solution algorithms are presented which have been applied to this problem. Each algorithm handles the optimization strategy and the imposition of the nonlinear constraints in a different manner. Two controller methodologies, and their effect on the solution algorithm, will be considered. These are full state feedback and direct output feedback, although the problem formulation is not restricted solely to these forms of controller. In fact, although full state feedback is a popular choice among researchers in this field (for reasons that will become apparent), its practical application is severely limited. The controller/structure interaction is inserted by the imposition of appropriate closed-loop constraints, such as closed-loop output response and control effort constraints. Numerical results will be obtained for a representative flexible structure model to illustrate the effectiveness of the solution algorithms.

  3. Quantum vertex model for reversible classical computing.

    PubMed

    Chamon, C; Mucciolo, E R; Ruckenstein, A E; Yang, Z-C

    2017-05-12

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without 'learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  4. Quantum vertex model for reversible classical computing

    NASA Astrophysics Data System (ADS)

    Chamon, C.; Mucciolo, E. R.; Ruckenstein, A. E.; Yang, Z.-C.

    2017-05-01

    Mappings of classical computation onto statistical mechanics models have led to remarkable successes in addressing some complex computational problems. However, such mappings display thermodynamic phase transitions that may prevent reaching solution even for easy problems known to be solvable in polynomial time. Here we map universal reversible classical computations onto a planar vertex model that exhibits no bulk classical thermodynamic phase transition, independent of the computational circuit. Within our approach the solution of the computation is encoded in the ground state of the vertex model and its complexity is reflected in the dynamics of the relaxation of the system to its ground state. We use thermal annealing with and without `learning' to explore typical computational problems. We also construct a mapping of the vertex model into the Chimera architecture of the D-Wave machine, initiating an approach to reversible classical computation based on state-of-the-art implementations of quantum annealing.

  5. Characterization of classical static noise via qubit as probe

    NASA Astrophysics Data System (ADS)

    Javed, Muhammad; Khan, Salman; Ullah, Sayed Arif

    2018-03-01

    The dynamics of quantum Fisher information (QFI) of a single qubit coupled to classical static noise is investigated. The analytical relation for QFI fixes the optimal initial state of the qubit that maximizes it. An approximate limit for the time of coupling that leads to physically useful results is identified. Moreover, using the approach of quantum estimation theory and the analytical relation for QFI, the qubit is used as a probe to precisely estimate the disordered parameter of the environment. Relation for optimal interaction time with the environment is obtained, and condition for the optimal measurement of the noise parameter of the environment is given. It is shown that all values, in the mentioned range, of the noise parameter are estimable with equal precision. A comparison of our results with the previous studies in different classical environments is made.

  6. Least-squares/parabolized Navier-Stokes procedure for optimizing hypersonic wind tunnel nozzles

    NASA Technical Reports Server (NTRS)

    Korte, John J.; Kumar, Ajay; Singh, D. J.; Grossman, B.

    1991-01-01

    A new procedure is demonstrated for optimizing hypersonic wind-tunnel-nozzle contours. The procedure couples a CFD computer code to an optimization algorithm, and is applied to both conical and contoured hypersonic nozzles for the purpose of determining an optimal set of parameters to describe the surface geometry. A design-objective function is specified based on the deviation from the desired test-section flow-field conditions. The objective function is minimized by optimizing the parameters used to describe the nozzle contour based on the solution to a nonlinear least-squares problem. The effect of changes in the nozzle wall parameters is evaluated by computing the nozzle flow using the parabolized Navier-Stokes equations. The advantage of the new procedure is that it directly takes into account the displacement effect of the boundary layer on the wall contour. The new procedure provides a method for optimizing high-Mach-number hypersonic nozzles that have been designed by classical procedures but are shown to produce poor flow quality due to the large boundary layers present in the test section. The procedure is demonstrated by finding the optimum design parameters for a Mach 10 conical nozzle and a Mach 6 and a Mach 15 contoured nozzle.
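
    The coupling of a flow solver to a nonlinear least-squares search over contour parameters can be sketched as follows; the analytic "flow model", its three shape parameters, and the Mach 6 design target are illustrative stand-ins for the parabolized Navier-Stokes solution.

      import numpy as np
      from scipy.optimize import least_squares

      x_exit = np.linspace(0.0, 1.0, 25)          # stations across the test section
      M_target = np.full_like(x_exit, 6.0)        # desired uniform exit Mach number

      def flow_model(params, x):
          """Cheap analytic stand-in for the PNS solution: exit Mach distribution
          produced by a contour described by three shape parameters."""
          a, b, c = params
          boundary_layer = 0.15 * np.exp(-b * (1 - x))    # mimics the displacement effect
          return a * (1 - c * (x - 0.5) ** 2) - boundary_layer

      def residuals(params):
          # deviation of the computed test-section flow from the design objective
          return flow_model(params, x_exit) - M_target

      result = least_squares(residuals, x0=[6.0, 5.0, 0.1])
      print("optimal contour parameters:", result.x)
      print("rms Mach deviation:", np.sqrt(np.mean(result.fun ** 2)))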

  7. Device-Independent Tests of Classical and Quantum Dimensions

    NASA Astrophysics Data System (ADS)

    Gallego, Rodrigo; Brunner, Nicolas; Hadley, Christopher; Acín, Antonio

    2010-12-01

    We address the problem of testing the dimensionality of classical and quantum systems in a “black-box” scenario. We develop a general formalism for tackling this problem. This allows us to derive lower bounds on the classical dimension necessary to reproduce given measurement data. Furthermore, we generalize the concept of quantum dimension witnesses to arbitrary quantum systems, allowing one to place a lower bound on the Hilbert space dimension necessary to reproduce certain data. Illustrating these ideas, we provide simple examples of classical and quantum dimension witnesses.

  8. Heuristic rules embedded genetic algorithm for in-core fuel management optimization

    NASA Astrophysics Data System (ADS)

    Alim, Fatih

    The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWR and Boiling Water Reactor (BWR) cores. The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by using the evolution operators. To solve this optimization problem, a LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA is developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm is changed, to make use of in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis and preliminary results are shown for the VVER-1000 reactor hexagonal geometry core and the TMI-1 PWR. The core physics code used for the VVER in this research is Moby-Dick, which was developed to analyze the VVER by SKODA Inc. The SIMULATE-3 code, which is an advanced two-group nodal code, is used to analyze the TMI-1.

  9. Locking classical correlations in quantum States.

    PubMed

    DiVincenzo, David P; Horodecki, Michał; Leung, Debbie W; Smolin, John A; Terhal, Barbara M

    2004-02-13

    We show that there exist bipartite quantum states which contain a large locked classical correlation that is unlocked by a disproportionately small amount of classical communication. In particular, there are (2n+1)-qubit states for which a one-bit message doubles the optimal classical mutual information between measurement results on the subsystems, from n/2 bits to n bits. This phenomenon is impossible classically. However, states exhibiting this behavior need not be entangled. We study the range of states exhibiting this phenomenon and bound its magnitude.

  10. Advances and trends in structural and solid mechanics; Proceedings of the Symposium, Washington, DC, October 4-7, 1982

    NASA Technical Reports Server (NTRS)

    Noor, A. K. (Editor); Housner, J. M.

    1983-01-01

    The mechanics of materials and material characterization are considered, taking into account micromechanics, the behavior of steel structures at elevated temperatures, and an anisotropic plasticity model for inelastic multiaxial cyclic deformation. Other topics explored are related to advances and trends in finite element technology, classical analytical techniques and their computer implementation, interactive computing and computational strategies for nonlinear problems, advances and trends in numerical analysis, database management systems and CAD/CAM, space structures and vehicle crashworthiness, beams, plates and fibrous composite structures, design-oriented analysis, artificial intelligence and optimization, contact problems, random waves, and lifetime prediction. Earthquake-resistant structures and other advanced structural applications are also discussed, giving attention to cumulative damage in steel structures subjected to earthquake ground motions, and a mixed domain analysis of nuclear containment structures using impulse functions.

  11. Phase unwrapping using region-based markov random field model.

    PubMed

    Dong, Ying; Ji, Jim

    2010-01-01

    Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar to or better than the Phase Unwrapping MAx-flow/min-cut (PUMA) method and the ZpM method.

  12. Finite-size effect on optimal efficiency of heat engines.

    PubMed

    Tajima, Hiroyasu; Hayashi, Masahito

    2017-07-01

    The optimal efficiency of quantum (or classical) heat engines whose heat baths are n-particle systems is given by the strong large deviation. We give the optimal work extraction process as a concrete energy-preserving unitary time evolution among the heat baths and the work storage. We show that our optimal work extraction turns the disordered energy of the heat baths to the ordered energy of the work storage, by evaluating the ratio of the entropy difference to the energy difference in the heat baths and the work storage, respectively. By comparing the statistical mechanical optimal efficiency with the macroscopic thermodynamic bound, we evaluate the accuracy of the macroscopic thermodynamics with finite-size heat baths from the statistical mechanical viewpoint. We also evaluate the quantum coherence effect on the optimal efficiency of the cycle processes without restricting their cycle time by comparing the classical and quantum optimal efficiencies.

  13. Further Development of an Optimal Design Approach Applied to Axial Magnetic Bearings

    NASA Technical Reports Server (NTRS)

    Bloodgood, V. Dale, Jr.; Groom, Nelson J.; Britcher, Colin P.

    2000-01-01

    Classical design methods involved in magnetic bearings and magnetic suspension systems have always had their limitations. Because of this, the overall effectiveness of a design has always relied heavily on the skill and experience of the individual designer. This paper combines two approaches that have been developed to aid the accuracy and efficiency of magnetostatic design. The first approach integrates classical magnetic circuit theory with modern optimization theory to increase design efficiency. The second approach uses loss factors to increase the accuracy of classical magnetic circuit theory. As an example, an axial magnetic thrust bearing is designed for minimum power.
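
    A toy version of the combined circuit-theory/optimization idea is sketched below; the force requirement, gap, loss and weight coefficients are illustrative numbers, and the uncorrected classical circuit relations (F = B^2 A / (2 mu0), NI = B g / mu0) replace the loss-factor-corrected model used in the paper.

      import numpy as np
      from scipy.optimize import minimize

      MU0 = 4e-7 * np.pi
      F_REQ = 500.0        # required axial force, N                 (illustrative)
      GAP = 0.5e-3         # nominal air gap, m                      (illustrative)
      B_SAT = 1.2          # allowable flux density in the iron, T
      K_POWER = 2.0e-4     # coil dissipation per (ampere-turn)^2, W (illustrative)
      K_MASS = 4.0e4       # weight penalty per unit pole area       (illustrative)

      def flux_density(pole_area):
          # classical magnetic-circuit relation F = B^2 A / (2 mu0), inverted for B
          return np.sqrt(2 * MU0 * F_REQ / pole_area)

      def objective(x):
          pole_area, = x
          B = flux_density(pole_area)
          ampere_turns = B * GAP / MU0            # NI needed to drive B across the gap
          return K_POWER * ampere_turns ** 2 + K_MASS * pole_area   # power + weight penalty

      cons = [{"type": "ineq", "fun": lambda x: B_SAT - flux_density(x[0])}]
      res = minimize(objective, x0=[5e-4], bounds=[(1e-4, 4e-3)], constraints=cons)
      print("pole face area (m^2):", res.x[0], " objective:", res.fun)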

  14. Economic evaluation of genomic selection in small ruminants: a sheep meat breeding program.

    PubMed

    Shumbusho, F; Raoul, J; Astruc, J M; Palhiere, I; Lemarié, S; Fugeray-Scarbel, A; Elsen, J M

    2016-06-01

    Recent genomic evaluation studies using real data and predicting genetic gain by modeling breeding programs have reported moderate expected benefits from the replacement of classic selection schemes by genomic selection (GS) in small ruminants. The objectives of this study were to compare the cost, monetary genetic gain and economic efficiency of classic selection and GS schemes in the meat sheep industry. Deterministic methods were used to model selection based on multi-trait indices from a sheep meat breeding program. Decisional variables related to male selection candidates and progeny testing were optimized to maximize the annual monetary genetic gain (AMGG), that is, a weighted sum of meat and maternal traits annual genetic gains. For GS, a reference population of 2000 individuals was assumed and genomic information was available for evaluation of male candidates only. In the classic selection scheme, males breeding values were estimated from own and offspring phenotypes. In GS, different scenarios were considered, differing by the information used to select males (genomic only, genomic+own performance, genomic+offspring phenotypes). The results showed that all GS scenarios were associated with higher total variable costs than classic selection (if the cost of genotyping was 123 euros/animal). In terms of AMGG and economic returns, GS scenarios were found to be superior to classic selection only if genomic information was combined with their own meat phenotypes (GS-Pheno) or with their progeny test information. The predicted economic efficiency, defined as returns (proportional to number of expressions of AMGG in the nucleus and commercial flocks) minus total variable costs, showed that the best GS scenario (GS-Pheno) was up to 15% more efficient than classic selection. For all selection scenarios, optimization increased the overall AMGG, returns and economic efficiency. As a conclusion, our study shows that some forms of GS strategies are more advantageous than classic selection, provided that GS is already initiated (i.e. the initial reference population is available). Optimizing decisional variables of the classic selection scheme could be of greater benefit than including genomic information in optimized designs.

  15. Fisher information and asymptotic normality in system identification for quantum Markov chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guta, Madalin

    2011-06-15

    This paper deals with the problem of estimating the coupling constant θ of a mixing quantum Markov chain. For a repeated measurement on the chain's output we show that the outcomes' time average has an asymptotically normal (Gaussian) distribution, and we give the explicit expressions of its mean and variance. In particular, we obtain a simple estimator of θ whose classical Fisher information can be optimized over different choices of measured observables. We then show that the quantum state of the output together with the system is itself asymptotically Gaussian and compute its quantum Fisher information, which sets an absolute bound to the estimation error. The classical and quantum Fisher information are compared in a simple example. In the vicinity of θ=0 we find that the quantum Fisher information has a quadratic rather than linear scaling in output size, and asymptotically the Fisher information is localized in the system, while the output is independent of the parameter.

  16. Exhaustive Classification of the Invariant Solutions for a Specific Nonlinear Model Describing Near Planar and Marginally Long-Wave Unstable Interfaces for Phase Transition

    NASA Astrophysics Data System (ADS)

    Ahangari, Fatemeh

    2018-05-01

    Problems of thermodynamic phase transition originate inherently in solidification, combustion and various other significant fields. If the transition region between two locally stable phases is adequately narrow, the dynamics can be modeled by an interface motion. This paper is devoted to an exhaustive analysis of the invariant solutions of a modified Kuramoto-Sivashinsky equation in two spatial and one temporal dimensions. This nonlinear partial differential equation asymptotically characterizes near planar interfaces which are marginally long-wave unstable. For this purpose, by applying the classical symmetry method to this model, the classical symmetry operators are obtained. Moreover, the structure of the Lie algebra of symmetries is discussed and the optimal system of subalgebras, which yields the preliminary classification of group invariant solutions, is constructed. The Lie invariants corresponding to the infinitesimal symmetry generators, as well as the associated similarity-reduced equations, are also presented. Furthermore, the nonclassical symmetries of this nonlinear PDE are also comprehensively investigated.

  17. Conserved Quantities in General Relativity: From the Quasi-Local Level to Spatial Infinity

    NASA Astrophysics Data System (ADS)

    Chen, Po-Ning; Wang, Mu-Tao; Yau, Shing-Tung

    2015-08-01

    We define quasi-local conserved quantities in general relativity by using the optimal isometric embedding in Wang and Yau (Commun Math Phys 288(3):919-942, 2009) to transplant Killing fields in the Minkowski spacetime back to the 2-surface of interest in a physical spacetime. To each optimal isometric embedding, a dual element of the Lie algebra of the Lorentz group is assigned. Quasi-local angular momentum and quasi-local center of mass correspond to pairing this element with rotation Killing fields and boost Killing fields, respectively. They obey classical transformation laws under the action of the Poincaré group. We further justify these definitions by considering their limits as the total angular momentum and the total center of mass of an isolated system. These expressions were derived from the Hamilton-Jacobi analysis of the gravitational action and thus satisfy conservation laws. As a result, we obtained an invariant total angular momentum theorem in the Kerr spacetime. For a vacuum asymptotically flat initial data set of order 1, it is shown that the limits are always finite without any extra assumptions. We also study these total conserved quantities on a family of asymptotically flat initial data sets evolving by the vacuum Einstein evolution equation. It is shown that the total angular momentum is conserved under the evolution. For the total center of mass, the classical dynamical formula relating the center of mass, energy, and linear momentum is recovered, in the nonlinear context of initial data sets evolving by the vacuum Einstein evolution equation. The definition of quasi-local angular momentum provides an answer to the second problem in classical general relativity on Penrose's list (Proc R Soc Lond Ser A 381(1780):53-63, 1982).

  18. Generalized probabilistic theories and conic extensions of polytopes

    NASA Astrophysics Data System (ADS)

    Fiorini, Samuel; Massar, Serge; Patra, Manas K.; Tiwary, Hans Raj

    2015-01-01

    Generalized probabilistic theories (GPT) provide a general framework that includes classical and quantum theories. It is described by a cone C and its dual C*. We show that whether some one-way communication complexity problems can be solved within a GPT is equivalent to the recently introduced cone factorization of the corresponding communication matrix M. We also prove an analogue of Holevo's theorem: when the cone C is contained in {{{R}}n}, the classical capacity of the channel realized by sending GPT states and measuring them is bounded by log n. Polytopes and optimising functions over polytopes arise in many areas of discrete mathematics. A conic extension of a polytope is the intersection of a cone C with an affine subspace whose projection onto the original space yields the desired polytope. Extensions of polytopes can sometimes be much simpler geometric objects than the polytope itself. The existence of a conic extension of a polytope is equivalent to that of a cone factorization of the slack matrix of the polytope, on the same cone. We show that all 0/1 polytopes whose vertices can be recognized by a polynomial size circuit, which includes as a special case the travelling salesman polytope and many other polytopes from combinatorial optimization, have small conic extension complexity when the cone is the completely positive cone. Using recent exponential lower bounds on the linear extension complexity of polytopes, this provides an exponential gap between the communication complexity of GPT based on the completely positive cone and classical communication complexity, and a conjectured exponential gap with quantum communication complexity. Our work thus relates the communication complexity of generalizations of quantum theory to questions of mainstream interest in the area of combinatorial optimization.

  19. Lack of a thermodynamic finite-temperature spin-glass phase in the two-dimensional randomly coupled ferromagnet

    NASA Astrophysics Data System (ADS)

    Zhu, Zheng; Ochoa, Andrew J.; Katzgraber, Helmut G.

    2018-05-01

    The search for problems where quantum adiabatic optimization might excel over classical optimization techniques has sparked a recent interest in inducing a finite-temperature spin-glass transition in quasiplanar topologies. We have performed large-scale finite-temperature Monte Carlo simulations of a two-dimensional square-lattice bimodal spin glass with next-nearest ferromagnetic interactions claimed to exhibit a finite-temperature spin-glass state for a particular relative strength of the next-nearest to nearest interactions [Phys. Rev. Lett. 76, 4616 (1996), 10.1103/PhysRevLett.76.4616]. Our results show that the system is in a paramagnetic state in the thermodynamic limit, despite zero-temperature simulations [Phys. Rev. B 63, 094423 (2001), 10.1103/PhysRevB.63.094423] suggesting the existence of a finite-temperature spin-glass transition. Therefore, deducing the finite-temperature behavior from zero-temperature simulations can be dangerous when corrections to scaling are large.
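
    A minimal finite-temperature Metropolis simulation of a square-lattice bimodal spin glass with next-nearest-neighbor ferromagnetic bonds is sketched below; the lattice size, coupling ratio, and sweep count are illustrative, and a production study of the kind reported above would typically use parallel tempering and averaging over many disorder realizations.

      import numpy as np

      rng = np.random.default_rng(0)
      L, T, sweeps = 16, 1.0, 2000
      lam = 0.6                             # relative strength of NNN ferromagnetic bonds (illustrative)

      spins = rng.choice([-1, 1], size=(L, L))
      Jx = rng.choice([-1, 1], size=(L, L))  # bimodal +-J bond to the right neighbour
      Jy = rng.choice([-1, 1], size=(L, L))  # bimodal +-J bond to the bottom neighbour

      def local_field(s, i, j):
          """Sum of couplings acting on spin (i, j): random NN bonds + ferromagnetic NNN bonds."""
          h = (Jx[i, j] * s[i, (j + 1) % L] + Jx[i, (j - 1) % L] * s[i, (j - 1) % L] +
               Jy[i, j] * s[(i + 1) % L, j] + Jy[(i - 1) % L, j] * s[(i - 1) % L, j])
          h += lam * (s[(i + 1) % L, (j + 1) % L] + s[(i + 1) % L, (j - 1) % L] +
                      s[(i - 1) % L, (j + 1) % L] + s[(i - 1) % L, (j - 1) % L])
          return h

      for sweep in range(sweeps):          # single-spin-flip Metropolis updates
          for i in range(L):
              for j in range(L):
                  dE = 2.0 * spins[i, j] * local_field(spins, i, j)
                  if dE <= 0 or rng.random() < np.exp(-dE / T):
                      spins[i, j] *= -1

      print("magnetization per spin:", spins.mean())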

  20. Improving Efficiency in Multi-Strange Baryon Reconstruction in d-Au at STAR

    NASA Astrophysics Data System (ADS)

    Leight, William

    2003-10-01

    We report preliminary multi-strange baryon measurements for d-Au collisions recorded at RHIC by the STAR experiment. After using classical topological analysis, in which cuts for each discriminating variable are adjusted by hand, we investigate improvements in signal-to-noise optimization using Linear Discriminant Analysis (LDA). LDA is an algorithm for finding, in the n-dimensional space of the n discriminating variables, the axis on which the signal and noise distributions are most separated. LDA is the first step in moving towards more sophisticated techniques for signal-to-noise optimization, such as Artificial Neural Nets. Due to the relatively low background and sufficiently high yields of d-Au collisions, they form an ideal system to study these possibilities for improving reconstruction methods. Such improvements will be extremely important for forthcoming Au-Au runs in which the size of the combinatoric background is a major problem in reconstruction efforts.
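
    The LDA step itself is a one-liner in common libraries; the sketch below uses synthetic stand-ins for the topological discriminating variables (a real analysis would use measured decay-topology quantities for each candidate).

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)

      # Synthetic stand-ins for discriminating variables (e.g. decay length, daughter DCA,
      # pointing angle); real candidates from the experiment would replace these.
      n_sig, n_bkg = 2000, 20000
      signal = rng.normal([3.0, 0.5, 0.02], [1.0, 0.3, 0.01], size=(n_sig, 3))
      background = rng.normal([1.0, 1.5, 0.10], [1.0, 0.8, 0.05], size=(n_bkg, 3))

      X = np.vstack([signal, background])
      y = np.r_[np.ones(n_sig), np.zeros(n_bkg)]

      lda = LinearDiscriminantAnalysis().fit(X, y)
      scores = lda.decision_function(X)           # projection on the most separating axis

      # A single cut on the LDA score replaces hand-tuned cuts on each variable.
      cut = np.quantile(scores[y == 0], 0.99)     # keep only 1% of background
      eff = (scores[y == 1] > cut).mean()
      print("signal efficiency at 1% background acceptance:", round(float(eff), 3))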

  1. Elements of an algorithm for optimizing a parameter-structural neural network

    NASA Astrophysics Data System (ADS)

    Mrówczyńska, Maria

    2016-06-01

    The field of processing information provided by measurement results is one of the most important components of geodetic technologies. The dynamic development of this field improves classic algorithms for numerical calculations in the aspect of analytical solutions that are difficult to achieve. Algorithms based on artificial intelligence in the form of artificial neural networks, including the topology of connections between neurons have become an important instrument connected to the problem of processing and modelling processes. This concept results from the integration of neural networks and parameter optimization methods and makes it possible to avoid the necessity to arbitrarily define the structure of a network. This kind of extension of the training process is exemplified by the algorithm called the Group Method of Data Handling (GMDH), which belongs to the class of evolutionary algorithms. The article presents a GMDH type network, used for modelling deformations of the geometrical axis of a steel chimney during its operation.

  2. Jerk Minimization Method for Vibration Control in Buildings

    NASA Technical Reports Server (NTRS)

    Abatan, Ayo O.; Yao, Leummim

    1997-01-01

    In many vibration minimization control problems for high-rise buildings subject to strong earthquake loads, the emphasis has been on a combination of minimizing the displacement, the velocity, and the acceleration of the motion of the building. In most cases, the accelerations involved are not necessarily large, but the changes in them (the jerk) are abrupt. These changes in magnitude or direction are responsible for much building damage and, because of the element of surprise, also create discomfort such as motion sickness for the inhabitants of these structures. We propose a method that also minimizes the jerk, the sudden change in acceleration (the derivative of the acceleration), using classical linear quadratic optimal control. This was done through the introduction of a quadratic performance index involving the cost due to the jerk, a special change of variable, and the use of the jerk as a control variable. The values of the optimal control are obtained using the Riccati equation.
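
    The construction can be sketched by treating the jerk as the control input of an augmented state-space model and solving the Riccati equation for the quadratic index; the triple-integrator model and the weights below are illustrative and omit the building's structural dynamics.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # States x = [displacement, velocity, acceleration], dx/dt = A x + B u, with u = jerk.
      A = np.array([[0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0],
                    [0.0, 0.0, 0.0]])
      B = np.array([[0.0], [0.0], [1.0]])

      # Quadratic performance index penalizing displacement, velocity, acceleration and jerk.
      Q = np.diag([100.0, 10.0, 1.0])     # state weights (illustrative)
      R = np.array([[0.1]])               # jerk weight (illustrative)

      P = solve_continuous_are(A, B, Q, R)
      K = np.linalg.solve(R, B.T @ P)     # optimal state-feedback gain, u = -K x
      print("LQR gain on [x, v, a]:", K.ravel())

      # Closed-loop check: eigenvalues of (A - B K) should all have negative real parts.
      print("closed-loop poles:", np.linalg.eigvals(A - B @ K))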

  3. Implementation of quantum game theory simulations using Python

    NASA Astrophysics Data System (ADS)

    Madrid S., A.

    2013-05-01

    This paper provides some examples of quantum games simulated in the Python programming language. The quantum games have been developed with the SymPy Python library, which permits solving quantum problems in symbolic form. The application of these methods of quantum mechanics to game theory gives us more possibilities to achieve results not possible before. To illustrate the results of these methods, the quantum battle of the sexes, the prisoner's dilemma, and card games have been simulated. These solutions are able to exceed the classic bottleneck and obtain optimal quantum strategies. In this way, Python demonstrates that it is possible to implement more advanced and complicated quantum game algorithms.

  4. The Propulsive-Only Flight Control Problem

    NASA Technical Reports Server (NTRS)

    Blezad, Daniel J.

    1996-01-01

    Attitude control of aircraft using only the throttles is investigated. The long time constants of both the engines and of the aircraft dynamics, together with the coupling between longitudinal and lateral aircraft modes make piloted flight with failed control surfaces hazardous, especially when attempting to land. This research documents the results of in-flight operation using simulated failed flight controls and ground simulations of piloted propulsive-only control to touchdown. Augmentation control laws to assist the pilot are described using both optimal control and classical feedback methods. Piloted simulation using augmentation shows that simple and effective augmented control can be achieved in a wide variety of failed configurations.

  5. Jamming Attack in Wireless Sensor Network: From Time to Space

    NASA Astrophysics Data System (ADS)

    Sun, Yanqiang; Wang, Xiaodong; Zhou, Xingming

    Classical jamming attack models in the time domain have been proposed, such as constant jammer, random jammer, and reactive jammer. In this letter, we consider a new problem: given k jammers, how does the attacker minimize the pair-wise connectivity among the nodes in a Wireless Sensor Network (WSN)? We call this problem k-Jammer Deployment Problem (k-JDP). To the best of our knowledge, this is the first attempt at considering the position-critical jamming attack against wireless sensor network. We mainly make three contributions. First, we prove that the decision version of k-JDP is NP-complete even in the ideal situation where the attacker has full knowledge of the topology information of sensor network. Second, we propose a mathematical formulation based on Integer Programming (IP) model which yields an optimal solution. Third, we present a heuristic algorithm HAJDP, and compare it with the IP model. Numerical results show that our heuristic algorithm is computationally efficient.
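
    A simple greedy stand-in (not the HAJDP heuristic of the letter) illustrates the objective of minimizing pairwise connectivity with k jammer positions; the random geometric graph and the one-node-per-jammer simplification are assumptions made for brevity.

      import networkx as nx

      def pairwise_connectivity(G):
          """Number of node pairs that can still communicate (sum of C(n, 2) over components)."""
          return sum(len(c) * (len(c) - 1) // 2 for c in nx.connected_components(G))

      def greedy_jammer_placement(G, k):
          """Greedy heuristic: repeatedly jam the node whose removal (simplified here to
          removing only that node, not its whole radio neighbourhood) most reduces the
          pairwise connectivity of the surviving network."""
          H = G.copy()
          jammed = []
          for _ in range(k):
              best = min(H.nodes, key=lambda v: pairwise_connectivity(
                  H.subgraph(set(H.nodes) - {v})))
              jammed.append(best)
              H.remove_node(best)
          return jammed, pairwise_connectivity(H)

      G = nx.random_geometric_graph(60, 0.2, seed=1)   # toy sensor network
      jammed, residual = greedy_jammer_placement(G, k=3)
      print("jammed nodes:", jammed, " remaining connected pairs:", residual)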

  6. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    PubMed

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test, hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work TV regularization problem is addressed by ADAL which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix vector multiplications and a linear time shrinkage operation. Comparison of experimental results indicate that the proposed algorithm is stable, efficient and competitive with the existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Classical Electrodynamics: Lecture notes

    NASA Astrophysics Data System (ADS)

    Likharev, Konstantin K.

    2018-06-01

    Essential Advanced Physics is a series comprising four parts: Classical Mechanics, Classical Electrodynamics, Quantum Mechanics and Statistical Mechanics. Each part consists of two volumes, Lecture notes and Problems with solutions, further supplemented by an additional collection of test problems and solutions available to qualifying university instructors. This volume, Classical Electrodynamics: Lecture notes, is intended to be the basis for a two-semester graduate-level course on electricity and magnetism, including not only the interaction and dynamics of charged point particles, but also the properties of dielectric, conducting, and magnetic media. The course also covers special relativity, including its kinematics and particle-dynamics aspects, and electromagnetic radiation by relativistic particles.

  8. Statistical mechanics based on fractional classical and quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korichi, Z.; Meftah, M. T., E-mail: mewalid@yahoo.com

    2014-03-15

    The purpose of this work is to study some problems in statistical mechanics based on fractional classical and quantum mechanics. At the first stage we present the thermodynamical properties of the classical ideal gas and of a system of N classical oscillators. In both cases, the Hamiltonian contains fractional exponents of the phase space variables (position and momentum). At the second stage, in the context of fractional quantum mechanics, we calculate the thermodynamical properties of black body radiation, study the Bose-Einstein statistics with the related problem of condensation, and study the Fermi-Dirac statistics.

  9. Observations of classical cepheids

    NASA Technical Reports Server (NTRS)

    Pel, J. W.

    1980-01-01

    The observations of classical Cepheids are reviewed. The main progress that has been made is summarized and some of the problems yet to be solved are discussed. The problems include color excesses, calibration of color, duplicity, ultraviolet colors, temperature-color relations, mass discrepancies, and radius determination.

  10. Applications and error correction for adiabatic quantum optimization

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen

    Adiabatic quantum optimization (AQO) is a fast-developing subfield of quantum information processing which holds great promise in the relatively near future. Here we develop an application, quantum anomaly detection, and an error correction code, Quantum Annealing Correction (QAC), for use with AQO. The motivation for the anomaly detection algorithm is the problematic nature of classical software verification and validation (V&V). The number of lines of code written for safety-critical applications such as cars and aircraft increases each year, and with it the cost of finding errors grows exponentially (the cost of overlooking errors, which can be measured in human safety, is arguably even higher). We approach the V&V problem by using a quantum machine learning algorithm to identify characteristics of software operations that are implemented outside of specifications, then define an AQO to return these anomalous operations as its result. Our error correction work is the first large-scale experimental demonstration of quantum error correcting codes. We develop QAC and apply it to USC's equipment, the first and second generations of commercially available D-Wave AQO processors. We first show comprehensive experimental results for the code's performance on antiferromagnetic chains, scaling the problem size up to 86 logical qubits (344 physical qubits) and recovering significant encoded success rates even when the unencoded success rates drop to almost nothing. A broader set of randomized benchmarking problems is then introduced, for which we observe similar behavior to the antiferromagnetic chain, specifically that the use of QAC is almost always advantageous for problems of sufficient size and difficulty. Along the way, we develop problem-specific optimizations for the code and gain insight into the various on-chip error mechanisms (most prominently thermal noise, since the hardware operates at finite temperature) and the ways QAC counteracts them. We finish by showing that the scheme is robust to qubit loss on-chip, a significant benefit when considering an implemented system.

  11. Identification of groundwater flow parameters using reciprocal data from hydraulic interference tests

    NASA Astrophysics Data System (ADS)

    Marinoni, Marianna; Delay, Frederick; Ackerer, Philippe; Riva, Monica; Guadagnini, Alberto

    2016-08-01

    We investigate the effect of considering reciprocal drawdown curves for the characterization of hydraulic properties of aquifer systems through inverse modeling based on interference well testing. Reciprocity implies that the drawdown observed in a well B when pumping takes place from well A should strictly coincide with the drawdown observed in A when pumping in B with the same flow rate as in A. In this context, a critical point related to applications of hydraulic tomography is the assessment of the number of available independent drawdown data and their impact on the solution of the inverse problem. The issue arises when inverse modeling relies upon mathematical formulations of the classical single-continuum approach to flow in porous media grounded on Darcy's law. In these cases, introducing reciprocal drawdown curves into the database of an inverse problem is, to a certain extent, equivalent to duplicating some information. We present a theoretical analysis of the way a least-squares objective function and a Levenberg-Marquardt minimization algorithm are affected by the introduction of reciprocal information in the inverse problem. We also investigate the way these reciprocal data, possibly corrupted by measurement errors, influence model parameter identification in terms of: (a) the convergence of the inverse model, (b) the optimal values of parameter estimates, and (c) the associated estimation uncertainty. Our theoretical findings are exemplified through a suite of computational examples focused on block-heterogeneous systems with increasing levels of complexity. We find that the introduction of noisy reciprocal information in the objective function of the inverse problem has a very limited influence on the optimal parameter estimates. Convergence of the inverse problem improves when adding diverse (nonreciprocal) drawdown series, but does not improve when reciprocal information is added to condition the flow model. The uncertainty in the optimal parameter estimates is influenced by the strength of measurement errors and is not significantly diminished or increased by adding noisy reciprocal information.
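
    The effect of appending reciprocal (hence largely duplicated) observations to a Levenberg-Marquardt least-squares fit can be illustrated with a toy two-parameter forward model; the drawdown expressions below are invented placeholders for a groundwater flow simulator.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(0)

      def cross_drawdowns(logT):
          """Toy forward model: drawdowns observed in three wells when pumping in well A, as a
          smooth function of two block log-transmissivities (a stand-in for a flow solver).
          By reciprocity, pumping in each observation well and measuring in A gives the
          same three model values."""
          T1, T2 = np.exp(logT)
          return np.array([1.0 / T1 + 0.3 / T2,
                           0.5 / T1 + 0.8 / T2,
                           0.2 / T1 + 1.1 / T2])

      true = np.log([2.0, 0.5])
      direct = cross_drawdowns(true) + rng.normal(0, 0.02, 3)      # pumping in A
      reciprocal = cross_drawdowns(true) + rng.normal(0, 0.02, 3)  # reciprocal tests, new noise

      def residuals(logT, use_reciprocal):
          r = cross_drawdowns(logT) - direct
          if use_reciprocal:                                       # reciprocal data duplicate the
              r = np.r_[r, cross_drawdowns(logT) - reciprocal]     # same model information
          return r

      for flag in (False, True):
          fit = least_squares(residuals, x0=np.log([1.0, 1.0]), args=(flag,), method="lm")
          label = "with reciprocal data" if flag else "direct data only   "
          print(label, np.exp(fit.x))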

  12. A new graph-based method for pairwise global network alignment

    PubMed Central

    Klau, Gunnar W

    2009-01-01

    Background: In addition to component-based comparative approaches, network alignments provide the means to study conserved network topology such as common pathways and more complex network motifs. Yet, unlike in classical sequence alignment, the comparison of networks becomes computationally more challenging, as most meaningful assumptions instantly lead to NP-hard problems. Most previous algorithmic work on network alignments is heuristic in nature. Results: We introduce the graph-based maximum structural matching formulation for pairwise global network alignment. We relate the formulation to previous work and prove NP-hardness of the problem. Based on the new formulation we build upon recent results in computational structural biology and present a novel Lagrangian relaxation approach that, in combination with a branch-and-bound method, computes provably optimal network alignments. The Lagrangian algorithm alone is a powerful heuristic method, which produces solutions that are often near-optimal and – unlike those computed by pure heuristics – come with a quality guarantee. Conclusion: Computational experiments on the alignment of protein-protein interaction networks and on the classification of metabolic subnetworks demonstrate that the new method is reasonably fast and has advantages over pure heuristics. Our software tool is freely available as part of the LISA library. PMID:19208162

  13. Active control of transmission loss with smart foams.

    PubMed

    Kundu, Abhishek; Berry, Alain

    2011-02-01

    Smart foams combine the complementary advantages of a passive foam material and a spatially distributed piezoelectric actuator embedded in it for active noise control applications. In this paper, the problem of improving the transmission loss of smart foams using active control strategies has been investigated both numerically and experimentally inside a waveguide under the condition of plane wave propagation. The finite element simulation of a coupled noise control system has been undertaken with three different smart foam designs, and their effectiveness in cancelling the transmitted wave downstream of the smart foam has been studied. The simulation results provide insight into the physical phenomenon of active noise cancellation and explain the impact of the smart foam designs on the optimal active control results. Experimental studies aimed at implementing real-time control for transmission loss optimization have been performed using the classical single-input/single-output filtered-reference least mean squares algorithm. The active control results with broadband and single frequency primary source inputs demonstrate a good improvement in the transmission loss of the smart foams. The study gives a comparative description of the transmission and absorption control problems in light of the modification of the vibration response of the piezoelectric actuator under active control.
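
    A single-channel sketch of the filtered-reference (filtered-x) LMS controller mentioned above is given below; the primary and secondary paths are short invented FIR filters rather than measured waveguide responses, and a perfect secondary-path model is assumed.

      import numpy as np

      rng = np.random.default_rng(0)

      primary = np.array([0.0, 0.6, 0.3, 0.1])        # noise source -> error microphone (illustrative)
      secondary = np.array([0.0, 0.5, 0.2])           # actuator -> error microphone (illustrative)
      sec_model = secondary.copy()                    # assume a perfect secondary-path model

      n, L, mu = 20_000, 16, 0.01
      x = rng.normal(size=n)                          # reference signal (broadband noise)
      w = np.zeros(L)                                 # adaptive control filter
      x_buf = np.zeros(L)
      xf_buf = np.zeros(L)
      y_buf = np.zeros(len(secondary))
      err = np.zeros(n)

      d = np.convolve(x, primary)[:n]                 # disturbance at the error sensor
      xf = np.convolve(x, sec_model)[:n]              # filtered reference

      for k in range(n):
          x_buf = np.r_[x[k], x_buf[:-1]]
          xf_buf = np.r_[xf[k], xf_buf[:-1]]
          y = w @ x_buf                               # control signal to the actuator
          y_buf = np.r_[y, y_buf[:-1]]
          err[k] = d[k] + secondary @ y_buf           # residual noise at the microphone
          w -= mu * err[k] * xf_buf                   # FxLMS weight update

      print("mean-square error, first vs last 1000 samples:",
            float(np.mean(err[:1000] ** 2)), float(np.mean(err[-1000:] ** 2)))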

  14. Probabilistic numerics and uncertainty in computations

    PubMed Central

    Hennig, Philipp; Osborne, Michael A.; Girolami, Mark

    2015-01-01

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations. PMID:26346321

  15. Probabilistic numerics and uncertainty in computations.

    PubMed

    Hennig, Philipp; Osborne, Michael A; Girolami, Mark

    2015-07-08

    We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. Such uncertainties, arising from the loss of precision induced by numerical calculation with limited time or hardware, are important for much contemporary science and industry. Within applications such as climate science and astrophysics, the need to make decisions on the basis of computations with large and complex data have led to a renewed focus on the management of numerical uncertainty. We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. We then show that the probabilistic view suggests new algorithms that can flexibly be adapted to suit application specifics, while delivering improved empirical performance. We provide concrete illustrations of the benefits of probabilistic numeric algorithms on real scientific problems from astrometry and astronomical imaging, while highlighting open problems with these new algorithms. Finally, we describe how probabilistic numerical methods provide a coherent framework for identifying the uncertainty in calculations performed with a combination of numerical algorithms (e.g. both numerical optimizers and differential equation solvers), potentially allowing the diagnosis (and control) of error sources in computations.

  16. Preliminary Work for Examining the Scalability of Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Clouse, Jeff

    1998-01-01

    Researchers began studying automated agents that learn to perform multiple-step tasks early in the history of artificial intelligence (Samuel, 1963; Samuel, 1967; Waterman, 1970; Fikes, Hart & Nilsonn, 1972). Multiple-step tasks are tasks that can only be solved via a sequence of decisions, such as control problems, robotics problems, classic problem-solving, and game-playing. The objective of agents attempting to learn such tasks is to use the resources they have available in order to become more proficient at the tasks. In particular, each agent attempts to develop a good policy, a mapping from states to actions, that allows it to select actions that optimize a measure of its performance on the task; for example, reducing the number of steps necessary to complete the task successfully. Our study focuses on reinforcement learning, a set of learning techniques where the learner performs trial-and-error experiments in the task and adapts its policy based on the outcome of those experiments. Much of the work in reinforcement learning has focused on a particular, simple representation, where every problem state is represented explicitly in a table, and associated with each state are the actions that can be chosen in that state. A major advantage of this table lookup representation is that one can prove that certain reinforcement learning techniques will develop an optimal policy for the current task. The drawback is that the representation limits the application of reinforcement learning to multiple-step tasks with relatively small state-spaces. There has been a little theoretical work that proves that convergence to optimal solutions can be obtained when using generalization structures, but the structures are quite simple. The theory says little about complex structures, such as multi-layer, feedforward artificial neural networks (Rumelhart & McClelland, 1986), but empirical results indicate that the use of reinforcement learning with such structures is promising. These empirical results make no theoretical claims, nor compare the policies produced to optimal policies. A goal of our work is to be able to make the comparison between an optimal policy and one stored in an artificial neural network. A difficulty of performing such a study is finding a multiple-step task that is small enough that one can find an optimal policy using table lookup, yet large enough that, for practical purposes, an artificial neural network is really required. We have identified a limited form of the game OTHELLO as satisfying these requirements. The work we report here is in the very preliminary stages of research, but this paper provides background for the problem being studied and a description of our initial approach to examining the problem. In the remainder of this paper, we first describe reinforcement learning in more detail. Next, we present the game OTHELLO. Finally we argue that a restricted form of the game meets the requirements of our study, and describe our preliminary approach to finding an optimal solution to the problem.
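
    The table-lookup representation discussed above can be made concrete with tabular Q-learning on a tiny grid-world stand-in (not the restricted OTHELLO task of this report); every state-action value has its own table entry, which is exactly what limits the approach to small state spaces.

      import numpy as np

      rng = np.random.default_rng(0)

      # Tabular Q-learning on a tiny 1-D grid: states 0..5, goal at state 5,
      # actions 0 = left, 1 = right.
      n_states, n_actions, goal = 6, 2, 5
      Q = np.zeros((n_states, n_actions))
      alpha, gamma, eps, episodes = 0.1, 0.95, 0.1, 2000

      for _ in range(episodes):
          s = 0
          while s != goal:
              a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
              s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
              r = 1.0 if s_next == goal else -0.01          # small step cost, reward at goal
              Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
              s = s_next

      print("greedy policy (0=left, 1=right):", np.argmax(Q, axis=1)[:goal])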

  17. Test-state approach to the quantum search problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sehrawat, Arun; Nguyen, Le Huy; Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore 117597

    2011-05-15

    The search for 'a quantum needle in a quantum haystack' is a metaphor for the problem of finding out which one of a permissible set of unitary mappings - the oracles - is implemented by a given black box. Grover's algorithm solves this problem with quadratic speedup as compared with the analogous search for 'a classical needle in a classical haystack'. Since the outcome of Grover's algorithm is probabilistic - it gives the correct answer with high probability, not with certainty - the answer requires verification. For this purpose we introduce specific test states, one for each oracle. These test states can also be used to realize 'a classical search for the quantum needle' which is deterministic - it always gives a definite answer after a finite number of steps - and 3.41 times as fast as the purely classical search. Since the test-state search and Grover's algorithm look for the same quantum needle, the average number of oracle queries of the test-state search is the classical benchmark for Grover's algorithm.

  18. Numerical Simulation of the Francis Turbine and CAD used to Optimized the Runner Design (2nd).

    NASA Astrophysics Data System (ADS)

    Sutikno, Priyono

    2010-06-01

    Hydro power is the most important renewable energy source on earth. The water is free of charge, and the generation of electric energy in a hydroelectric power station produces a negligible amount of greenhouse gases (mainly CO2). Hydro power generation stations are long-term installations that can be used for 50 years and more, so care must be taken to guarantee smooth and safe operation over the years. Maintenance is necessary, and critical parts of the machines have to be replaced when required. Within modern engineering, numerical flow simulation plays an important role in optimizing the hydraulic turbine in conjunction with the connected components of the plant. Especially for rehabilitation and upgrading of existing power plants, the important points of concern are to predict the power output of the turbine, to achieve maximum hydraulic efficiency, to avoid or minimize cavitation, and to avoid or minimize vibrations over the whole operating range. Flow simulation can help to solve operational problems and to optimize the turbomachinery for hydroelectric generating stations or their components through intuitive optimization, mathematical optimization, parametric design, the reduction of cavitation through design, prediction of the draft tube vortex, and troubleshooting by simulation. The classic design by graphic-analytical methods is cumbersome and cannot make evident the positive or negative aspects of the design options, so it became necessary to move from the classical design methods to an adequate design method using CAD software. The many options chosen during the design calculation at a specific design step can then be verified both as an ensemble and in detail. The final graphic post-processing is carried out only for the optimal solution, through a 3D representation of the runner as a whole for final approval of the geometric shape. In this article the redesign of the hydraulic turbine's runner, of medium-head Francis type, is investigated, with the following value for the most important parameter, the rated specific speed ns.

  19. Multi-point Adjoint-Based Design of Tilt-Rotors in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Acree, Cecil W.

    2014-01-01

    Optimization of tilt-rotor systems requires the consideration of performance at multiple design points. In the current study, an adjoint-based optimization of a tilt-rotor blade is considered. The optimization seeks to simultaneously maximize the rotorcraft figure of merit in hover and the propulsive efficiency in airplane-mode for a tilt-rotor system. The design is subject to minimum thrust constraints imposed at each design point. The rotor flowfields at each design point are cast as steady-state problems in a noninertial reference frame. Geometric design variables used in the study to control blade shape include: thickness, camber, twist, and taper represented by as many as 123 separate design variables. Performance weighting of each operational mode is considered in the formulation of the composite objective function, and a build up of increasing geometric degrees of freedom is used to isolate the impact of selected design variables. In all cases considered, the resulting designs successfully increase both the hover figure of merit and the airplane-mode propulsive efficiency for a rotor designed with classical techniques.

  20. The q-G method : A q-version of the Steepest Descent method for global optimization.

    PubMed

    Soterroni, Aline C; Galski, Roberto L; Scarabello, Marluce C; Ramos, Fernando M

    2015-01-01

    In this work, the q-Gradient (q-G) method, a q-version of the Steepest Descent method, is presented. The main idea behind the q-G method is the use of the negative of the q-gradient vector of the objective function as the search direction. The q-gradient vector, or simply the q-gradient, is a generalization of the classical gradient vector based on the concept of Jackson's derivative from the q-calculus. Its use provides the algorithm with an effective mechanism for escaping from local minima. The q-G method reduces to the Steepest Descent method when the parameter q tends to 1. The algorithm has three free parameters and it is implemented so that the search process gradually shifts from global exploration in the beginning to local exploitation in the end. We evaluated the q-G method on 34 test functions, and compared its performance with 34 optimization algorithms, including derivative-free algorithms and the Steepest Descent method. Our results show that the q-G method is competitive and has great potential for solving multimodal optimization problems.
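
    A minimal sketch of the idea, assuming the one-dimensional form of Jackson's derivative, D_q f(x) = (f(qx) - f(x)) / ((q - 1)x), applied coordinate-wise; the schedule that moves q toward 1 and all parameter values are illustrative rather than the authors' exact settings.

        import numpy as np

        def q_partial(f, x, i, q):
            """Jackson q-derivative of f along coordinate i: (f(..., q*x_i, ...) - f(x)) / ((q - 1) * x_i)."""
            if np.isclose(x[i], 0.0) or np.isclose(q, 1.0):
                h = 1e-8                                    # fall back to a classical finite difference
                xp = x.copy(); xp[i] += h
                return (f(xp) - f(x)) / h
            xq = x.copy(); xq[i] *= q
            return (f(xq) - f(x)) / ((q - 1.0) * x[i])

        def q_gradient_descent(f, x0, steps=500, lr=0.05, q0=2.0):
            x = np.asarray(x0, dtype=float)
            for k in range(steps):
                q = 1.0 + (q0 - 1.0) * (1.0 - k / steps)    # shift from global exploration (q far from 1) to local search
                grad = np.array([q_partial(f, x, i, q) for i in range(x.size)])
                x = x - lr * grad                           # move along the negative q-gradient
            return x

        # example on a multimodal test function
        rastrigin = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
        print(q_gradient_descent(rastrigin, x0=[3.2, -2.7]))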

  1. Optimal management of a stochastically varying population when policy adjustment is costly.

    PubMed

    Boettiger, Carl; Bode, Michael; Sanchirico, James N; Lariviere, Jacob; Hastings, Alan; Armsworth, Paul R

    2016-04-01

    Ecological systems are dynamic and policies to manage them need to respond to that variation. However, policy adjustments will sometimes be costly, which means that fine-tuning a policy to track variability in the environment very tightly will only sometimes be worthwhile. We use a classic fisheries management problem, how to manage a stochastically varying population using annually varying quotas in order to maximize profit, to examine how costs of policy adjustment change optimal management recommendations. Costs of policy adjustment (changes in fishing quotas through time) could take different forms. For example, these costs may respond to the size of the change being implemented, or there could be a fixed cost any time a quota change is made. We show how different forms of policy costs have contrasting implications for optimal policies. Though it is frequently assumed that costs to adjusting policies will dampen variation in the policy, we show that certain cost structures can actually increase variation through time. We further show that failing to account for adjustment costs has a consistently worse economic impact than would assuming these costs are present when they are not.

  2. Teaching Classic Probability Problems With Modern Digital Tools

    ERIC Educational Resources Information Center

    Abramovich, Sergei; Nikitin, Yakov Yu.

    2017-01-01

    This article is written to share teaching ideas about using commonly available computer applications--a spreadsheet, "The Geometer's Sketchpad", and "Wolfram Alpha"--to explore three classic and historically significant problems from the probability theory. These ideas stem from the authors' work with prospective economists,…

  3. Improved color constancy in honey bees enabled by parallel visual projections from dorsal ocelli.

    PubMed

    Garcia, Jair E; Hung, Yu-Shan; Greentree, Andrew D; Rosa, Marcello G P; Endler, John A; Dyer, Adrian G

    2017-07-18

    How can a pollinator, like the honey bee, perceive the same colors on visited flowers, despite continuous and rapid changes in ambient illumination and background color? A hundred years ago, von Kries proposed an elegant solution to this problem, color constancy, which is currently incorporated in many imaging and technological applications. However, empirical evidence on how this method can operate on animal brains remains tenuous. Our mathematical modeling proposes that the observed spectral tuning of simple ocellar photoreceptors in the honey bee allows for the necessary input for an optimal color constancy solution to most natural light environments. The model is fully supported by our detailed description of a neural pathway allowing for the integration of signals originating from the ocellar photoreceptors to the information processing regions in the bee brain. These findings reveal a neural implementation to the classic color constancy problem that can be easily translated into artificial color imaging systems.
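
    For readers unfamiliar with the von Kries mechanism mentioned above, the following sketch shows its classical diagonal (channel-scaling) form on an RGB triplet; the illuminant estimate and reference white are illustrative assumptions, and the bee-specific ocellar model of the paper is not reproduced here.

        import numpy as np

        def von_kries_correct(rgb, illuminant_estimate, reference_white=(1.0, 1.0, 1.0)):
            """Diagonal (von Kries) adaptation: rescale each channel by reference_white / illuminant."""
            gains = np.asarray(reference_white, dtype=float) / np.asarray(illuminant_estimate, dtype=float)
            return np.clip(np.asarray(rgb, dtype=float) * gains, 0.0, 1.0)

        patch = [[0.40, 0.30, 0.20]]                      # a surface seen under a warm illuminant
        print(von_kries_correct(patch, illuminant_estimate=(1.0, 0.8, 0.6)))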

  4. A variational eigenvalue solver on a photonic quantum processor

    PubMed Central

    Peruzzo, Alberto; McClean, Jarrod; Shadbolt, Peter; Yung, Man-Hong; Zhou, Xiao-Qi; Love, Peter J.; Aspuru-Guzik, Alán; O’Brien, Jeremy L.

    2014-01-01

    Quantum computers promise to efficiently solve important problems that are intractable on a conventional computer. For quantum systems, where the physical dimension grows exponentially, finding the eigenvalues of certain operators is one such intractable problem and remains a fundamental challenge. The quantum phase estimation algorithm efficiently finds the eigenvalue of a given eigenvector but requires fully coherent evolution. Here we present an alternative approach that greatly reduces the requirements for coherent evolution and combine this method with a new approach to state preparation based on ansätze and classical optimization. We implement the algorithm by combining a highly reconfigurable photonic quantum processor with a conventional computer. We experimentally demonstrate the feasibility of this approach with an example from quantum chemistry—calculating the ground-state molecular energy for He–H+. The proposed approach drastically reduces the coherence time requirements, enhancing the potential of quantum resources available today and in the near future. PMID:25055053

  5. Zero-temperature quantum annealing bottlenecks in the spin-glass phase.

    PubMed

    Knysh, Sergey

    2016-08-05

    A promising approach to solving hard binary optimization problems is quantum adiabatic annealing in a transverse magnetic field. An instantaneous ground state-initially a symmetric superposition of all possible assignments of N qubits-is closely tracked as it becomes more and more localized near the global minimum of the classical energy. Regions where the energy gap to excited states is small (for instance at the phase transition) are the algorithm's bottlenecks. Here I show how for large problems the complexity becomes dominated by O(log N) bottlenecks inside the spin-glass phase, where the gap scales as a stretched exponential. For smaller N, only the gap at the critical point is relevant, where it scales polynomially, as long as the phase transition is second order. This phenomenon is demonstrated rigorously for the two-pattern Gaussian Hopfield model. Qualitative comparison with the Sherrington-Kirkpatrick model leads to similar conclusions.

  6. Hybrid genetic algorithm in the Hopfield network for maximum 2-satisfiability problem

    NASA Astrophysics Data System (ADS)

    Kasihmuddin, Mohd Shareduwan Mohd; Sathasivam, Saratha; Mansor, Mohd. Asyraf

    2017-08-01

    Heuristic methods are designed to find optimal solutions more quickly than classical methods, which can be too complex to apply. In this study, a hybrid approach that utilizes a Hopfield network and a genetic algorithm for the maximum 2-satisfiability problem (MAX-2SAT) is proposed. The Hopfield neural network is used to minimize logical inconsistency in interpretations of logic clauses or programs. The genetic algorithm (GA) exploits the idea of recombination to reproduce better solutions. Simulations with and without the genetic algorithm were carried out using Microsoft Visual 2013 C++ Express software. The performance of both search techniques on MAX-2SAT was evaluated based on the global minima ratio, the ratio of satisfied clauses, and computation time. The results obtained from the computer simulation demonstrate the effectiveness and acceleration features of the genetic algorithm for MAX-2SAT in the Hopfield network.

  7. Newsvendor problem under complete uncertainty: a case of innovative products.

    PubMed

    Gaspars-Wieloch, Helena

    2017-01-01

    The paper presents a new scenario-based decision rule for the classical version of the newsvendor problem (NP) under complete uncertainty (i.e. uncertainty with unknown probabilities). So far, NP has been analyzed under uncertainty with known probabilities or under uncertainty with partial information (probabilities known incompletely). The novel approach is designed for the sale of new, innovative products, where it is quite complicated to define probabilities or even probability-like quantities, because there are no data available for forecasting the upcoming demand via statistical analysis. The new procedure described in the contribution is based on a hybrid of Hurwicz and Bayes decision rules. It takes into account the decision maker's attitude towards risk (measured by coefficients of optimism and pessimism) and the dispersion (asymmetry, range, frequency of extreme values) of payoffs connected with particular order quantities. It does not require any information about the probability distribution.
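
    The hybrid Hurwicz-Bayes rule itself is not spelled out in the abstract, but its Hurwicz ingredient can be illustrated in a few lines; the payoff construction, the scenario set and the optimism coefficient below are assumptions made purely for illustration.

        def hurwicz_order(quantities, demand_scenarios, price, cost, alpha=0.6):
            """Pick the order quantity maximizing alpha * best payoff + (1 - alpha) * worst payoff."""
            def payoff(qty, demand):
                return price * min(qty, demand) - cost * qty      # classical newsvendor payoff
            def score(qty):
                payoffs = [payoff(qty, d) for d in demand_scenarios]
                return alpha * max(payoffs) + (1 - alpha) * min(payoffs)
            return max(quantities, key=score)

        print(hurwicz_order(quantities=range(0, 101, 10),
                            demand_scenarios=[20, 45, 60, 90],    # scenarios carry no probabilities
                            price=12.0, cost=7.0))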

  8. Shape optimization and CAD

    NASA Technical Reports Server (NTRS)

    Rasmussen, John

    1990-01-01

    Structural optimization has attracted attention since the days of Galileo. Olhoff and Taylor have produced an excellent overview of the classical research within this field. However, the interest in structural optimization has increased greatly during the last decade due to the advent of reliable general numerical analysis methods and the computer power necessary to use them efficiently. This has created the possibility of developing general numerical systems for shape optimization. Several authors, e.g., Esping; Braibant & Fleury; Bennet & Botkin; Botkin, Yang, and Bennet; and Stanton have published practical and successful applications of general optimization systems. Ding and Homlein have produced extensive overviews of available systems. Furthermore, a number of commercial optimization systems based on well-established finite element codes have been introduced. Systems like ANSYS, IDEAS, OASIS, and NISAOPT are widely known examples. In parallel to this development, the technology of computer aided design (CAD) has gained a large influence on the design process of mechanical engineering. The CAD technology has already lived through a rapid development driven by the drastically growing capabilities of digital computers. However, the systems of today are still considered to be only the first generation of a long line of computer integrated manufacturing (CIM) systems. These systems to come will offer an integrated environment for design, analysis, and fabrication of products of almost any character. Thus, the CAD system could be regarded as simply a database for geometrical information equipped with a number of tools with the purpose of helping the user in the design process. Among these tools are facilities for structural analysis and optimization as well as present standard CAD features like drawing, modeling, and visualization tools. The state of the art of structural optimization is that a large number of mathematical and mechanical techniques are available for the solution of single problems. By implementing collections of the available techniques into general software systems, operational environments for structural optimization have been created. The forthcoming years must bring solutions to the problem of integrating such systems into more general design environments. The result of this work should be CAD systems for rational design in which structural optimization is one important design tool among many others.

  9. Classical problems in computational aero-acoustics

    NASA Technical Reports Server (NTRS)

    Hardin, Jay C.

    1996-01-01

    In relation to the expected problems in the development of computational aeroacoustics (CAA), the preliminary applications were to classical problems where the known analytical solutions could be used to validate the numerical results. Such comparisons were used to overcome the numerical problems inherent in these calculations. Comparisons were made between the various numerical approaches to the problems such as direct simulations, acoustic analogies and acoustic/viscous splitting techniques. The aim was to demonstrate the applicability of CAA as a tool in the same class as computational fluid dynamics. The scattering problems that occur are considered and simple sources are discussed.

  10. The Associations of Naturalistic Classic Psychedelic Use, Mystical Experience, and Creative Problem Solving.

    PubMed

    Sweat, Noah W; Bates, Larry W; Hendricks, Peter S

    2016-01-01

    Developing methods for improving creativity is of broad interest. Classic psychedelics may enhance creativity; however, the underlying mechanisms of action are unknown. This study was designed to assess whether a relationship exists between naturalistic classic psychedelic use and heightened creative problem-solving ability and if so, whether this is mediated by lifetime mystical experience. Participants (N = 68) completed a survey battery assessing lifetime mystical experience and circumstances surrounding the most memorable experience. They were then administered a functional fixedness task in which faster completion times indicate greater creative problem-solving ability. Participants reporting classic psychedelic use concurrent with mystical experience (n = 11) exhibited significantly faster times on the functional fixedness task (Cohen's d = -.87; large effect) and significantly greater lifetime mystical experience (Cohen's d = .93; large effect) than participants not reporting classic psychedelic use concurrent with mystical experience. However, lifetime mystical experience was unrelated to completion times on the functional fixedness task (standardized β = -.06), and was therefore not a significant mediator. Classic psychedelic use may increase creativity independent of its effects on mystical experience. Maximizing the likelihood of mystical experience may need not be a goal of psychedelic interventions designed to boost creativity.

  11. Real-time Collision Avoidance and Path Optimizer for Semi-autonomous UAVs.

    NASA Astrophysics Data System (ADS)

    Hawary, A. F.; Razak, N. A.

    2018-05-01

    Whilst a UAV offers a potentially cheaper and more localized observation platform than current satellite or land-based approaches, it requires an advanced path planner to reveal its true potential, particularly in real-time missions. Manual control by a human has a limited line of sight and is prone to errors due to carelessness and fatigue. A good alternative is to equip the UAV with semi-autonomous capabilities so that it can navigate along a pre-planned route in real time. In this paper, we propose an easy and practical path optimizer based on the classical Travelling Salesman Problem that adopts a brute-force search method to re-optimize the route in the event of collisions detected by a range-finder sensor. The former utilizes a Simple Genetic Algorithm and the latter uses the Nearest Neighbour algorithm. Both algorithms are combined to optimize the route and avoid collisions at once. Although many researchers have proposed various path-planning algorithms, we find that they are difficult to integrate on a basic UAV model and often lack a real-time collision-detection optimizer. Therefore, we explore the practical benefit of this approach using on-board Arduino and Ardupilot controllers by manually emulating the motion of an actual UAV model prior to tests at the flying site. The results show that the range-finder sensor provides real-time data to the algorithm to find a collision-free path and eventually optimize the route successfully.
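
    As a hedged sketch of the re-optimization stage, here is a plain Nearest Neighbour tour builder over waypoints; the coordinates and the Euclidean cost are illustrative assumptions, and the genetic-algorithm stage that produces the initial route is not shown.

        import math

        def nearest_neighbour_route(waypoints, start=0):
            """Greedy tour: repeatedly fly to the closest unvisited waypoint."""
            dist = lambda a, b: math.dist(waypoints[a], waypoints[b])
            unvisited = set(range(len(waypoints))) - {start}
            route = [start]
            while unvisited:
                nxt = min(unvisited, key=lambda j: dist(route[-1], j))
                route.append(nxt)
                unvisited.remove(nxt)
            return route

        print(nearest_neighbour_route([(0, 0), (4, 1), (1, 5), (6, 6), (2, 2)]))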

  12. Gossip algorithms in quantum networks

    NASA Astrophysics Data System (ADS)

    Siomau, Michael

    2017-01-01

    Gossip algorithms is a common term describing protocols for unreliable information dissemination in natural networks, which are not optimally designed for efficient communication between network entities. We consider the application of gossip algorithms to quantum networks and show that any quantum network can be updated to an optimal configuration with local operations and classical communication. This allows the quantum information dissemination to be sped up - in the best case exponentially. Irrespective of the initial configuration of the quantum network, the update requires at most a polynomial number of local operations and classical communication.

  13. Classical Trajectories and Quantum Spectra

    NASA Technical Reports Server (NTRS)

    Mielnik, Bogdan; Reyes, Marco A.

    1996-01-01

    A classical model of the Schrödinger wave packet is considered. The problem of finding the energy levels corresponds to a classical manipulation game. It leads to an approximate but non-perturbative method of finding the eigenvalues, exploring the bifurcations of classical trajectories. The role of squeezing turns out to be decisive in the generation of the discrete spectra.

  14. Is First-Order Vector Autoregressive Model Optimal for fMRI Data?

    PubMed

    Ting, Chee-Ming; Seghouane, Abd-Krim; Khalid, Muhammad Usman; Salleh, Sh-Hussain

    2015-09-01

    We consider the problem of selecting the optimal orders of vector autoregressive (VAR) models for fMRI data. Many previous studies used model order of one and ignored that it may vary considerably across data sets depending on different data dimensions, subjects, tasks, and experimental designs. In addition, the classical information criteria (IC) used (e.g., the Akaike IC (AIC)) are biased and inappropriate for the high-dimensional fMRI data typically with a small sample size. We examine the mixed results on the optimal VAR orders for fMRI, especially the validity of the order-one hypothesis, by a comprehensive evaluation using different model selection criteria over three typical data types--a resting state, an event-related design, and a block design data set--with varying time series dimensions obtained from distinct functional brain networks. We use a more balanced criterion, Kullback's IC (KIC) based on Kullback's symmetric divergence combining two directed divergences. We also consider the bias-corrected versions (AICc and KICc) to improve VAR model selection in small samples. Simulation results show better small-sample selection performance of the proposed criteria over the classical ones. Both bias-corrected ICs provide more accurate and consistent model order choices than their biased counterparts, which suffer from overfitting, with KICc performing the best. Results on real data show that orders greater than one were selected by all criteria across all data sets for the small to moderate dimensions, particularly from small, specific networks such as the resting-state default mode network and the task-related motor networks, whereas low orders close to one but not necessarily one were chosen for the large dimensions of full-brain networks.
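
    A minimal sketch of order selection by information criteria, assuming an ordinary least-squares VAR fit and the standard AIC/AICc forms based on ln|Σ̂| plus a penalty; the toy data, the maximum order and the exact penalty constants are illustrative, and the paper's KIC/KICc variants are not reproduced here.

        import numpy as np

        def fit_var(y, p):
            """Least-squares VAR(p): regress y_t on [1, y_{t-1}, ..., y_{t-p}]; return the residual covariance."""
            T, d = y.shape
            X = np.hstack([np.ones((T - p, 1))] + [y[p - k - 1:T - k - 1] for k in range(p)])
            Y = y[p:]
            B, *_ = np.linalg.lstsq(X, Y, rcond=None)
            resid = Y - X @ B
            return resid.T @ resid / (T - p)

        def select_order(y, p_max=10):
            T, d = y.shape
            scores = {}
            for p in range(1, p_max + 1):
                sigma = fit_var(y, p)
                k = p * d * d                                    # number of autoregressive coefficients
                n = T - p
                _, logdet = np.linalg.slogdet(sigma)
                aic = logdet + 2 * k / n
                aicc = aic + 2 * k * (k + 1) / max(n - k - 1, 1)  # small-sample correction
                scores[p] = (aic, aicc)
            return min(scores, key=lambda p: scores[p][1]), scores

        rng = np.random.default_rng(0)
        y = rng.standard_normal((200, 3)).cumsum(axis=0)          # toy 3-dimensional series
        print(select_order(y, p_max=5)[0])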

  15. Representational Realism, Closed Theories and the Quantum to Classical Limit

    NASA Astrophysics Data System (ADS)

    de Ronde, Christian

    In this chapter, we discuss the representational realist stance as a pluralistontic approach to inter-theoretic relationships. Our stance stresses the fact that physical theories require the necessary consideration of a conceptual level of discourse which determines and configures the specific field of phenomena discussed by each particular theory. We will criticize the orthodox line of research which has grounded the analysis about QM in two (Bohrian) metaphysical presuppositions - accepted in the present as dogmas that all interpretations must follow. We will also examine how the orthodox project of "bridging the gap" between the quantum and the classical domains has constrained the possibilities of research, producing only a limited set of interpretational problems which only focus in the justification of "classical reality" and exclude the possibility of analyzing the possibilities of non-classical conceptual representations of QM. The representational realist stance introduces two new problems, namely, the superposition problem and the contextuality problem, which consider explicitly the conceptual representation of orthodox QM beyond the mere reference to mathematical structures and measurement outcomes. In the final part of the chapter, we revisit, from representational realist perspective, the quantum to classical limit and the orthodox claim that this inter-theoretic relation can be explained through the principle of decoherence.

  16. The behavior of bouncing disks and pizza tossing

    NASA Astrophysics Data System (ADS)

    Liu, K.-C.; Friend, J.; Yeo, L.

    2009-03-01

    We investigate the dynamics of a disk bouncing on a vibrating platform - a variation of the classic bouncing ball problem - that captures the physics of pizza tossing and the operation of certain standing-wave ultrasonic motors (SWUMs). The system's dynamics explains why certain tossing motions are used by dough-toss performers for different tricks: a helical trajectory is used in single tosses because it maximizes energy efficiency and the dough's airborne rotational speed, while a semi-elliptical motion is used in multiple tosses because it is easier for maintaining dough rotation at the maximum rotational speed. The system's bifurcation diagram and basins of attraction also inform SWUM designers about the optimal design for high speed and minimal sensitivity to perturbation.

  17. Fully Dynamic Bin Packing

    NASA Astrophysics Data System (ADS)

    Ivković, Zoran; Lloyd, Errol L.

    Classic bin packing seeks to pack a given set of items of possibly varying sizes into a minimum number of identical sized bins. A number of approximation algorithms have been proposed for this NP-hard problem for both the on-line and off-line cases. In this chapter we discuss fully dynamic bin packing, where items may arrive (Insert) and depart (Delete) dynamically. In accordance with standard practice for fully dynamic algorithms, it is assumed that the packing may be arbitrarily rearranged to accommodate arriving and departing items. The goal is to maintain an approximately optimal solution of provably high quality in a total amount of time comparable to that used by an off-line algorithm delivering a solution of the same quality.
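
    For contrast with the fully dynamic setting discussed here, a classic baseline that handles only Insert is First Fit, sketched below; the item sizes and bin capacity are illustrative.

        def first_fit(items, capacity=1.0):
            """Place each arriving item into the first bin with enough residual capacity."""
            bins = []                                    # each entry is the remaining capacity of a bin
            packing = []                                 # bin index assigned to each item
            for size in items:
                for i, free in enumerate(bins):
                    if size <= free + 1e-12:
                        bins[i] -= size
                        packing.append(i)
                        break
                else:
                    bins.append(capacity - size)         # open a new bin
                    packing.append(len(bins) - 1)
            return packing, len(bins)

        print(first_fit([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1]))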

  18. A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search

    NASA Astrophysics Data System (ADS)

    Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    Nonlinear conjugate gradient (CG) methods play an important role in solving large-scale unconstrained optimization problems. These methods are widely used due to their simplicity and are known to possess the sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested based on the number of iterations and central processing unit (CPU) time, using MATLAB software on an Intel Core i7-3470 CPU. Numerical experimental results show that the new βk converges rapidly compared to other classical CG methods.
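
    The proposed βk formula is not given in the abstract, so the sketch below uses the classical Fletcher-Reeves coefficient as a stand-in inside an otherwise standard nonlinear CG loop with SciPy's strong-Wolfe line search; the test function and the fallback step are illustrative, and the new coefficient would replace the marked line.

        import numpy as np
        from scipy.optimize import line_search

        def nonlinear_cg(f, grad, x0, max_iter=200, tol=1e-6):
            x = np.asarray(x0, dtype=float)
            g = grad(x)
            d = -g
            for _ in range(max_iter):
                if np.linalg.norm(g) < tol:
                    break
                alpha = line_search(f, grad, x, d)[0]     # step length satisfying the (strong) Wolfe conditions
                if alpha is None:
                    alpha = 1e-4                          # crude fallback if the line search fails
                x_new = x + alpha * d
                g_new = grad(x_new)
                beta = g_new @ g_new / (g @ g)            # Fletcher-Reeves; replace with the proposed beta_k here
                d = -g_new + beta * d
                x, g = x_new, g_new
            return x

        rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
        rosen_grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                         200 * (x[1] - x[0]**2)])
        print(nonlinear_cg(rosenbrock, rosen_grad, [-1.2, 1.0]))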

  19. Advection modes by optimal mass transfer

    NASA Astrophysics Data System (ADS)

    Iollo, Angelo; Lombardi, Damiano

    2014-02-01

    Classical model reduction techniques approximate the solution of a physical model by a limited number of global modes. These modes are usually determined by variants of principal component analysis. Global modes can lead to reduced models that perform well in terms of stability and accuracy. However, when the physics of the model is mainly characterized by advection, the nonlocal representation of the solution by global modes essentially reduces to a Fourier expansion. In this paper we describe a method to determine a low-order representation of advection. This method is based on the solution of Monge-Kantorovich mass transfer problems. Examples of application to point vortex scattering, Korteweg-de Vries equation, and hurricane Dean advection are discussed.

  20. Classical conditioning for preserving the effects of short melatonin treatment in children with delayed sleep: a pilot study.

    PubMed

    van Maanen, Annette; Meijer, Anne Marie; Smits, Marcel G; Oort, Frans J

    2017-01-01

    Melatonin treatment is effective in treating sleep onset problems in children with delayed melatonin onset, but effects usually disappear when treatment is discontinued. In this pilot study, we investigated whether classical conditioning might help in preserving treatment effects of melatonin in children with sleep onset problems, with and without comorbid attention deficit hyperactivity disorder (ADHD) or autism. After a baseline week, 16 children (mean age: 9.92 years, 31% ADHD/autism) received melatonin treatment for 3 weeks and then gradually discontinued the treatment. Classical conditioning was applied by having children drink organic lemonade while taking melatonin and by using a dim red light lamp that was turned on when children went to bed. Results were compared with a group of 41 children (mean age: 9.43 years, 34% ADHD/autism) who received melatonin without classical conditioning. Melatonin treatment was effective in advancing dim light melatonin onset and reducing sleep onset problems, and positive effects were found on health and behavior problems. After stopping melatonin, sleep returned to baseline levels. We found that for children without comorbidity in the experimental group, sleep latency and sleep start delayed less in the stop week, which suggests an effect of classical conditioning. However, classical conditioning seems counterproductive in children with ADHD or autism. Further research is needed to establish these results and to examine other ways to preserve melatonin treatment effects, for example, by applying morning light.

  1. Completing the physical representation of quantum algorithms provides a retrocausal explanation of the speedup

    NASA Astrophysics Data System (ADS)

    Castagnoli, Giuseppe

    2017-05-01

    The usual representation of quantum algorithms, limited to the process of solving the problem, is physically incomplete as it lacks the initial measurement. We extend it to the process of setting the problem. An initial measurement selects a problem setting at random, and a unitary transformation sends it into the desired setting. The extended representation must be with respect to Bob, the problem setter, and any external observer. It cannot be with respect to Alice, the problem solver. It would tell her the problem setting and thus the solution of the problem implicit in it. In the representation to Alice, the projection of the quantum state due to the initial measurement should be postponed until the end of the quantum algorithm. In either representation, there is a unitary transformation between the initial and final measurement outcomes. As a consequence, the final measurement of any ℛ-th part of the solution could select back in time a corresponding part of the random outcome of the initial measurement; the associated projection of the quantum state should be advanced by the inverse of that unitary transformation. This, in the representation to Alice, would tell her, before she begins her problem solving action, that part of the solution. The quantum algorithm should be seen as a sum over classical histories in each of which Alice knows in advance one of the possible ℛ-th parts of the solution and performs the oracle queries still needed to find it - this for the value of ℛ that explains the algorithm's speedup. We have a relation between retrocausality ℛ and the number of oracle queries needed to solve an oracle problem quantumly. All the oracle problems examined can be solved with any value of ℛ up to an upper bound attained by the optimal quantum algorithm. This bound is always in the vicinity of 1/2 . Moreover, ℛ =1/2 always provides the order of magnitude of the number of queries needed to solve the problem in an optimal quantum way. If this were true for any oracle problem, as plausible, it would solve the quantum query complexity problem.

  2. Optimizing Robinson Operator with Ant Colony Optimization As a Digital Image Edge Detection Method

    NASA Astrophysics Data System (ADS)

    Yanti Nasution, Tarida; Zarlis, Muhammad; K. M Nasution, Mahyuddin

    2017-12-01

    Edge detection serves to identify the boundaries of an object against a background with which it overlaps. One of the classic methods for edge detection is the Robinson operator. The Robinson operator produces thin, faint, gray edge lines. To overcome these deficiencies, an improved edge detection method is proposed that takes a graph-based approach using the Ant Colony Optimization algorithm; the improvements it can perform are thickening the edges and reconnecting broken edges. This research aims to optimize the Robinson operator with Ant Colony Optimization, compare the outputs, and infer the extent to which Ant Colony Optimization can improve the non-optimized edge detection result and the accuracy of Robinson edge detection. The parameters used in measuring edge detection performance are the morphology of the resulting edge lines, MSE, and PSNR. The results show that the combined Robinson and Ant Colony Optimization method produces images with sharper and thicker edges. The Ant Colony Optimization method can be used to optimize the Robinson operator, improving the Robinson detection result by an average of 16.77% over the classic Robinson result.
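
    A minimal sketch of the classical Robinson stage alone (before any Ant Colony Optimization refinement), assuming the standard eight compass masks obtained by rotating a north kernel in 45-degree steps; the toy image and threshold are illustrative.

        import numpy as np
        from scipy.ndimage import convolve

        NORTH = np.array([[1, 2, 1],
                          [0, 0, 0],
                          [-1, -2, -1]], dtype=float)

        def rotate_45(k):
            """Rotate the outer ring of a 3x3 kernel one step (a 45-degree compass rotation)."""
            ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
            vals = [k[r] for r in ring]
            out = k.copy()
            for pos, v in zip(ring, vals[-1:] + vals[:-1]):
                out[pos] = v
            return out

        def robinson_edges(image, threshold=None):
            """Edge magnitude = maximum response over the eight Robinson compass masks."""
            masks, m = [], NORTH
            for _ in range(8):
                masks.append(m)
                m = rotate_45(m)
            responses = np.stack([convolve(image.astype(float), k, mode='nearest') for k in masks])
            magnitude = responses.max(axis=0)
            return magnitude if threshold is None else (magnitude >= threshold).astype(np.uint8)

        img = np.zeros((16, 16)); img[:, 8:] = 1.0     # toy image with a vertical step edge
        print(robinson_edges(img, threshold=2.0).sum())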

  3. Structural optimization of dental restorations using the principle of adaptive growth.

    PubMed

    Couegnat, Guillaume; Fok, Siu L; Cooper, Jonathan E; Qualtrough, Alison J E

    2006-01-01

    In a restored tooth, the stresses that occur at the tooth-restoration interface during loading could become large enough to fracture the tooth and/or restoration and it has been estimated that 92% of fractured teeth have been previously restored. The tooth preparation process for a dental restoration is a classical optimization problem: tooth reduction must be minimized to preserve tooth tissue whilst stress levels must be kept low to avoid fracture of the restored unit. The objective of the present study was to derive alternative optimized designs for a second upper premolar cavity preparation by means of structural shape optimization based on the finite element method and biological adaptive growth. Three models of cavity preparations were investigated: an inlay design for preparation of a premolar tooth, an undercut cavity design and an onlay preparation. Three restorative materials and several tooth/restoration contact conditions were utilized to replicate the in vitro situation as closely as possible. The optimization process was run for each cavity geometry. Mathematical shape optimization based on biological adaptive growth process was successfully applied to tooth preparations for dental restorations. Significant reduction in stress levels at the tooth-restoration interface where bonding is imperfect was achieved using optimized cavity or restoration shapes. In the best case, the maximum stress value was reduced by more than 50%. Shape optimization techniques can provide an efficient and effective means of reducing the stresses in restored teeth and hence has the potential of prolonging their service lives. The technique can easily be adopted for optimizing other dental restorations.

  4. Brachistochrones with Loose Ends

    ERIC Educational Resources Information Center

    Mertens, Stephan; Mingramm, Sebastian

    2008-01-01

    The classical problem of the brachistochrone asks for the curve down which a body sliding from rest and accelerated by gravity will slip (without friction) from one point to another in least time. In undergraduate courses on classical mechanics, the solution of this problem is the primary example of the power of variational calculus. Here, we…

  5. A Concise Introduction to Colombeau Generalized Functions and Their Applications in Classical Electrodynamics

    ERIC Educational Resources Information Center

    Gsponer, Andre

    2009-01-01

    The objective of this introduction to Colombeau algebras of generalized functions (in which distributions can be freely multiplied) is to explain in elementary terms the essential concepts necessary for their application to basic nonlinear problems in classical physics. Examples are given in hydrodynamics and electrodynamics. The problem of the…

  6. An Explanatory, Transformation Geometry Proof of a Classic Treasure-Hunt Problem and Its Generalization

    ERIC Educational Resources Information Center

    de Villiers, Michael

    2017-01-01

    This paper discusses an interesting, classic problem that provides a nice classroom investigation for dynamic geometry, and which can easily be explained (proved) with transformation geometry. The deductive explanation (proof) provides insight into why it is true, leading to an immediate generalization, thus illustrating the discovery function of…

  7. Bertrand's theorem and virial theorem in fractional classical mechanics

    NASA Astrophysics Data System (ADS)

    Yu, Rui-Yan; Wang, Towe

    2017-09-01

    Fractional classical mechanics is the classical counterpart of fractional quantum mechanics. The central force problem in this theory is investigated. Bertrand's theorem is generalized, and the virial theorem is revisited, both in three spatial dimensions. In order to produce stable, closed, non-circular orbits, the inverse-square law and Hooke's law should be modified in fractional classical mechanics.

  8. Distributing Earthquakes Among California's Faults: A Binary Integer Programming Approach

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2016-12-01

    The statement of the problem is simple: given regional seismicity specified by a Gutenberg-Richter (G-R) relation, how are earthquakes distributed to match observed fault-slip rates? The objective is to determine the magnitude-frequency relation on individual faults. The California statewide G-R b-value and a-value are estimated from historical seismicity, with the a-value accounting for off-fault seismicity. UCERF3 consensus slip rates are used, based on geologic and geodetic data and including estimates of coupling coefficients. The binary integer programming (BIP) problem is set up such that each earthquake from a synthetic catalog spanning millennia can occur at any location along any fault. The decision vector, therefore, consists of binary variables, with values equal to one indicating the location of each earthquake that results in an optimal match of slip rates, in an L1-norm sense. Rupture area and slip associated with each earthquake are determined from a magnitude-area scaling relation. Uncertainty bounds on the UCERF3 slip rates provide explicit minimum and maximum constraints to the BIP model, with the former more important to feasibility of the problem. There is a maximum magnitude limit associated with each fault, based on fault length, providing an implicit constraint. Solution of integer programming problems with a large number of variables (>10^5 in this study) has been possible only since the late 1990s. In addition to the classic branch-and-bound technique used for these problems, several other algorithms have been recently developed, including pre-solving, sifting, cutting planes, heuristics, and parallelization. An optimal solution is obtained using a state-of-the-art BIP solver for M≥6 earthquakes and California's faults with slip-rates > 1 mm/yr. Preliminary results indicate a surprising diversity of on-fault magnitude-frequency relations throughout the state.

  9. Sleep Does Not Promote Solving Classical Insight Problems and Magic Tricks

    PubMed Central

    Schönauer, Monika; Brodt, Svenja; Pöhlchen, Dorothee; Breßmer, Anja; Danek, Amory H.; Gais, Steffen

    2018-01-01

    During creative problem solving, initial solution attempts often fail because of self-imposed constraints that prevent us from thinking out of the box. In order to solve a problem successfully, the problem representation has to be restructured by combining elements of available knowledge in novel and creative ways. It has been suggested that sleep supports the reorganization of memory representations, ultimately aiding problem solving. In this study, we systematically tested the effect of sleep and time on problem solving, using classical insight tasks and magic tricks. Solving these tasks explicitly requires a restructuring of the problem representation and may be accompanied by a subjective feeling of insight. In two sessions, 77 participants had to solve classical insight problems and magic tricks. The two sessions either occurred consecutively or were spaced 3 h apart, with the time in between spent either sleeping or awake. We found that sleep affected neither general solution rates nor the number of solutions accompanied by sudden subjective insight. Our study thus adds to accumulating evidence that sleep does not provide an environment that facilitates the qualitative restructuring of memory representations and enables problem solving. PMID:29535620

  10. Research on schedulers for astronomical observatories

    NASA Astrophysics Data System (ADS)

    Colome, Josep; Colomer, Pau; Guàrdia, Josep; Ribas, Ignasi; Campreciós, Jordi; Coiffard, Thierry; Gesa, Lluis; Martínez, Francesc; Rodler, Florian

    2012-09-01

    The main task of a scheduler applied to astronomical observatories is the time optimization of the facility and the maximization of the scientific return. Scheduling of astronomical observations is an example of the classical task allocation problem known as the job-shop problem (JSP), where N ideal tasks are assigned to M identical resources, while minimizing the total execution time. A problem of higher complexity, called the Flexible-JSP (FJSP), arises when the tasks can be executed by different resources, i.e. by different telescopes, and it focuses on determining a routing policy (i.e., which machine to assign for each operation) in addition to the traditional scheduling decisions (i.e., to determine the starting time of each operation). In most cases there is no single best approach to solve the planning problem and, therefore, various mathematical algorithms (Genetic Algorithms, Ant Colony Optimization algorithms, Multi-Objective Evolutionary algorithms, etc.) are usually considered to adapt the application to the system configuration and task execution constraints. The scheduling time-cycle is also an important ingredient to determine the best approach. A short-term scheduler, for instance, has to find a good solution with the minimum computation time, providing the system with the capability to adapt the selected task to varying execution constraints (i.e., environment conditions). We present in this contribution an analysis of the task allocation problem and the solutions currently in use at different astronomical facilities. We also describe the schedulers for three different projects (CTA, CARMENES and TJO) where the conclusions of this analysis are applied to develop a suitable routine.

  11. Brain-computer interaction research at the Computer Vision and Multimedia Laboratory, University of Geneva.

    PubMed

    Pun, Thierry; Alecu, Teodor Iulian; Chanel, Guillaume; Kronegg, Julien; Voloshynovskiy, Sviatoslav

    2006-06-01

    This paper describes the work being conducted in the domain of brain-computer interaction (BCI) at the Multimodal Interaction Group, Computer Vision and Multimedia Laboratory, University of Geneva, Geneva, Switzerland. The application focus of this work is on multimodal interaction rather than on rehabilitation, that is how to augment classical interaction by means of physiological measurements. Three main research topics are addressed. The first one concerns the more general problem of brain source activity recognition from EEGs. In contrast with classical deterministic approaches, we studied iterative robust stochastic based reconstruction procedures modeling source and noise statistics, to overcome known limitations of current techniques. We also developed procedures for optimal electroencephalogram (EEG) sensor system design in terms of placement and number of electrodes. The second topic is the study of BCI protocols and performance from an information-theoretic point of view. Various information rate measurements have been compared for assessing BCI abilities. The third research topic concerns the use of EEG and other physiological signals for assessing a user's emotional status.

  12. Mining chemical reactions using neighborhood behavior and condensed graphs of reactions approaches.

    PubMed

    de Luca, Aurélie; Horvath, Dragos; Marcou, Gilles; Solov'ev, Vitaly; Varnek, Alexandre

    2012-09-24

    This work addresses the problem of similarity search and classification of chemical reactions using Neighborhood Behavior (NB) and Condensed Graphs of Reaction (CGR) approaches. The CGR formalism represents chemical reactions as a classical molecular graph with dynamic bonds, enabling descriptor calculations on this graph. Different types of the ISIDA fragment descriptors generated for CGRs in combination with two metrics--Tanimoto and Euclidean--were considered as chemical spaces, to serve for reaction dissimilarity scoring. The NB method has been used to select an optimal combination of descriptors which distinguish different types of chemical reactions in a database containing 8544 reactions of 9 classes. Relevance of NB analysis has been validated in generic (multiclass) similarity search and in clustering with Self-Organizing Maps (SOM). NB-compliant sets of descriptors were shown to display enhanced mapping propensities, allowing the construction of better Self-Organizing Maps and similarity searches (NB and classical similarity search criteria--AUC ROC--correlate at a level of 0.7). The analysis of the SOM clusters proved chemically meaningful CGR substructures representing specific reaction signatures.

  13. Supernovae in Binary Systems: An Application of Classical Mechanics.

    ERIC Educational Resources Information Center

    Mitalas, R.

    1980-01-01

    Presents the supernova explosion in a binary system as an application of classical mechanics. This presentation is intended to illustrate the power of the equivalent one-body problem and provide undergraduate students with a variety of insights into elementary classical mechanics. (HM)

  14. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

    The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration methods require explicit knowledge of the point spread function (PSF) and a description of the noise as priors. However, this is not practical for much real image processing, so the recovery must be treated as a blind image restoration problem. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Due to differences in the PSF and noise energy, blurred images can be quite different, and it is difficult to achieve a good balance between proper assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. Least square support vector regression (LSSVR) has been proven to offer strong potential in estimation and forecasting problems. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for the support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantage of fast convergence to the global optimal solution. In the proposed method, the training samples are created by mapping a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training the LSSVR. The two parameters of the LSSVR are optimized through FOA, whose fitness function is calculated from the restoration error. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method obtains a satisfactory restoration effect. Compared with BP neural network regression, the SVR method and the Lucy-Richardson algorithm, it speeds up restoration and performs better. Both objective and subjective restoration performance are studied in the comparison experiments.

  15. Linear Quantum Systems: Non-Classical States and Robust Stability

    DTIC Science & Technology

    2016-06-29

    ...quantum linear systems subject to non-classical quantum fields. The major outcomes of this project are (i) derivation of quantum filtering equations for systems with non-classical input states, including single photon states, and (ii) determination of how linear... The field has a history going back some 50 years, to the birth of modern control theory with Kalman’s foundational work on filtering and LQG optimal control.

  16. Performance Analysis and Optimization of the Winnow Secret Key Reconciliation Protocol

    DTIC Science & Technology

    2011-06-01

    ...use in a quantum key system can be defined in two ways: the number of messages passed between Alice and Bob; the... classical and quantum environment. Post-quantum cryptography, which is generally used to describe classical quantum-resilient protocols, includes... composed of a one-way quantum channel and a two-way classical channel. Owing to the physics of the channel, the quantum channel is subject to...

  17. Efficient quantum repeater with respect to both entanglement-concentration rate and complexity of local operations and classical communication

    NASA Astrophysics Data System (ADS)

    Su, Zhaofeng; Guan, Ji; Li, Lvzhou

    2018-01-01

    Quantum entanglement is an indispensable resource for many significant quantum information processing tasks. However, in practice, it is difficult to distribute quantum entanglement over a long distance, due to the absorption and noise in quantum channels. A solution to this challenge is a quantum repeater, which can extend the distance of entanglement distribution. In this scheme, the time consumption of classical communication and local operations takes an important place with respect to time efficiency. Motivated by this observation, we consider a basic quantum repeater scheme that focuses on not only the optimal rate of entanglement concentration but also the complexity of local operations and classical communication. First, we consider the case where two different two-qubit pure states are initially distributed in the scenario. We construct a protocol with the optimal entanglement-concentration rate and less consumption of local operations and classical communication. We also find a criterion for the projective measurements to achieve the optimal probability of creating a maximally entangled state between the two ends. Second, we consider the case in which two general pure states are prepared and general measurements are allowed. We get an upper bound on the probability for a successful measurement operation to produce a maximally entangled state without any further local operations.

  18. Teaching Semantic Tableaux Method for Propositional Classical Logic with a CAS

    ERIC Educational Resources Information Center

    Aguilera-Venegas, Gabriel; Galán-García, José Luis; Galán-García, María Ángeles; Rodríguez-Cielos, Pedro

    2015-01-01

    Automated theorem proving (ATP) for Propositional Classical Logic is an algorithm to check the validity of a formula. It is a very well-known problem which is decidable but co-NP-complete. There are many algorithms for this problem. In this paper, an educationally oriented implementation of Semantic Tableaux method is described. The program has…

  19. Continuous-Time Public Good Contribution Under Uncertainty: A Stochastic Control Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrari, Giorgio, E-mail: giorgio.ferrari@uni-bielefeld.de; Riedel, Frank, E-mail: frank.riedel@uni-bielefeld.de; Steg, Jan-Henrik, E-mail: jsteg@uni-bielefeld.de

    In this paper we study continuous-time stochastic control problems with both monotone and classical controls motivated by the so-called public good contribution problem. That is the problem of n economic agents aiming to maximize their expected utility allocating initial wealth over a given time period between private consumption and irreversible contributions to increase the level of some public good. We investigate the corresponding social planner problem and the case of strategic interaction between the agents, i.e. the public good contribution game. We show existence and uniqueness of the social planner’s optimal policy, we characterize it by necessary and sufficient stochastic Kuhn–Tucker conditions and we provide its expression in terms of the unique optional solution of a stochastic backward equation. Similar stochastic first order conditions prove to be very useful for studying any Nash equilibria of the public good contribution game. In the symmetric case they allow us to prove (qualitative) uniqueness of the Nash equilibrium, which we again construct as the unique optional solution of a stochastic backward equation. We finally also provide a detailed analysis of the so-called free rider effect.

  20. A New Computational Technique for the Generation of Optimised Aircraft Trajectories

    NASA Astrophysics Data System (ADS)

    Chircop, Kenneth; Gardi, Alessandro; Zammit-Mangion, David; Sabatini, Roberto

    2017-12-01

    A new computational technique based on Pseudospectral Discretisation (PSD) and adaptive bisection ɛ-constraint methods is proposed to solve multi-objective aircraft trajectory optimisation problems formulated as nonlinear optimal control problems. This technique is applicable to a variety of next-generation avionics and Air Traffic Management (ATM) Decision Support Systems (DSS) for strategic and tactical replanning operations. These include the future Flight Management Systems (FMS) and the 4-Dimensional Trajectory (4DT) planning and intent negotiation/validation tools envisaged by SESAR and NextGen for a global implementation. In particular, after describing the PSD method, the adaptive bisection ɛ-constraint method is presented to allow an efficient solution of problems in which two or multiple performance indices are to be minimized simultaneously. Initial simulation case studies were performed adopting suitable aircraft dynamics models and addressing a classical vertical trajectory optimisation problem with two objectives simultaneously. Subsequently, a more advanced 4DT simulation case study is presented with a focus on representative ATM optimisation objectives in the Terminal Manoeuvring Area (TMA). The simulation results are analysed in-depth and corroborated by flight performance analysis, supporting the validity of the proposed computational techniques.

  1. Estimating the Inertia Matrix of a Spacecraft

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Keim, Jason; Shields, Joel

    2007-01-01

    A paper presents a method of utilizing flight data, aboard a spacecraft that includes reaction wheels for attitude control, to estimate the inertia matrix of the spacecraft. The required data are digitized samples of (1) the spacecraft attitude in an inertial reference frame as measured, for example, by use of a star tracker and (2) the speeds of rotation of the reaction wheels, the moments of inertia of which are deemed to be known. Starting from the classical equations for conservation of angular momentum of a rigid body, the inertia-matrix-estimation problem is formulated as a constrained least-squares minimization problem with explicit bounds on the inertia matrix incorporated as linear matrix inequalities. The explicit bounds reflect physical bounds on the inertia matrix and reduce the volume of data that must be processed to obtain a solution. The resulting minimization problem is a semidefinite optimization problem that can be solved efficiently, with guaranteed convergence to the global optimum, by use of readily available algorithms. In a test case involving a model attitude platform rotating on an air bearing, it is shown that, relative to a prior method, the present method produces better estimates from fewer data.
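
    A rough sketch of this formulation, using cvxpy and synthetic data: a symmetric inertia matrix is fitted to angular-momentum observations by least squares, with a physical lower bound imposed as a linear matrix inequality. The flight-data bookkeeping of the paper (star tracker attitudes, wheel momenta) is omitted here.

    ```python
    # Sketch: fit a symmetric positive-definite inertia matrix J to (synthetic)
    # angular-momentum data by constrained least squares, solved as an SDP.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    J_true = np.diag([10.0, 12.0, 15.0])                   # "unknown" inertia
    omegas = rng.normal(size=(50, 3))                      # body rates (synthetic)
    L = omegas @ J_true + 0.05 * rng.normal(size=(50, 3))  # noisy momenta, rows = (J w_k)^T

    J = cp.Variable((3, 3), symmetric=True)
    constraints = [J >> np.eye(3)]                         # physical lower bound as an LMI
    prob = cp.Problem(cp.Minimize(cp.sum_squares(omegas @ J - L)), constraints)
    prob.solve()
    print(np.round(J.value, 2))                            # close to J_true
    ```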

  2. Stochastic gradient ascent outperforms gamers in the Quantum Moves game

    NASA Astrophysics Data System (ADS)

    Sels, Dries

    2018-04-01

    In a recent work on quantum state preparation, Sørensen and co-workers [Nature (London) 532, 210 (2016), 10.1038/nature17620] explore the possibility of using video games to help design quantum control protocols. The authors present a game called "Quantum Moves" (https://www.scienceathome.org/games/quantum-moves/) in which gamers have to move an atom from A to B by means of optical tweezers. They report that, "players succeed where purely numerical optimization fails." Moreover, by harnessing the player strategies, they can "outperform the most prominent established numerical methods." The aim of this Rapid Communication is to analyze the problem in detail and show that those claims are untenable. In fact, without any prior knowledge and starting from a random initial seed, a simple stochastic local optimization method finds near-optimal solutions which outperform all players. Counterdiabatic driving can even be used to generate protocols without resorting to numeric optimization. The analysis results in an accurate analytic estimate of the quantum speed limit which, apart from zero-point motion, is shown to be entirely classical in nature. The latter might explain why gamers are reasonably good at the game. A simple modification of the BringHomeWater challenge is proposed to test this hypothesis.

  3. Waste lipids to energy: how to optimize methane production from long‐chain fatty acids (LCFA)

    PubMed Central

    Alves, M. Madalena; Pereira, M. Alcina; Sousa, Diana Z.; Cavaleiro, Ana J.; Picavet, Merijn; Smidt, Hauke; Stams, Alfons J. M.

    2009-01-01

    The position of high-rate anaerobic technology (HR-AnWT) in the wastewater treatment and bioenergy market can be enhanced if the range of suitable substrates is expanded. From an analysis of existing technologies, applications and problems, it is clear that, until now, wastewaters with high lipid content have not been effectively treated by HR-AnWT. Nevertheless, waste lipids are ideal potential substrates for biogas production, since theoretically more methane can be produced from them than from proteins or carbohydrates. In this minireview, the classical problems of lipid methanization in anaerobic processes are discussed and new concepts to enhance lipid degradation are presented. Reactor operation, feeding strategies and prospects of technological developments for wastewater treatment are discussed. Long-chain fatty acid (LCFA) degradation is accomplished by syntrophic communities of anaerobic bacteria and methanogenic archaea. For optimal performance these syntrophic communities need to be clustered in compact aggregates, which is often difficult to achieve with wastewaters that contain fats and lipids. Driving methane production from lipids/LCFA at industrial scale without risk of overloading and inhibition is still a challenge, one with the potential to fill a gap in the existing processes and technologies for biological methane production associated with waste and wastewater treatment. PMID:21255287

  4. MANGO: a new approach to multiple sequence alignment.

    PubMed

    Zhang, Zefeng; Lin, Hao; Li, Ming

    2007-01-01

    Multiple sequence alignment is a classical and challenging task for biological sequence analysis. The problem is NP-hard. Full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art multiple sequence alignment programs suffer from the 'once a gap, always a gap' phenomenon. Is there a radically new way to do multiple sequence alignment? This paper introduces a novel and orthogonal multiple sequence alignment method, using multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes the information of all sequences as a whole, avoiding problems caused by the popular progressive approaches. Because optimized spaced seeds are provably significantly more sensitive than consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0 and Kalign 2.0.

  5. Energy-landscape paving for prediction of face-centered-cubic hydrophobic-hydrophilic lattice model proteins

    NASA Astrophysics Data System (ADS)

    Liu, Jingfa; Song, Beibei; Liu, Zhaoxia; Huang, Weibo; Sun, Yuanyuan; Liu, Wenjie

    2013-11-01

    Protein structure prediction (PSP) is a classical NP-hard problem in computational biology. The energy-landscape paving (ELP) method is a heuristic global optimization algorithm that has been successfully applied to many optimization problems with complex energy landscapes in continuous space. By introducing a new update mechanism for the histogram function in ELP, and by incorporating the generation of initial conformations based on a greedy strategy together with a neighborhood search strategy based on pull moves, an improved energy-landscape paving (ELP+) method is obtained. Twelve general benchmark instances are first tested on both two-dimensional and three-dimensional (3D) face-centered-cubic (fcc) hydrophobic-hydrophilic (HP) lattice models. The lowest energies found by ELP+ are as good as or better than those of other methods in the literature for all instances. Then, five sets of larger-scale instances, denoted by S, R, F90, F180, and CASP target instances, are tested on the 3D fcc HP lattice model. The proposed algorithm finds lower energies than the five other methods in the literature; not unexpectedly, this is particularly pronounced for the longer sequences considered. Computational results show that ELP+ is an effective method for PSP on the fcc HP lattice model.
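
    The paving idea itself is compact enough to sketch on a toy one-dimensional landscape: a Metropolis walk on the effective energy E(x) + k·H(E(x)), where the histogram H counts visits to energy bins so that frequently visited minima are gradually paved over. This is only the generic ELP mechanism, not the lattice moves or the ELP+ update rule of the paper.

    ```python
    # Energy-landscape paving on a toy rugged 1-D landscape (ring of N states).
    import math, random
    from collections import defaultdict

    random.seed(1)
    N = 200
    energy = [5 * math.sin(0.3 * i) + 2 * random.random() + 0.002 * (i - 130)**2
              for i in range(N)]                     # rugged toy landscape
    ebin = lambda e: round(e, 1)                     # histogram bin of an energy
    H = defaultdict(int)                             # visit counts per energy bin
    k_pave, T = 1.0, 1.0
    x = 0
    best_x, best_E = x, energy[x]
    for step in range(20000):
        y = (x + random.choice([-1, 1])) % N         # neighbour move on the ring
        Ex = energy[x] + k_pave * H[ebin(energy[x])] # paved (effective) energies
        Ey = energy[y] + k_pave * H[ebin(energy[y])]
        if Ey <= Ex or random.random() < math.exp(-(Ey - Ex) / T):
            x = y                                    # Metropolis acceptance
        H[ebin(energy[x])] += 1                      # pave the occupied energy bin
        if energy[x] < best_E:
            best_x, best_E = x, energy[x]
    print(best_x, round(best_E, 3))
    ```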

  6. A general heuristic for genome rearrangement problems.

    PubMed

    Dias, Ulisses; Galvão, Gustavo Rodrigues; Lintzmayer, Carla Négri; Dias, Zanoni

    2014-06-01

    In this paper, we present a general heuristic for several problems in the genome rearrangement field. Our heuristic does not solve any problem directly; rather, it is used to improve the solutions provided by any non-optimal algorithm that solves them. We have therefore implemented several algorithms described in the literature and several algorithms developed by ourselves. As a whole, we implemented 23 algorithms for 9 well-known problems in the genome rearrangement field. A total of 13 algorithms were implemented for problems that use the notions of prefix and suffix operations. In addition, we worked on 5 algorithms for the classic problem of sorting by transpositions, and we conclude the experiments by presenting results for 3 approximation algorithms for the sorting by reversals and transpositions problem and 2 approximation algorithms for the sorting by reversals problem. Another algorithm with a better approximation ratio exists for this last genome rearrangement problem, but it is purely theoretical with no practical implementation. The algorithms we implemented, combined with our heuristic, lead to the best practical results in each case. In particular, we were able to improve results on the sorting by transpositions problem, which is a very special case because many efforts have been made to generate algorithms with good results in practice, and some of these algorithms provide results that equal the optimum solutions in many cases. Our source codes and benchmarks are freely available upon request from the authors, so that it will be easier to compare new approaches against our results.

  7. Initialization of Formation Flying Using Primer Vector Theory

    NASA Technical Reports Server (NTRS)

    Mailhe, Laurie; Schiff, Conrad; Folta, David

    2002-01-01

    In this paper, we extend primer vector analysis to formation flying. Optimization of the classical rendezvous or free-time transfer problem between two orbits using primer vector theory has been extensively studied for one spacecraft. However, an increasing number of missions are now considering flying a set of spacecraft in close formation. Missions such as the Magnetospheric MultiScale (MMS) and Leonardo-BRDF (Bidirectional Reflectance Distribution Function) need to determine strategies to transfer each spacecraft from the common launch orbit to its respective operational orbit. In addition, all the spacecraft must synchronize their states so that they achieve the same desired formation geometry over each orbit. This periodicity requirement imposes constraints on the boundary conditions that can be used for the primer vector algorithm. In this work we explore the impact of the periodicity requirement in optimizing each spacecraft's transfer trajectory using primer vector theory. We first present our adaptation of primer vector theory to formation flying. Using this method, we then compute the ΔV budget for each spacecraft subject to different formation endpoint constraints.

  8. Solving Quantum Ground-State Problems with Nuclear Magnetic Resonance

    PubMed Central

    Li, Zhaokai; Yung, Man-Hong; Chen, Hongwei; Lu, Dawei; Whitfield, James D.; Peng, Xinhua; Aspuru-Guzik, Alán; Du, Jiangfeng

    2011-01-01

    Quantum ground-state problems are computationally hard problems for general many-body Hamiltonians; there is no classical or quantum algorithm known to be able to solve them efficiently. Nevertheless, if a trial wavefunction approximating the ground state is available, as often happens for many problems in physics and chemistry, a quantum computer could employ this trial wavefunction to project out the ground state by means of the phase estimation algorithm (PEA). We performed an experimental realization of this idea by implementing a variational-wavefunction approach to solve the ground-state problem of the Heisenberg spin model with an NMR quantum simulator. Our iterative phase estimation procedure yields a high accuracy for the eigenenergies (to the 10⁻⁵ decimal digit). The ground-state fidelity was distilled to be more than 80%, and the singlet-to-triplet switching near the critical field is reliably captured. This result shows that quantum simulators can better leverage classical trial wavefunctions than classical computers. PMID:22355607

  9. Noisy covariance matrices and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Pafka, S.; Kondor, I.

    2002-05-01

    According to recent findings [Bouchaud et al.; Stanley et al.], empirical covariance matrices deduced from financial return series contain such a high amount of noise that, apart from a few large eigenvalues and the corresponding eigenvectors, their structure can essentially be regarded as random. In [Bouchaud et al.], e.g., it is reported that about 94% of the spectrum of these matrices can be fitted by that of a random matrix drawn from an appropriately chosen ensemble. In view of the fundamental role of covariance matrices in the theory of portfolio optimization as well as in industry-wide risk management practices, we analyze the possible implications of this effect. Simulation experiments with matrices having a structure such as described in these works lead us to the conclusion that in the context of the classical portfolio problem (minimizing the portfolio variance under linear constraints) noise has relatively little effect. To leading order the solutions are determined by the stable, large eigenvalues, and the displacement of the solution (measured in variance) due to noise is rather small: depending on the size of the portfolio and on the length of the time series, it is of the order of 5 to 15%. The picture is completely different, however, if we attempt to minimize the variance under non-linear constraints, like those that arise e.g. in the problem of margin accounts or in international capital adequacy regulation. In these problems the presence of noise leads to a serious instability and a high degree of degeneracy of the solutions.
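
    The central effect is easy to reproduce numerically. The sketch below builds a "true" covariance matrix, estimates a noisy sample covariance from a finite time series, and compares the realized variances of the two minimum-variance portfolios; the ratio plays the role of the noise-induced displacement quoted above. Synthetic data, illustrative only.

    ```python
    # Minimum-variance portfolios from a true vs. a noisy sample covariance.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 50, 200                                # portfolio size, series length
    A = rng.normal(size=(N, N)) / np.sqrt(N)
    C_true = np.eye(N) + 0.3 * (A @ A.T)          # true covariance (SPD)
    returns = rng.multivariate_normal(np.zeros(N), C_true, size=T)
    C_samp = np.cov(returns, rowvar=False)        # noisy finite-sample estimate

    def min_var_weights(C):
        ones = np.ones(len(C))
        w = np.linalg.solve(C, ones)
        return w / w.sum()                        # min variance s.t. sum(w) = 1

    w_true, w_samp = min_var_weights(C_true), min_var_weights(C_samp)
    var = lambda w: float(w @ C_true @ w)         # realized variance under C_true
    print(var(w_samp) / var(w_true))              # >1: the noise-induced penalty
    ```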

  10. Team formation and breakup in multiagent systems

    NASA Astrophysics Data System (ADS)

    Rao, Venkatesh Guru

    The goal of this dissertation is to pose and solve problems involving team formation and breakup in two specific multiagent domains: formation travel and space-based interferometric observatories. The methodology employed comprises elements drawn from control theory, scheduling theory and artificial intelligence (AI). The original contribution of the work comprises three elements. The first contribution, the partitioned state-space approach, is a technique for formulating and solving coordinated-motion problems using calculus-of-variations techniques. The approach is applied to obtain optimal two-agent formation travel trajectories on graphs. The second contribution is the class of MixTeam algorithms, a class of team dispatchers that extends classical dispatching by accommodating team formation and breakup and exploration/exploitation learning. The algorithms are applied to observation scheduling and constellation geometry design for interferometric space telescopes. The use of feedback control for team scheduling is also demonstrated with these algorithms. The third contribution is an analysis of the optimality properties of greedy, or myopic, decision-making for a simple class of team dispatching problems. This analysis represents a first step towards the complete analysis of complex team schedulers such as the MixTeam algorithms. The contributions extend the literature on team dynamics in control theory. The broad conclusions that emerge from this research are that greedy or myopic decision-making strategies for teams perform well when specific parameters in the domain are weakly affected by an agent's actions, and that intelligent systems require a closer integration of domain knowledge in decision-making functions.

  11. Multiagent optimization system for solving the traveling salesman problem (TSP).

    PubMed

    Xie, Xiao-Feng; Liu, Jiming

    2009-04-01

    The multiagent optimization system (MAOS) is a nature-inspired method that supports cooperative search through the self-organization of a group of compact agents situated in an environment with certain shared public knowledge. Each agent in MAOS is an autonomous entity with personal declarative memory and behavioral components. In this paper, MAOS is refined for solving the traveling salesman problem (TSP), a classic hard computational problem. Based on a simplified MAOS version, in which each agent operates on extremely limited declarative knowledge, some simple and efficient components for solving TSP, including two improving heuristics based on a generalized edge assembly recombination, are implemented. Compared with metaheuristics in adaptive memory programming, MAOS is particularly suitable for supporting cooperative search. The experimental results on two TSP benchmark data sets show that MAOS is competitive with some state-of-the-art algorithms, including Lin-Kernighan-Helsgaun, IBGLK, and PHGA, although MAOS does not use any explicit local search during runtime. The contributions of MAOS components are investigated. The results indicate that certain clues can help make suitable selections before time-consuming computation. More importantly, they show that the cooperative search of agents can achieve good overall performance with a macro rule in switch mode, which deploys alternate search rules whose offline performances are negatively correlated. Using simple alternate rules may avoid the great difficulty of seeking an omnipotent rule that is efficient for a large data set.
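
    The flavor of cooperative search (though not the MAOS architecture or its edge assembly recombination) can be sketched with a toy loop in which several agents refine private tours by improving segment reversals and periodically adopt the best tour found by the group:

    ```python
    # Toy cooperative search for the TSP: private improvement + shared knowledge.
    import random

    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(40)]
    dist = lambda a, b: ((pts[a][0]-pts[b][0])**2 + (pts[a][1]-pts[b][1])**2) ** 0.5
    length = lambda t: sum(dist(t[i], t[(i+1) % len(t)]) for i in range(len(t)))

    def improve(tour):
        i, j = sorted(random.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j+1][::-1] + tour[j+1:]   # reverse a segment
        return cand if length(cand) < length(tour) else tour

    agents = [random.sample(range(40), 40) for _ in range(5)]
    for it in range(3000):
        agents = [improve(t) for t in agents]
        if it % 200 == 0:                                  # share public knowledge
            best = min(agents, key=length)
            agents = [list(best) if random.random() < 0.3 else t for t in agents]
    print(round(length(min(agents, key=length)), 3))
    ```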

  12. Optimal visual-haptic integration with articulated tools.

    PubMed

    Takahashi, Chie; Watt, Simon J

    2017-05-01

    When we feel and see an object, the nervous system integrates visual and haptic information optimally, exploiting the redundancy in multiple signals to estimate properties more precisely than is possible from either signal alone. We examined whether optimal integration is similarly achieved when using articulated tools. Such tools (tongs, pliers, etc.) are a defining characteristic of human hand function, but complicate the classical sensory 'correspondence problem' underlying multisensory integration. Optimal integration requires establishing the relationship between signals acquired by different sensors (hand and eye), and therefore in fundamentally unrelated units. The system must also determine when signals refer to the same property of the world (seeing and feeling the same thing) and only integrate those that do. This could be achieved by comparing the pattern of current visual and haptic input to known statistics of their normal relationship. Articulated tools disrupt this relationship, however, by altering the geometrical relationship between object properties and hand posture (the haptic signal). We examined whether different tool configurations are taken into account in visual-haptic integration. We indexed integration by measuring the precision of size estimates, and compared our results to optimal predictions from a maximum-likelihood integrator. Integration was near optimal, independent of tool configuration/hand posture, provided that visual and haptic signals referred to the same object in the world. Thus, sensory correspondence was determined correctly (trial-by-trial), taking tool configuration into account. This reveals highly flexible multisensory integration underlying tool use, consistent with the brain constructing internal models of tools' properties.
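
    The maximum-likelihood benchmark used in such studies reduces to a few lines: given two unbiased Gaussian estimates, the optimal fusion weights them by inverse variance, and the fused variance is smaller than either single-cue variance. Illustrative numbers below, not the study's data.

    ```python
    # Maximum-likelihood (inverse-variance) fusion of two Gaussian cues.
    import numpy as np

    sigma_v, sigma_h = 4.0, 6.0                   # single-cue noise SDs (mm), assumed
    w_v = sigma_h**2 / (sigma_v**2 + sigma_h**2)  # inverse-variance weight on vision
    s_v, s_h = 52.0, 49.0                         # single-cue size estimates (mm)
    s_hat = w_v * s_v + (1 - w_v) * s_h           # optimally fused estimate
    sigma_hat = np.sqrt(sigma_v**2 * sigma_h**2 / (sigma_v**2 + sigma_h**2))
    print(round(s_hat, 2), round(sigma_hat, 2))   # fused SD < min(sigma_v, sigma_h)
    ```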

  13. German Children's Classics: Heirs and Pretenders to an Eclectic Heritage

    ERIC Educational Resources Information Center

    Doderer, Klaus

    1973-01-01

    There are no classic children's books, if by classics we mean books that will last forever. Instead, it is a matter of constant reevaluation. At most, we have older works that are still valuable today because they touch upon the human and artistic problems of our time. (Author/SJ)

  14. Constrained variational calculus for higher order classical field theories

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; de León, Manuel; Martín de Diego, David

    2010-11-01

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.

  15. Quantum speedup of the traveling-salesman problem for bounded-degree graphs

    NASA Astrophysics Data System (ADS)

    Moylett, Dominic J.; Linden, Noah; Montanaro, Ashley

    2017-03-01

    The traveling-salesman problem is one of the most famous problems in graph theory. However, little is currently known about the extent to which quantum computers could speed up algorithms for the problem. In this paper, we prove a quadratic quantum speedup when the degree of each vertex is at most 3, by applying a quantum backtracking algorithm to a classical algorithm by Xiao and Nagamochi. We then use similar techniques to accelerate a classical algorithm for graphs in which the degree of each vertex is at most 4, before speeding up higher-degree graphs via reductions to these instances.

  16. The theory of variational hybrid quantum-classical algorithms

    NASA Astrophysics Data System (ADS)

    McClean, Jarrod R.; Romero, Jonathan; Babbush, Ryan; Aspuru-Guzik, Alán

    2016-02-01

    Many quantum algorithms have daunting resource requirements when compared to what is available today. To address this discrepancy, a quantum-classical hybrid optimization scheme known as ‘the quantum variational eigensolver’ was developed (Peruzzo et al 2014 Nat. Commun. 5 4213) with the philosophy that even minimal quantum resources could be made useful when used in conjunction with classical routines. In this work we extend the general theory of this algorithm and suggest algorithmic improvements for practical implementations. Specifically, we develop a variational adiabatic ansatz and explore unitary coupled cluster where we establish a connection from second order unitary coupled cluster to universal gate sets through a relaxation of exponential operator splitting. We introduce the concept of quantum variational error suppression that allows some errors to be suppressed naturally in this algorithm on a pre-threshold quantum device. Additionally, we analyze truncation and correlated sampling in Hamiltonian averaging as ways to reduce the cost of this procedure. Finally, we show how the use of modern derivative free optimization techniques can offer dramatic computational savings of up to three orders of magnitude over previously used optimization techniques.
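
    The hybrid loop itself is simple to sketch. Below, a classical derivative-free optimizer (Nelder-Mead) tunes a one-parameter ansatz while the "quantum" energy evaluation is simulated exactly with numpy; on hardware that expectation value would come from repeated measurement. A toy single-qubit Hamiltonian is assumed, not the unitary coupled cluster ansatz discussed in the paper.

    ```python
    # Variational hybrid loop sketch: classical optimizer + simulated energy.
    import numpy as np
    from scipy.optimize import minimize

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = Z + 0.5 * X                                   # toy single-qubit Hamiltonian

    def energy(theta):
        # Ansatz |psi(theta)> = Ry(theta)|0>.  On hardware this expectation
        # would be estimated from repeated measurements, not linear algebra.
        psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)], dtype=complex)
        return float(np.real(psi.conj() @ H @ psi))

    res = minimize(energy, x0=[0.1], method='Nelder-Mead')
    print(res.fun, np.linalg.eigvalsh(H)[0])          # both ~ -1.118 (ground energy)
    ```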

  17. Towards Optimal Connectivity on Multi-layered Networks.

    PubMed

    Chen, Chen; He, Jingrui; Bliss, Nadya; Tong, Hanghang

    2017-10-01

    Networks are prevalent in many high-impact domains. Moreover, cross-domain interactions are frequently observed in many applications, and they naturally form dependencies between different networks. Such highly coupled network systems are referred to as multi-layered networks, and they have been used to characterize various complex systems, including critical infrastructure networks, cyber-physical systems, collaboration platforms, biological systems and many more. Unlike single-layered networks, where the functionality of nodes is mainly affected by within-layer connections, multi-layered networks are more vulnerable to disturbance, as the impact can be amplified through cross-layer dependencies, leading to cascading failures across the entire system. To manipulate the connectivity in multi-layered networks, some recent methods have been proposed based on two-layered networks with specific types of connectivity measures. In this paper, we address the above challenges in multiple dimensions. First, we propose a family of connectivity measures (SUBLINE) that unifies a wide range of classic network connectivity measures. We then reveal that the connectivity measures in the SUBLINE family enjoy the diminishing-returns property, which guarantees a near-optimal solution with linear complexity for the connectivity optimization problem. Finally, we evaluate our proposed algorithm on real data sets to demonstrate its effectiveness and efficiency.
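
    The practical payoff of the diminishing-returns property is the classical greedy guarantee: for a monotone submodular objective, repeatedly picking the element of largest marginal gain achieves a (1 - 1/e)-approximation. The sketch below uses a toy coverage objective as a stand-in for a SUBLINE-style connectivity measure.

    ```python
    # Greedy selection under a monotone submodular (diminishing-returns) objective;
    # guaranteed within (1 - 1/e) of optimal (Nemhauser et al., 1978).
    def greedy(universe_sets, k):
        covered, chosen = set(), []
        for _ in range(k):
            gain = lambda s: len(universe_sets[s] - covered)  # marginal gain
            best = max(universe_sets, key=gain)
            chosen.append(best)
            covered |= universe_sets[best]
        return chosen, covered

    sets = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6}, 'd': {1, 6}}
    print(greedy(sets, 2))   # picks 'a' then 'c', covering all six elements
    ```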

  18. Variational Gaussian approximation for Poisson data

    NASA Astrophysics Data System (ADS)

    Arridge, Simon R.; Ito, Kazufumi; Jin, Bangti; Zhang, Chen

    2018-02-01

    The Poisson model is frequently employed to describe count data, but in a Bayesian context it leads to an analytically intractable posterior probability distribution. In this work, we analyze a variational Gaussian approximation to the posterior distribution arising from the Poisson model with a Gaussian prior. This is achieved by seeking an optimal Gaussian distribution minimizing the Kullback-Leibler divergence from the posterior distribution to the approximation, or equivalently maximizing the lower bound for the model evidence. We derive an explicit expression for the lower bound, and show the existence and uniqueness of the optimal Gaussian approximation. The lower bound functional can be viewed as a variant of classical Tikhonov regularization that also penalizes the covariance. Then we develop an efficient alternating direction maximization algorithm for solving the optimization problem, and analyze its convergence. We discuss strategies for reducing the computational complexity via the low-rank structure of the forward operator and the sparsity of the covariance. Further, as an application of the lower bound, we discuss hierarchical Bayesian modeling for selecting the hyperparameter in the prior distribution, and propose a monotonically convergent algorithm for determining the hyperparameter. We present extensive numerical experiments to illustrate the Gaussian approximation and the algorithms.
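
    For orientation, here is a sketch of the lower bound in the simplest setting, assuming independent counts y_i ~ Poisson(e^{x_i}) with a log link (an assumption made here for concreteness; the paper's general operator setting is richer) and the variational family q = N(m, C):

    ```latex
    % ELBO for y_i ~ Poisson(e^{x_i}), prior x ~ N(m_0, C_0), q = N(m, C);
    % the term e^{m_i + C_{ii}/2} is the lognormal mean E_q[e^{x_i}].
    F(m, C) = \sum_i \Big( y_i m_i - e^{m_i + C_{ii}/2} - \log y_i! \Big)
              - \mathrm{KL}\big( \mathcal{N}(m, C) \,\big\|\, \mathcal{N}(m_0, C_0) \big)
    ```

    Maximizing F over both m and C is what produces the covariance-penalizing, Tikhonov-like structure mentioned above.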

  19. Optimal control of hybrid qubits: Implementing the quantum permutation algorithm

    NASA Astrophysics Data System (ADS)

    Rivera-Ruiz, C. M.; de Lima, E. F.; Fanchini, F. F.; Lopez-Richard, V.; Castelano, L. K.

    2018-03-01

    Optimal quantum control theory is employed to determine electric pulses capable of producing quantum gates with a fidelity higher than 0.9997 when noise is not taken into account. In particular, these quantum gates were chosen to perform the permutation algorithm in hybrid qubits in double quantum dots (DQDs). The permutation algorithm is an oracle-based quantum algorithm that solves the permutation-parity problem faster than a classical algorithm, without requiring entanglement between particles. The only requirement for achieving the speedup is the use of a one-particle quantum system with at least three levels. The high fidelity found in our results is closely related to the quantum speed limit, which is a measure of how fast a quantum state can be manipulated. Furthermore, we model charge noise by averaging the optimal field over different values of the reference detuning, following a Gaussian distribution. When the Gaussian spread is of the order of 5 μeV (10% of the correct value), the fidelity is still higher than 0.95. Our scheme can also be used for the practical realization of different quantum algorithms in DQDs.

  20. Panel Flutter Emulation Using a Few Concentrated Forces

    NASA Astrophysics Data System (ADS)

    Dhital, Kailash; Han, Jae-Hung

    2018-04-01

    The objective of this paper is to study the feasibility of emulating panel flutter using a few concentrated forces. The concentrated forces are taken to be equivalent to the aerodynamic forces, with the equivalence established using the surface spline method and the principle of virtual work. The structural modeling of the plate is based on classical plate theory, and the aerodynamic modeling is based on piston theory. The present approach differs from linear panel flutter analysis in how the modal aerodynamic forces are formed, while the structural properties remain unchanged. The solutions of the flutter problem are obtained numerically using the standard eigenvalue procedure. A few concentrated forces were considered, with an optimization effort to determine their optimal locations. The optimization process is based on minimizing the error between the flutter bounds from the emulated and the linear flutter analysis methods. The emulated flutter results for a square plate with four different boundary conditions, using six concentrated forces, are obtained with minimal error relative to the reference values. The results demonstrate the workability and viability of using concentrated forces to emulate real panel flutter. In addition, the paper includes parametric studies of linear panel flutter for which adequate literature is not available.

  1. Approximation of optimal filter for Ornstein-Uhlenbeck process with quantised discrete-time observation

    NASA Astrophysics Data System (ADS)

    Bania, Piotr; Baranowski, Jerzy

    2018-02-01

    Quantisation of signals is a ubiquitous property of digital processing. In many cases, it introduces significant difficulties in state estimation and, in consequence, control. Popular approaches either do not properly address the problem of system disturbances or lead to biased estimates. Our intention was to find a method of state estimation for stochastic systems with quantised, discrete-time observation that is free of the mentioned drawbacks. We have formulated a general form of the optimal filter derived from a solution of the Fokker-Planck equation. We then propose an approximation method based on Galerkin projections. We illustrate the approach for the Ornstein-Uhlenbeck process, and derive analytic formulae for the approximated optimal filter, also extending the results to a variant with control. Operation is illustrated with numerical experiments and compared with the classical discrete-continuous Kalman filter. The results of the comparison are substantially in favour of our approach, with over 20 times lower mean squared error. The proposed filter is especially effective for signal amplitudes comparable to the quantisation thresholds. Additionally, it was observed that for high orders of approximation, the state estimate is very close to the true process value. These results open possibilities for further analysis, especially for more complex processes.
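
    The classical baseline of the comparison is easy to reproduce: simulate an Ornstein-Uhlenbeck process with its exact discrete-time transition, quantise the observations, and run a scalar Kalman filter that models quantisation as extra measurement noise of variance Δ²/12. The Galerkin-approximated optimal filter of the paper is not reproduced in this sketch.

    ```python
    # Ornstein-Uhlenbeck process with quantised observations + Kalman baseline.
    import numpy as np

    rng = np.random.default_rng(0)
    a, sig, dt, delta, n = 1.0, 1.0, 0.01, 0.5, 2000
    phi = np.exp(-a * dt)                          # exact OU transition factor
    q = sig**2 * (1 - phi**2) / (2 * a)            # exact transition noise variance
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k-1] + np.sqrt(q) * rng.normal()
    y = delta * np.round(x / delta)                # quantised observations

    m, P, r = 0.0, 1.0, delta**2 / 12              # state, variance, obs-noise proxy
    est = np.empty(n)
    for k in range(n):
        m, P = phi * m, phi**2 * P + q             # predict
        K = P / (P + r)                            # Kalman gain
        m, P = m + K * (y[k] - m), (1 - K) * P     # update
        est[k] = m
    print(np.mean((est - x)**2))                   # MSE of the classical baseline
    ```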

  2. Advantages of Unfair Quantum Ground-State Sampling.

    PubMed

    Zhang, Brian Hu; Wagenbreth, Gene; Martin-Mayor, Victor; Hen, Itay

    2017-04-21

    The debate around the potential superiority of quantum annealers over their classical counterparts has been ongoing since the inception of the field. Recent technological breakthroughs, which have led to the manufacture of experimental prototypes of quantum annealing optimizers with sizes approaching the practical regime, have reignited this discussion. However, the demonstration of quantum annealing speedups remains to this day an elusive, albeit coveted, goal. We examine the power of quantum annealers to provide a different type of quantum enhancement of practical relevance, namely, their ability to serve as useful samplers from the ground-state manifolds of combinatorial optimization problems. We study, both numerically, by simulating stoquastic and non-stoquastic quantum annealing processes, and experimentally, using a prototypical quantum annealing processor, the ability of quantum annealers to sample the ground states of spin glasses differently than thermal samplers. We demonstrate that (i) quantum annealers sample the ground-state manifolds of spin glasses very differently than thermal optimizers, (ii) the nature of the quantum fluctuations driving the annealing process has a decisive effect on the final distribution, and (iii) the experimental quantum annealer samples ground-state manifolds significantly differently than thermal and ideal quantum annealers. We illustrate how quantum annealers may serve as powerful tools when complementing standard sampling algorithms.

  3. Predictive optimized adaptive PSS in a single machine infinite bus.

    PubMed

    Milla, Freddy; Duarte-Mermoud, Manuel A

    2016-07-01

    Power System Stabilizer (PSS) devices are responsible for providing a damping torque component to generators for reducing fluctuations in the system caused by small perturbations. A Predictive Optimized Adaptive PSS (POA-PSS) to improve the oscillations in a Single Machine Infinite Bus (SMIB) power system is discussed in this paper. POA-PSS provides the optimal design parameters for the classic PSS using an optimization predictive algorithm, which adapts to changes in the inputs of the system. This approach is part of small signal stability analysis, which uses equations in an incremental form around an operating point. Simulation studies on the SMIB power system illustrate that the proposed POA-PSS approach has better performance than the classical PSS. In addition, the effort in the control action of the POA-PSS is much less than that of other approaches considered for comparison.

  4. Optimal Item Selection with Credentialing Examinations.

    ERIC Educational Resources Information Center

    Hambleton, Ronald K.; And Others

    The study compared two promising item response theory (IRT) item-selection methods, optimal and content-optimal, with two non-IRT item selection methods, random and classical, for use in fixed-length certification exams. The four methods were used to construct 20-item exams from a pool of approximately 250 items taken from a 1985 certification…

  5. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
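
    The univariate scheme at the heart of this discussion is worth stating concretely: Horner's rule evaluates a degree-n polynomial with n multiplications and n additions, versus roughly 2n multiplications for the naive power-by-power scheme.

    ```python
    # Horner's rule: evaluate a_n x^n + ... + a_1 x + a_0 with n multiplies.
    def horner(coeffs, x):
        """coeffs = [a_n, ..., a_1, a_0], highest degree first."""
        acc = 0.0
        for a in coeffs:
            acc = acc * x + a          # one multiply and one add per coefficient
        return acc

    # p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3: 2*27 - 6*9 + 2*3 - 1 = 5
    assert horner([2, -6, 2, -1], 3) == 5.0
    ```

    In the multivariate generalization it is the choice of variable ordering, left free by this scheme, that the Monte Carlo tree search explores.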

  6. A fictitious domain finite element method for simulations of fluid-structure interactions: The Navier-Stokes equations coupled with a moving solid

    NASA Astrophysics Data System (ADS)

    Court, Sébastien; Fournié, Michel

    2015-05-01

    The paper extends a stabilized fictitious domain finite element method, initially developed for the Stokes problem, to the incompressible Navier-Stokes equations coupled with a moving solid. This method has the advantage of predicting an optimal approximation of the normal stress tensor at the interface. The dynamics of the solid is governed by Newton's laws, and the interface between the fluid and the structure is represented by a level set that cuts the elements of the mesh. An algorithm is proposed to treat the time evolution of the geometry, and numerical results are presented on a classical benchmark: the motion of a disk falling in a channel.

  7. Diagnosis, prevention, and treatment of scabies.

    PubMed

    Shimose, Luis; Munoz-Price, L Silvia

    2013-10-01

    Scabies remains a public health problem, especially in developing countries, with a worldwide incidence of approximately 300 million cases each year. Prolonged skin-to-skin contact is necessary to allow the transmission of the causative mite, Sarcoptes scabiei. Classic scabies presents with burrows, erythematous papules, and generalized pruritus. Clinical variants include nodular scabies and crusted scabies, also called Norwegian scabies. The diagnosis is based mainly on history and physical examination, but definitive diagnosis depends on direct visualization of the mites under microscopy. Alternative diagnostic methods include the burrow ink test, video-dermatoscopy, newer serologic tests such as PCR/ELISA, and specific IgE directed toward major mite components. Treatment of scabies consists of either topical permethrin or oral ivermectin, although the optimal regimen is still unclear.

  8. Markowitz portfolio optimization model employing fuzzy measure

    NASA Astrophysics Data System (ADS)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research shaped the portfolio risk-return model and became one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine risk and return. We take the original mean-variance model as a benchmark and compare it against fuzzy mean-variance models in which returns are modeled by specific types of fuzzy numbers. The fuzzy models give better performance than the classical mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.

  9. Classical and quantum dynamics in an inverse square potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guillaumín-España, Elisa, E-mail: ege@correo.azc.uam.mx; Núñez-Yépez, H. N., E-mail: nyhn@xanum.uam.mx; Salas-Brito, A. L., E-mail: asb@correo.azc.uam.mx

    2014-10-15

    The classical motion of a particle in a 3D inverse square potential with negative energy, E, is shown to be geodesic, i.e., equivalent to the particle's free motion on a non-compact phase space manifold, irrespective of the sign of the coupling constant. We thus establish that all its classical orbits with E < 0 are unbounded. To analyse the corresponding quantum problem, the Schrödinger equation is solved in momentum space. No discrete energy levels exist in the unrenormalized case and the system shows a complete “fall-to-the-center” with an energy spectrum unbounded from below. Such behavior corresponds to the non-existence of bound classical orbits. The symmetry of the problem is SO(3) × SO(2, 1), corroborating previously obtained results.

  10. Geometrical optimization of the transmission and dispersion properties of arrayed waveguide gratings using two stigmatic point mountings.

    PubMed

    Muñoz, P; Pastor, D; Capmany, J; Martínez, A

    2003-09-22

    In this paper, a procedure to optimize flat-top Arrayed Waveguide Grating (AWG) devices in terms of transmission and dispersion properties is presented. The systematic procedure consists of the stigmatization and minimization of the Light Path Function (LPF) used in classic planar spectrograph theory. The resulting geometry for the Arrayed Waveguides (AW) and the Output Waveguides (OW) is not the classical Rowland mounting but an arbitrary arrangement. Simulations using previously published enhanced models show, in a design example, how this geometry reduces passband ripple, asymmetry, and dispersion.

  11. A multilevel probabilistic beam search algorithm for the shortest common supersequence problem.

    PubMed

    Gallardo, José E

    2012-01-01

    The shortest common supersequence problem is a classical problem with many applications in different fields such as planning, artificial intelligence and, especially, bioinformatics. Due to its NP-hardness, we cannot expect to solve this problem efficiently using conventional exact techniques. This paper presents a heuristic to tackle the problem based on the use, at different levels, of a probabilistic variant of the classical heuristic known as beam search. The proposed algorithm is empirically analysed and compared to current approaches in the literature. Experiments show that it provides better-quality solutions in a reasonable time for medium and large instances of the problem. For very large instances, our heuristic also provides better solutions, but the required execution times may increase considerably.
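
    A deterministic beam search for this problem can be sketched briefly: a state records how far the supersequence built so far has advanced into each input string, and only the `beam` most advanced states survive each step. The paper's probabilistic, multilevel variant would sample states rather than take the deterministic top-k; that variation is noted in the comments.

    ```python
    # Beam-search sketch for the shortest common supersequence (SCS) problem.
    # A state is (positions, sequence): how far each input string has been
    # consumed, and the supersequence built so far.

    def scs_beam(strings, beam=10):
        states = [(tuple(0 for _ in strings), "")]
        while True:
            done = [s for s in states
                    if all(p == len(w) for p, w in zip(s[0], strings))]
            if done:
                return min(done, key=lambda s: len(s[1]))[1]
            nxt = []
            for pos, seq in states:
                # candidate next characters: those needed next by some string
                for c in {w[p] for p, w in zip(pos, strings) if p < len(w)}:
                    new = tuple(p + 1 if p < len(w) and w[p] == c else p
                                for p, w in zip(pos, strings))
                    nxt.append((new, seq + c))
            # deterministic top-k by total progress; the probabilistic variant
            # would instead sample states with probability growing in this score
            states = sorted(nxt, key=lambda s: -sum(s[0]))[:beam]

    print(scs_beam(["abcbdab", "bdcaba"]))  # a valid (not necessarily optimal) SCS
    ```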

  12. Weighted community detection and data clustering using message passing

    NASA Astrophysics Data System (ADS)

    Shi, Cheng; Liu, Yanchen; Zhang, Pan

    2018-03-01

    Grouping objects into clusters based on the similarities or weights between them is one of the most important problems in science and engineering. In this work, by extending message-passing and spectral algorithms proposed for the unweighted community detection problem, we develop a non-parametric method based on statistical physics: we map the problem to the Potts model at the critical temperature of the spin-glass transition and apply belief propagation to solve for the marginals of the corresponding Boltzmann distribution. Our algorithm is robust to over-fitting and gives a principled way to determine whether there are significant clusters in the data and how many clusters there are. We apply our method to different clustering tasks. For community detection in weighted and directed networks, we show that our algorithm significantly outperforms existing algorithms. For clustering problems where the data were generated by mixture models in the sparse regime, we show that our method works all the way down to the theoretical limit of detectability and gives accuracy very close to that of optimal Bayesian inference. In the semi-supervised clustering problem, our method needs only a few labels to work perfectly on classic datasets. Finally, we further develop Thouless-Anderson-Palmer equations, which greatly reduce the computational complexity in dense networks while giving almost the same performance as belief propagation.

  13. Spatiotemporal mapping of scalp potentials.

    PubMed

    Fender, D H; Santoro, T P

    1977-11-01

    Computerized analysis and display techniques are applied to the problem of identifying the origins of visually evoked scalp potentials (VESPs). A new stimulus for VESP work, white noise, is incorporated in the solution of this problem. VESPs for white-noise stimulation exhibit time-domain behavior similar to the classical response for flash stimuli, but with certain significant differences. Contour mapping algorithms are used to display the time behavior of equipotential surfaces on the scalp during the VESP. The electrical and geometrical parameters of the head are modeled. Electrical fields closely matching those obtained experimentally are generated on the surface of the model head by optimally selecting the location and strength parameters of one or two dipole current sources contained within the model. Computer graphics are used to display, as a movie, the actual and model scalp potential fields and the parameters of the dipole generators within the model head during the time course of the VESP. These techniques are currently used to study retinotopic mapping, fusion, and texture perception in humans.

  14. An Adiabatic Quantum Algorithm for Determining Gracefulness of a Graph

    NASA Astrophysics Data System (ADS)

    Hosseini, Sayed Mohammad; Davoudi Darareh, Mahdi; Janbaz, Shahrooz; Zaghian, Ali

    2017-07-01

    Graph labelling is one of the noted topics in combinatorics and graph theory. A graceful labelling of a graph G with e edges labels the vertices of G with distinct values from {0, 1, …, e} such that, if each edge is assigned the absolute difference of the labels of its two ends, then each of 1, 2, …, e appears exactly once as an edge label. For a given graph, there are still few efficient classical algorithms that determine whether or not it is graceful, even for trees, a well-known class of graphs. In this paper, we introduce an adiabatic quantum algorithm which, for a graceful graph G, finds a graceful labelling; the algorithm can also determine that G is not graceful. Numerical simulations of the algorithm reveal that its time complexity behaves polynomially with the problem size up to the range of 15 qubits. A general sufficient condition for a combinatorial optimization problem to have a satisfying adiabatic solution is also derived.
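
    The combinatorial condition being decided here is easy to state in code. A minimal checker follows, assuming a hypothetical edge-list encoding of the graph:

    ```python
    # Check whether a vertex labelling of a graph is graceful.
    def is_graceful(edges, labels):
        e = len(edges)
        vals = list(labels.values())
        if len(set(vals)) != len(vals) or not set(vals) <= set(range(e + 1)):
            return False                       # labels: distinct values in 0..e
        diffs = {abs(labels[u] - labels[v]) for u, v in edges}
        return diffs == set(range(1, e + 1))   # induced edge labels are 1..e

    # Path a-b-c with labels 0, 2, 1: edge labels {2, 1}, so it is graceful.
    print(is_graceful([('a', 'b'), ('b', 'c')], {'a': 0, 'b': 2, 'c': 1}))  # True
    ```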

  15. Time Domain Propagation of Quantum and Classical Systems using a Wavelet Basis Set Method

    NASA Astrophysics Data System (ADS)

    Lombardini, Richard; Nowara, Ewa; Johnson, Bruce

    2015-03-01

    The use of an orthogonal wavelet basis set (Optimized Maximum-N Generalized Coiflets) to effectively model physical systems in the time domain, in particular the electromagnetic (EM) pulse and the quantum mechanical (QM) wavefunction, is examined in this work. Although past research has demonstrated the benefits of wavelet basis sets for computationally expensive problems, owing to their multiresolution properties, the overlapping supports of neighboring wavelet basis functions pose problems when dealing with boundary conditions, especially with material interfaces in the EM case. This talk addresses the issue using the idea of derivative matching with fictitious grid points (T. A. Driscoll and B. Fornberg), but replaces the latter element with fictitious wavelet projections in conjunction with wavelet reconstruction filters. Two-dimensional (2D) systems are analyzed, an EM pulse incident on silver cylinders and a QM electron wave packet circling the proton in a hydrogen atom system (reduced to 2D), and the new wavelet method is compared to the popular finite-difference time-domain technique.

  16. Apollo 13 creativity: in-the-box innovation.

    PubMed

    King, M J

    1997-01-01

    A study of the Apollo 13 mission, based on the themes showcased in the acclaimed 1995 film, reveals the grace under pressure that is the condition of optimal creativity. "Apollo 13 Creativity" is a cultural and creative problem-solving appreciation of the thinking style that made the Apollo mission succeed: creativity under severe limitations. Although creativity is often considered a "luxury good," of concern mainly for personal enrichment, the arts, and performance improvement, in life-or-death situations it is the critical pathway not only to success but to survival. In this case, the original plan for a moon landing had to be transformed within a matter of hours into a return to earth. By precluding failure as an option at the outset, both space and ground crews were forced to adopt a new perspective on their resources and options to solve for a successful landing. This now-classic problem provides a range of principles for creative practice and motivation applicable in any situation. The extreme situation makes these points dramatically.

  17. Time-reversal and Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one respect: it requires quite exhaustive computations, which prohibit its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous ongoing effort to make large inverse tasks like those mentioned above manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetries of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  18. Perturbational and nonperturbational inversion of Rayleigh-wave velocities

    USGS Publications Warehouse

    Haney, Matt; Tsai, Victor C.

    2017-01-01

    The inversion of Rayleigh-wave dispersion curves is a classic geophysical inverse problem. We have developed a set of MATLAB codes that performs forward modeling and inversion of Rayleigh-wave phase or group velocity measurements. We describe two different methods of inversion: a perturbational method based on finite elements and a nonperturbational method based on the recently developed Dix-type relation for Rayleigh waves. In practice, the nonperturbational method can be used to provide a good starting model that can be iteratively improved with the perturbational method. Although the perturbational method is well-known, we solve the forward problem using an eigenvalue/eigenvector solver instead of the conventional approach of root finding. Features of the codes include the ability to handle any mix of phase or group velocity measurements, combinations of modes of any order, the presence of a surface water layer, computation of partial derivatives due to changes in material properties and layer boundaries, and the implementation of an automatic grid of layers that is optimally suited for the depth sensitivity of Rayleigh waves.

  19. On the diverse roles of fluid dynamic drag in animal swimming and flying

    PubMed Central

    2018-01-01

    Questions of energy dissipation or friction appear immediately when addressing the problem of a body moving in a fluid. For the simplest problems, involving a constant steady propulsive force on the body, a straightforward relation can be established balancing this driving force with a skin friction or form drag, depending on the Reynolds number and body geometry. This elementary relation closes the full dynamical problem and sets, for instance, the average cruising velocity or energy cost. In the case of finite-sized and time-deformable bodies, though, such as flapping flyers or undulatory swimmers, the comprehension of driving/dissipation interactions is not straightforward. The intrinsic unsteadiness of flapping and deforming animal bodies complicates the usual application of classical fluid dynamic force balances. One complication is that the shape of the body changes in time, accelerating and decelerating perpetually, but also that the role of drag (more specifically, the local drag) has two different facets, contributing at the same time to global dissipation and to driving forces. This creates situations where strong drag is not necessarily equivalent to an inefficient system; many living systems use strong sources of drag precisely to optimize their performance. In addition to revisiting classical results in the light of recent research on these questions, we discuss in this review the crucial role of drag from another point of view, that of the fluid-structure interaction problem in animal locomotion. We consider, in particular, the dynamic subtleties brought by the quadratic drag that resists transverse motions of a flexible body or appendage performing complex kinematics, such as the phase dynamics of a flexible flapping wing, the propagative nature of the bending wave in undulatory swimmers, or the surprising relevance of drag-based resistive thrust in inertial swimmers. PMID:29445037

  20. Data Analysis Techniques for Physical Scientists

    NASA Astrophysics Data System (ADS)

    Pruneau, Claude A.

    2017-10-01

    Preface; How to read this book; 1. The scientific method; Part I. Foundation in Probability and Statistics: 2. Probability; 3. Probability models; 4. Classical inference I: estimators; 5. Classical inference II: optimization; 6. Classical inference III: confidence intervals and statistical tests; 7. Bayesian inference; Part II. Measurement Techniques: 8. Basic measurements; 9. Event reconstruction; 10. Correlation functions; 11. The multiple facets of correlation functions; 12. Data correction methods; Part III. Simulation Techniques: 13. Monte Carlo methods; 14. Collision and detector modeling; List of references; Index.
