Sample records for constrained optimization techniques

  1. Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings

    NASA Astrophysics Data System (ADS)

    Mader, Charles Alexander

    A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.

  2. Constrained minimization of smooth functions using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Moerder, Daniel D.; Pamadi, Bandu N.

    1994-01-01

    The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
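    A minimal sketch of the conversion described above, assuming a toy equality-constrained problem: the first-order (KKT) necessary conditions are stacked into a residual whose squared norm vanishes exactly at a constrained stationary point, so an evolutionary search can minimize it without constraints. SciPy's differential evolution stands in for the authors' genetic algorithm; the toy problem and all names are illustrative, not from the paper.

    ```python
    # Hedged sketch: recast the necessary conditions for  min f  s.t.  h = 0
    # as an unconstrained minimization of the squared KKT residual.
    # Toy problem: f(x, y) = x^2 + y^2,  h(x, y) = x + y - 1  (solution x = y = 0.5).
    import numpy as np
    from scipy.optimize import differential_evolution

    def kkt_residual(z):
        x, y, lam = z
        # Stationarity (grad f + lam * grad h = 0) and feasibility (h = 0):
        r = np.array([2*x + lam, 2*y + lam, x + y - 1.0])
        return float(r @ r)      # zero exactly at a constrained stationary point

    bounds = [(-5, 5), (-5, 5), (-10, 10)]      # box for (x, y, lambda)
    result = differential_evolution(kkt_residual, bounds, seed=0, tol=1e-12)
    print(result.x)                             # approx [0.5, 0.5, -1.0]
    ```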

  3. An indirect method for numerical optimization using the Kreisselmeier-Steinhauser function

    NASA Technical Reports Server (NTRS)

    Wrenn, Gregory A.

    1989-01-01

    A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to the goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints, and an envelope of the entire function set is computed using the Kreisselmeier-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT algorithm. Advantages of this approach are that unconstrained optimization methods can be used to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from either the feasible or the infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
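    The Kreisselmeier-Steinhauser envelope itself has a standard closed form, sketched below; the reduced-objective construction is simplified here to appending the objective to the constraint set, and the toy functions are illustrative, not from the report.

    ```python
    # Minimal KS-envelope sketch: aggregate the function set into one smooth
    # envelope and minimize it without constraints.
    import numpy as np
    from scipy.optimize import minimize

    RHO = 50.0                # KS draw-down parameter; larger = tighter envelope

    def ks(values, rho=RHO):
        m = np.max(values)    # shift for numerical stability; cancels exactly
        return m + np.log(np.sum(np.exp(rho * (values - m)))) / rho

    def envelope(x):
        f = (x[0] - 1.0)**2 + (x[1] - 2.0)**2     # toy objective (illustrative)
        g = np.array([x[0] + x[1] - 2.0, -x[0]])  # toy constraints g <= 0 (illustrative)
        return ks(np.concatenate(([f], g)))       # one smooth envelope of the set

    result = minimize(envelope, x0=np.array([5.0, 5.0]), method='BFGS')
    print(result.x)
    ```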

  4. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operations research and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA), a swarm intelligence technique that simulates human behavior guided by emotion, is used to solve this class of problems. Simulation results show that the proposed algorithm is effective and efficient for nonlinear constrained programming problems.

  5. High resolution near on-axis digital holography using constrained optimization approach with faster convergence

    NASA Astrophysics Data System (ADS)

    Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu

    2017-09-01

    A constrained optimization approach with faster convergence is proposed to recover the complex object field from a near on-axis digital hologram. After recording the object-beam and reference-beam intensities separately, we subtract the DC term from the hologram. The DC-subtracted hologram is used to recover the complex object information with the proposed approach, and the recovered field is back-propagated to the image plane using the Fresnel back-propagation method. The approach provides higher-resolution images than the conventional Fourier filtering approach and is 25% faster than the previously reported constrained optimization approach, owing to the subtraction of the two DC terms in the cost function. We demonstrate the approach in digital holography (DH) and digital holographic microscopy using a U.S. Air Force resolution target, retrieving high-resolution images free of DC and twin-image interference. We also demonstrate its potential on a transparent microelectrode patterned on indium tin oxide-coated glass by reconstructing a high-resolution quantitative phase microscope image, and by imaging yeast cells.

  6. Necessary conditions for the optimality of variable rate residual vector quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    Residual vector quantization (RVQ), or multistage VQ as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance of RVQ reported in the literature results from the joint optimization of variable-rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources; a Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques reported in the literature.
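    The Lagrangian formulation referred to above has a standard shape (shown here in the general entropy-constrained quantizer form; the paper specializes it to multistage direct-sum code books):

    ```latex
    \min_{Q}\; J \;=\; \mathbb{E}\,d\bigl(X,\hat{X}\bigr) \;+\; \lambda\, H\bigl(\hat{X}\bigr),
    \qquad
    \hat{x}(x) \;=\; \arg\min_{c_i}\; \bigl[\, d(x, c_i) + \lambda\,\ell_i \,\bigr],
    \quad \ell_i = -\log_2 p(c_i),
    ```
    where the first expression trades average distortion against output entropy (rate) and the second is the resulting optimal encoder rule.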

  7. Fitting Prony Series To Data On Viscoelastic Materials

    NASA Technical Reports Server (NTRS)

    Hill, S. A.

    1995-01-01

    An improved method of fitting Prony series to data on viscoelastic materials uses least-squares optimization and yields closer correlation with the data than the traditional method. It involves no assumptions regarding the γ'_i terms and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by partially constrained optimization techniques.
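    A minimal sketch of the least-squares fit, assuming the standard Prony (relaxation-modulus) form G(t) = G_inf + sum_i g_i exp(-t/tau_i); the data, term count, and bounds below are illustrative, not from the report.

    ```python
    # Bound-constrained least-squares Prony fit (positivity of g_i and tau_i
    # plays the role of the partial constraints mentioned in the record).
    import numpy as np
    from scipy.optimize import least_squares

    t = np.logspace(-2, 3, 60)                                 # synthetic time grid
    G_data = 1.0 + 2.0*np.exp(-t/0.1) + 0.5*np.exp(-t/50.0)    # synthetic "data"

    def residual(p):
        G_inf, g1, tau1, g2, tau2 = p
        model = G_inf + g1*np.exp(-t/tau1) + g2*np.exp(-t/tau2)
        return model - G_data

    p0 = np.array([0.5, 1.0, 1.0, 1.0, 10.0])
    fit = least_squares(residual, p0,
                        bounds=([0, 0, 1e-6, 0, 1e-6], np.inf))  # positivity bounds
    print(fit.x)
    ```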

  8. Acceleration techniques in the univariate Lipschitz global optimization

    NASA Astrophysics Data System (ADS)

    Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela

    2016-10-01

    Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information-statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, together with the traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
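    For orientation, the classical geometric (Piyavskii-Shubert) scheme that this line of work accelerates can be sketched in a few lines; the fixed Lipschitz constant below is supplied by hand, whereas the record's local tuning techniques estimate it adaptively.

    ```python
    # Minimal Piyavskii-Shubert-style univariate Lipschitz minimization sketch.
    import numpy as np

    def shubert_minimize(f, a, b, L, n_iter=100):
        xs, fs = [a, b], [f(a), f(b)]
        for _ in range(n_iter):
            order = np.argsort(xs)
            xs = [xs[i] for i in order]
            fs = [fs[i] for i in order]
            best = None
            for xl, xr, fl, fr in zip(xs[:-1], xs[1:], fs[:-1], fs[1:]):
                # Apex of the sawtooth minorant on [xl, xr] and its lower bound:
                x_new = 0.5*(xl + xr) + (fl - fr)/(2.0*L)
                lower = 0.5*(fl + fr) - 0.5*L*(xr - xl)
                if best is None or lower < best[0]:
                    best = (lower, x_new)
            xs.append(best[1])
            fs.append(f(best[1]))
        i = int(np.argmin(fs))
        return xs[i], fs[i]

    # Toy test: |d/dx (sin 3x + 0.5 x)| <= 3.5 on [0, 6].
    print(shubert_minimize(lambda x: np.sin(3*x) + 0.5*x, 0.0, 6.0, L=3.5))
    ```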

  9. Constrained optimization of image restoration filters

    NASA Technical Reports Server (NTRS)

    Riemer, T. E.; Mcgillem, C. D.

    1973-01-01

    A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.

  10. Determination of the Conservation Time of Periodicals for Optimal Shelf Maintenance of a Library.

    ERIC Educational Resources Information Center

    Miyamoto, Sadaaki; Nakayama, Kazuhiko

    1981-01-01

    Presents a method based on a constrained optimization technique that determines the time of removal of scientific periodicals from the shelf of a library. A geometrical interpretation of the theoretical result is given, and a numerical example illustrates how the technique is applicable to real bibliographic data. (FM)

  11. Trajectory optimization and guidance law development for national aerospace plane applications

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Flandro, G. A.; Corban, J. E.

    1988-01-01

    The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.

  12. A "Reverse-Schur" Approach to Optimization With Linear PDE Constraints: Application to Biomolecule Analysis and Design.

    PubMed

    Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K

    2009-01-01

    We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts: in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
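    The "all-at-once" idea behind such PDE-constrained co-optimization can be illustrated on a dense toy problem: instead of re-solving the simulation for every optimizer step, the optimality and simulation equations are solved simultaneously as one saddle-point (KKT) system. The matrices below are random stand-ins, not the paper's boundary-integral operators.

    ```python
    # Toy "co-optimization": min_x 0.5 x^T Q x - c^T x  s.t.  A x = b,
    # solved in one shot via the KKT system  [Q A^T; A 0] [x; lam] = [c; b].
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 8, 3
    M = rng.standard_normal((n, n))
    Q = M @ M.T + n*np.eye(n)                  # SPD objective Hessian
    c = rng.standard_normal(n)
    A = rng.standard_normal((m, n))            # linear "simulation" constraint
    b = rng.standard_normal(m)

    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([c, b]))
    x, lam = sol[:n], sol[n:]
    print(np.allclose(A @ x, b))               # constraint holds at the optimum
    ```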

  13. Stress-Constrained Structural Topology Optimization with Design-Dependent Loads

    NASA Astrophysics Data System (ADS)

    Lee, Edmund

    Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure under predefined fixed loads. The present work investigates the effect of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed that iteratively connects points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of the two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.

  14. A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Ortiz, Francisco

    2004-01-01

    COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms, including the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP), and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique based on the principles of natural selection, or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operations such as recombination, mutation, and selection, the GA creates successive generations of solutions that evolve, take on the positive characteristics of their parents, and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm into COMETBOARDS, which casts the design of structures as a constrained nonlinear optimization problem. One method of solving a constrained optimization problem with a GA is to convert it into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions, as illustrated below. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach, which is then compared with the existing penalty functions.
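    A minimal sketch of the penalty-function conversion, assuming a quadratic exterior penalty and a deliberately simple GA; the toy problem, penalty weight, and operators are illustrative and are not COMETBOARDS code.

    ```python
    # Fold constraint violations into the fitness, then search with a plain GA.
    import numpy as np

    rng = np.random.default_rng(1)

    def f(x):                                   # toy objective
        return (x[0] - 1.0)**2 + (x[1] + 0.5)**2

    def violation(x):                           # toy constraint: x0 + x1 <= 0.2
        return max(0.0, x[0] + x[1] - 0.2)

    def fitness(x, r=1e3):                      # quadratic exterior penalty
        return f(x) + r * violation(x)**2

    pop = rng.uniform(-3, 3, size=(40, 2))
    for _ in range(200):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[:20]]          # truncation selection
        a = parents[rng.integers(0, 20, size=40)]
        b = parents[rng.integers(0, 20, size=40)]
        w = rng.uniform(size=(40, 1))
        pop = w*a + (1.0 - w)*b                         # blend crossover
        pop += rng.normal(0.0, 0.05, size=pop.shape)    # Gaussian mutation
    best = min(pop, key=fitness)
    print(best, fitness(best))    # near (0.85, -0.65), the constrained optimum
    ```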

  15. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    PubMed

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective technique to estimate parameter or model ensembles: the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization, and it can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems, and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.

  16. Solution techniques for transient stability-constrained optimal power flow – Part II

    DOE PAGES

    Geng, Guangchao; Abhyankar, Shrirang; Wang, Xiaoyu; ...

    2017-06-28

    Transient stability-constrained optimal power flow is an important emerging problem as power systems are pushed to their limits for economic benefit, interconnections grow denser and larger, and inertia is reduced by the expected proliferation of renewable energy resources. In this study, two more approaches are presented: single machine equivalent and computational intelligence. Various application areas and future directions in this research area are also discussed. Finally, a comprehensive resource of the available literature, publicly available test systems, and relevant numerical libraries is provided.

  17. Convex optimisation approach to constrained fuel optimal control of spacecraft in close relative motion

    NASA Astrophysics Data System (ADS)

    Massioni, Paolo; Massari, Mauro

    2018-05-01

    This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited to problems with linear dynamic equations, and therefore fits the case of spacecraft flying in close relative motion perfectly. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials, as sketched below. Such a technique allows polynomial bounding problems to be rewritten as convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the technique are explained and some applications related to spacecraft flying in close relative motion are shown.
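    The key reduction can be stated compactly (schematic form; the certificate is sufficient in general and exact for univariate polynomials such as functions of time): a polynomial bound is replaced by a sum-of-squares condition, which is semidefinite and hence convex,

    ```latex
    p(t) \ge 0 \;\; \forall t
    \quad \Longleftarrow \quad
    p(t) \;=\; z(t)^{\mathsf T} Q\, z(t), \;\; Q \succeq 0,
    \qquad z(t) = \bigl(1,\, t,\, t^{2}, \dots, t^{d}\bigr)^{\mathsf T}.
    ```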

  18. Constrained multi-objective optimization of storage ring lattices

    NASA Astrophysics Data System (ADS)

    Husain, Riyasat; Ghodke, A. D.

    2018-03-01

    The storage ring lattice optimization is a class of constrained multi-objective optimization problem in which, in addition to low beam emittance, a large dynamic aperture for good injection efficiency and improved beam lifetime are also desirable. Convergence and computation times are of great concern for the optimization algorithms, as various objectives must be optimized and a number of accelerator parameters varied over a large span, with several constraints. In this paper, a study of storage ring lattice optimization using differential evolution is presented. The optimization results are compared with the two most widely used optimization techniques in accelerators: genetic algorithm and particle swarm optimization. It is found that differential evolution produces a better Pareto optimal front in reasonable computation time between two conflicting objectives: beam emittance and the dispersion function in the straight section. Differential evolution was used extensively for the optimization of the linear and nonlinear lattices of Indus-2, exploring various operational modes within the magnet power supply capabilities.
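    For reference, the classical DE/rand/1/bin update used by differential evolution (standard form, not specific to this lattice study): each member x_i is challenged by a trial vector built from three distinct random members,

    ```latex
    v_i \;=\; x_{r_1} + F\,\bigl(x_{r_2} - x_{r_3}\bigr),
    \qquad
    u_{i,j} \;=\;
    \begin{cases}
      v_{i,j} & \text{if } \operatorname{rand}_j \le C_R \ \text{or}\ j = j_{\text{rand}},\\[2pt]
      x_{i,j} & \text{otherwise},
    \end{cases}
    ```
    with x_i replaced by u_i whenever the trial point is no worse under the selection criterion.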

  19. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.

  20. CLFs-based optimization control for a class of constrained visual servoing systems.

    PubMed

    Song, Xiulan; Fu, Miaomiao

    2017-03-01

    In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e., translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal value of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system (a sketch of the search follows below). Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that there is no need to compute online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method.
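    The classical Fibonacci search the record relies on is easy to sketch; the cost J below is an illustrative unimodal stand-in, not the paper's servo criterion.

    ```python
    # Fibonacci search for the minimizer of a unimodal scalar cost on [a, b].
    def fibonacci_search(J, a, b, n=25):
        F = [1, 1]
        while len(F) <= n:
            F.append(F[-1] + F[-2])
        x1 = a + (F[n-2] / F[n]) * (b - a)
        x2 = a + (F[n-1] / F[n]) * (b - a)
        f1, f2 = J(x1), J(x2)
        for i in range(1, n - 1):
            if f1 > f2:                          # minimum lies in [x1, b]
                a, x1, f1 = x1, x2, f2
                x2 = a + (F[n-1-i] / F[n-i]) * (b - a)
                f2 = J(x2)
            else:                                # minimum lies in [a, x2]
                b, x2, f2 = x2, x1, f1
                x1 = a + (F[n-2-i] / F[n-i]) * (b - a)
                f1 = J(x1)
        return 0.5 * (a + b)

    print(fibonacci_search(lambda k: (k - 0.7)**2 + 0.1, 0.0, 2.0))   # ~0.7
    ```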

  21. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms is a set of variations of conjugate-gradient methods. At first, constrained optimal control problems could be solved only by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared, among them methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  22. Calculation of Pareto-optimal solutions to multiple-objective problems using threshold-of-acceptability constraints

    NASA Technical Reports Server (NTRS)

    Giesy, D. P.

    1978-01-01

    A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
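    A minimal sketch of the staged scheme, assuming toy objectives: each stage minimizes one objective while a threshold of acceptability on the other is tightened, tracing out Pareto points one single-objective solve at a time.

    ```python
    # Threshold-of-acceptability sweep over two toy objectives.
    import numpy as np
    from scipy.optimize import minimize

    f1 = lambda x: (x[0] - 1.0)**2 + x[1]**2
    f2 = lambda x: (x[0] + 1.0)**2 + x[1]**2

    pareto = []
    for eps in np.linspace(4.0, 0.5, 8):         # acceptability thresholds on f2
        res = minimize(f1, x0=np.zeros(2),
                       constraints=[{'type': 'ineq',
                                     'fun': lambda x, e=eps: e - f2(x)}])
        pareto.append((round(f1(res.x), 3), round(f2(res.x), 3)))
    print(pareto)                                # sweeps along the Pareto front
    ```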

  23. An efficient interior-point algorithm with new non-monotone line search filter method for nonlinear constrained programming

    NASA Astrophysics Data System (ADS)

    Wang, Liwei; Liu, Xinggao; Zhang, Zeyin

    2017-02-01

    An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique leads to relaxed step acceptance conditions and improved convergence performance, and it avoids the choice of an upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
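    For orientation, a typical filter acceptance test (schematic; the paper's non-monotone variant relaxes it further): with constraint violation theta(x) = ||c(x)||, a trial point x+ is acceptable if it sufficiently improves either measure against every stored pair (theta_j, f_j),

    ```latex
    \theta(x^{+}) \;\le\; (1-\gamma_{\theta})\,\theta_j
    \qquad \text{or} \qquad
    f(x^{+}) \;\le\; f_j - \gamma_f\,\theta_j,
    \qquad \text{for all } j \in \mathcal{F}.
    ```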

  24. Optimal mistuning for enhanced aeroelastic stability of transonic fans

    NASA Technical Reports Server (NTRS)

    Hall, K. C.; Crawley, E. F.

    1983-01-01

    An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning; hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians (sketched below), with the unconstrained minimization phase solved by the variable metric method. The bladed disk was modelled as a rigid disk mounted on a rigid shaft, and each blade was modelled with a single torsional degree of freedom.
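    The augmented-Lagrangian construction used for the constrained phase has a standard form (shown for equality-style constraints; inequalities such as the margin and positivity bounds use a clipped variant):

    ```latex
    L_A(m,\lambda;\rho) \;=\; f(m) \;-\; \lambda^{\mathsf T} c(m)
      \;+\; \tfrac{\rho}{2}\,\lVert c(m)\rVert^{2},
    \qquad
    \lambda \;\leftarrow\; \lambda - \rho\, c(m_k)\ \ \text{after each inner minimization,}
    ```
    so that each outer iteration is an unconstrained problem, here handled by the variable metric method.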

  25. Optimal Guaranteed Cost Sliding Mode Control for Constrained-Input Nonlinear Systems With Matched and Unmatched Disturbances.

    PubMed

    Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang

    2018-06-01

    Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.

  26. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
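    The robustness measure in question has a standard multiloop form: with the loop broken at the plant input, the minimum singular value of the return difference matrix bounds the tolerable simultaneous gain and phase perturbations,

    ```latex
    \sigma_{\min}\!\left[\, I + K(j\omega)\,G(j\omega) \,\right] \;\ge\; \alpha
    \qquad \text{for all } \omega,
    ```
    and the design procedure increases alpha using the analytic gradients of the singular values with respect to the controller design variables.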

  27. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints, as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.

  28. A prototype upper-atmospheric data assimilation scheme based on optimal interpolation: 2. Numerical experiments

    NASA Astrophysics Data System (ADS)

    Akmaev, R. A.

    1999-04-01

    In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related techniques of data assimilation based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and to obtain estimates of the true state, from a set of observations in a given domain in space and/or time, that are optimal in some sense. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually, the criterion of optimality is minimum variance of the expected errors, and the whole approach may be considered constrained least squares or least squares with a priori information. Data assimilation techniques capable of incorporating additional information are potentially superior to techniques that have no access to such information, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
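    The minimum-variance estimate at the heart of OI has the standard form (background x_b with error covariance B, observations y with error covariance R, observation operator H):

    ```latex
    x_a \;=\; x_b + K\,\bigl(y - H\,x_b\bigr),
    \qquad
    K \;=\; B\,H^{\mathsf T}\bigl(H\,B\,H^{\mathsf T} + R\bigr)^{-1},
    ```
    which reverts toward conventional least squares as the a priori background information weakens.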

  29. Multidisciplinary Optimization Approach for Design and Operation of Constrained and Complex-shaped Space Systems

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young

    The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraints, designers adopt deployable configurations, which result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, design optimization systems are poorly integrated with operational optimization and with the maximization of spacecraft utility in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees that the behavior designed on the ground is achieved in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique, called the "Attitude Sphere", is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphic acceleration facilitates accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the result shows that photovoltaic power generation can be increased by more than 9%. Building on this initial methodology, the goal is extended from single-discipline optimization to multidisciplinary optimization, covering the design and operation of the EPS, the Attitude Determination and Control System (ADCS), and the communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue, controllers based on Model Predictive Control, which is effective for constraint handling, were developed and implemented for ADCS operations. All the suggested design and operation methodologies are applied to "CADRE", a space weather mission scheduled for operation in 2016. This application demonstrates the usefulness of the methodology in enhancing CADRE's capabilities and its applicability to a variety of missions.

  30. Tomographic inversion techniques incorporating physical constraints for line integrated spectroscopy in stellarators and tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pablant, N. A.; Bell, R. E.; Bitter, M.

    2014-11-15

    Accurate tomographic inversion is important for diagnostic systems on stellarators and tokamaks which rely on measurements of line integrated emission spectra. A tomographic inversion technique based on spline optimization with enforcement of constraints is described that can produce unique and physically relevant inversions even in situations with noisy or incomplete input data. This inversion technique is routinely used in the analysis of data from the x-ray imaging crystal spectrometer (XICS) installed at the Large Helical Device. The XICS diagnostic records a 1D image of line integrated emission spectra from impurities in the plasma. Through the use of Doppler spectroscopy and tomographic inversion, XICS can provide profile measurements of the local emissivity, temperature, and plasma flow. Tomographic inversion requires the assumption that these measured quantities are flux surface functions, and that a known plasma equilibrium reconstruction is available. In the case of low signal levels or partial spatial coverage of the plasma cross-section, standard inversion techniques utilizing matrix inversion and linear regularization often cannot produce unique and physically relevant solutions. The addition of physical constraints, such as parameter ranges, derivative directions, and boundary conditions, allows unique solutions to be found reliably. The constrained inversion technique described here utilizes a modified Levenberg-Marquardt optimization scheme, which introduces a condition avoidance mechanism by selective reduction of search directions. The constrained inversion technique also allows for the addition of more complicated parameter dependencies, for example, geometrical dependence of the emissivity due to asymmetries in the plasma density arising from fast rotation. The accuracy of this constrained inversion technique is discussed, with an emphasis on its applicability to systems with limited plasma coverage.

  31. Optimization of Stability Constrained Geometrically Nonlinear Shallow Trusses Using an Arc Length Sparse Method with a Strain Energy Density Approach

    NASA Technical Reports Server (NTRS)

    Hrinda, Glenn A.; Nguyen, Duc T.

    2008-01-01

    A technique for the optimization of stability constrained geometrically nonlinear shallow trusses with snap through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in literature are presented to verify the method.

  32. OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods

    NASA Technical Reports Server (NTRS)

    Heath, Christopher M.; Gray, Justin S.

    2012-01-01

    The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained, single-objective and a constrained, multiobjective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
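    A minimal OpenMDAO setup in the same spirit (constrained, single-objective) can be written with the library's current om.* API, which postdates the 2012 framework version described here; the paraboloid is the library's canonical toy component, not the aircraft/engine model.

    ```python
    # Constrained single-objective problem in OpenMDAO (current API).
    import openmdao.api as om

    prob = om.Problem()
    prob.model.add_subsystem('parab',
                             om.ExecComp('f = (x - 3)**2 + x*y + (y + 4)**2 - 3'),
                             promotes=['*'])
    prob.model.add_subsystem('con', om.ExecComp('g = x + y'), promotes=['*'])

    prob.driver = om.ScipyOptimizeDriver()
    prob.driver.options['optimizer'] = 'SLSQP'
    prob.model.add_design_var('x', lower=-50, upper=50)
    prob.model.add_design_var('y', lower=-50, upper=50)
    prob.model.add_objective('f')
    prob.model.add_constraint('g', upper=0.0)

    prob.setup()
    prob.run_driver()
    print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))
    ```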

  33. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  34. Fluence map optimization (FMO) with dose-volume constraints in IMRT using the geometric distance sorting method.

    PubMed

    Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang

    2012-10-21

    A new heuristic algorithm based on the so-called geometric distance sorting technique is proposed for solving fluence map optimization with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The framework of the proposed method is an iterative process which begins with a simple linearly constrained quadratic optimization model that ignores the dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added into the quadratic optimization model, step by step, until all the dose-volume constraints are satisfied. In each iteration step, an interior point method is adopted to solve each new linearly constrained quadratic program. To choose proper candidate voxels for the current constraint-adding step, a so-called geometric distance, defined in the transformed standard quadratic form of the fluence map optimization model, is used to guide the selection of the voxels. The new geometric distance sorting technique can mostly reduce the unexpected increase of the objective function value caused inevitably by the constraint adding, and can be regarded as an upgrade of the traditional dose sorting technique. The geometric explanation for the proposed method is given, and a proposition is proved to support the heuristic idea. In addition, a smart constraint adding/deleting strategy is designed to ensure stable convergence of the iteration. The new algorithm is tested on four cases (head-neck, prostate, lung, and oropharyngeal) and compared with the algorithm based on the traditional dose sorting technique. Experimental results show that the proposed method is more suitable for guiding the selection of new constraints than the traditional dose sorting method, especially for cases whose target regions have non-convex shapes, and is to some extent a more efficient optimization technique for choosing constraints. By integrating a smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving fluence map optimization with dose-volume constraints.

  35. Constrained trajectory optimization for kinematically redundant arms

    NASA Technical Reports Server (NTRS)

    Carignan, Craig R.; Tarrant, Janice M.

    1990-01-01

    Two velocity optimization schemes for resolving redundant joint configurations are compared. The Extended Moore-Penrose Technique minimizes the joint velocities and avoids obstacles indirectly by adjoining a cost gradient to the solution. A new method can incorporate inequality constraints directly to avoid obstacles and singularities in the workspace. A four-link arm example is used to illustrate singularity avoidance while tracking desired end-effector paths.
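    In standard redundancy-resolution notation, the first scheme augments the minimum-velocity (pseudoinverse) solution with a cost gradient projected into the Jacobian null space, so the secondary objective never disturbs end-effector tracking:

    ```latex
    \dot{q} \;=\; J^{+}\,\dot{x} \;+\; \bigl(I - J^{+}J\bigr)\,k\,\nabla H(q),
    ```
    where J+ is the Moore-Penrose pseudoinverse and H is the obstacle-clearance cost; the new method instead imposes the obstacle and singularity conditions directly as inequality constraints on the velocity optimization.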

  36. On the utilization of engineering knowledge in design optimization

    NASA Technical Reports Server (NTRS)

    Papalambros, P.

    1984-01-01

    Some current research work conducted at the University of Michigan is described to illustrate efforts to incorporate knowledge in optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information into a local active set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems. The technique used combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user in creating configurations that are most desirable from the manufacturing-assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis intended for use in the design concept phase, in a way similar to the so-called idea triggers in creativity-enhancement techniques like brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.

  37. SU-G-BRA-08: Diaphragm Motion Tracking Based On KV CBCT Projections with a Constrained Linear Regression Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, J; Chao, M

    2016-06-15

    Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancy, the enabling phenomenology in video compression and encoding techniques, is inherent in the dynamic properties of diaphragm motion; together with the geometric shape of the diaphragm boundary, it yields an algebraic constraint that significantly reduces the search space of viable parabolic parameters, which can then be optimized effectively by a constrained linear regression approach on the subsequent projections (see the sketch below). The algebraic constraints stipulating the kinetic range of the motion, and the spatial constraint preventing unphysical deviations, allow the optimal contour of the diaphragm to be obtained with minimal initialization. The algorithm was assessed on a fluoroscopic movie acquired in a fixed anterior-posterior direction and on kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm over all enrolled patients. The submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images. Conclusion: The new algorithm provides a potential solution for rendering diaphragm motion and ultimately improving tumor motion management in radiation therapy of cancer patients.
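    A minimal sketch of the constrained-regression step, assuming a parabola u = a v^2 + b v + c fit to candidate edge pixels with box bounds that keep the coefficients near the previous projection's fit (the temporal-redundancy constraint); data and bound widths are illustrative.

    ```python
    # Bounded linear least squares for the per-projection parabola fit.
    import numpy as np
    from scipy.optimize import lsq_linear

    rng = np.random.default_rng(2)
    v = np.linspace(-40, 40, 81)                    # detector column coordinates
    u_edge = 0.012*v**2 + 0.3*v + 85 + rng.normal(0, 0.5, v.size)  # noisy edge samples

    A = np.column_stack([v**2, v, np.ones_like(v)]) # linear design matrix
    p_prev = np.array([0.011, 0.28, 84.0])          # fit from the previous projection
    delta = np.array([0.005, 0.10, 3.0])            # allowed per-frame change

    fit = lsq_linear(A, u_edge, bounds=(p_prev - delta, p_prev + delta))
    print(fit.x)                                    # constrained (a, b, c)
    ```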

  38. A greedy algorithm for species selection in dimension reduction of combustion chemistry

    NASA Astrophysics Data System (ADS)

    Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.

    2010-09-01

    Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
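    The greedy selection itself is generic and easy to sketch; `reduction_error` below is a hypothetical stand-in for the expensive RCCE error evaluation against the detailed mechanism, and the species names and weights are toy data.

    ```python
    # Greedy forward selection: add the species whose inclusion most reduces
    # the dimension-reduction error, one constraint at a time.
    def greedy_constrained_species(all_species, n_constraints, reduction_error):
        selected = []
        for _ in range(n_constraints):
            candidates = [s for s in all_species if s not in selected]
            best = min(candidates, key=lambda s: reduction_error(selected + [s]))
            selected.append(best)
        return selected

    # Toy error model (illustrative only).
    weights = {'A': 5.0, 'B': 4.0, 'C': 3.0, 'D': 2.5, 'E': 2.0, 'F': 1.0}
    toy_error = lambda chosen: 10.0 - sum(weights.get(s, 0.5) for s in chosen)
    print(greedy_constrained_species(list(weights), 4, toy_error))  # ['A', 'B', 'C', 'D']
    ```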

  39. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method, as outlined below. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is thereby accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
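    The null-space reduction at the core of the method has a standard form: with linearized equality constraints A δx = -c(x_k) and Z a basis for the null space of A (AZ = 0), every step splits into a range-space part that restores feasibility and a null-space part found from an unconstrained (or merely bound-constrained) subproblem,

    ```latex
    \delta x \;=\; Y\,p_Y \;+\; Z\,p_Z,
    \qquad
    A\,Y\,p_Y \;=\; -\,c(x_k)\ \Rightarrow\ p_Y \ \text{fixed},
    \quad
    p_Z \;=\; \arg\min_{p}\ q\bigl(Y p_Y + Z p\bigr),
    ```
    where q is the local quadratic model of the objective.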

  4. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EAs with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EAs. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for constrained optimization with EAs. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential.
The developed optimization scenarios and tools can be used to approach similar problems.

  5. Role of slack variables in quasi-Newton methods for constrained optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tapia, R.A.

    In constrained optimization the technique of converting an inequality constraint into an equality constraint by the addition of a squared slack variable is well known but rarely used. In choosing an active constraint philosophy over the slack variable approach, researchers quickly justify their choice with the standard criticisms: the slack variable approach increases the dimension of the problem, is numerically unstable, and gives rise to singular systems. It is shown that these criticisms of the slack variable approach need not apply and that the two seemingly distinct approaches are actually very closely related. In fact, the squared slack variable formulation can be used to develop a superior and more comprehensive active constraint philosophy.
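
    The conversion at issue is easy to state concretely: min f(x) subject to g(x) <= 0 becomes min f(x) subject to g(x) + s^2 = 0. The toy demo below, with a made-up quadratic objective and linear constraint, solves both forms with SciPy's SLSQP and recovers the same minimizer:

        import numpy as np
        from scipy.optimize import minimize

        f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2  # objective
        g = lambda x: x[0] + x[1] - 2.0                  # inequality g(x) <= 0

        # Inequality form (SLSQP expects constraints written as fun(x) >= 0).
        ineq = minimize(f, [0.0, 0.0], method="SLSQP",
                        constraints={"type": "ineq", "fun": lambda x: -g(x)})

        # Squared-slack form on the augmented variable z = (x0, x1, s).
        eq = minimize(lambda z: f(z[:2]), [0.0, 0.0, 1.0], method="SLSQP",
                      constraints={"type": "eq", "fun": lambda z: g(z[:2]) + z[2]**2})

        print(ineq.x, eq.x[:2])  # both converge to the same point, (1.5, 0.5)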

  6. Memory and Energy Optimization Strategies for Multithreaded Operating System on the Resource-Constrained Wireless Sensor Node

    PubMed Central

    Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng

    2015-01-01

    Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using the stack-shifting hybrid scheduling approach. Different from the traditional multithreaded OS, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, memory waste problems caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism which can decrease both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared with that of a traditional multithreaded OS. Not only is the memory cost optimized in LiveOS, but so is the energy cost; this is achieved by using the multi-core “context aware” and multi-core “power-off/wakeup” energy conservation approaches. By using these approaches, the energy cost of LiveOS can be reduced by more than 30% compared with a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264

  7. Optimal Sensor Management and Signal Processing for New EMI Systems

    DTIC Science & Technology

    2010-09-01

    adaptive techniques that would improve the speed of data collection and increase the mobility of a TEMTADS system. Although an active learning technique...data, SIG has simulated the active selection based on the data already collected at Camp SLO. In this setup, the active learning approach was constrained...to work only on a 5x5 grid (corresponding to twenty five transmitters and co-located receivers). The first technique assumes that active learning will

  8. Applications of Evolutionary Technology to Manufacturing and Logistics Systems : State-of-the Art Survey

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin

    Many combinatorial optimization problems from industrial engineering and operations research in the real world are very complex in nature and quite hard to solve by conventional techniques. Since the 1960s, there has been an increasing interest in imitating living beings to solve such hard combinatorial optimization problems. Simulating the natural evolutionary process of human beings results in stochastic optimization techniques called evolutionary algorithms (EAs), which can often outperform conventional optimization methods when applied to difficult real-world problems. In this survey paper, we provide a comprehensive survey of the current state of the art in the use of EAs in manufacturing and logistics systems. In order to demonstrate that EAs are powerful and broadly applicable stochastic search and optimization techniques, we deal with the following engineering design problems: transportation planning models, layout design models, and two-stage logistics models in logistics systems; and job-shop scheduling and resource-constrained project scheduling in manufacturing systems.

  9. PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Satyabrata

    We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.

  10. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems as well as implemented on parallel computers. In this paper, both techniques are applied to problems with bound constraints, and we compare their practical behavior.
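
    For orientation, the classical method the comparison starts from can be sketched in a few lines: for bound constraints x >= 0, minimize f(x) - mu * sum(log(x)) while shrinking mu. The quadratic objective below is hypothetical; the worsening conditioning as mu -> 0 is precisely the issue the two methods above address.

        import numpy as np
        from scipy.optimize import minimize

        f = lambda x: (x[0] - 1.0)**2 + (x[1] + 0.5)**2  # x[1] >= 0 is the binding bound

        def barrier_solve(f, x0, mu=1.0, shrink=0.1, rounds=6):
            x = np.asarray(x0, dtype=float)
            for _ in range(rounds):
                # Log barrier keeps iterates strictly feasible; infeasible points get +inf.
                phi = lambda x, mu=mu: (np.inf if np.any(x <= 0.0)
                                        else f(x) - mu * np.sum(np.log(x)))
                x = minimize(phi, x, method="Nelder-Mead").x
                mu *= shrink  # tighten the barrier; conditioning worsens as mu -> 0
            return x

        print(barrier_solve(f, [0.5, 0.5]))  # approaches (1.0, 0.0)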

  11. Improved helicopter aeromechanical stability analysis using segmented constrained layer damping and hybrid optimization

    NASA Astrophysics Data System (ADS)

    Liu, Qiang; Chattopadhyay, Aditi

    2000-06-01

    Aeromechanical stability plays a critical role in helicopter design, and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained damping layer (SCL) treatment and composite tailoring is investigated for improved rotor aeromechanical stability using a formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface-bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in air resonance analysis under hover conditions. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface-bonded SCLs for improved damping characteristics. Parameters such as the stacking sequence of the composite laminates and the placement of SCLs are used as design variables. Detailed numerical studies are presented for aeromechanical stability analysis. It is shown that the optimum blade design yields a significant increase in rotor lead-lag regressive modal damping compared to the initial system.

  12. Optimal vibration control of a rotating plate with self-sensing active constrained layer damping

    NASA Astrophysics Data System (ADS)

    Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng

    2012-04-01

    This paper proposes a finite element model for an optimally controlled constrained layer damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties in both the time and frequency domains. Constrained layer damping with viscoelastic material can effectively reduce the vibration in rotating structures. However, most existing research models use the complex modulus approach to model viscoelastic material, and an additional iterative approach, which is only available in the frequency domain, has to be used to include the material's frequency dependency. It is meaningful to model the viscoelastic damping layer in the rotating part by using anelastic displacement fields (ADF) in order to include the frequency dependency in both the time and frequency domains. Also, unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator under a linear quadratic regulator (LQR) controller. After being compared with verified data, this newly proposed finite element model is validated and can be used for future research.

  13. Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.

    PubMed

    Heydari, Ali; Balakrishnan, Sivasubramanya N

    2013-01-01

    To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute the optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for: 1) the reinforcement learning-based training method to the optimal solution; 2) the training error; and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples, including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights and provides comprehensive feedback solutions online, though it is trained offline.

  14. Improving Robot Locomotion Through Learning Methods for Expensive Black-Box Systems

    DTIC Science & Technology

    2013-11-01

    development of a class of “gradient free” optimization techniques; these include local approaches, such as a Nelder-Mead simplex search (c.f. [73]), and global... Note that this simple method differs from the Nelder-Mead constrained nonlinear optimization method [73]. ...the Non-dominated Sorting Genetic Algorithm... Kober, and Jan Peters. Model-free inverse reinforcement learning. In International Conference on Artificial Intelligence and Statistics, 2011. [12] George

  15. The use of optimization techniques to design controlled diffusion compressor blading

    NASA Technical Reports Server (NTRS)

    Sanger, N. L.

    1982-01-01

    A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objectives and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.

  16. An adaptive finite element method for the inequality-constrained Reynolds equation

    NASA Astrophysics Data System (ADS)

    Gustafsson, Tom; Rajagopal, Kumbakonam R.; Stenberg, Rolf; Videman, Juha

    2018-07-01

    We present a stabilized finite element method for the numerical solution of cavitation in lubrication, modeled as an inequality-constrained Reynolds equation. The cavitation model is written as a variable coefficient saddle-point problem and approximated by a residual-based stabilized method. Based on our recent results on the classical obstacle problem, we present optimal a priori estimates and derive novel a posteriori error estimators. The method is implemented as a Nitsche-type finite element technique and shown in numerical computations to be superior to the usually applied penalty methods.

  17. WE-AB-209-12: Quasi Constrained Multi-Criteria Optimization for Automated Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, W.T.; Siebers, J.V.

    Purpose: To introduce quasi-constrained Multi-Criteria Optimization (qcMCO) for unsupervised radiation therapy optimization which generates alternative patient-specific plans emphasizing dosimetric tradeoffs and conformance to clinical constraints for multiple delivery techniques. Methods: For N Organs At Risk (OARs) and M delivery techniques, qcMCO generates M(N+1) alternative treatment plans per patient. Objective weight variations for OARs and targets are used to generate alternative qcMCO plans. For 30 locally advanced lung cancer patients, qcMCO plans were generated for dosimetric tradeoffs to four OARs: each lung, heart, and esophagus (N=4) and 4 delivery techniques (simple 4-field arrangements, 9-field coplanar IMRT, 27-field non-coplanar IMRT, and non-coplanar Arc IMRT). Quasi-constrained objectives included target prescription isodose to 95% (PTV-D95), maximum PTV dose (PTV-Dmax) < 110% of prescription, and spinal cord Dmax < 45 Gy. The algorithm’s ability to meet these constraints while simultaneously revealing dosimetric tradeoffs was investigated. Statistically significant dosimetric tradeoffs were defined such that the coefficient of determination between dosimetric indices which varied by at least 5 Gy between different plans was >0.8. Results: The qcMCO plans varied mean dose by >5 Gy to ipsilateral lung for 24/30 patients, contralateral lung for 29/30 patients, esophagus for 29/30 patients, and heart for 19/30 patients. In the 600 plans computed without human interaction, average PTV-D95=67.4±3.3 Gy, PTV-Dmax=79.2±5.3 Gy, and spinal cord Dmax was >45 Gy in 93 plans (>50 Gy in 2/600 plans). Statistically significant dosimetric tradeoffs were evident in 19/30 plans, including multiple tradeoffs of at least 5 Gy between multiple OARs in 7/30 cases. The most common statistically significant tradeoff was increasing PTV-Dmax to reduce OAR dose (15/30 patients). Conclusion: The qcMCO method can conform to quasi-constrained objectives while revealing significant variations in OAR doses including mean dose reductions >5 Gy. Clinical implementation will facilitate patient-specific decision making based on achievable dosimetry as opposed to accept/reject models based on population derived objectives.

  18. Unscented Sampling Techniques For Evolutionary Computation With Applications To Astrodynamic Optimization

    DTIC Science & Technology

    2016-09-01

    to both genetic algorithms and evolution strategies to achieve these goals. The results of this research offer a promising new set of modified ...abs_all.jsp?arnumber=203904 [163] Z. Michalewicz, C. Z. Janikow, and J. B. Krawczyk, “A modified genetic algorithm for optimal control problems...Available: http://arc.aiaa.org/doi/abs/10.2514/2.7053 [166] N. Yokoyama and S. Suzuki, “Modified genetic algorithm for constrained trajectory

  19. Design of a candidate flutter suppression control law for DAST ARW-2. [Drones for Aerodynamic and Structural Testing Aeroelastic Research Wing

    NASA Technical Reports Server (NTRS)

    Adams, W. M., Jr.; Tiffany, S. H.

    1983-01-01

    A control law is developed to suppress symmetric flutter for a mathematical model of an aeroelastic research vehicle. An implementable control law is attained by including modified LQG (linear quadratic Gaussian) design techniques, controller order reduction, and gain scheduling. An alternate (complementary) design approach is illustrated for one flight condition wherein nongradient-based constrained optimization techniques are applied to maximize controller robustness.

  20. Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space

    DTIC Science & Technology

    2015-05-01

    ARL-TR-7294 • MAY 2015 • US Army Research Laboratory. Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space, by Berend Christopher...

  1. Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David

    2016-01-01

    Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.

  2. Multivariate constrained shape optimization: Application to extrusion bell shape for pasta production

    NASA Astrophysics Data System (ADS)

    Sarghini, Fabrizio; De Vivo, Angela; Marra, Francesco

    2017-10-01

    Computational science and engineering methods have allowed a major change in the way products and processes are designed, as validated virtual models - capable of simulating the physical, chemical and biological changes occurring during production processes - can be realized and used in place of real prototypes and experiments, which are often time and money consuming. Among such techniques, Optimal Shape Design (OSD) (Mohammadi & Pironneau, 2004) represents an interesting approach. While most classical numerical simulations consider fixed geometrical configurations, in OSD a certain number of geometrical degrees of freedom are considered as part of the unknowns: this implies that the geometry is not completely defined, but part of it is allowed to move dynamically in order to minimize or maximize the objective function. The applications of OSD are countless. For systems governed by partial differential equations, they range from structural mechanics to electromagnetism and fluid mechanics, or to a combination of the three. This paper presents one possible application of OSD, namely how the extrusion bell shape for pasta production can be designed by applying multivariate constrained shape optimization.

  3. Digital robust control law synthesis using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivekananda

    1989-01-01

    Development of digital robust control laws for active control of high-performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large-order state space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to be implementable on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and a constrained optimization technique. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses, and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can be used for sensitivity studies and may be integrated into a simultaneous structure and control optimization scheme.

  4. Top-down estimate of dust emissions through integration of MODIS and MISR aerosol retrievals with the GEOS-Chem adjoint model

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-04-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in the Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint, constrained by aerosol optical depth (AOD) derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance, with aerosol single-scattering properties consistent with GEOS-Chem. The adjoint inverse modeling yields an overall 51% decrease in prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD and MODIS Deep Blue AOD over the dust source region and surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data for constraining dust emissions at various temporal and spatial scales, and hence improving the quantification of dust effects on climate, air quality, and human health.

  5. Top-down Estimate of Dust Emissions Through Integration of MODIS and MISR Aerosol Retrievals With the Geos-chem Adjoint Model

    NASA Technical Reports Server (NTRS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-01-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in the Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint, constrained by aerosol optical depth (AOD) derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance, with aerosol single-scattering properties consistent with GEOS-Chem. The adjoint inverse modeling yields an overall 51% decrease in prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD and MODIS Deep Blue AOD over the dust source region and surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data for constraining dust emissions at various temporal and spatial scales, and hence improving the quantification of dust effects on climate, air quality, and human health.

  6. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.

  7. Joint Resource Optimization for Cognitive Sensor Networks with SWIPT-Enabled Relay.

    PubMed

    Lu, Weidang; Lin, Yuanrong; Peng, Hong; Nan, Tian; Liu, Xin

    2017-09-13

    Energy-constrained wireless networks, such as wireless sensor networks (WSNs), are usually powered by fixed energy supplies (e.g., batteries), which limits the operating time of the network. Simultaneous wireless information and power transfer (SWIPT) is a promising technique to prolong the lifetime of energy-constrained wireless networks. This paper investigates the performance of an underlay cognitive sensor network (CSN) with a SWIPT-enabled relay node. In the CSN, the amplify-and-forward (AF) relay sensor node harvests energy from the ambient radio-frequency (RF) signals using the power splitting-based relaying (PSR) protocol. Then, it helps forward the signal of the source sensor node (SSN) to the destination sensor node (DSN) by using the harvested energy. We study joint resource optimization, including the transmit power and the power-splitting ratio, to maximize the CSN's achievable rate under the constraint that the interference caused by the CSN to the primary users (PUs) is within the permissible threshold. Simulation results show that the proposed joint resource optimization significantly improves performance.

  8. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    NASA Astrophysics Data System (ADS)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computational cost and solutions lacking geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique is a way to dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and slow convergence for the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and ill-posedness of FWI by a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent optimization is adopted to reduce the computation time by choosing a subset of the entire set of shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.

  9. Application of decomposition techniques to the preliminary design of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Rogan, J. E.; Kolb, M. A.

    1987-01-01

    A nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been formulated. A multifaceted decomposition of the optimization problem has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.

  10. Constrained Optimization Methods in Health Services Research-An Introduction: Report 1 of the ISPOR Optimization Methods Emerging Good Practices Task Force.

    PubMed

    Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S

    2017-03-01

    Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
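
    A hedged rendering of the report's introductory example as a linear program: choose how many "regular" (r) and "severe" (s) patients to treat to maximize total health benefit under time and budget limits. All coefficients below are invented for illustration; the report itself develops the model graphically.

        from scipy.optimize import linprog

        # maximize 2*r + 5*s  <=>  minimize -(2*r + 5*s)
        c = [-2.0, -5.0]
        A_ub = [[1.0, 2.0],      # time:   r + 2*s       <= 90 clinician-hours
                [100.0, 300.0]]  # budget: 100*r + 300*s <= 10000 currency units
        b_ub = [90.0, 10000.0]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print(res.x, -res.fun)   # optimal mix (70, 10) with total benefit 190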

  11. Adaptive nearly optimal control for a class of continuous-time nonaffine nonlinear systems with inequality constraints.

    PubMed

    Fan, Quan-Yong; Yang, Guang-Hong

    2017-01-01

    State inequality constraints have rarely been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without the requirement for a probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Digital robust active control law synthesis for large order flexible structure using parameter optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.

    1988-01-01

    A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled-data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.

  13. Constrained Burn Optimization for the International Space Station

    NASA Technical Reports Server (NTRS)

    Brown, Aaron J.; Jones, Brandon A.

    2017-01-01

    In long-term trajectory planning for the International Space Station (ISS), translational burns are currently targeted sequentially to meet the immediate trajectory constraints, rather than simultaneously to meet all constraints; the current approach does not employ gradient-based search techniques and is not optimized for a minimum total delta-v (Δv) solution. An analytic formulation of the constraint gradients is developed and used in an optimization solver to overcome these obstacles. Two trajectory examples are explored, highlighting the advantage of the proposed method over the current approach, as well as the potential Δv and propellant savings in the event of propellant shortages.

  14. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure which couples formal multiobjective-based techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) is developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a problem with constrained multiple objective functions into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are introduced for each objective function during the transformation process. This enhanced procedure gives the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a CFD code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.

  15. A framework to determine the locations of the environmental monitoring in an estuary of the Yellow Sea.

    PubMed

    Kim, Nam-Hoon; Hwang, Jin Hwan; Cho, Jaegab; Kim, Jae Seong

    2018-06-04

    The characteristics of an estuary are determined by various factors, such as tide, waves, and river discharge, which also control the water quality of the estuary. Detecting changes in these characteristics is therefore critical in managing environmental quality and pollution, so monitoring locations should be selected carefully. The present study proposes a framework for deploying monitoring systems based on a graphical method of spatial and temporal optimization. With well-validated numerical simulation results, the monitoring locations are determined to capture the changes of water qualities and pollutants depending on the variations of tide, current, and freshwater discharge. The deployment strategy to find the appropriate monitoring locations is designed with the constrained optimization method, which finds solutions by constraining the objective function to the feasible regions. The objective and constraint functions are constructed with an interpolation technique such as objective analysis. Even with a smaller number of monitoring locations, the present method performs equivalently to an arbitrarily and evenly deployed monitoring system. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Conjugate gradient determination of optimal plane changes for a class of three-impulse transfers between noncoplanar circular orbits

    NASA Technical Reports Server (NTRS)

    Burrows, R. R.

    1972-01-01

    A particular type of three-impulse transfer between two circular orbits is analyzed. The possibility of three plane changes is recognized, and the problem is to optimally distribute these plane changes to minimize the sum of the individual impulses. Numerical difficulties and their solution are discussed. Numerical results obtained from a conjugate gradient technique are presented both for the case where the individual plane changes are unconstrained and for the case where they are constrained. Perhaps not unexpectedly, multiple minima are found. The techniques presented could be extended to the finite-burn case, but the contents are primarily addressed to preliminary mission design and vehicle sizing.
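
    As a toy version of the distribution problem (not the paper's formulation), one can split a total plane change across the three impulses under the equality constraint that the angles sum to the required change, where a pure plane change of angle theta at circular speed v costs 2*v*sin(theta/2). The speeds below are made up.

        import numpy as np
        from scipy.optimize import minimize

        v = np.array([7.6, 3.1, 7.6])    # km/s at the three burn points (hypothetical)
        di_total = np.radians(28.0)      # total plane change required

        dv = lambda th: np.sum(2.0 * v * np.sin(np.abs(th) / 2.0))  # summed impulse cost
        res = minimize(dv, np.full(3, di_total / 3),
                       constraints={"type": "eq",
                                    "fun": lambda th: np.sum(th) - di_total})
        print(np.degrees(res.x), res.fun)  # most of the change goes to the slowest burn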

  17. A Model for Engaging Students in a Research Experience Involving Variational Techniques, Mathematica, and Descent Methods.

    ERIC Educational Resources Information Center

    Mahavier, W. Ted

    2002-01-01

    Describes a two-semester numerical methods course that serves as a research experience for undergraduate students without requiring external funding or the modification of current curriculum. Uses an engineering problem to introduce students to constrained optimization via a variation of the traditional isoperimetric problem of finding the curve…

  18. MO-FG-CAMPUS-TeP2-01: A Graph Form ADMM Algorithm for Constrained Quadratic Radiation Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, X; Belcher, AH; Wiersma, R

    Purpose: In radiation therapy optimization the constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently the voxel dose constraints are viewed as soft constraints, included as part of the objective function, and approximated as an unconstrained problem. However, in some treatment planning cases the constraints should be specified as hard constraints and solved by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and to compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators were first constructed as applicable to quadratic IMRT constrained optimization, and the problem was formulated in the graph form of ADMM. A pre-iteration operation for the projection of a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including the TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by the LBFGS, IPOPT, Matlab built-in toolbox, CVX (implementing SeDuMi), and Mosek solvers. For unconstrained optimization, it was found that LBFGS performs the best, being 3–5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8–100 times faster than the other solvers. Conclusion: Graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers and also uses significantly less computer memory.
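
    For readers unfamiliar with ADMM, the sketch below shows the plain (not graph-form) x/z-splitting applied to a box-constrained quadratic program: the x-update solves a linear system from a cached factorization, the z-update projects onto the box, and the scaled dual variable u accumulates the residual. The small random problem is hypothetical, and the paper's graph-form variant and proximal operators are more general than this sketch.

        import numpy as np

        def admm_box_qp(Q, c, lb, ub, rho=1.0, iters=200):
            """min 0.5*x'Qx + c'x  s.t.  lb <= x <= ub, via x/z-splitting ADMM."""
            n = len(c)
            x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
            L = np.linalg.cholesky(Q + rho * np.eye(n))  # factor once, reuse each iteration
            for _ in range(iters):
                rhs = rho * (z - u) - c
                x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update: linear solve
                z = np.clip(x + u, lb, ub)                         # z-update: box projection
                u += x - z                                         # scaled dual update
            return z

        rng = np.random.default_rng(1)
        A = rng.normal(size=(8, 5))
        Q = A.T @ A + 0.1 * np.eye(5)  # symmetric positive definite Hessian
        print(admm_box_qp(Q, rng.normal(size=5), lb=np.zeros(5), ub=np.ones(5)))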

  19. Conceptual Comparison of Population Based Metaheuristics for Engineering Problems

    PubMed Central

    Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes. PMID:25874265

  20. Conceptual comparison of population based metaheuristics for engineering problems.

    PubMed

    Adekanmbi, Oluwole; Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.
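
    The DE/rand/1/bin strategy that GDE3 extends, as named in both records above, can be sketched compactly: mutate three distinct random population members (rand/1) and mix with the target by binomial crossover (bin). The sphere objective, bounds, and the F and CR settings below are illustrative choices only.

        import numpy as np

        def de_rand_1_bin(obj, lo, hi, pop_size=20, F=0.8, CR=0.9, gens=100, seed=0):
            rng = np.random.default_rng(seed)
            dim = lo.size
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            fit = np.array([obj(p) for p in pop])
            for _ in range(gens):
                for i in range(pop_size):
                    r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                            size=3, replace=False)
                    mutant = pop[r1] + F * (pop[r2] - pop[r3])  # rand/1 mutation
                    cross = rng.random(dim) < CR                # binomial crossover mask
                    cross[rng.integers(dim)] = True             # guarantee >= 1 mutant gene
                    trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
                    ft = obj(trial)
                    if ft <= fit[i]:                            # greedy one-to-one selection
                        pop[i], fit[i] = trial, ft
            return pop[fit.argmin()], fit.min()

        sphere = lambda x: float(np.sum(x**2))
        print(de_rand_1_bin(sphere, np.full(5, -5.0), np.full(5, 5.0)))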

  1. Pareto-Optimal Estimates of California Precipitation Change

    NASA Astrophysics Data System (ADS)

    Langenbrunner, Baird; Neelin, J. David

    2017-12-01

    In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Climate Model Intercomparison Project phase 5 ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.
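
    The Pareto-filtering step implied above is simple to state: given each candidate subensemble's error under several observational measures (all to be minimized), keep the non-dominated candidates. A minimal sketch with a hypothetical random error matrix:

        import numpy as np

        def pareto_front(errs):
            """Indices of rows not dominated by any other row (all objectives minimized)."""
            keep = np.ones(len(errs), dtype=bool)
            for i in range(len(errs)):
                # Row j dominates row i if j is no worse everywhere and better somewhere.
                dominated = (np.all(errs <= errs[i], axis=1)
                             & np.any(errs < errs[i], axis=1))
                keep[i] = not dominated.any()
            return np.flatnonzero(keep)

        errs = np.random.default_rng(3).random((200, 3))  # 200 subensembles, 3 measures
        print(pareto_front(errs))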

  2. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

    The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are then compared in the context of the two design problems.

  3. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    PubMed

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.

  4. Pseudo-time methods for constrained optimization problems governed by PDE

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1995-01-01

    In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constraining PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single-grid iterations as well as multigrid solvers.

  5. Synthesis of aircraft structures using integrated design and analysis methods

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Goetz, R. C.

    1978-01-01

    A systematic research program to develop and validate methods for structural sizing of an airframe designed with the use of composite materials and active controls is reported. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate structural sizing and the associated active control system that is optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.

  6. Interior point techniques for LP and NLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evtushenko, Y.

    By using a surjective mapping, the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm, and a generalized primal-dual interior point linear programming algorithm.
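
    Dikin's affine scaling algorithm, mentioned above as the linear programming special case, admits a very short sketch: rescale by the current iterate, project the steepest-descent direction onto the null space of the rescaled constraints, and step a fixed fraction of the way to the boundary. The tiny LP below is made up.

        import numpy as np

        def affine_scaling(A, b, c, x, gamma=0.5, iters=50):
            """min c'x s.t. Ax = b, x > 0, from a strictly interior feasible x."""
            assert np.allclose(A @ x, b)           # steps stay in the null space of A
            for _ in range(iters):
                D = np.diag(x)                     # rescale so the iterate maps to ones
                Ad = A @ D
                P = np.eye(len(x)) - Ad.T @ np.linalg.solve(Ad @ Ad.T, Ad)
                d = -P @ (D @ c)                   # projected steepest-descent direction
                if d.min() >= -1e-12:
                    break                          # converged (or unbounded descent)
                x = x * (1.0 + (gamma / -d.min()) * d)  # fractional step to the boundary
            return x

        # min -x0 - 2*x1  s.t.  x0 + x1 + x2 = 4 (x2 is a slack variable), x > 0
        A = np.array([[1.0, 1.0, 1.0]])
        b = np.array([4.0])
        c = np.array([-1.0, -2.0, 0.0])
        print(affine_scaling(A, b, c, x=np.array([1.0, 1.0, 2.0])))  # -> about (0, 4, 0)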

  7. A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems

    DOE PAGES

    Kouri, Drew Philip

    2017-12-19

    In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. This data is then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design is robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs, but rather we formulate the problem as a distributionally robust optimization problem where the outer minimization problem determines the control or design, while the inner maximization problem determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.

  8. Fitting aerodynamic forces in the Laplace domain: An application of a nonlinear nongradient technique to multilevel constrained optimization

    NASA Technical Reports Server (NTRS)

    Tiffany, S. H.; Adams, W. M., Jr.

    1984-01-01

    A technique which employs both linear and nonlinear methods in a multilevel optimization structure to best approximate generalized unsteady aerodynamic forces for arbitrary motion is described. Optimum selection of free parameters is made in a rational function approximation of the aerodynamic forces in the Laplace domain such that a best fit is obtained, in a least squares sense, to tabular data for purely oscillatory motion. The multilevel structure and the corresponding formulation of the objective models are presented which separate the reduction of the fit error into linear and nonlinear problems, thus enabling the use of linear methods where practical. Certain equality and inequality constraints that may be imposed are identified; a brief description of the nongradient, nonlinear optimizer which is used is given; and results which illustrate application of the method are presented.
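
    The division into linear and nonlinear subproblems can be sketched for a Roger-type rational function approximation (our rendering; the synthetic data, the number of lag terms, and the use of Nelder-Mead as the nongradient optimizer are illustrative assumptions): for fixed lag roots the coefficients enter linearly and follow from a least-squares fit to the tabular oscillatory data, while an outer nongradient search adjusts the roots.

```python
import numpy as np
from scipy.optimize import minimize

k = np.linspace(0.05, 2.0, 30)                   # reduced frequencies (tabular)
Q_data = 1.0 + 0.8 * 1j * k / (1j * k + 0.3) - 0.1 * k**2   # synthetic "aero" data

def lsq_fit(b):
    """Linear level: least-squares coefficients for fixed lag roots b."""
    s = 1j * k
    Phi = np.column_stack([np.ones_like(s), s, s**2] +
                          [s / (s + bj) for bj in b])
    A = np.vstack([Phi.real, Phi.imag])          # stack real/imag parts
    y = np.concatenate([Q_data.real, Q_data.imag])
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.linalg.norm(A @ coef - y), coef

# Nonlinear level: nongradient search over the (positive) lag roots.
outer = minimize(lambda b: lsq_fit(np.abs(b))[0], x0=[0.2, 1.0],
                 method="Nelder-Mead")
print("lag roots:", np.abs(outer.x), " fit error:", outer.fun)
```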

  9. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L1. By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L2 norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L2 minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L2 differ drastically from those obtained in the sense of L1.
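
    In a discrete setting the unconstrained L2 problem reduces to a minimum-norm least-squares solve, as in the toy sketch below (ours; the transfer matrix and field values are random placeholders, not the paper's acoustics):

```python
# If M maps control-source strengths u to the field they generate at the
# protected observation points, the minimum-||u||_2 control that cancels the
# unwanted field f there is u = -pinv(M) f.
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((20, 40))   # 40 candidate sources, 20 observation points
f = rng.standard_normal(20)         # unwanted (noise) field at those points

u = -np.linalg.pinv(M) @ f          # minimum L2-norm annihilating control
print("residual field:", np.linalg.norm(M @ u + f))   # ~0 (underdetermined)
print("||u||_2       :", np.linalg.norm(u))
```

    An L1 formulation of the same system would instead favor sparse source layers, consistent with the drastic difference between the two optima noted above.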

  10. Efficient Transition State Optimization of Periodic Structures through Automated Relaxed Potential Energy Surface Scans.

    PubMed

    Plessow, Philipp N

    2018-02-13

    This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach for molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions almost exclusively rely on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths with Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which have high relevance in industrial processes.
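
    The relaxed-scan idea can be sketched on a toy two-dimensional surface (our stand-in for the periodic-structure energy; the potential, the scanned coordinate, and the SLSQP solver are assumptions): the chosen coordinate is fixed by an equality constraint, all other degrees of freedom are relaxed, and the highest point along the scan approximates the transition state.

```python
import numpy as np
from scipy.optimize import minimize

def energy(q):                       # double-well along the scanned coordinate
    x, y = q
    return (x**2 - 1.0)**2 + 2.0 * (y - 0.3 * x)**2

path = []
for c in np.linspace(-1.0, 1.0, 21):          # scanned coordinate values
    con = {"type": "eq", "fun": lambda q, c=c: q[0] - c}   # fix the coordinate
    res = minimize(energy, x0=[c, 0.0], constraints=[con], method="SLSQP")
    path.append((c, res.fun))                 # relaxed energy at this value

c_ts, e_ts = max(path, key=lambda t: t[1])    # highest relaxed point ~ TS
print("transition-state estimate: c =", round(c_ts, 3), " E =", round(e_ts, 4))
```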

  11. Minimal complexity control law synthesis

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.

    1989-01-01

    A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.

  12. Optimizing Department of Defense Acquisition Development Test and Evaluation Scheduling

    DTIC Science & Technology

    2015-06-01

    CPM Critical Path Method; DOD Department of Defense; DT&E development test and evaluation; EMD engineering and manufacturing development; GAMS ... these, including the Program Evaluation Review Technique (PERT), the Critical Path Method (CPM), and the resource-constrained project-scheduling ... problem (RCPSP). These are of particular interest to this thesis as the current scheduling method uses elements of the PERT/CPM, and the test

  13. AMANDA Observations Constrain the Ultrahigh Energy Neutrino Flux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halzen, Francis (Wisconsin U., Madison); Hooper, Dan

    2006-05-01

    A number of experimental techniques are currently being deployed in an effort to make the first detection of ultra-high energy cosmic neutrinos. To accomplish this goal, techniques using radio and acoustic detectors are being developed, which are optimally designed for studying neutrinos with energies in the PeV-EeV range and above. Data from the AMANDA experiment, in contrast, has been used to place limits on the cosmic neutrino flux at less extreme energies (up to ~10 PeV). In this letter, we show that by adopting a different analysis strategy, optimized for much higher energy neutrinos, the same AMANDA data can be used to place a limit competitive with radio techniques at EeV energies. We also discuss the sensitivity of the IceCube experiment, in various stages of deployment, to ultra-high energy neutrinos.

  14. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    PubMed

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are no longer applicable for solving this problem. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. This leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.
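
    The one-layer "projection dynamics" idea can be illustrated with a simpler convex stand-in (our sketch: a Markowitz-type objective replaces the paper's pseudoconvex probability criterion, and the feasible region is the plain simplex): the network state follows dw/dt = P_S(w - grad f(w)) - w, whose equilibria are exactly the constrained optima.

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection onto the probability simplex {w >= 0, sum w = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1)[0][-1]
    return np.maximum(v - (css[rho] - 1) / (rho + 1), 0.0)

rng = np.random.default_rng(3)
mu = rng.uniform(0.02, 0.10, 5)                 # expected returns (assumed)
C = rng.standard_normal((5, 5)); Sigma = C @ C.T / 5   # covariance (assumed)
grad = lambda w: -mu + 2.0 * Sigma @ w          # grad of  -mu.w + w'Sigma w

w, dt = np.full(5, 0.2), 0.05
for _ in range(4000):                           # Euler integration of the network
    w += dt * (proj_simplex(w - grad(w)) - w)   # stays on the simplex by convexity
print("weights:", w.round(3), " sum =", round(w.sum(), 6))
```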

  15. Control law synthesis and optimization software for large order aeroservoelastic systems

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Pototzky, A.; Noll, Thomas

    1989-01-01

    A flexible aircraft or space structure with active control is typically modeled by a large-order state space system of equations in order to accurately represent the rigid and flexible body modes, unsteady aerodynamic forces, actuator dynamics and gust spectra. The control law of this multi-input/multi-output (MIMO) system is expected to satisfy multiple design requirements on the dynamic loads, responses, actuator deflection and rate limitations, as well as maintain certain stability margins, yet should be simple enough to be implemented on an onboard digital microprocessor. A software package for performing an analog or digital control law synthesis for such a system, using optimal control theory and constrained optimization techniques is described.

  16. Point-based warping with optimized weighting factors of displacement vectors

    NASA Astrophysics Data System (ADS)

    Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas

    2000-06-01

    The accurate comparison of inter-individual 3D image brain datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, we use in this study a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil, thereby visualizing neuronal activity in the brain. Afterwards the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte-Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images and an enhanced point-based warping technique, optimizing the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
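
    A minimal sketch of such a distance-weighted displacement model is given below (ours; the exponential kernel form, the normalization of weights, and the parameter names are assumptions based on the description above, with the per-landmark factors s_i being what the evolution strategy would tune):

```python
import numpy as np

def warp(points, src_lm, dst_lm, s):
    """u(x) = normalized sum_i (dst_i - src_i) * exp(-||x - src_i|| / s_i)."""
    disp = dst_lm - src_lm                       # (L, 3) displacement vectors
    out = points.copy()
    for k, x in enumerate(points):
        w = np.exp(-np.linalg.norm(x - src_lm, axis=1) / s)  # (L,) weights
        w /= w.sum()                             # normalization (our choice)
        out[k] = x + w @ disp                    # weighted sum of displacements
    return out

src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
dst = src + np.array([[0.1, 0, 0], [0, 0.2, 0], [0, 0, 0.1]])
grid = np.random.default_rng(4).uniform(0, 1, (5, 3))
print(warp(grid, src, dst, s=np.array([0.5, 0.5, 0.5])))
```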

  17. Publications | Grid Modernization | NREL

    Science.gov Websites

    Featured publications include Photovoltaics: Trajectories and Challenges and Efficient Relaxations for Joint Chance Constrained AC Optimal Power Flow.

  18. A method to stabilize linear systems using eigenvalue gradient information

    NASA Technical Reports Server (NTRS)

    Wieseman, C. D.

    1985-01-01

    Formal optimization methods and eigenvalue gradient information are used to develop a stabilizing control law for a closed loop linear system that is initially unstable. The method was originally formulated by using direct, constrained optimization methods with the constraints being the real parts of the eigenvalues. However, because of problems in trying to achieve stabilizing control laws, the problem was reformulated to be solved differently. The method described uses the Davidon-Fletcher-Powell minimization technique to solve an indirect, constrained minimization problem in which the performance index is the Kreisselmeier-Steinhauser function of the real parts of all the eigenvalues. The method is applied successfully to solve two different problems: the determination of a fourth-order control law that stabilizes a single-input single-output active flutter suppression system, and the determination of a second-order control law for a multi-input multi-output lateral-directional flight control system. Various sets of design variables and initial starting points were chosen to show the robustness of the method.
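
    A compact reconstruction of the indirect formulation (ours: BFGS stands in for the Davidon-Fletcher-Powell update, the plant is a toy second-order system, and the small gain penalty is our addition to keep the unconstrained minimum finite) looks like this:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

A = np.array([[0.0, 1.0], [2.0, -0.5]])     # open-loop unstable plant (assumed)
B = np.array([[0.0], [1.0]])
rho = 50.0                                  # KS "sharpness" parameter

def objective(K):
    lam = np.linalg.eigvals(A - B @ K[None, :])   # closed-loop eigenvalues
    g = lam.real + 0.1                      # constraints: Re(lambda) <= -0.1
    ks = logsumexp(rho * g) / rho           # KS envelope of all constraints
    return ks + 1e-2 * K @ K                # mild gain penalty (our addition)

res = minimize(objective, x0=np.zeros(2), method="BFGS")
print("gains:", res.x.round(3))
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ res.x[None, :]))
```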

  19. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 2: Analytic manual

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Space Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization is performed using the Stanford NPSOL algorithm. IPOST structure allows subproblems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  20. Power-constrained supercomputing

    NASA Astrophysics Data System (ADS)

    Bailey, Peter E.

    As we approach exascale systems, power is turning from an optimization goal to a critical operating constraint. With power bounds imposed by both stakeholders and the limitations of existing infrastructure, achieving practical exascale computing will therefore rely on optimizing performance subject to a power constraint. However, this requirement should not add to the burden of application developers; optimizing the runtime environment given restricted power will primarily be the job of high-performance system software. In this dissertation, we explore this area and develop new techniques that extract maximum performance subject to a particular power constraint. These techniques include a method to find theoretical optimal performance, a runtime system that shifts power in real time to improve performance, and a node-level prediction model for selecting power-efficient operating points. We use a linear programming (LP) formulation to optimize application schedules under various power constraints, where a schedule consists of a DVFS state and number of OpenMP threads for each section of computation between consecutive message passing events. We also provide a more flexible mixed integer-linear (ILP) formulation and show that the resulting schedules closely match schedules from the LP formulation. Across four applications, we use our LP-derived upper bounds to show that current approaches trail optimal, power-constrained performance by up to 41%. This demonstrates limitations of current systems, and our LP formulation provides future optimization approaches with a quantitative optimization target. We also introduce Conductor, a run-time system that intelligently distributes available power to nodes and cores to improve performance. The key techniques used are configuration space exploration and adaptive power balancing. Configuration exploration dynamically selects the optimal thread concurrency level and DVFS state subject to a hardware-enforced power bound. Adaptive power balancing efficiently predicts where critical paths are likely to occur and distributes power to those paths. Greater power, in turn, allows increased thread concurrency levels, CPU frequency/voltage, or both. We describe these techniques in detail and show that, compared to the state-of-the-art technique of using statically predetermined, per-node power caps, Conductor leads to a best-case performance improvement of up to 30%, and an average improvement of 19.1%. At the node level, an accurate power/performance model will aid in selecting the right configuration from a large set of available configurations. We present a novel approach to generate such a model offline using kernel clustering and multivariate linear regression. Our model requires only two iterations to select a configuration, which provides a significant advantage over exhaustive search-based strategies. We apply our model to predict power and performance for different applications using arbitrary configurations, and show that our model, when used with hardware frequency-limiting in a runtime system, selects configurations with significantly higher performance at a given power limit than those chosen by frequency-limiting alone. When applied to a set of 36 computational kernels from a range of applications, our model accurately predicts power and performance; our runtime system based on the model maintains 91% of optimal performance while meeting power constraints 88% of the time. 
When the runtime system violates a power constraint, it exceeds the constraint by only 6% in the average case, while simultaneously achieving 54% more performance than an oracle. Through the combination of the above contributions, we hope to provide guidance and inspiration to research practitioners working on runtime systems for power-constrained environments. We also hope this dissertation will draw attention to the need for software and runtime-controlled power management under power constraints at various levels, from the processor level to the cluster level.
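
    The LP scheduling idea can be condensed into a few lines (our rendering; the configuration speeds, powers, and the single-kernel setting are invented for illustration). The key observation is that an average-power cap P linearizes: with t_c the time spent in configuration c, "average power <= P" becomes sum_c t_c (p_c - P) <= 0.

```python
import numpy as np
from scipy.optimize import linprog

speed = np.array([1.0, 1.4, 1.7, 2.0])    # work/sec per configuration (assumed)
power = np.array([60., 80., 100., 130.])  # watts per configuration (assumed)
W, P_cap = 100.0, 90.0                    # total work, power bound

res = linprog(c=np.ones(4),                            # minimize total time
              A_ub=np.vstack([-speed, power - P_cap]), # work done >= W and
              b_ub=np.array([-W, 0.0]),                # average power <= cap
              bounds=(0, None))
print("time in each config:", res.x.round(2), " total time:", round(res.fun, 2))
```

    The optimum blends a fast, power-hungry configuration with a slower, efficient one so the time-averaged power exactly meets the cap.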

  1. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem for nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and suit only linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, the system transformation is first introduced with the general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
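
    A bare-bones tabular rendering of value iteration on the Q-function (ours; the paper uses a neural critic and a system transformation, while this toy uses a known scalar system, grids, and a hard control bound as the constraint) is:

```python
# System x+ = 0.8 x + u with |u| <= 0.5 (the constraint), stage cost x^2 + u^2.
# Q_{k+1}(x, u) = cost(x, u) + min_{u'} Q_k(x+, u'), swept over a grid.
import numpy as np

X = np.linspace(-2, 2, 41)                 # state grid
U = np.linspace(-0.5, 0.5, 11)             # constrained action grid
Q = np.zeros((X.size, U.size))             # easy-to-realize initial condition
idx = lambda x: np.abs(X - np.clip(x, X[0], X[-1])).argmin()

for _ in range(200):                       # value iteration on Q
    Qn = np.empty_like(Q)
    for i, x in enumerate(X):
        for j, u in enumerate(U):
            xn = 0.8 * x + u               # stand-in for sampled data (x, u, x+)
            Qn[i, j] = x * x + u * u + Q[idx(xn)].min()
    Q = Qn

x = 1.5
for _ in range(5):                         # greedy constrained policy rollout
    u = U[Q[idx(x)].argmin()]
    x = 0.8 * x + u
    print(f"x = {x:+.3f}  u = {u:+.3f}")
```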

  2. A Composite Algorithm for Mixed Integer Constrained Nonlinear Optimization.

    DTIC Science & Technology

    1980-01-01

    de Silva [14], and Weisman and Wood [76]. A particular direct search algorithm, the simplex method, has been cited for having the potential for ... spaced discrete points on a line which makes the direction suitable for an efficient integer search technique based on Fibonacci numbers. Two ... defined by a subset of variables. The complex algorithm is particularly well suited for this subspace search for two reasons. First, the complex method

  3. Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio

    2011-12-01

    This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).

  4. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal with not only nonlinearities in the objective function, but also uncertainties presented as discrete intervals in the objective function, variables and left-hand side constraints and fuzziness in the right-hand side constraints. Moreover, this model improves upon the conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions in the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. Besides, it can reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide more reliable scientific basis for supporting irrigation water management in arid areas.
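
    The chance-constraint conversion step can be illustrated in a stripped-down linear setting (our example with a normally distributed supply; the paper's fuzzy mλ-measure version generalizes the same tightening idea): requiring P(x1 + x2 <= b) >= alpha with b ~ N(mu, sigma^2) is equivalent to the deterministic constraint x1 + x2 <= mu + sigma * Phi^{-1}(1 - alpha).

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

# Maximize benefit 3*x1 + 2*x2 (crop allocations, assumed) subject to
# x1 + x2 <= b with supply b ~ N(100, 10^2), enforced at confidence alpha.
alpha = 0.9
b_det = 100 + 10 * norm.ppf(1 - alpha)     # tightened deterministic supply
res = linprog(c=[-3, -2], A_ub=[[1, 1]], b_ub=[b_det], bounds=(0, 60))
print("allocations:", res.x.round(2), " tightened supply:", round(b_det, 2))
```

    Raising the confidence level shrinks the usable supply, which is exactly the benefit-versus-risk trade-off the abstract describes.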

  5. Effective Teaching of Economics: A Constrained Optimization Problem?

    ERIC Educational Resources Information Center

    Hultberg, Patrik T.; Calonge, David Santandreu

    2017-01-01

    One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…

  6. Variable-Metric Algorithm For Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Frick, James D.

    1989-01-01

    Variable Metric Algorithm for Constrained Optimization (VMACO) is a nonlinear computer program developed to calculate the least value of a function of n variables subject to general constraints, both equality and inequality. The first set of constraints are equalities and the remaining constraints are inequalities. The program utilizes an iterative method in seeking the optimal solution. Written in ANSI Standard FORTRAN 77.

  7. Galaxy Redshifts from Discrete Optimization of Correlation Functions

    NASA Astrophysics Data System (ADS)

    Lee, Benjamin C. G.; Budavári, Tamás; Basu, Amitabh; Rahman, Mubdi

    2016-12-01

    We propose a new method of constraining the redshifts of individual extragalactic sources based on celestial coordinates and their ensemble statistics. Techniques from integer linear programming (ILP) are utilized to optimize simultaneously for the angular two-point cross- and autocorrelation functions. Our novel formalism introduced here not only transforms the otherwise hopelessly expensive, brute-force combinatorial search into a linear system with integer constraints but also is readily implementable in off-the-shelf solvers. We adopt Gurobi, a commercial optimization solver, and use Python to build the cost function dynamically. The preliminary results on simulated data show potential for future applications to sky surveys by complementing and enhancing photometric redshift estimators. Our approach is the first application of ILP to astronomical analysis.

  8. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    PubMed Central

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361

  9. AI techniques for a space application scheduling problem

    NASA Technical Reports Server (NTRS)

    Thalman, N.; Sparn, T.; Jaffres, L.; Gablehouse, D.; Judd, D.; Russell, C.

    1991-01-01

    Scheduling is a very complex optimization problem which can be categorized as an NP-complete problem. NP-complete problems are quite diverse, as are the algorithms used in searching for an optimal solution. In most cases, the best solutions that can be derived for these combinatorially explosive problems are near-optimal solutions. Due to the complexity of the scheduling problem, artificial intelligence (AI) can aid in solving these types of problems. Some of the factors which make space application scheduling problems difficult are examined, and a fairly new AI-based technique called tabu search is presented as applied to a real scheduling application. The specific problem is concerned with scheduling solar and stellar observations for the SOLar-STellar Irradiance Comparison Experiment (SOLSTICE) instrument in a constrained environment which produces minimum impact on the other instruments and maximizes target observation times. The SOLSTICE instrument will fly on board the Upper Atmosphere Research Satellite (UARS) in 1991, and a similar instrument will fly on the Earth Observing System (Eos).
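
    Tabu search itself is simple to state; the sketch below (ours, on an invented toy where observation value decays with start time) keeps the two ingredients that matter: a swap neighborhood and a short-term memory that forbids recently used moves unless they beat the best solution found (aspiration).

```python
import itertools, random

random.seed(0)
N = 12
dur = [random.randint(1, 5) for _ in range(N)]     # observation durations
val = [random.uniform(5, 10) for _ in range(N)]    # observation values

def score(order):
    t, total = 0, 0.0
    for i in order:
        total += val[i] / (1.0 + 0.1 * t)          # later start -> less value
        t += dur[i]
    return total

cur = list(range(N))
best, best_s = cur[:], score(cur)
tabu = {}                                          # (i, j) -> tabu-until iteration
for it in range(300):
    move, move_s = None, float("-inf")
    for i, j in itertools.combinations(range(N), 2):
        nb = cur[:]; nb[i], nb[j] = nb[j], nb[i]   # swap-neighborhood candidate
        s = score(nb)
        aspiration = s > best_s                    # tabu overridden by new best
        if (tabu.get((i, j), -1) < it or aspiration) and s > move_s:
            move, move_s = (i, j), s
    i, j = move
    cur[i], cur[j] = cur[j], cur[i]                # accept best admissible move
    tabu[(i, j)] = it + 7                          # tabu tenure of 7 iterations
    if move_s > best_s:
        best, best_s = cur[:], move_s
print("best value:", round(best_s, 3), " order:", best)
```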

  10. Multi-objective approach for energy-aware workflow scheduling in cloud computing environments.

    PubMed

    Yassa, Sonia; Chelouah, Rachid; Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, the cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on the optimization constrained by time or cost without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and present the hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate in different voltage supply levels by sacrificing clock frequencies. This multiple voltage involves a compromise between the quality of schedules and energy. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.

  11. A mathematical tool to generate complex whole body motor tasks and test hypotheses on underlying motor planning.

    PubMed

    Tagliabue, Michele; Pedrocchi, Alessandra; Pozzo, Thierry; Ferrigno, Giancarlo

    2008-01-01

    In spite of the complexity of human motor behavior, difficulties in mathematical modeling have restricted attempts to identify the motor planning criterion used by the central nervous system to rather simple movements. This paper presents a novel simulation technique able to predict the "desired trajectory" corresponding to a wide range of kinematic and kinetic optimality criteria for tasks involving many degrees of freedom and the coordination between goal achievement and balance maintenance. Proper time discretization, inverse dynamics methods, and constrained optimization techniques are combined. The application of this simulator to a planar whole body pointing movement shows its effectiveness in managing system nonlinearities and instability as well as in ensuring the anatomo-physiological feasibility of predicted motor plans. In addition, the simulator's capability to simultaneously optimize competing movement aspects represents an interesting opportunity for the motor control community, in which the coexistence of several controlled variables has been hypothesized.

  12. Artificial Intelligence based technique for BTS placement

    NASA Astrophysics Data System (ADS)

    Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M.

    2013-12-01

    The increase of base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners, and this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbour and regulatory considerations into account while determining the cell site. Its application will lead to a quantitatively unbiased decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; results obtained show a 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of GA with the neighbourhood constraint indicate that the choice of location can be unbiased and that optimization of facility placement for network design can be carried out.
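
    A GA of the kind described can be sketched as follows (our illustration; the candidate grid, coverage radius, minimum spacing, and penalty weight are invented parameters): chromosomes are K-site subsets, fitness rewards demand coverage, and the neighbour constraint enters as a penalty on site pairs closer than a minimum spacing.

```python
import numpy as np

rng = np.random.default_rng(5)
cand = rng.uniform([0, 0], [2000, 3000], (40, 2))      # candidate sites (m)
demand = rng.uniform([0, 0], [2000, 3000], (300, 2))   # demand points
K, R, DMIN = 6, 600.0, 400.0                           # sites, radius, spacing

def fitness(sel):
    sites = cand[sel]
    cov = (np.linalg.norm(demand[:, None] - sites[None], axis=2)
             .min(axis=1) <= R).mean()                 # fraction covered
    dd = np.linalg.norm(sites[:, None] - sites[None], axis=2)
    viol = int((dd[np.triu_indices(K, 1)] < DMIN).sum())
    return cov - 0.2 * viol                            # neighbour penalty

pop = [rng.choice(40, K, replace=False) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)                # elitist selection
    kids = []
    for _ in range(15):
        p, q = pop[rng.integers(10)], pop[rng.integers(10)]
        child = rng.choice(np.union1d(p, q), K, replace=False)  # crossover
        if rng.random() < 0.3:                         # mutation: swap one site
            new = rng.integers(40)
            if new not in child:
                child[rng.integers(K)] = new
        kids.append(child)
    pop = pop[:15] + kids
pop.sort(key=fitness, reverse=True)
print("best fitness (coverage - penalty):", round(fitness(pop[0]), 3))
```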

  13. Constrained Multiobjective Biogeography Optimization Algorithm

    PubMed Central

    Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping

    2014-01-01

    Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems; experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591

  14. Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal

    ERIC Educational Resources Information Center

    Steinley, Douglas; Hubert, Lawrence

    2008-01-01

    This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…

  15. Interplanetary Program to Optimize Simulated Trajectories (IPOST). Volume 1: User's guide

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    IPOST is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization is performed using the Stanford NPSOL algorithm. IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  16. Policy Iteration for $H_\\infty $ Optimal Control of Polynomial Nonlinear Systems via Sum of Squares Programming.

    PubMed

    Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao

    2018-02-01

    Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. They can also act as approximators in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining the inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest $L_2$-gain and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.

  17. Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors

    NASA Technical Reports Server (NTRS)

    Palumbo, D. L.; Padula, S. L.

    1997-01-01

    Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of from 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as a basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization. Unconstrained optimizations selected actuators which require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.

  18. Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.

    PubMed

    McIntosh, Chris; Hamarneh, Ghassan

    2012-01-01

    We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.

  19. Sliding Mode Control of a Slewing Flexible Beam

    NASA Technical Reports Server (NTRS)

    Wilson, David G.; Parker, Gordon G.; Starr, Gregory P.; Robinett, Rush D., III

    1997-01-01

    An output feedback sliding mode controller (SMC) is proposed to minimize the effects of vibrations of slewing flexible manipulators. A spline trajectory is used to generate ideal position and velocity commands. Constrained nonlinear optimization techniques are used to both calibrate nonlinear models and determine optimized gains to produce a rest-to-rest, residual vibration-free maneuver. Vibration-free maneuvers are important for current and future NASA space missions. This study required the development of the nonlinear dynamic system equations of motion; robust control law design; numerical implementation; system identification; and verification using the Sandia National Laboratories flexible robot testbed. Results are shown for a slewing flexible beam.

  20. Constrained growth flips the direction of optimal phenological responses among annual plants.

    PubMed

    Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas

    2016-03-01

    Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. They can help to explain phenological patterns along productivity gradients, and link empirical observations made on calendar scales with life-history theory. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  1. Constrained optimal multi-phase lunar landing trajectory with minimum fuel consumption

    NASA Astrophysics Data System (ADS)

    Mathavaraj, S.; Pandiyan, R.; Padhi, R.

    2017-12-01

    A Legendre pseudospectral, multi-phase, constrained fuel-optimal trajectory design approach is presented in this paper. The objective here is to find an optimal approach to successfully guide a lunar lander from the perilune (18 km altitude) of a transfer orbit to a height of 100 m over a specific landing site. After attaining 100 m altitude, there is a mission-critical re-targeting phase, which has a very different objective (but is not critical for fuel optimization) and hence is not considered in this paper. The proposed approach takes into account various mission constraints in different phases from perilune to the landing site. These constraints include phase-1 ('braking with rough navigation') from 18 km altitude to 7 km altitude, where navigation accuracy is poor; phase-2 ('attitude hold'), holding the lander attitude for 35 sec for vision camera processing to obtain the navigation error; and phase-3 ('braking with precise navigation') from the end of phase-2 to 100 m altitude over the landing site, where navigation accuracy is good (due to vision camera navigation inputs). At the end of phase-1, there are constraints on position and attitude. In phase-2, the attitude must be held throughout. At the end of phase-3, the constraints include accuracy in position, velocity, as well as attitude orientation. The proposed optimal trajectory technique satisfies the mission constraints in each phase and provides an overall fuel-minimizing guidance command history.

  2. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    NASA Astrophysics Data System (ADS)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

    Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to, the Pareto front. ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The addressed objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.

  3. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
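
    The exterior penalty mechanics can be shown in a few lines (our toy; the "weight" and "buckling" functions are stand-ins): the constrained problem is replaced by a sequence of unconstrained ones with a growing penalty on constraint violation, and, unlike an interior method, iterates may legitimately start infeasible.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] + 2 * x[1]                 # stand-in structural weight
g = lambda x: 1.0 - x[0] * x[1]               # feasible iff x0 * x1 >= 1

x = np.array([0.5, 0.5])                      # infeasible start (allowed here)
for r in [1.0, 10.0, 100.0, 1000.0]:          # growing exterior penalty
    pen = lambda x, r=r: f(x) + r * max(0.0, g(x)) ** 2
    x = minimize(pen, x, method="Nelder-Mead").x
    print(f"r={r:7.1f}  x={x.round(4)}  g={g(x):+.4f}")
```

    As r grows, the constraint violation g(x) is squeezed toward zero from the infeasible side, which is the approach-from-outside behavior that distinguishes exterior from interior penalty methods.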

  4. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method

    NASA Astrophysics Data System (ADS)

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted to compare with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed by a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed to make comparisons with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II. It was additionally shown that (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability.

  5. COPS: Large-scale nonlinearly constrained optimization problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bondarenko, A.S.; Bortz, D.M.; More, J.J.

    2000-02-10

    The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.

  6. The aerodynamic design of an advanced rotor airfoil

    NASA Technical Reports Server (NTRS)

    Blackwell, J. A., Jr.; Hinson, B. L.

    1978-01-01

    An advanced rotor airfoil, designed utilizing supercritical airfoil technology and advanced design and analysis methodology is described. The airfoil was designed subject to stringent aerodynamic design criteria for improving the performance over the entire rotor operating regime. The design criteria are discussed. The design was accomplished using a physical plane, viscous, transonic inverse design procedure, and a constrained function minimization technique for optimizing the airfoil leading edge shape. The aerodynamic performance objectives of the airfoil are discussed.

  7. Application of decomposition techniques to the preliminary design of a transport aircraft

    NASA Technical Reports Server (NTRS)

    Rogan, J. E.; Mcelveen, R. P.; Kolb, M. A.

    1986-01-01

    A multifaceted decomposition of a nonlinear constrained optimization problem describing the preliminary design process for a transport aircraft has been made. Flight dynamics, flexible aircraft loads and deformations, and preliminary structural design subproblems appear prominently in the decomposition. The use of design process decomposition for scheduling design projects, a new system integration approach to configuration control, and the application of object-centered programming to a new generation of design tools are discussed.

  8. Interplanetary program to optimize simulated trajectories (IPOST). Volume 4: Sample cases

    NASA Technical Reports Server (NTRS)

    Hong, P. E.; Kent, P. D.; Olson, D. W.; Vallado, C. A.

    1992-01-01

    The Interplanetary Program to Optimize Simulated Trajectories (IPOST) is intended to support many analysis phases, from early interplanetary feasibility studies through spacecraft development and operations. The IPOST output provides information for sizing and understanding mission impacts related to propulsion, guidance, communications, sensor/actuators, payload, and other dynamic and geometric environments. IPOST models three degree of freedom trajectory events, such as launch/ascent, orbital coast, propulsive maneuvering (impulsive and finite burn), gravity assist, and atmospheric entry. Trajectory propagation is performed using a choice of Cowell, Encke, Multiconic, Onestep, or Conic methods. The user identifies a desired sequence of trajectory events, and selects which parameters are independent (controls) and dependent (targets), as well as other constraints and the cost function. Targeting and optimization are performed using the Stanford NPSOL algorithm. The IPOST structure allows sub-problems within a master optimization problem to aid in the general constrained parameter optimization solution. An alternate optimization method uses implicit simulation and collocation techniques.

  9. Kinematics and constraints associated with swashplate blade pitch control

    NASA Technical Reports Server (NTRS)

    Leyland, Jane A.

    1993-01-01

    An important class of techniques to reduce helicopter vibration is based on using a Higher Harmonic controller to optimally define the Higher Harmonic blade pitch. These techniques typically require solution of a general optimization problem requiring the determination of a control vector which minimizes a performance index where functions of the control vector are subject to inequality constraints. Six possible constraint functions associated with swashplate blade pitch control were identified and defined. These functions constrain: (1) blade pitch Fourier Coefficients expressed in the Rotating System, (2) blade pitch Fourier Coefficients expressed in the Nonrotating System, (3) stroke of the individual actuators expressed in the Nonrotating System, (4) blade pitch expressed as a function of blade azimuth and actuator stroke, (5) time rate-of-change of the aforementioned parameters, and (6) required actuator power. The aforementioned constraints and the associated kinematics of swashplate blade pitch control by means of the strokes of the individual actuators are documented.

  10. Digital robust active control law synthesis for large order systems using constrained optimization

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, Vivek

    1987-01-01

    This paper presents a direct digital control law synthesis procedure for a large order, sampled data, linear feedback system using constrained optimization techniques to meet multiple design requirements. A linear quadratic Gaussian type cost function is minimized while satisfying a set of constraints on the design loads and responses. General expressions for gradients of the cost function and constraints, with respect to the digital control law design variables, are derived analytically and computed by solving a set of discrete Liapunov equations. The designer can choose the structure of the control law and the design variables, hence a stable classical control law as well as an estimator-based full or reduced order control law can be used as an initial starting point. Selected design responses can be treated as constraints instead of lumping them into the cost function. This feature can be used to modify a control law to meet individual root mean square response limitations as well as minimum singular value restrictions. Low order, robust digital control laws were synthesized for gust load alleviation of a flexible remotely piloted drone aircraft.

  11. Optimal focal-plane restoration

    NASA Technical Reports Server (NTRS)

    Reichenbach, Stephen E.; Park, Stephen K.

    1989-01-01

    Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.

  12. Mixed Integer Programming and Heuristic Scheduling for Space Communication

    NASA Technical Reports Server (NTRS)

    Lee, Charles H.; Cheung, Kar-Ming

    2013-01-01

    Optimal planning and scheduling for a communication network was created where the nodes within the network are communicating at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP) to introduce a special penalty function to convert the MIP problem into a continuous optimization problem, and to solve the constrained optimization problem using heuristic optimization. The communication network consists of space and ground assets with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, and at other times, communication is impossible, as the asset could be inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method over the proposed formulation is approximately an order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing a special penalty function, the MIP problem with non-differentiable cost function and nonlinear constraints can be converted into a continuous variable problem, whose solution is possible.

  13. The Image-Optimized Corona; Progress on Using Coronagraph Images to Constrain Coronal Magnetic Field Models

    NASA Astrophysics Data System (ADS)

    Jones, S. I.; Uritsky, V. M.; Davila, J. M.

    2017-12-01

    In the absence of reliable coronal magnetic field measurements, solar physicists have worked for several decades to develop techniques for extrapolating photospheric magnetic field measurements into the solar corona and/or heliosphere. The products of these efforts tend to be very sensitive to variation in the photospheric measurements, such that the uncertainty in the photospheric measurements introduces significant uncertainty into the coronal and heliospheric models needed to predict such things as solar wind speed, IMF polarity at Earth, and CME propagation. Ultimately, the reason for the sensitivity of the model to the boundary conditions is that the model is trying to extract a great deal of information from a relatively small amount of data. We have published in recent years about a new method we are developing to use morphological information gleaned from coronagraph images to constrain models of the global coronal magnetic field. In our approach, we treat the photospheric measurements as approximations and use an optimization algorithm to iteratively find a global coronal model that best matches both the photospheric measurements and quasi-linear features observed in polarization brightness coronagraph images. Here we will summarize the approach we have developed and present recent progress in optimizing PFSS models based on GONG magnetograms and MLSO K-Cor images.

  14. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; furthermore, unconstrained solution algorithms can be used as part of constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, and a finite element structural analysis is performed to calculate the response of the system (such as displacements, stresses, and eigenvalues). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find a new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role, since it is needed for static, eigenvalue, and dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2, and Cray Y-MP. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into the widely used finite element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
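
    The constrained-to-unconstrained conversion mentioned here is commonly done with a penalty sequence; a minimal Python sketch of one standard recipe (not necessarily the report's exact scheme):

        from scipy.optimize import minimize

        def sumt(f, g, x0, mu=1.0, growth=10.0, outer_iters=6):
            """min f(x) s.t. g(x) <= 0 componentwise, via quadratic penalties."""
            x = x0
            for _ in range(outer_iters):
                penalized = lambda z, m=mu: f(z) + m * sum(max(0.0, gi) ** 2 for gi in g(z))
                x = minimize(penalized, x).x     # unconstrained subproblem
                mu *= growth                     # tighten the penalty
            return x

        f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
        g = lambda x: [x[0] + x[1] - 2.0]        # feasible set: x0 + x1 <= 2
        print(sumt(f, g, [0.0, 0.0]))            # approaches [1, 1]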

  15. Real-Time Optimization for use in a Control Allocation System to Recover from Pilot Induced Oscillations

    NASA Technical Reports Server (NTRS)

    Leonard, Michael W.

    2013-01-01

    Integration of the Control Allocation technique to recover from Pilot Induced Oscillations (CAPIO) System into the control system of a Short Takeoff and Landing Mobility Concept Vehicle simulation presents a challenge because the CAPIO formulation requires that constrained optimization problems be solved at the controller operating frequency. We present a solution that utilizes a modified version of the well-known L-BFGS-B solver. Despite the iterative nature of the solver, the method is seen to converge in real time with sufficient reliability to support three weeks of piloted runs at the NASA Ames Vertical Motion Simulator (VMS) facility. The results of the optimization are seen to be excellent in the vast majority of real-time frames. Deficiencies in the quality of the results in some frames are shown to be improvable with simple termination criteria adjustments, though more real-time optimization iterations would be required.
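
    The setup described, a bound-constrained solve with a hard iteration budget inside a fixed-rate loop, can be sketched with SciPy's L-BFGS-B; the quadratic objective below is a stand-in for the CAPIO allocation cost, not the actual formulation:

        import numpy as np
        from scipy.optimize import minimize

        H = np.array([[2.0, 0.3], [0.3, 1.0]])       # illustrative cost Hessian
        c = np.array([-1.0, -0.5])
        obj = lambda u: 0.5 * u @ H @ u + c @ u
        grad = lambda u: H @ u + c

        res = minimize(obj, np.zeros(2), jac=grad, method="L-BFGS-B",
                       bounds=[(-1.0, 1.0)] * 2,     # actuator limits
                       options={"maxiter": 20})      # budget to fit the frame time
        print(res.x, res.nit)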

  16. Configuration-shape-size optimization of space structures by material redistribution

    NASA Technical Reports Server (NTRS)

    Vandenbelt, D. N.; Crivelli, L. A.; Felippa, C. A.

    1993-01-01

    This project investigates the configuration-shape-size optimization (CSSO) of orbiting and planetary space structures. The project embodies three phases. In the first, the material-removal CSSO method introduced by Kikuchi and Bendsoe (KB) is further developed to gain understanding of finite element homogenization techniques as well as the associated constrained optimization algorithms, which must carry along a very large number (thousands) of design variables. In the CSSO-KB method an optimal structure is 'carved out' of a design domain initially filled with finite elements, by allowing perforations (microholes) to develop, grow, and merge. The second phase involves 'materialization' of space structures from the void, thus reversing the carving process. The third phase involves analysis of these structures for construction and operational constraints, with emphasis on packaging and deployment. The present paper describes progress in selected areas of the first project phase and the start of the second one.

  17. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using an inverse optimization technique. This is a two-stage optimization problem: the infiltration parameters are obtained in the first stage, and the unit hydrograph ordinates are estimated in the second. In order to combine this two-stage method into a single-stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches for the optimal unit hydrograph ordinates. The optimization model is solved using genetic algorithms. A reduction factor is used in the penalty parameter approach so that the optimal infiltration parameters already obtained are not destroyed during the subsequent generations of the genetic algorithm required for searching the optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated using two example problems. The evaluation shows that the model is superior, simple in concept, and has potential for field application.
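
    A hedged sketch of how such a reduction factor might enter the fitness evaluation; the names and functional form below are illustrative, not the paper's exact formulation:

        def penalized_fitness(runoff_error, infiltration_error, generation,
                              mu0=1.0e3, r=0.95):
            """Penalty weight on the infiltration misfit decays by a reduction
            factor r each generation, so infiltration parameters found early are
            not destroyed while the UH ordinates are refined."""
            mu = mu0 * (r ** generation)
            return runoff_error + mu * infiltration_error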

  18. Particle swarm optimization with recombination and dynamic linkage discovery.

    PubMed

    Chen, Ying-Ping; Peng, Wen-Chih; Jian, Ming-Chung

    2007-12-01

    In this paper, we try to improve the performance of the particle swarm optimizer by incorporating the linkage concept, which is an essential mechanism in genetic algorithms, and design a new linkage identification technique called dynamic linkage discovery to address the linkage problem in real-parameter optimization problems. Dynamic linkage discovery is a costless and effective linkage recognition technique that adapts the linkage configuration by employing only the selection operator without extra judging criteria irrelevant to the objective function. Moreover, a recombination operator that utilizes the discovered linkage configuration to promote the cooperation of particle swarm optimizer and dynamic linkage discovery is accordingly developed. By integrating the particle swarm optimizer, dynamic linkage discovery, and recombination operator, we propose a new hybridization of optimization methodologies called particle swarm optimization with recombination and dynamic linkage discovery (PSO-RDL). In order to study the capability of PSO-RDL, numerical experiments were conducted on a set of benchmark functions as well as on an important real-world application. The benchmark functions used in this paper were proposed in the 2005 Institute of Electrical and Electronics Engineers Congress on Evolutionary Computation. The experimental results on the benchmark functions indicate that PSO-RDL can provide a level of performance comparable to that given by other advanced optimization techniques. In addition to the benchmark, PSO-RDL was also used to solve the economic dispatch (ED) problem for power systems, which is a real-world problem and highly constrained. The results indicate that PSO-RDL can successfully solve the ED problem for the three-unit power system and obtain the currently known best solution for the 40-unit system.
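
    For orientation, a minimal global-best PSO loop in Python is sketched below; the recombination operator and dynamic linkage discovery that define PSO-RDL are omitted:

        import numpy as np

        def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                lo=-5.0, hi=5.0, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n, dim))
            v = np.zeros((n, dim))
            pbest, pval = x.copy(), np.array([f(p) for p in x])
            g = pbest[pval.argmin()].copy()          # global best
            for _ in range(iters):
                r1, r2 = rng.random((n, dim)), rng.random((n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                fx = np.array([f(p) for p in x])
                better = fx < pval
                pbest[better], pval[better] = x[better], fx[better]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()

        print(pso(lambda z: float(np.sum(z ** 2)), dim=5))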

  19. Experiences at Langley Research Center in the application of optimization techniques to helicopter airframes for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta; Kvaternik, Raymond G.

    1991-01-01

    A NASA/industry rotorcraft structural dynamics program known as Design Analysis Methods for VIBrationS (DAMVIBS) was initiated at Langley Research Center in 1984 with the objective of establishing the technology base needed by the industry for developing an advanced finite-element-based vibrations design analysis capability for airframe structures. As a part of the in-house activities contributing to that program, a study was undertaken to investigate the use of formal, nonlinear programming-based, numerical optimization techniques for airframe vibrations design work. Considerable progress has been made in connection with that study since its inception in 1985. This paper presents a unified summary of the experiences and results of that study. The formulation and solution of airframe optimization problems are discussed. Particular attention is given to describing the implementation of a new computational procedure based on MSC/NASTRAN and CONstrained function MINimization (CONMIN) in a computer program system called DYNOPT for the optimization of airframes subject to strength, frequency, dynamic response, and fatigue constraints. The results from the application of the DYNOPT program to the Bell AH-1G helicopter are presented and discussed.

  20. Conservative strategy-based ensemble surrogate model for optimal groundwater remediation design at DNAPLs-contaminated sites

    NASA Astrophysics Data System (ADS)

    Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo

    2017-08-01

    Surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to the generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted for comparison with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG), and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that: (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty; and (2) the ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.
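
    One common way to combine stand-alone surrogates into an ensemble is a convex combination weighted by cross-validation error; the sketch below assumes scikit-learn-style model objects with a predict method, and is a generic recipe rather than the paper's exact weighting scheme:

        import numpy as np

        def cv_weights(cv_errors):
            """Weight each surrogate inversely to its cross-validation error."""
            inv = 1.0 / np.asarray(cv_errors, dtype=float)
            return inv / inv.sum()

        def ensemble_predict(models, weights, X):
            preds = np.column_stack([m.predict(X) for m in models])
            return preds @ weights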

  1. Dipole and quadrupole synthesis of electric potential fields. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Tilley, D. G.

    1979-01-01

    A general technique for expanding an unknown potential field in terms of a linear summation of weighted dipole or quadrupole fields is described. Computational methods were developed for the iterative addition of dipole fields. Various solution potentials were compared inside the boundary with a more precise calculation of the potential to derive optimal schemes for locating the singularities of the dipole fields. Then, the problem of determining solutions to Laplace's equation on an unbounded domain as constrained by pertinent electron trajectory data was considered.

  2. Homotopy approach to optimal, linear quadratic, fixed architecture compensation

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1991-01-01

    Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained from the constrained linear quadratic Gaussian are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitation of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.

  3. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.

  4. Nonlinear programming extensions to rational function approximation methods for unsteady aerodynamic forces

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1988-01-01

    The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
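
    The Lagrange multiplier least-squares step named above has a compact linear-algebra form; a generic Python sketch of equality-constrained least squares (not the authors' code):

        import numpy as np

        def constrained_lstsq(A, b, C, d):
            """min ||A x - b||^2 subject to C x = d, via the KKT system."""
            n, m = A.shape[1], C.shape[0]
            KKT = np.block([[2.0 * A.T @ A, C.T],
                            [C, np.zeros((m, m))]])
            rhs = np.concatenate([2.0 * A.T @ b, d])
            sol = np.linalg.solve(KKT, rhs)
            return sol[:n]            # x; sol[n:] holds the multipliers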

  5. Optimal apodization design for medical ultrasound using constrained least squares part I: theory.

    PubMed

    Guenther, Drake A; Walker, William F

    2007-02-01

    Aperture weighting functions are critical design parameters in the development of ultrasound systems because beam characteristics affect the contrast and point resolution of the final output image. In previous work by our group, we developed a metric that quantifies a broadband imaging system's contrast resolution performance. We now use this metric to formulate a novel general ultrasound beamformer design method. In our algorithm, we use constrained least squares (CLS) techniques and a linear algebra formulation to describe the system point spread function (PSF) as a function of the aperture weightings. In one approach, we minimize the energy of the PSF outside a certain boundary and impose a linear constraint on the aperture weights. In a second approach, we minimize the energy of the PSF outside a certain boundary while imposing a quadratic constraint on the energy of the PSF inside the boundary. We present detailed analysis for an arbitrary ultrasound imaging system and discuss several possible applications of the CLS techniques, such as designing aperture weightings to maximize contrast resolution and improve the system depth of field.
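
    The second approach, minimizing the PSF energy outside a boundary subject to a quadratic constraint on the energy inside, is a Rayleigh-quotient problem that reduces to a generalized eigenvalue problem. A hedged Python sketch, where R_out and R_in are assumed to be the symmetric PSF energy matrices outside and inside the boundary, with R_in positive definite:

        import numpy as np
        from scipy.linalg import eigh

        def cls_apodization(R_out, R_in):
            """Aperture weights minimizing w' R_out w subject to w' R_in w = 1."""
            vals, vecs = eigh(R_out, R_in)   # generalized symmetric eigenproblem
            return vecs[:, 0]                # eigenvector of the smallest eigenvalue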

  6. Thermodynamic optimization of mixed refrigerant Joule-Thomson systems constrained by heat transfer considerations

    NASA Astrophysics Data System (ADS)

    Hinze, J. F.; Klein, S. A.; Nellis, G. F.

    2015-12-01

    Mixed refrigerant (MR) working fluids can significantly increase the cooling capacity of a Joule-Thomson (JT) cycle. The optimization of MRJT systems has been the subject of substantial research. However, most optimization techniques do not model the recuperator in sufficient detail. For example, the recuperator is usually assumed to have a heat transfer coefficient that does not vary with the mixture. Ongoing work at the University of Wisconsin-Madison has shown that the heat transfer coefficients for two-phase flow are approximately three times greater than for a single-phase mixture when the mixture quality is between 15% and 85%. As a result, a system that optimizes an MR without also requiring that the flow be in this quality range may require an extremely large recuperator or not achieve the performance predicted by the model. To ensure optimal performance of the JT cycle, the MR should be selected such that it is entirely two-phase within the recuperator. To determine the optimal MR composition, a parametric study was conducted assuming a thermodynamically ideal cycle. The results of the parametric study are graphically presented on a contour plot in the parameter space consisting of the extremes of the qualities that exist within the recuperator. The contours show constant values of the normalized refrigeration power. This ‘map’ shows the effect of MR composition on the cycle performance, and it can be used to select the MR that provides a high cooling load while also constraining the recuperator to be two-phase. The predicted best MR composition can be used as a starting point for experimentally determining the best MR.

  7. Chance-constrained multi-objective optimization of groundwater remediation design at DNAPLs-contaminated sites using a multi-algorithm genetically adaptive method.

    PubMed

    Ouyang, Qi; Lu, Wenxi; Hou, Zeyu; Zhang, Yu; Li, Shuai; Luo, Jiannan

    2017-05-01

    In this paper, a multi-algorithm genetically adaptive multi-objective (AMALGAM) method is proposed as a multi-objective optimization solver. It was implemented in the multi-objective optimization of a groundwater remediation design at sites contaminated by dense non-aqueous phase liquids. In this study, there were two objectives: minimization of the total remediation cost, and minimization of the remediation time. A non-dominated sorting genetic algorithm II (NSGA-II) was adopted for comparison with the proposed method. For efficiency, the time-consuming surfactant-enhanced aquifer remediation simulation model was replaced by a surrogate model constructed with a multi-gene genetic programming (MGGP) technique. Similarly, two other surrogate modeling methods, support vector regression (SVR) and Kriging (KRG), were employed for comparison with MGGP. In addition, the surrogate-modeling uncertainty was incorporated in the optimization model by chance-constrained programming (CCP). The results showed that, for the problem considered in this study, (1) the solutions obtained by AMALGAM incurred less remediation cost and required less time than those of NSGA-II, indicating that AMALGAM outperformed NSGA-II; (2) the MGGP surrogate model was more accurate than SVR and KRG; and (3) the remediation cost and time increased with the confidence level, which can enable decision makers to make a suitable choice by considering the given budget, remediation time, and reliability. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Improvement of the cruise performances of a wing by means of aerodynamic optimization. Validation with a Far-Field method

    NASA Astrophysics Data System (ADS)

    Jiménez-Varona, J.; Ponsin Roca, J.

    2015-06-01

    Under a contract with AIRBUS MILITARY (AI-M), an exercise to analyze the potential of optimization techniques to improve wing performance at cruise conditions has been carried out using an in-house design code. The original wing was provided by AI-M, and several constraints were posed for the redesign. To maximize the aerodynamic efficiency at cruise, optimizations were performed using the design techniques developed internally at INTA under a research program (Programa de Termofluidodinámica). The code is a gradient-based optimization code, which uses a classical finite differences approach for gradient computations. Several techniques for search direction computation are implemented for unconstrained and constrained problems. Techniques for geometry modification are based on different approaches, which include perturbation functions for the thickness and/or mean line distributions and others based on Bézier curve fitting of a certain degree. It is very important to address a real design, which involves several constraints that significantly reduce the feasible design space, and an assessment of the code is needed in order to check its capabilities and possible drawbacks. Lessons learnt will help in the development of future enhancements. In addition, the validation of the results was done using the well-known TAU flow solver and a far-field drag method in order to determine accurately the improvement in terms of drag counts.

  9. Wind Farm Turbine Type and Placement Optimization

    NASA Astrophysics Data System (ADS)

    Graf, Peter; Dykes, Katherine; Scott, George; Fields, Jason; Lunacek, Monte; Quick, Julian; Rethore, Pierre-Elouan

    2016-09-01

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  10. Wind farm turbine type and placement optimization

    DOE PAGES

    Graf, Peter; Dykes, Katherine; Scott, George; ...

    2016-10-03

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  11. Use of constrained optimization in the conceptual design of a medium-range subsonic transport

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1980-01-01

    Constrained parameter optimization was used to perform the optimal conceptual design of a medium range transport configuration. The impact of choosing a given performance index was studied, and the required income for a 15 percent return on investment was proposed as a figure of merit. A number of design constants and constraint functions were systematically varied to document the sensitivities of the optimal design to a variety of economic and technological assumptions. A comparison was made for each of the parameter variations between the baseline configuration and the optimally redesigned configuration.

  12. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    NASA Astrophysics Data System (ADS)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value of the flight path angle. A summary of performance results for all these guidance laws is presented in the fifth part of this thesis along with recommendations for further research.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter; Dykes, Katherine; Scott, George

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  14. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching for an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point per iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, the earlier objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  15. Pareto-optimal estimates that constrain mean California precipitation change

    NASA Astrophysics Data System (ADS)

    Langenbrunner, B.; Neelin, J. D.

    2017-12-01

    Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Climate Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.

  16. Integrated Control Using the SOFFT Control Structure

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1996-01-01

    The need for integrated/constrained control systems has become clearer as advanced aircraft introduced new coupled subsystems, such as new propulsion subsystems with thrust vectoring and new aerodynamic designs. In this study, we develop an integrated control design methodology which accommodates constraints among subsystem variables while using the Stochastic Optimal Feedforward/Feedback Control Technique (SOFFT), thus maintaining all the advantages of the SOFFT approach. The Integrated SOFFT Control methodology uses a centralized feedforward control and a constrained feedback control law. The control thus takes advantage of the known coupling among the subsystems while maintaining the identity of subsystems for validation purposes and the simplicity of the feedback law for understanding the system response in complicated nonlinear scenarios. The Variable-Gain Output Feedback Control methodology (including constant-gain output feedback) is extended to accommodate equality constraints. A gain computation algorithm is developed. The designer can set the cross-gains between two variables or subsystems to zero or another value and optimize the remaining gains subject to the constraint. An integrated control law is designed for a modified F-15 SMTD aircraft model with coupled airframe and propulsion subsystems using the Integrated SOFFT Control methodology to produce a set of desired flying qualities.

  17. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.

  18. Constraining anomalous Higgs boson couplings to the heavy-flavor fermions using matrix element techniques

    NASA Astrophysics Data System (ADS)

    Gritsan, Andrei V.; Röntsch, Raoul; Schulze, Markus; Xiao, Meng

    2016-09-01

    In this paper, we investigate anomalous interactions of the Higgs boson with heavy fermions, employing shapes of kinematic distributions. We study the processes pp → tt̄ + H, bb̄ + H, tq + H, and pp → H → τ⁺τ⁻, and present applications of event generation, reweighting techniques for fast simulation of anomalous couplings, as well as matrix element techniques for optimal sensitivity. We extend the matrix element likelihood approach (MELA) technique, which proved to be a powerful matrix element tool for Higgs boson discovery and characterization during Run I of the LHC, and implement all analysis tools in the JHU generator framework. A next-to-leading-order QCD description of the pp → tt̄ + H process allows us to investigate the performance of the MELA in the presence of extra radiation. Finally, projections for LHC measurements through the end of Run III are presented.

  19. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  20. On advanced configuration enhance adaptive system optimization

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Wang, Helong; Guo, Chunjie; Chen, Hongliang; Zhou, Liwei

    2017-10-01

    The aim of this work is to find an effective method for structuring and enhancing adaptive systems with complex functions, and to establish a universally applicable solution for prototyping and optimization. As the most attractive component in an adaptive system, the wavefront corrector is constrained by conventional techniques and components, exhibiting polarization dependence and a narrow working waveband. An advanced configuration based on a polarized beam splitter with an optimized energy-splitting method is used to overcome these problems effectively. With the global algorithm, the bandwidth has been amplified by more than five times compared with that of traditional designs. Simulation results show that the system can meet the application requirements in MTF and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are therefore the prototype selection and the system configuration; the results show their effectiveness.

  1. Market penetration of energy supply technologies

    NASA Astrophysics Data System (ADS)

    Condap, R. J.

    1980-03-01

    Techniques to incorporate the concepts of profit-induced growth and risk aversion into policy-oriented optimization models of the domestic energy sector are examined. After reviewing the pertinent market penetration literature, simple mathematical programs in which the introduction of new energy technologies is constrained primarily by the reinvestment of profits are formulated. The main results involve the convergence behavior of technology production levels under various assumptions about the form of the energy demand function. Next, profitability growth constraints are embedded in a full-scale model of U.S. energy-economy interactions. A rapidly convergent algorithm is developed to utilize optimal shadow prices in the computation of profitability for individual technologies. Allowance is made for additional policy variables such as government funding and taxation. The result is an optimal deployment schedule for current and future energy technologies which is consistent with the sector's ability to finance capacity expansion.

  2. Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel

    2011-09-01

    The problem of tuning the parameters of nonlinear dynamical systems, such that the attained results are considered good ones, is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GAs). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters which attain good gaits in terms of speed, vibration, and stability. Moreover, two constraint-handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and a wide stability margin for a quadruped slow crawl gait.

  3. Application of augmented-Lagrangian methods in meteorology: Comparison of different conjugate-gradient codes for large-scale minimization

    NASA Technical Reports Server (NTRS)

    Navon, I. M.

    1984-01-01

    A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.

  4. Modeling of Density-Dependent Flow based on the Thermodynamically Constrained Averaging Theory

    NASA Astrophysics Data System (ADS)

    Weigand, T. M.; Schultz, P. B.; Kelley, C. T.; Miller, C. T.; Gray, W. G.

    2016-12-01

    The thermodynamically constrained averaging theory (TCAT) has been used to formulate general classes of porous medium models, including new models for density-dependent flow. The TCAT approach provides advantages that include a firm connection between the microscale, or pore scale, and the macroscale; a thermodynamically consistent basis; explicit inclusion of factors such as diffusion arising from gradients associated with pressure and activity; and the ability to describe both high- and low-concentration displacement. The TCAT model is presented, closure relations for the TCAT model are postulated based on microscale averages, and a parameter estimation is performed on a subset of the experimental data. Due to the sharpness of the fronts, an adaptive moving mesh technique was used to ensure grid-independent solutions within the run time constraints. The optimized parameters are then used for forward simulations and compared to the set of experimental data not used for the parameter estimation.

  5. Development of a Split Bitter-type Magnet System for Dusty Plasma Experiments

    NASA Astrophysics Data System (ADS)

    Bates, Evan; Romero-Talamas, Carlos A.; Birmingham, William J.; Rivera, William F.

    2014-10-01

    A 10 Tesla Bitter-type magnet system is under development at the Dusty Plasma Laboratory of the University of Maryland, Baltimore County (UMBC). We present here an optimization technique that uses differential evolution to minimize the ohmic heating produced by the coils, while constraining the magnetic field in the experimental volume. The code gives us the optimal dimensions for the coil system, including coil length, turn thickness, disk radii, resistance, and the total current required for a constant magnetic field. Finite element parametric optimization is then used to establish the optimal design for the water cooling holes. Placement of the cooling holes will also take into consideration the magnetic forces acting on the copper alloy disks to ensure the material strength is not compromised during operation. The proposed power and cooling water delivery subsystems for the coils are also presented. Upon completion and testing of the magnet system, planned experiments include the propagation of magnetized waves in dusty plasma crystals under various boundary conditions, and viscosity in rotational shear flow, among others.
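
    SciPy's differential evolution supports this kind of constrained minimization directly; a hedged sketch with placeholder loss and field models (not the UMBC magnet model):

        import numpy as np
        from scipy.optimize import differential_evolution, NonlinearConstraint

        def ohmic_loss(p):          # p = (coil_length, turn_thickness), illustrative
            L, t = p
            return L / t            # toy proxy for resistive heating

        def field(p):               # placeholder field model
            L, t = p
            return 2.0 * L * t

        con = NonlinearConstraint(field, 10.0, np.inf)   # require B >= 10 (arbitrary units)
        res = differential_evolution(ohmic_loss,
                                     bounds=[(0.5, 5.0), (0.1, 1.0)],
                                     constraints=(con,), seed=1)
        print(res.x, res.fun)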

  6. Design and optimization of color lookup tables on a simplex topology.

    PubMed

    Monga, Vishal; Bala, Raja; Mo, Xuan

    2012-04-01

    An important computational problem in color imaging is the design of color transforms that map color between devices or from a device-dependent space (e.g., RGB/CMYK) to a device-independent space (e.g., CIELAB) and vice versa. Real-time processing constraints entail that such nonlinear color transforms be implemented using multidimensional lookup tables (LUTs). Furthermore, relatively sparse LUTs (with efficient interpolation) are employed in practice because of storage and memory constraints. This paper presents a principled design methodology rooted in constrained convex optimization to design color LUTs on a simplex topology. The use of n-simplexes, i.e., simplexes in n dimensions, as opposed to traditional lattices, has recently been of great interest in color LUT design, as simplex topologies allow both more analytically tractable formulations and greater efficiency in the LUT. In this framework of n-simplex interpolation, our central contribution is to develop an elegant iterative algorithm that jointly optimizes the placement of the nodes of the color LUT and the output values at those nodes to minimize interpolation error in an expected sense. This is in contrast to existing work, which exclusively designs either node locations or output values. We also develop new analytical results for the problem of node location optimization, which reduces to constrained optimization of a large but sparse interpolation matrix in our framework. We evaluate our n-simplex color LUTs against state-of-the-art lattice-based (e.g., International Color Consortium profiles) and simplex-based techniques for approximating two representative multidimensional color transforms that characterize a CMYK xerographic printer and an RGB scanner, respectively. The results show that color LUTs designed on simplexes offer very significant benefits over traditional lattice-based alternatives in improving color transform accuracy, even with a much smaller number of nodes.

  7. Joint Optimization of Vertical Component Gravity and Seismic P-wave First Arrivals by Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.

    2015-12-01

    Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could replace the existing strategy of forward modeling to match gravity data.

  8. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDE) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, micro-chip cooling optimization, etc. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problem, which only have linear PDEs as constraints, into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient, it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a global optimal solution for the original PDECCO problem.

  9. Optimal synchronization in space

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-02-01

    In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.

  10. A feasible DY conjugate gradient method for linear equality constraints

    NASA Astrophysics Data System (ADS)

    LI, Can

    2017-09-01

    In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the Dai-Yuan conjugate gradient method to the linear equality constrained setting. It can be applied to large linear equality constrained problems due to its lower storage requirements. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
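
    The feasibility mechanism can be sketched in a few lines of Python: if the current iterate satisfies A x = b, restricting search directions to the null space of A keeps all iterates feasible. The DY update below is the standard formula; this is not the author's complete algorithm:

        import numpy as np

        def nullspace_projector(A):
            """P = I - A^T (A A^T)^{-1} A projects onto {d : A d = 0}."""
            return np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)

        def dai_yuan_beta(g_new, g_old, d):
            """Dai-Yuan conjugacy parameter: ||g_new||^2 / d'(g_new - g_old)."""
            return (g_new @ g_new) / (d @ (g_new - g_old))

        # A feasible descent step: d_new = -P @ g_new + beta * d_old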

  11. A shifted hyperbolic augmented Lagrangian-based artificial fish two-swarm algorithm with guaranteed convergence for constrained global optimization

    NASA Astrophysics Data System (ADS)

    Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.

    2016-12-01

    This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the εk-global minimization of a bound constrained optimization subproblem, where εk → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.

  12. On meeting capital requirements with a chance-constrained optimization model.

    PubMed

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital-to-risk-asset-ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets, and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital-to-risk-asset-ratio chance constraint. We analyze our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures, in order to derive valuable insights from a financial outlook. Our results suggest that our capital-to-risk-asset-ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.

  13. Robust Constrained Optimization Approach to Control Design for International Space Station Centrifuge Rotor Auto Balancing Control System

    NASA Technical Reports Server (NTRS)

    Postma, Barry Dirk

    2005-01-01

    This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.

  14. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
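
    The penalty trick at the heart of such mappings can be shown for a single linear equality constraint over binary variables; a generic Python sketch, not the paper's PDE-based construction:

        import numpy as np

        def qubo_with_equality(Q, a, b, mu):
            """QUBO matrix for min x'Qx + mu*(a'x - b)^2 over binary x.
            Linear terms go on the diagonal since x_i^2 = x_i for binary x;
            the constant mu*b^2 is dropped."""
            P = Q + mu * np.outer(a, a)
            P = P + np.diag(-2.0 * mu * b * np.asarray(a, dtype=float))
            return P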

  15. A Multiuser Manufacturing Resource Service Composition Method Based on the Bees Algorithm

    PubMed Central

    Xie, Yongquan; Zhou, Zude; Pham, Duc Truong; Xu, Wenjun; Ji, Chunqian

    2015-01-01

    In order to realize optimal resource service allocation in the current open and service-oriented manufacturing model, multiuser resource service composition (RSC) is modeled as a combinatorial and constrained multiobjective problem. The model takes into account both subjective and objective quality of service (QoS) properties as representatives to evaluate a solution. The QoS property aggregation and evaluation techniques are based on existing research. The basic Bees Algorithm is tailored to find a near-optimal solution to the model, since the basic version was proposed only for finding a desired solution in a continuous domain and is thus not suitable for the problem modeled in our study. Particular rules are designed for handling the constraints and finding Pareto optimality. In addition, the established model introduces a trusted service set for each user, so that the algorithm can start by searching in the neighborhood of service chains (known as seeds) that are more reliable than randomly generated ones. The advantages of these techniques are validated by experiments in terms of success rate, searching speed, ability to avoid local optima, and so forth. The results demonstrate the effectiveness of the proposed method in handling multiuser RSC problems. PMID:26339232

  16. Modern Optimization Methods in Minimum Weight Design of Elastic Annular Rotating Disk with Variable Thickness

    NASA Astrophysics Data System (ADS)

    Jafari, S.; Hojjati, M. H.

    2011-12-01

    Rotating disks mostly operate at high angular velocity, which results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower costs. This paper aims at finding an optimal disk thickness profile for minimum weight design using simulated annealing (SA) and particle swarm optimization (PSO), two modern optimization techniques. In the semi-analytical approach employed, the radial domain of the disk is divided into virtual sub-domains (rings), and the weight of each ring is minimized. The inequality constraint used in the optimization ensures that the maximum von Mises stress always remains below the yield strength of the disk material, so that the rotating disk does not fail. The results show that the minimum weights obtained by the two methods are almost identical. The PSO method gives a profile with slightly lower weight (6.9% less than SA), while both PSO and SA are easy to implement and provide more flexibility than classical methods.
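
    To make the constraint handling concrete, the following is a minimal penalty-based PSO sketch for ring-wise weight minimization; the ring geometry, material data and the stand-in stress model are invented placeholders for the paper's semi-analytical stress solution.

      import numpy as np
      rng = np.random.default_rng(0)

      def disk_weight(t, r, rho=7800.0):   # t: ring thicknesses [m], r: ring mid-radii [m]
          return float(np.sum(rho * t * 2 * np.pi * r))   # thin-ring mass, up to ring width

      def penalised(t, r, max_vm_stress, yield_strength=250e6, big=1e9):
          # Penalty constraint handling: max_vm_stress(t) stands in for the
          # semi-analytical von Mises stress solution, not reproduced here.
          return disk_weight(t, r) + big * max(0.0, max_vm_stress(t) - yield_strength)

      def pso(f, lo, hi, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          dim = lo.size
          x = rng.uniform(lo, hi, (n, dim)); v = np.zeros((n, dim))
          p, pf = x.copy(), np.array([f(xi) for xi in x])   # personal bests
          g = p[pf.argmin()].copy()                         # global best
          for _ in range(iters):
              v = w*v + c1*rng.random((n, dim))*(p - x) + c2*rng.random((n, dim))*(g - x)
              x = np.clip(x + v, lo, hi)
              fx = np.array([f(xi) for xi in x])
              better = fx < pf
              p[better], pf[better] = x[better], fx[better]
              g = p[pf.argmin()].copy()
          return g

      r = np.linspace(0.05, 0.3, 10)           # ten rings
      toy_stress = lambda t: 1.0e6 / t.min()   # toy monotone stress model
      t_opt = pso(lambda t: penalised(t, r, toy_stress),
                  lo=np.full(10, 0.005), hi=np.full(10, 0.05))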

  17. Critical transition in the constrained traveling salesman problem.

    PubMed

    Andrecut, M; Ali, M K

    2001-04-01

    We investigate the finite-size scaling of the mean optimal tour length as a function of the density of obstacles in a constrained variant of the traveling salesman problem (TSP). The computational experiments revealed a critical transition (at a density ρc ≈ 85%) in the dependence between the excess of the mean optimal tour length over the Held-Karp lower bound and the density of obstacles.

  18. Robust media processing on programmable power-constrained systems

    NASA Astrophysics Data System (ADS)

    McVeigh, Jeff

    2005-03-01

    To achieve consumer-level quality, media systems must process continuous streams of audio and video data while maintaining exacting tolerances on sampling rate, jitter, synchronization, and latency. While it is relatively straightforward to design fixed-function hardware implementations to satisfy worst-case conditions, there is a growing trend to utilize programmable multi-tasking solutions for media applications. The flexibility of these systems enables support for multiple current and future media formats, which can reduce design costs and time-to-market. This paper provides practical engineering solutions to achieve robust media processing on such systems, with specific attention given to power-constrained platforms. The techniques covered in this article utilize the fundamental concepts of algorithm and software optimization, software/hardware partitioning, stream buffering, hierarchical prioritization, and system resource and power management. A novel enhancement to dynamically adjust processor voltage and frequency based on buffer fullness to reduce system power consumption is examined in detail. The application of these techniques is provided in a case study of a portable video player implementation based on a general-purpose processor running a non real-time operating system that achieves robust playback of synchronized H.264 video and MP3 audio from local storage and streaming over 802.11.
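
    The buffer-driven voltage and frequency adjustment can be sketched in a few lines; the thresholds and the frequency/voltage table below are invented illustration values, not the operating points of the paper's case study.

      # Buffer-driven DVFS policy sketch (all values hypothetical).
      FREQ_TABLE = [(600, 0.9), (800, 1.0), (1000, 1.1), (1200, 1.2)]   # (MHz, V)

      def select_operating_point(buffer_fullness):
          """Pick the lowest frequency/voltage pair consistent with the slack
          indicated by the output buffer: a fuller buffer means the decoder is
          ahead of the renderer, so the processor can slow down and save power
          (dynamic power scales roughly with f * V^2)."""
          if buffer_fullness > 0.75:
              return FREQ_TABLE[0]
          if buffer_fullness > 0.50:
              return FREQ_TABLE[1]
          if buffer_fullness > 0.25:
              return FREQ_TABLE[2]
          return FREQ_TABLE[3]          # nearly empty: sprint to refill

      for fill in (0.9, 0.6, 0.3, 0.1):
          mhz, volts = select_operating_point(fill)
          print(f"buffer {fill:.0%} -> {mhz} MHz @ {volts} V")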

  19. Design of coupled MACE filters for optical pattern recognition using practical spatial light modulators

    NASA Technical Reports Server (NTRS)

    Rajan, P. K.; Khan, Ajmal

    1993-01-01

    Spatial light modulators (SLMs) are being used in correlation-based optical pattern recognition systems to implement the Fourier domain filters. Currently available SLMs have certain limitations with respect to the realizability of these filters. Therefore, it is necessary to incorporate the SLM constraints in the design of the filters. The design of a SLM-constrained minimum average correlation energy (SLM-MACE) filter using the simulated annealing-based optimization technique was investigated. The SLM-MACE filter was synthesized for three different types of constraints. The performance of the filter was evaluated in terms of its recognition (discrimination) capabilities using computer simulations. The correlation plane characteristics of the SLM-MACE filter were found to be reasonably good. The SLM-MACE filter yielded far better results than the analytical MACE filter implemented on practical SLMs using the constrained magnitude technique. Further, the filter performance was evaluated in the presence of noise in the input test images. This work demonstrated the need to include the SLM constraints in the filter design. Finally, a method is suggested to reduce the computation time required for the synthesis of the SLM-MACE filter.

  20. SU-F-T-380: Comparing the Effect of Respiration On Dose Distribution Between Conventional Tangent Pair and IMRT Techniques for Adjuvant Radiotherapy in Early Stage Breast Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, M; Ramaseshan, R

    2016-06-15

    Purpose: In this project, we compared the conventional tangent pair technique to the IMRT technique by analyzing the dose distribution. We also investigated the effect of respiration on planning target volume (PTV) dose coverage in both techniques. Methods: In order to implement the IMRT technique, a template-based planning protocol, dose constraints and treatment process were developed. Two open fields with optimized field weights were combined with two beamlet-optimization fields in the IMRT plans. We compared the dose distribution between the standard tangential pair and IMRT. The improvement in dose distribution was measured by parameters such as the conformity index, homogeneity index and coverage index. Another endpoint was whether the IMRT technique would reduce planning time for staff. The effect of the patient's respiration on the dose distribution was also estimated. Four-dimensional computed tomography (4DCT) for different phases of the breathing cycle was used to evaluate the effect of respiration on the IMRT-planned dose distribution. Results: We accumulated 10 patients who underwent 4DCT and were planned with both techniques. Based on the preliminary analysis, the dose distribution in the IMRT technique was better than in the conventional tangent pair technique. Furthermore, the effect of respiration in the IMRT plan was not significant, as evident from the 95% isodose line coverage of the PTV drawn on all phases of the 4DCT. Conclusion: Based on the 4DCT images, the breathing effect on the dose distribution was smaller than expected. We suspect that there are two reasons. First, the PTV movement due to respiration was not significant, possibly because we used a tilted breast board to set up patients. Second, the open fields with optimized field weights in the IMRT technique might reduce the breathing effect on the dose distribution. A further investigation is necessary.

  1. String-averaging incremental subgradients for constrained convex optimization with applications to reconstruction of tomographic images

    NASA Astrophysics Data System (ADS)

    Massambone de Oliveira, Rafael; Salomão Helou, Elias; Fontoura Costa, Eduardo

    2016-11-01

    We present a method for non-smooth convex minimization which is based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings) and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are averaged to form the next iterate. The method is useful for solving sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests performed in a tomographic image reconstruction application show good convergence speed, measured as the decrease ratio of the objective function, in comparison to the classical ISM.
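
    A minimal sketch of one such iteration follows, assuming a user-supplied subgradient oracle subgrad(i, y) for the i-th data term; the toy piecewise-linear fit and the diminishing step rule are illustrative choices rather than the paper's experimental setup.

      import numpy as np

      def sa_ism_iteration(x, strings, subgrad, step):
          """One string-averaging incremental-subgradient iteration: each string
          processes its data terms sequentially from the common iterate x
          (parallelisable across strings); the end-points are then averaged."""
          endpoints = []
          for string in strings:                 # e.g. [[0, 1, 2], [3, 4, 5]]
              y = x.copy()
              for i in string:                   # incremental pass along one string
                  y = y - step * subgrad(i, y)
              endpoints.append(y)
          return np.mean(endpoints, axis=0)

      # Toy non-smooth problem: minimize sum_i |a_i . x - b_i|
      rng = np.random.default_rng(1)
      A, b = rng.normal(size=(6, 2)), rng.normal(size=6)
      sg = lambda i, y: np.sign(A[i] @ y - b[i]) * A[i]   # subgradient of |a_i.y - b_i|
      x = np.zeros(2)
      for k in range(200):                                # diminishing steps
          x = sa_ism_iteration(x, [[0, 1, 2], [3, 4, 5]], sg, step=1.0 / (k + 1))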

  2. Photon-efficient super-resolution laser radar

    NASA Astrophysics Data System (ADS)

    Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.

    2017-08-01

    The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.

  3. Inverse design of near unity efficiency perfectly vertical grating couplers

    NASA Astrophysics Data System (ADS)

    Michaels, Andrew; Yablonovitch, Eli

    2018-02-01

    Efficient coupling between integrated optical waveguides and optical fibers is essential to the success of integrated photonics. While many solutions exist, perfectly vertical grating couplers which scatter light out of a waveguide in the direction normal to the waveguide's top surface are an ideal candidate due to their potential to reduce packaging complexity. Designing such couplers with high efficiency, however, has proven difficult. In this paper, we use electromagnetic inverse design techniques to optimize a high efficiency two-layer perfectly vertical silicon grating coupler. Our base design achieves a chip-to-fiber coupling efficiency of over 99% (-0.04 dB) at 1550 nm. Using this base design, we apply subsequent constrained optimizations to achieve vertical couplers with over 96% efficiency which are fabricable using a 65 nm process.

  4. Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations

    DOE PAGES

    Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha

    2015-04-30

    Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.

  5. Energy efficient LED layout optimization for near-uniform illumination

    NASA Astrophysics Data System (ADS)

    Ali, Ramy E.; Elgala, Hany

    2016-09-01

    In this paper, we consider the problem of designing energy efficient light emitting diodes (LEDs) layout while satisfying the illumination constraints. Towards this objective, we present a simple approach to the illumination design problem based on the concept of the virtual LED. We formulate a constrained optimization problem for minimizing the power consumption while maintaining a near-uniform illumination throughout the room. By solving the resulting constrained linear program, we obtain the number of required LEDs and the optimal output luminous intensities that achieve the desired illumination constraints.
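
    Since the design reduces to a constrained linear program, it can be prototyped directly with an off-the-shelf LP solver. In the sketch below, the gain matrix G, the lux band and the grid are hypothetical stand-ins for a proper photometric model of the room.

      import numpy as np
      from scipy.optimize import linprog

      # Hypothetical gain matrix G[j, i]: illuminance at grid point j per unit
      # luminous intensity of (virtual) LED i; a real model would use, e.g.,
      # a Lambertian emission pattern and the room geometry.
      rng = np.random.default_rng(2)
      G = rng.uniform(0.5, 1.5, size=(20, 8))   # 20 grid points, 8 virtual LEDs
      E_min, E_max = 300.0, 500.0               # near-uniformity band [lux]

      # min 1^T x  (power proportional to total intensity)
      # s.t. E_min <= G x <= E_max, x >= 0
      n = G.shape[1]
      res = linprog(c=np.ones(n),
                    A_ub=np.vstack([-G, G]),
                    b_ub=np.hstack([-E_min * np.ones(len(G)), E_max * np.ones(len(G))]),
                    bounds=[(0, None)] * n)
      if res.success:
          print("optimal LED intensities:", res.x.round(1))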

  6. Constrained optimization of sequentially generated entangled multiqubit states

    NASA Astrophysics Data System (ADS)

    Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique

    2009-08-01

    We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.

  7. Solution of nonlinear multivariable constrained systems using a gradient projection digital algorithm that is insensitive to the initial state

    NASA Technical Reports Server (NTRS)

    Hargrove, A.

    1982-01-01

    Optimal digital control of nonlinear multivariable constrained systems was studied. The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.

  8. Modified Backtracking Search Optimization Algorithm Inspired by Simulated Annealing for Constrained Engineering Optimization Problems

    PubMed Central

    Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen

    2018-01-01

    The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635

  9. Management of Occupational Exposure to Engineered Nanoparticles Through a Chance-Constrained Nonlinear Programming Approach

    PubMed Central

    Chen, Zhi; Yuan, Yuan; Zhang, Shu-Shen; Chen, Yu; Yang, Feng-Lin

    2013-01-01

    Critical environmental and human health concerns are associated with the rapidly growing fields of nanotechnology and manufactured nanomaterials (MNMs). The main risk arises from occupational exposure via chronic inhalation of nanoparticles. This research presents a chance-constrained nonlinear programming (CCNLP) optimization approach, which is developed to maximize nanomaterial production and minimize the risks of workplace exposure to MNMs. The CCNLP method integrates nonlinear programming (NLP) and chance-constrained programming (CCP), and handles uncertainties associated with both nanomaterial production and workplace exposure control. The CCNLP method was examined through a single-walled carbon nanotube (SWNT) manufacturing process. The study results provide optimal production strategies and alternatives. They reveal that a high control measure guarantees that environmental health and safety (EHS) regulations are met, while a lower control level leads to increased risk of violating EHS regulations. The CCNLP optimization approach is a decision support tool for the optimization of growing MNM manufacturing with workplace safety constraints under uncertainties. PMID:23531490

  10. A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.

    PubMed

    Quan, Quan; Cai, Kai-Yuan

    2016-02-01

    In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed based on which a feasible point method to continuous-time, equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or say a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
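
    For contrast, the classical feasible-direction dynamics that this work improves upon are easy to state: project the gradient onto the tangent space of the constraint and flow downhill. The sketch below uses a Moore-Penrose pseudo-inverse to sidestep the singular (J J^T)^{-1} when constraint gradients become dependent; this is a simple workaround, not the paper's new projection matrix.

      import numpy as np

      def projected_gradient_step(x, grad_f, jac_h, dt=1e-2):
          """Euler step of the classical feasible dynamics x' = -P(x) grad f(x),
          with P = I - J^+ J projecting onto the tangent space of h(x) = 0.
          The pseudo-inverse J^+ avoids inverting J J^T when constraint
          gradients are dependent (the paper proposes a different,
          singularity-free projection)."""
          J = np.atleast_2d(jac_h(x))
          P = np.eye(x.size) - np.linalg.pinv(J) @ J
          return x - dt * P @ grad_f(x)

      # Toy problem: min x0^2 + 2 x1^2  s.t.  x0 + x1 = 1  (solution (2/3, 1/3))
      grad_f = lambda x: np.array([2 * x[0], 4 * x[1]])
      jac_h = lambda x: np.array([[1.0, 1.0]])
      x = np.array([1.0, 0.0])                  # feasible start
      for _ in range(2000):
          x = projected_gradient_step(x, grad_f, jac_h)
      print(x.round(3))                         # stays on x0 + x1 = 1 throughout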

  11. Management of occupational exposure to engineered nanoparticles through a chance-constrained nonlinear programming approach.

    PubMed

    Chen, Zhi; Yuan, Yuan; Zhang, Shu-Shen; Chen, Yu; Yang, Feng-Lin

    2013-03-26

    Critical environmental and human health concerns are associated with the rapidly growing fields of nanotechnology and manufactured nanomaterials (MNMs). The main risk arises from occupational exposure via chronic inhalation of nanoparticles. This research presents a chance-constrained nonlinear programming (CCNLP) optimization approach, which is developed to maximize nanomaterial production and minimize the risks of workplace exposure to MNMs. The CCNLP method integrates nonlinear programming (NLP) and chance-constrained programming (CCP), and handles uncertainties associated with both nanomaterial production and workplace exposure control. The CCNLP method was examined through a single-walled carbon nanotube (SWNT) manufacturing process. The study results provide optimal production strategies and alternatives. They reveal that a high control measure guarantees that environmental health and safety (EHS) regulations are met, while a lower control level leads to increased risk of violating EHS regulations. The CCNLP optimization approach is a decision support tool for the optimization of growing MNM manufacturing with workplace safety constraints under uncertainties.

  12. Spectral Prior Image Constrained Compressed Sensing (Spectral PICCS) for Photon-Counting Computed Tomography

    PubMed Central

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-01-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in-vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43~73%) without sacrificing CT number accuracy or spatial resolution. PMID:27551878

  13. An inexact chance-constrained programming model for water quality management in Binhai New Area of Tianjin, China.

    PubMed

    Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R

    2011-04-15

    In this study, an inexact chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. This method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, and the applicability of the modeling process is thus greatly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used for generating decision alternatives and thus help decision makers identify desired policies under various system-reliability constraints on the water environmental capacity for pollutants. Tradeoffs between system benefits and constraint-violation risks can also be tackled. They are helpful for supporting (a) decisions on wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Characterizing Variability of Modular Brain Connectivity with Constrained Principal Component Analysis

    PubMed Central

    Hirayama, Jun-ichiro; Hyvärinen, Aapo; Kiviniemi, Vesa; Kawanabe, Motoaki; Yamashita, Okito

    2016-01-01

    Characterizing the variability of resting-state functional brain connectivity across subjects and/or over time has recently attracted much attention. Principal component analysis (PCA) serves as a fundamental statistical technique for such analyses. However, performing PCA on high-dimensional connectivity matrices yields complicated “eigenconnectivity” patterns, for which systematic interpretation is a challenging issue. Here, we overcome this issue with a novel constrained PCA method for connectivity matrices by extending the idea of the previously proposed orthogonal connectivity factorization method. Our new method, modular connectivity factorization (MCF), explicitly introduces the modularity of brain networks as a parametric constraint on eigenconnectivity matrices. In particular, MCF analyzes the variability in both intra- and inter-module connectivities, simultaneously finding network modules in a principled, data-driven manner. The parametric constraint provides a compact module-based visualization scheme with which the result can be intuitively interpreted. We develop an optimization algorithm to solve the constrained PCA problem and validate our method in simulation studies and with a resting-state functional connectivity MRI dataset of 986 subjects. The results show that the proposed MCF method successfully reveals the underlying modular eigenconnectivity patterns in more general situations and is a promising alternative to existing methods. PMID:28002474

  15. Spectral prior image constrained compressed sensing (spectral PICCS) for photon-counting computed tomography

    NASA Astrophysics Data System (ADS)

    Yu, Zhicong; Leng, Shuai; Li, Zhoubo; McCollough, Cynthia H.

    2016-09-01

    Photon-counting computed tomography (PCCT) is an emerging imaging technique that enables multi-energy imaging with only a single scan acquisition. To enable multi-energy imaging, the detected photons corresponding to the full x-ray spectrum are divided into several subgroups of bin data that correspond to narrower energy windows. Consequently, noise in each energy bin increases compared to the full-spectrum data. This work proposes an iterative reconstruction algorithm for noise suppression in the narrower energy bins used in PCCT imaging. The algorithm is based on the framework of prior image constrained compressed sensing (PICCS) and is called spectral PICCS; it uses the full-spectrum image reconstructed using conventional filtered back-projection as the prior image. The spectral PICCS algorithm is implemented using a constrained optimization scheme with adaptive iterative step sizes such that only two tuning parameters are required in most cases. The algorithm was first evaluated using computer simulations, and then validated by both physical phantoms and in vivo swine studies using a research PCCT system. Results from both computer-simulation and experimental studies showed substantial image noise reduction in narrow energy bins (43-73%) without sacrificing CT number accuracy or spatial resolution.

  16. Thermally-Constrained Fuel-Optimal ISS Maneuvers

    NASA Technical Reports Server (NTRS)

    Bhatt, Sagar; Svecz, Andrew; Alaniz, Abran; Jang, Jiann-Woei; Nguyen, Louis; Spanos, Pol

    2015-01-01

    Optimal Propellant Maneuvers (OPMs) are now being used to rotate the International Space Station (ISS) and have saved hundreds of kilograms of propellant over the last two years. The savings are achieved by commanding the ISS to follow a pre-planned attitude trajectory optimized to take advantage of environmental torques. The trajectory is obtained by solving an optimal control problem. Prior to use on orbit, OPM trajectories are screened to ensure a static sun vector (SSV) does not occur during the maneuver. The SSV is an indicator that the ISS hardware temperatures may exceed thermal limits, causing damage to the components. In this paper, thermally-constrained fuel-optimal trajectories are presented that avoid an SSV and can be used throughout the year while still reducing propellant consumption significantly.

  17. Describing litho-constrained layout by a high-resolution model filter

    NASA Astrophysics Data System (ADS)

    Tsai, Min-Chun

    2008-05-01

    A novel high-resolution model (HRM) filtering technique is proposed to describe litho-constrained layouts. Litho-constrained layouts are layouts that are difficult to pattern or are highly sensitive to process fluctuations under current lithography technologies. HRM applies a short-wavelength (or high-NA) model simulation directly on the pre-OPC, original design layout to filter out low spatial-frequency regions and retain the high spatial-frequency components which are litho-constrained. Since neither OPC nor mask-synthesis steps are involved, this new technique is highly efficient in run time and can be used at the design stage to detect and fix litho-constrained patterns. The method successfully captured all the hot spots, with less than 15% overshoots, on a realistic 80 mm2 full-chip M1 layout at the 65 nm technology node. A step-by-step derivation of this HRM technique is presented in this paper.

  18. Trajectory optimization for the National Aerospace Plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1993-01-01

    The objective of this second-phase research is to investigate the optimal ascent trajectory for the National Aerospace Plane (NASP) from runway take-off to orbital insertion and to address the unique problems associated with hypersonic flight trajectory optimization. The trajectory optimization problem for an aerospace plane is highly challenging because of the complexity involved. Previous work has been successful in obtaining sub-optimal trajectories by using energy-state approximation and time-scale decomposition techniques, but it is known that the energy-state approximation is not valid in certain portions of the trajectory. This research employs the full dynamics of the aerospace plane and emphasizes direct trajectory optimization methods. The major accomplishments of this research include the first-time development of an inverse dynamics approach in trajectory optimization, which enables us to generate optimal trajectories for the aerospace plane efficiently and reliably, and general analytical solutions to constrained hypersonic trajectories that have wide application in trajectory optimization as well as in guidance and flight dynamics. Optimal trajectories in abort landing and ascent augmented with rocket propulsion and thrust vectoring control were also investigated. Motivated by this study, a new global trajectory optimization tool using continuous simulated annealing and a nonlinear predictive feedback guidance law have been under investigation, and some promising results have been obtained, which may well lead to more significant development and application in the near future.

  19. Application of level set method to optimal vibration control of plate structures

    NASA Astrophysics Data System (ADS)

    Ansari, M.; Khajepour, A.; Esmailzadeh, E.

    2013-02-01

    Vibration control plays a crucial role in many structures, especially in lightweight ones. One of the most commonly practiced methods to suppress the undesirable vibration of structures is to attach patches of constrained layer damping (CLD) onto the surface of the structure. To preserve the weight efficiency of a structure, the best shapes and locations of the CLD patches should be determined so as to achieve optimum vibration suppression with minimum usage of CLD material. This paper proposes a novel topology optimization technique that can determine the best shape and location of the applied CLD patches simultaneously. Passive vibration control is formulated in the context of the level set method, which is a numerical technique for tracking shapes and locations concurrently. The optimal damping set can be found in a structure, in its fundamental vibration mode, such that the maximum modal loss factor of the system is achieved. Two different plate structures are considered and the damping patches are optimally located on them; at the same time, the best shapes of the damping patches are determined. In one example, the numerical results are compared with those obtained from experimental tests to validate the accuracy of the proposed method. This comparison reveals the effectiveness of the level set approach in finding the optimum shape and location of the CLD patches.

  20. Scientific Discovery through Advanced Computing (SciDAC-3) Partnership Project Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, Forest M.; Bochev, Pavel B.; Cameron-Smith, Philip J.

    The Applying Computationally Efficient Schemes for BioGeochemical Cycles (ACES4BGC) project is advancing the predictive capabilities of Earth System Models (ESMs) by reducing two of the largest sources of uncertainty, aerosols and biospheric feedbacks, with a highly efficient computational approach. In particular, this project is implementing and optimizing new computationally efficient tracer advection algorithms for large numbers of tracer species; adding important biogeochemical interactions between the atmosphere, land, and ocean models; and applying uncertainty quantification (UQ) techniques to constrain process parameters and evaluate uncertainties in feedbacks between biogeochemical cycles and the climate system.

  1. Computational problems in autoregressive moving average (ARMA) models

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.

    1981-01-01

    The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
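
    Order selection of the kind discussed here is commonly automated with an information criterion; the sketch below does so for simulated ARMA data using statsmodels, an assumed modern tool rather than the software of the original study.

      import numpy as np
      from statsmodels.tsa.arima_process import arma_generate_sample
      from statsmodels.tsa.arima.model import ARIMA

      # Simulate an ARMA(1, 1) process: y_t = 0.5 y_{t-1} + e_t + 0.4 e_{t-1}
      np.random.seed(5)
      y = arma_generate_sample(ar=[1, -0.5], ma=[1, 0.4], nsample=500)

      # Order selection: fit candidate (p, q) pairs and keep the lowest AIC
      best = min(((p, q) for p in range(3) for q in range(3)),
                 key=lambda pq: ARIMA(y, order=(pq[0], 0, pq[1])).fit().aic)
      print("selected (p, q):", best)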

  2. Superiorization-based multi-energy CT image reconstruction

    PubMed Central

    Yang, Q; Cong, W; Wang, G

    2017-01-01

    The recently-developed superiorization approach is efficient and robust for solving various constrained optimization problems. This methodology can be applied to multi-energy CT image reconstruction with the regularization in terms of the prior rank, intensity and sparsity model (PRISM). In this paper, we propose a superiorized version of the simultaneous algebraic reconstruction technique (SART) based on the PRISM model. Then, we compare the proposed superiorized algorithm with the Split-Bregman algorithm in numerical experiments. The results show that both the Superiorized-SART and the Split-Bregman algorithms generate good results with weak noise and reduced artefacts. PMID:28983142

  3. Development of homotopy algorithms for fixed-order mixed H2/H(infinity) controller synthesis

    NASA Technical Reports Server (NTRS)

    Whorton, M.; Buschek, H.; Calise, A. J.

    1994-01-01

    A major difficulty associated with H-infinity and mu-synthesis methods is the order of the resulting compensator. Whereas model and/or controller reduction techniques are sometimes applied, performance and robustness properties are not preserved. By directly constraining compensator order during the optimization process, these properties are better preserved, albeit at the expense of computational complexity. This paper presents a novel homotopy algorithm to synthesize fixed-order mixed H2/H-infinity compensators. Numerical results are presented for a four-disk flexible structure to evaluate the efficiency of the algorithm.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, J.V.

    The published work on exact penalization is indeed vast. Recently this work has indicated an intimate relationship between exact penalization, Lagrange multipliers, and problem stability or calmness. In the present work we chronicle this development within a simple idealized problem framework, wherein we unify, extend, and refine much of the known theory. In particular, most of the foundations for constrained optimization are developed with the aid of exact penalization techniques. Our approach is highly geometric and is based upon the elementary subdifferential theory for distance functions. It is assumed that the reader is familiar with the theory of convex sets and functions. 54 refs.

  5. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    PubMed

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. A quasi-Newton approach to optimization problems with probability density constraints. [problem solving in mathematical programming

    NASA Technical Reports Server (NTRS)

    Tapia, R. A.; Vanrooy, D. L.

    1976-01-01

    A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
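
    The substitution is easy to reproduce: with x_i = y_i^2, nonnegativity is automatic and the sum-to-one constraint becomes ||y||^2 = 1. The sketch below applies it to a toy objective and hands the remaining equality constraint to SciPy's SLSQP solver, which here stands in for the report's quasi-Newton theory.

      import numpy as np
      from scipy.optimize import minimize

      # Substituting x_i = y_i^2 removes x_i >= 0; sum(x) = 1 becomes ||y||^2 = 1.
      f = lambda x: np.sum((x - np.array([0.7, 0.2, 0.1])) ** 2)   # toy objective

      obj = lambda y: f(y ** 2)
      con = {"type": "eq", "fun": lambda y: np.sum(y ** 2) - 1.0}
      y0 = np.full(3, np.sqrt(1.0 / 3.0))                          # feasible start
      res = minimize(obj, y0, method="SLSQP", constraints=[con])
      x = res.x ** 2
      print(x.round(3), x.sum().round(6))    # ~ [0.7, 0.2, 0.1], sums to 1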

  7. Energetic Materials Optimization via Constrained Search

    DTIC Science & Technology

    2015-06-01

    steps. 3. Optimization Methodology. Our optimization problem is formulated as a constrained maximization: max_{x∈CCS} P(x) s.t. TED(x) − 9.75 ≥ 0, SV(x) − 9 ≥ 0, 5 − SA(x) ≥ 0, (1) where TED(x) is the total energy of detonation (TED) of compound x from the chosen chemical subspace (CCS) of chemical compound ... max problem, max_{x∈CCS} min_{λ∈R³₊} P(x) − λᵀC(x), (2) where C(x) is the vector of constraint violations, i.e., η(9.75 − TED(x)), η(9 − SV(x)), η(SA(x) − 5).

  8. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1991-01-01

    A method for using system identification techniques to improve airframe finite element models was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.
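
    The core linear-algebra step can be sketched compactly: a sensitivity matrix S relates changes in the selected physical parameters to changes in the matched system quantities, and a truncated-SVD least-squares solve gives the parameter update. The sizes and the noiseless "measurements" below are synthetic, in the spirit of the pseudo-experimental data mentioned above.

      import numpy as np

      def svd_parameter_update(S, residual, tol=1e-8):
          """Solve S @ dp ~= residual for physical-parameter changes dp with a
          truncated-SVD pseudo-inverse (small singular values dropped), the kind
          of regularized least-squares step used when matching a finite element
          model to measured data.  S[k, j] = d(matched quantity k) / d(p_j)."""
          U, s, Vt = np.linalg.svd(S, full_matrices=False)
          keep = s > tol * s[0]
          return Vt[keep].T @ ((U[:, keep].T @ residual) / s[keep])

      # Recover a known parameter change from synthetic, noiseless "measurements"
      rng = np.random.default_rng(3)
      S = rng.normal(size=(12, 4))
      dp_true = np.array([0.1, -0.05, 0.0, 0.02])
      print(svd_parameter_update(S, S @ dp_true).round(3))   # ~ dp_true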

  9. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  10. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly more effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.

  11. Development and demonstration of an on-board mission planner for helicopters

    NASA Technical Reports Server (NTRS)

    Deutsch, Owen L.; Desai, Mukund

    1988-01-01

    Mission management tasks can be distributed within a planning hierarchy, where each level of the hierarchy addresses a scope of action, an associated time scale or planning horizon, and requirements for plan generation response time. The current work is focused on the far-field planning subproblem, with a scope and planning horizon encompassing the entire mission and with a required response time of about two minutes. The far-field planning problem is posed as a constrained optimization problem, and algorithms and structural organizations are proposed for the solution. Algorithms are implemented in a development environment, and performance is assessed with respect to optimality and feasibility for the intended application and in comparison with alternative algorithms. This is done for the three major components of far-field planning: goal planning, waypoint path planning, and timeline management. It appears feasible to meet performance requirements on a 10 MIPS flyable processor (dedicated to far-field planning) using a heuristically guided simulated annealing technique for the goal planner, a modified A* search for the waypoint path planner, and a speed scheduling technique developed for this project.

  12. Linear Approximation to Optimal Control Allocation for Rocket Nozzles with Elliptical Constraints

    NASA Technical Reports Server (NTRS)

    Orr, Jeb S.; Wall, John W.

    2011-01-01

    In this paper we present a straightforward technique for assessing and realizing the maximum control moment effectiveness for a launch vehicle with multiple constrained rocket nozzles, where elliptical deflection limits in gimbal axes are expressed as an ensemble of independent quadratic constraints. A direct method of determining an approximating ellipsoid that inscribes the set of attainable angular accelerations is derived. In the case of a parameterized linear generalized inverse, the geometry of the attainable set is computationally expensive to obtain but can be approximated to a high degree of accuracy with the proposed method. A linear inverse can then be optimized to maximize the volume of the true attainable set by maximizing the volume of the approximating ellipsoid. The use of a linear inverse does not preclude the use of linear methods for stability analysis and control design, preferred in practice for assessing the stability characteristics of the inertial and servoelastic coupling appearing in large boosters. The present techniques are demonstrated via application to the control allocation scheme for a concept heavy-lift launch vehicle.

  13. An adaptive evolutionary multi-objective approach based on simulated annealing.

    PubMed

    Li, H; Landa-Silva, D

    2011-01-01

    A multi-objective optimization problem can be solved by decomposing it into one or more single objective subproblems in some multi-objective metaheuristic algorithms. Each subproblem corresponds to one weighted aggregation function. For example, MOEA/D is an evolutionary multi-objective optimization (EMO) algorithm that attempts to optimize multiple subproblems simultaneously by evolving a population of solutions. However, the performance of MOEA/D highly depends on the initial setting and diversity of the weight vectors. In this paper, we present an improved version of MOEA/D, called EMOSA, which incorporates an advanced local search technique (simulated annealing) and adapts the search directions (weight vectors) corresponding to various subproblems. In EMOSA, the weight vector of each subproblem is adaptively modified at the lowest temperature in order to diversify the search toward the unexplored parts of the Pareto-optimal front. Our computational results show that EMOSA outperforms six other well established multi-objective metaheuristic algorithms on both the (constrained) multi-objective knapsack problem and the (unconstrained) multi-objective traveling salesman problem. Moreover, the effects of the main algorithmic components and parameter sensitivities on the search performance of EMOSA are experimentally investigated.

  14. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
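
    The flavor of such an external penalty method is easy to convey: solve a sequence of unconstrained subproblems in which constraint violations are charged an increasing quadratic penalty, warm-starting each solve from the last. This generic sketch uses SciPy's conjugate-gradient minimizer as a memory-light inner solver; it is not the BIGDOT implementation.

      import numpy as np
      from scipy.optimize import minimize

      def external_penalty(f, cons, x0, r0=1.0, growth=5.0, outer=8):
          """Classical exterior penalty loop: minimize
              f(x) + r * sum(max(0, g_i(x))^2)   (g_i(x) <= 0 when feasible)
          for increasing penalty multipliers r, warm-starting each
          unconstrained solve from the previous solution."""
          x, r = np.asarray(x0, float), r0
          for _ in range(outer):
              phi = lambda z, r=r: f(z) + r * sum(max(0.0, g(z)) ** 2 for g in cons)
              x = minimize(phi, x, method="CG").x   # memory-light inner solver
              r *= growth                           # tighten the penalty
          return x

      # Toy test: min (x0-2)^2 + (x1-2)^2  s.t.  x0 + x1 <= 2   ->  (1, 1)
      f = lambda x: (x[0] - 2) ** 2 + (x[1] - 2) ** 2
      g = lambda x: x[0] + x[1] - 2.0
      print(external_penalty(f, [g], [0.0, 0.0]).round(3))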

  15. Visualizing deep neural network by alternately image blurring and deblurring.

    PubMed

    Wang, Feng; Liu, Haijun; Cheng, Jian

    2018-01-01

    Visualization from trained deep neural networks has drawn massive public attention in recent years. One visualization approach is to train images that maximize the activation of specific neurons. However, directly maximizing the activation leads to unrecognizable images, which cannot provide any meaningful information. In this paper, we introduce a simple but effective technique to constrain the optimization route of the visualization. By adding two mutually inverse transformations, image blurring and deblurring, to the optimization procedure, recognizable images can be created. Our algorithm is good at extracting the details in the images, which are usually filtered out by previous methods. Extensive experiments on AlexNet, VGGNet and GoogLeNet illustrate that we can better understand neural networks using the knowledge obtained by the visualization. Copyright © 2017 Elsevier Ltd. All rights reserved.
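
    The alternating scheme can be mimicked without a trained network: the sketch below runs gradient ascent on a toy "activation" (a quadratic pull toward a disk pattern), interleaving a Gaussian blur with an unsharp-mask sharpening step that stands in for deblurring; for a real network the gradient would come from backpropagation.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def visualize(activation_grad, shape=(64, 64), iters=200, lr=0.1, sigma=1.0):
          """Gradient ascent with alternating blur / deblur regularization.
          Blurring suppresses high-frequency noise; the unsharp-mask 'deblur'
          restores detail.  activation_grad(img) is the gradient of the neuron
          activation w.r.t. the image; for a real network it would come from
          backpropagation."""
          img = np.random.default_rng(4).normal(0.0, 0.1, shape)
          for k in range(iters):
              img += lr * activation_grad(img)           # ascend the activation
              blurred = gaussian_filter(img, sigma)
              img = blurred if k % 2 == 0 else img + (img - blurred)
          return img

      # Toy "neuron" that responds to a centered disk pattern
      yy, xx = np.mgrid[:64, :64]
      target = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
      img = visualize(lambda im: target - im)   # gradient of -0.5 * ||im - target||^2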

  16. Remote Sensing of Precipitation from Airborne and Spaceborne Radar. Chapter 13

    NASA Technical Reports Server (NTRS)

    Munchak, S. Joseph

    2017-01-01

    Weather radar measurements from airborne or satellite platforms can be an effective remote sensing tool for examining the three-dimensional structures of clouds and precipitation. This chapter describes some fundamental properties of radar measurements and their dependence on the particle size distribution (PSD) and radar frequency. The inverse problem of solving for the vertical profile of PSD from a profile of measured reflectivity is stated as an optimal estimation problem for single- and multi-frequency measurements. Phenomena that can change the measured reflectivity Z(sub m) from its intrinsic value Z(sub e), namely attenuation, non-uniform beam filling, and multiple scattering, are described and mitigation of these effects in the context of the optimal estimation framework is discussed. Finally, some techniques involving the use of passive microwave measurements to further constrain the retrieval of the PSD are presented.

  17. An energy-optimal solution for transportation control of cranes with double pendulum dynamics: Design and experiments

    NASA Astrophysics Data System (ADS)

    Sun, Ning; Wu, Yiming; Chen, He; Fang, Yongchun

    2018-03-01

    Underactuated cranes play an important role in modern industry. In most practical applications, crane systems exhibit significant double pendulum characteristics, which makes the control problem quite challenging. Moreover, most existing planners/controllers obtained with standard methods/techniques for double pendulum cranes cannot minimize the energy consumption when fulfilling the transportation tasks. Therefore, from a practical perspective, this paper proposes an energy-optimal solution for transportation control of double pendulum cranes. By applying the presented approach, the transportation objective, including fast trolley positioning and swing elimination, is achieved with minimized energy consumption, and the residual oscillations are suppressed effectively with all the state constraints being satisfied during the entire transportation process. As far as we know, this is the first energy-optimal solution for transportation control of underactuated double pendulum cranes with various state and control constraints. Hardware experimental results are included to verify the effectiveness of the proposed approach, whose superior performance is demonstrated by experimental comparison with several alternative controllers.

  18. Robust ADP Design for Continuous-Time Nonlinear Systems With Output Constraints.

    PubMed

    Fan, Bo; Yang, Qinmin; Tang, Xiaoyu; Sun, Youxian

    2018-06-01

    In this paper, a novel robust adaptive dynamic programming (RADP)-based control strategy is presented for the optimal control of a class of output-constrained continuous-time unknown nonlinear systems. Our contribution includes a step forward beyond the usual optimal control result to show that the output of the plant is always within user-defined bounds. To achieve the new results, an error transformation technique is first established to generate an equivalent nonlinear system, whose asymptotic stability guarantees both the asymptotic stability and the satisfaction of the output restriction of the original system. Furthermore, RADP algorithms are developed to solve the transformed nonlinear optimal control problem with completely unknown dynamics as well as a robust design to guarantee the stability of the closed-loop systems in the presence of unavailable internal dynamic state. Via small-gain theorem, asymptotic stability of the original and transformed nonlinear system is theoretically guaranteed. Finally, comparison results demonstrate the merits of the proposed control policy.

  19. Proportional Topology Optimization: A New Non-Sensitivity Method for Solving Stress Constrained and Minimum Compliance Problems and Its Implementation in MATLAB

    PubMed Central

    Biyikli, Emre; To, Albert C.

    2015-01-01

    A new topology optimization method called Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand, easy to implement, and at the same time efficient and accurate. It is implemented in two MATLAB programs to solve the stress-constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems. The method shows efficiency and accuracy comparable to an existing optimality criteria method which computes sensitivities. Also, the PTO stress-constrained algorithm and minimum compliance algorithm are compared by feeding the output from one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance than the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared on the website www.ptomethod.org. PMID:26678849

  20. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    PubMed

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    Portfolio optimization (selection) is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.
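
    The authors' specific modification is not detailed in the abstract; for orientation, the canonical firefly move that any such variant builds on attracts a dimmer firefly toward a brighter one with distance-decaying attractiveness plus a random term. A minimal sketch, with illustrative default parameters:

        import numpy as np

        rng = np.random.default_rng(0)

        def firefly_step(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
            """Canonical firefly update: move firefly i toward brighter j.

            Attractiveness decays with squared distance; the alpha term
            supplies the exploration that the paper's modification
            strengthens during early iterations.
            """
            r2 = np.sum((x_i - x_j) ** 2)
            beta = beta0 * np.exp(-gamma * r2)
            return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.size) - 0.5)

    For the CCMV model, a candidate position would additionally be repaired to satisfy the budget, cardinality and entropy constraints before evaluation.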

  1. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    PubMed Central

    2014-01-01

    Portfolio optimization (selection) is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results. PMID:24991645

  2. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to converge incorrectly due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, it may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our algorithm under low SNR conditions. Simulation results show the superior performance of our proposed methods.
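
    The CMOV principle minimizes the equalizer output variance subject to a linear constraint that rules out the all-zero solution; with a single constraint vector the minimizer has the familiar closed form sketched below. The notation is generic, not the paper's specific constraint, which is precisely what the authors redesign for colored sources.

        import numpy as np

        def cmov_weights(R, c):
            """Constrained minimum output variance weights (sketch).

            Solves min_w w^H R w subject to c^H w = 1, whose closed form is
            w = R^{-1} c / (c^H R^{-1} c); R is the received-signal
            covariance matrix and c the constraint vector.
            """
            Rinv_c = np.linalg.solve(R, c)
            return Rinv_c / (c.conj() @ Rinv_c)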

  3. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified on a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and leads to a very substantial increase in the complexity of optimization problems that can be handled efficiently.
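
    NPSOL is a commercial code, but the structure of such a call (simple bounds plus smooth constraints around a sequential quadratic programming core) can be sketched with SciPy's SLSQP solver; the dose and cost functions below are hypothetical stand-ins for illustration, not SWAN quantities.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical stand-ins: minimize a "dose" over two shield-layer
        # thicknesses at fixed "cost" (cf. dose minimization at constant cost).
        dose = lambda t: t[0] ** 2 + 2.0 / (1.0 + t[1])
        cost = lambda t: 3.0 * t[0] + 5.0 * t[1]

        res = minimize(
            dose,
            x0=np.array([1.0, 1.0]),
            method="SLSQP",                                   # an SQP method
            bounds=[(0.1, 5.0), (0.1, 5.0)],                  # simple bounds
            constraints=[{"type": "eq", "fun": lambda t: cost(t) - 10.0}],
        )
        print(res.x, res.fun)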

  4. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    PubMed Central

    Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2013-01-01

    In this paper, opposition-based harmony search has been applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and the opposition-based approach is applied. During initialization, a randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter ones are selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problem. PMID:23844390
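
    The opposition-based step can be made concrete: for each random candidate x in [lo, hi], its opposite point lo + hi - x is also evaluated, and the fitter of the pair is kept. A minimal sketch, assuming a fitness function that is vectorized over rows and is to be minimized:

        import numpy as np

        rng = np.random.default_rng(1)

        def opposition_init(n_pop, dim, lo, hi, fitness):
            """Opposition-based initialization (sketch): keep the better of
            each random candidate and its opposite point lo + hi - x."""
            x = lo + (hi - lo) * rng.random((n_pop, dim))
            x_opp = lo + hi - x
            keep_x = fitness(x) <= fitness(x_opp)
            return np.where(keep_x[:, None], x, x_opp)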

  5. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

    In this paper, opposition-based harmony search has been applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and the opposition-based approach is applied. During initialization, a randomly generated population of solutions is chosen, opposite solutions are also considered, and the fitter ones are selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problem.

  6. Optimization of contrast resolution by genetic algorithm in ultrasound tissue harmonic imaging.

    PubMed

    Ménigot, Sébastien; Girault, Jean-Marc

    2016-09-01

    The development of ultrasound imaging techniques such as pulse inversion has improved tissue harmonic imaging. Nevertheless, no recommendation has been made to date for the design of the waveform transmitted through the medium being explored. Our aim was therefore to automatically find the optimal "imaging" wave that maximizes the contrast resolution without a priori information. To avoid assumptions regarding the waveform, a genetic algorithm investigated the medium through the transmission of stochastic "explorer" waves. Moreover, these stochastic signals could be constrained by the type of generator available (bipolar or arbitrary). To implement the method, we modified the current pulse inversion imaging system by including feedback, so that the contrast resolution is optimized by adaptively selecting the samples of the excitation. In simulation, we benchmarked the contrast effectiveness of the best stochastic commands found against the usual fixed-frequency command. The optimization method converged quickly, after around 300 iterations, in the same optimal area. These results were confirmed experimentally: the contrast resolution measured on a radiofrequency line could be improved by 6% with a bipolar generator and by 15% with an arbitrary waveform generator. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Genetic algorithm-based multi-objective optimal absorber system for three-dimensional seismic structures

    NASA Astrophysics Data System (ADS)

    Ren, Wenjie; Li, Hongnan; Song, Gangbing; Huo, Linsheng

    2009-03-01

    The problem of optimizing an absorber system for three-dimensional seismic structures is addressed. The objective is to determine the number and position of absorbers that minimize the coupling effects of translation-torsion of structures at minimum cost. A procedure for the multi-objective optimization problem is developed by integrating a dominance-based selection operator and a dominance-based penalty function method. Based on the two-branch tournament genetic algorithm, the selection operator is constructed by evaluating individuals according to their dominance in one run. The technique guarantees that the better performing individual wins its competition, provides a slight selection pressure toward better individuals, and maintains diversity in the population. Moreover, because the evaluation of the individuals in each generation is finished in one run, less computational effort is required. Penalty function methods are generally used to transform a constrained optimization problem into an unconstrained one. The dominance-based penalty function contains the necessary information on the non-dominated character and infeasible position of an individual, essential for success in seeking a Pareto optimal set. The proposed approach is used to obtain a set of non-dominated designs for a six-storey three-dimensional building with shape memory alloy dampers subjected to earthquake excitation.

  8. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    PubMed Central

    Hu, Yu-Chen

    2018-01-01

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in the down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Besides, for residential customers, maintaining a balance between energy consumption cost and comfort satisfaction is a challenge in DR implementation. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can further be featured with edge computing. In contrast with cloud computing, edge computing, a method of optimizing cloud computing technologies by driving computing capabilities to the IoT edge of the Internet and one of the emerging trends in engineering technology, addresses bandwidth-intensive content and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users' life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users' comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows that the proposed method can re-shape the loads of home appliances in response to DR signals. Moreover, a reduction in peak power consumption of 13.97% is achieved. PMID:29702607
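
    The constrained-PSO core of such a scheduler can be sketched as the canonical velocity and position update followed by a simple repair that clamps each appliance's decision variable (for example, a start time) back into its allowed window; penalty terms are a common alternative for constraints that cannot be repaired this way. The parameters below are conventional defaults, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(2)

        def pso_step(x, v, p_best, g_best, lo, hi, w=0.7, c1=1.5, c2=1.5):
            """One constrained-PSO step (sketch): canonical velocity and
            position update, then clamp positions into [lo, hi]."""
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
            x = np.clip(x + v, lo, hi)
            return x, v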

  9. Toward Overcoming the Local Minimum Trap in MFBD

    DTIC Science & Technology

    2015-07-14

    Publications (published) during the first two years of this grant: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Variable Projection Method for Blind Deconvolution...; A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Numerical Optimization Methods for Blind Deconvolution, Numerical Algorithms, volume 65, issue 1...

  10. Estimating contrast transfer function and associated parameters by constrained non-linear optimization.

    PubMed

    Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W

    2009-03-01

    The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.

  11. Domain decomposition in time for PDE-constrained optimization

    DOE PAGES

    Barker, Andrew T.; Stoll, Martin

    2015-08-28

    Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.

  12. Comments on "The multisynapse neural network and its application to fuzzy clustering".

    PubMed

    Yu, Jian; Hao, Pengwei

    2005-05-01

    In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms, etc. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so that FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems either.

  13. River velocities from sequential multispectral remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Mied, Richard P.

    2013-06-01

    We address the problem of extracting surface velocities from a pair of multispectral remote sensing images over rivers using a new nonlinear multiple-tracer form of the global optimal solution (GOS). The derived velocity field is a valid solution across the image domain to the nonlinear system of equations obtained by minimizing a cost function inferred from the conservation constraint equations for multiple tracers. This is done by deriving an iteration equation for the velocity, based on the multiple-tracer displaced frame difference equations, and a local approximation to the velocity field. The number of velocity equations is greater than the number of velocity components, and thus over-constrains the solution. The iterative technique uses Gauss-Newton and Levenberg-Marquardt methods and our own algorithm of progressive relaxation of the over-constraint. We demonstrate the nonlinear multiple-tracer GOS technique with sequential multispectral Landsat and ASTER images over a portion of the Potomac River in MD/VA, and derive a dense field of accurate velocity vectors. We compare the GOS river velocities with those from over 12 years of data at four NOAA reference stations, and find good agreement. We discuss how to find the appropriate spatial and temporal resolutions to allow optimization of the technique for specific rivers.

  14. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    PubMed Central

    Trucco, Andrea; Traverso, Federico; Crocco, Marco

    2015-01-01

    For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987

  15. Nonlinear programming extensions to rational function approximations of unsteady aerodynamics

    NASA Technical Reports Server (NTRS)

    Tiffany, Sherwood H.; Adams, William M., Jr.

    1987-01-01

    This paper deals with approximating unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft. Two methods of formulating these approximations are extended to include both the same flexibility in constraining them and the same methodology in optimizing nonlinear parameters as another currently used 'extended least-squares' method. Optimal selection of 'nonlinear' parameters is made in each of the three methods by use of the same nonlinear (nongradient) optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free 'linear' parameters are determined using least-squares matrix techniques on a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented which show comparative evaluations from application of each of the extended methods to a numerical example. The results obtained for the example problem show a significant (up to 63 percent) reduction in the number of differential equations used to represent the unsteady aerodynamic forces in linear time-invariant equations of motion as compared to a conventional method in which nonlinear terms are not optimized.

  16. Optimal time points sampling in pathway modelling.

    PubMed

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and becoming stuck in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.

  17. A policy iteration approach to online optimal control of continuous-time constrained-input systems.

    PubMed

    Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L

    2013-09-01

    This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. Copyright © 2013 ISA. All rights reserved.
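
    The nonquadratic functional is not written out in the abstract. In the closely related constrained-input ADP literature (e.g., Abu-Khalaf and Lewis), a common choice for actuator bound \lambda and weight R is

        W(u) = 2 \int_0^{u} \lambda \tanh^{-1}(v/\lambda) \, R \, dv,

    which is smooth and positive definite and yields a bounded optimal control of the form u^*(x) = -\lambda \tanh\big( \tfrac{1}{2\lambda} R^{-1} g(x)^{\top} \nabla V^*(x) \big); whether this is the exact functional employed here is an assumption based on that literature.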

  18. A generalized fuzzy credibility-constrained linear fractional programming approach for optimal irrigation water allocation under uncertainty

    NASA Astrophysics Data System (ADS)

    Zhang, Chenglong; Guo, Ping

    2017-10-01

    Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model is derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems associated with fuzzy parameters and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resource objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency compared with the credibility level. These results can support reasonable irrigation water resources management and agricultural production.
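
    The LFP backbone of such a model optimizes a ratio objective. The classical Charnes-Cooper substitution, shown here as a generic illustration of the LFP machinery rather than the authors' exact derivation, converts it into an equivalent linear program:

        \max_x \; \frac{c^{\top} x + \alpha}{d^{\top} x + \beta}
        \quad \text{s.t.} \quad A x \le b, \; x \ge 0
        \qquad \longrightarrow \qquad
        \max_{y,\,t} \; c^{\top} y + \alpha t
        \quad \text{s.t.} \quad A y \le b t, \; d^{\top} y + \beta t = 1, \; y \ge 0, \; t \ge 0,

    with y = t x and t = 1/(d^{\top} x + \beta), assuming the denominator is positive on the feasible set.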

  19. Use of system identification techniques for improving airframe finite element models using test data

    NASA Technical Reports Server (NTRS)

    Hanagud, Sathya V.; Zhou, Weiyu; Craig, James I.; Weston, Neil J.

    1993-01-01

    A method for using system identification techniques to improve airframe finite element models using test data was developed and demonstrated. The method uses linear sensitivity matrices to relate changes in selected physical parameters to changes in the total system matrices. The values for these physical parameters were determined using constrained optimization with singular value decomposition. The method was confirmed using both simple and complex finite element models for which pseudo-experimental data was synthesized directly from the finite element model. The method was then applied to a real airframe model which incorporated all of the complexities and details of a large finite element model and for which extensive test data was available. The method was shown to work, and the differences between the identified model and the measured results were considered satisfactory.

  20. Coordinated trajectory planning of dual-arm space robot using constrained particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich

    2018-05-01

    Application of a multi-arm space robot will be more effective than a single arm, especially when the target is tumbling. This paper investigates the application of a particle swarm optimization (PSO) strategy to coordinated trajectory planning of a dual-arm space robot in free-floating mode. In order to overcome the dynamic singularity issue, the direct kinematics equations in conjunction with constrained PSO are employed for coordinated trajectory planning of the dual-arm space robot. The joint trajectories are parametrized with Bézier curves to simplify the calculation. A constrained PSO scheme with adaptive inertia weight is implemented to find the optimal joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue due to the application of the forward kinematic equations. Simulation results are presented for coordinated trajectory planning of two kinematically redundant manipulators mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.

  1. Deterministic Reconfigurable Control Design for the X-33 Vehicle

    NASA Technical Reports Server (NTRS)

    Wagner, Elaine A.; Burken, John J.; Hanson, Curtis E.; Wohletz, Jerry M.

    1998-01-01

    In the event of a control surface failure, the purpose of a reconfigurable control system is to redistribute the control effort among the remaining working surfaces such that satisfactory stability and performance are retained. Four reconfigurable control design methods were investigated for the X-33 vehicle: Redistributed Pseudo-Inverse, General Constrained Optimization, Automated Failure Dependent Gain Schedule, and Off-line Nonlinear General Constrained Optimization. The Off-line Nonlinear General Constrained Optimization approach was chosen for implementation on the X-33. Two example failures are shown: a right outboard elevon jam at 25 degrees at a Mach 3 entry condition, and a left rudder jam at 30 degrees. Note, however, that reconfigurable control laws have been designed for the entire flight envelope. Comparisons between responses with the nominal controller and the reconfigurable controllers show the benefits of reconfiguration. Single jammed-aerosurface failures were considered, and failure detection and identification is assumed to be accomplished in the actuator controller. The X-33 flight control system will incorporate reconfigurable flight control in the baseline system.
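
    Of the candidate methods, the redistributed pseudo-inverse is simple enough to sketch: allocate the commanded moments with a pseudo-inverse, freeze any surfaces that saturate at their limits, and re-allocate the residual among the remaining surfaces. The code below is a generic control-allocation illustration, not the X-33 implementation.

        import numpy as np

        def redistributed_pseudo_inverse(B, m_cmd, u_min, u_max, n_iter=5):
            """Solve B u = m_cmd for surface deflections u (sketch).

            Saturated surfaces are frozen at their limits and the residual
            moment is re-allocated among the remaining free surfaces.
            """
            u = np.zeros(B.shape[1])
            free = np.ones(B.shape[1], dtype=bool)
            for _ in range(n_iter):
                resid = m_cmd - B[:, ~free] @ u[~free]
                u[free] = np.linalg.pinv(B[:, free]) @ resid
                sat = free & ((u < u_min) | (u > u_max))
                if not sat.any():
                    break
                u[sat] = np.clip(u[sat], u_min[sat], u_max[sat])
                free &= ~sat
            return u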

  2. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    The process of distributing goods needs a strategy that minimizes the total cost of operational activities. However, several constraints have to be satisfied, namely the capacity of the vehicles and the service times of the customers. This Vehicle Routing Problem with Time Windows (VRPTW) poses a complex constrained problem. This paper proposes nature-inspired algorithms for dealing with the constraints of the VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. The computational results show that these algorithms perform well in minimizing the total distance, and that a larger population yields better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of the Bat Algorithm and Simulated Annealing when dealing with big data.

  3. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
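
    A single iteration of the prediction-correction template can be sketched, simplified here to the unconstrained case; the paper's contribution is precisely the constrained version and prediction steps that avoid forming the Hessian inverse, which the sketch imitates by using a linear solve instead of an explicit inverse.

        import numpy as np

        def prediction_correction_step(x, t, dt, grad, hess, grad_t, alpha=0.1):
            """One simplified prediction-correction step for min_x f(x; t).

            Prediction: solve H dx = -dt * grad_t so the gradient stays near
            zero as time advances (no explicit Hessian inverse is formed).
            Correction: a gradient step at the new time instant.
            """
            dx = np.linalg.solve(hess(x, t), -dt * grad_t(x, t))
            x_pred = x + dx
            return x_pred - alpha * grad(x_pred, t + dt)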

  4. An artificial bee colony algorithm for locating the critical slip surface in slope stability analysis

    NASA Astrophysics Data System (ADS)

    Kang, Fei; Li, Junjie; Ma, Zhenyue

    2013-02-01

    Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.

  5. Phase retrieval in generalized optical interferometry systems.

    PubMed

    Farriss, Wesley E; Fienup, James R; Malhotra, Tanya; Vamivakas, A Nick

    2018-02-05

    Modal analysis of an optical field via generalized interferometry (GI) is a novel technique that treats said field as a linear superposition of transverse modes and recovers the amplitudes of the modal weighting coefficients. We use phase retrieval by nonlinear optimization to recover the phase of these modal weighting coefficients. Information diversity increases the robustness of the algorithm by better constraining the solution. Additionally, multiple sets of random starting phase values assist the algorithm in overcoming local minima. The algorithm was able to recover nearly all coefficient phases for simulated fields consisting of up to 21 superposed Hermite-Gaussian modes and proved to be resilient to shot noise.

  6. CAD of control systems: Application of nonlinear programming to a linear quadratic formulation

    NASA Technical Reports Server (NTRS)

    Fleming, P.

    1983-01-01

    The familiar suboptimal regulator design approach is recast as a constrained optimization problem and incorporated in a Computer Aided Design (CAD) package where both design objective and constraints are quadratic cost functions. This formulation permits the separate consideration of, for example, model following errors, sensitivity measures and control energy as objectives to be minimized or limits to be observed. Efficient techniques for computing the interrelated cost functions and their gradients are utilized in conjunction with a nonlinear programming algorithm. The effectiveness of the approach and the degree of insight into the problem which it affords is illustrated in a helicopter regulation design example.

  7. Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouri, Drew Philip; Surowiec, Thomas M.

    Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. Here in this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.

  8. Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization

    DOE PAGES

    Kouri, Drew Philip; Surowiec, Thomas M.

    2018-06-05

    Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. Here in this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.
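
    The two measures being fused are standard; for a random loss X, confidence level \beta \in (0, 1) and risk-aversion parameter \sigma > 0 they read as follows (the fused conditional entropic risk itself is not given in the abstract):

        \mathrm{CVaR}_{\beta}(X) = \inf_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{1-\beta}\, \mathbb{E}\big[ (X - t)^{+} \big] \Big\},
        \qquad
        \rho_{\sigma}(X) = \tfrac{1}{\sigma} \log \mathbb{E}\big[ e^{\sigma X} \big].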

  9. A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng

    2009-11-01

    Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, entropy optimization model, chance constrained programming model and so on. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of the fuzzy returns and the fuzzy simulation is used to generate training data for the neural network. Since these models have previously been solved by genetic algorithms, some comparisons between the hybrid intelligent algorithm and the genetic algorithm are given in terms of numerical examples, which imply that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large problems.

  10. Ghost artifact cancellation using phased array processing.

    PubMed

    Kellman, P; McVeigh, E R

    2001-08-01

    In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples.
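
    The combiner described (optimize SNR subject to nulls at the known ghost locations) has the classic linearly constrained minimum variance form. In generic array-processing notation (an illustration, not necessarily the article's notation), with noise covariance R_n, constraint matrix C whose columns are the coil sensitivity vectors at the true pixel and ghost locations, and response vector f with a one for the true location and zeros for the ghosts:

        \min_{w} \; w^{H} R_{n} w \quad \text{s.t.} \quad C^{H} w = f,
        \qquad
        w = R_{n}^{-1} C \left( C^{H} R_{n}^{-1} C \right)^{-1} f.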

  11. Ghost Artifact Cancellation Using Phased Array Processing

    PubMed Central

    Kellman, Peter; McVeigh, Elliot R.

    2007-01-01

    In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples. PMID:11477638

  12. Mixed-Strategy Chance Constrained Optimal Control

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.

    2013-01-01

    This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.

  13. A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.

    PubMed

    Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa

    2018-02-01

    Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
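
    A toy version of such a formulation can be posed with SciPy's milp solver (available from SciPy 1.9; the activation rows, bounds and thresholds below are hypothetical illustrations, and binary on/off switches would enter through the integrality argument). The solver's status flag is what permits the conclusive feasibility check mentioned above.

        import numpy as np
        from scipy.optimize import Bounds, LinearConstraint, milp

        # Hypothetical toy instance: choose three electrode currents x,
        # bounded for safety, to excite a target region while limiting
        # spill-over into a non-target region.
        a_target = np.array([[1.0, 0.5, 0.2]])  # activation row, target region
        a_spill = np.array([[0.3, 0.8, 0.4]])   # activation row, non-target

        res = milp(
            c=np.ones(3),                                # minimize total current
            constraints=[
                LinearConstraint(a_target, lb=1.0, ub=np.inf),  # reach target
                LinearConstraint(a_spill, lb=-np.inf, ub=0.4),  # spare non-target
            ],
            bounds=Bounds(lb=0.0, ub=2.0),               # safety limits
        )
        print(res.status, res.x)  # status reports optimal vs. infeasible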

  14. Hybrid real-code ant colony optimisation for constrained mechanical design

    NASA Astrophysics Data System (ADS)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.

  15. Detection and Length Estimation of Linear Scratch on Solid Surfaces Using an Angle Constrained Ant Colony Technique

    NASA Astrophysics Data System (ADS)

    Pal, Siddharth; Basak, Aniruddha; Das, Swagatam

    In many manufacturing areas, the detection of surface defects is one of the most important processes in quality control. Currently, to detect small scratches on solid surfaces, most material-manufacturing industries rely primarily on visual inspection. In this article we propose a hybrid computational intelligence technique to automatically detect a linear scratch on a solid surface and simultaneously estimate its length (in pixel units). The approach is based on a swarm intelligence algorithm called Ant Colony Optimization (ACO) and image preprocessing with Wiener and Sobel filters as well as the Canny edge detector. The ACO algorithm is mostly used to compensate for the broken parts of the scratch. Our experimental results confirm that the proposed technique can detect scratches in noisy and degraded images, even when it is very difficult for conventional image processing to distinguish the scratch area from its background.

  16. An Algorithmic, Pie-Crusting Medial Soft Tissue Release Reduces the Need for Constrained Inserts in Patients With Severe Varus Deformity Undergoing Total Knee Arthroplasty.

    PubMed

    Goudarz Mehdikhani, Kaveh; Morales Moreno, Beatriz; Reid, Jeremy J; de Paz Nieves, Ana; Lee, Yuo-Yu; González Della Valle, Alejandro

    2016-07-01

    We studied the need to use a constrained insert for residual intraoperative instability and the 1-year results of patients undergoing total knee arthroplasty (TKA) for a varus deformity. In a control group, a "classic" subperiosteal release of the medial soft tissue sleeve was performed as popularized by the pioneers of TKA. In the study group, an algorithmic approach that selectively releases and pie-crusts posteromedial structures in extension and anteromedial structures in flexion was used. All surgeries were performed by a single surgeon using a measured resection technique and posterior-stabilized, cemented implants. There were 228 TKAs in the control group and 188 in the study group. Outcome variables included the use of a constrained insert and the Knee Society Score at 6 weeks, 4 months, and 1 year postoperatively. The effects of the release technique on the use of constrained inserts and on clinical outcomes were analyzed in a multivariate model controlling for age, sex, body mass index, and severity of deformity. The use of constrained inserts was significantly lower in study than in control patients (8% vs 18%; P = .002). There was no difference in the Knee Society Score and range of motion between the groups at last follow-up. No patient developed postoperative medial instability. This algorithmic, pie-crusting release technique resulted in a significant reduction in the use of constrained inserts with no detrimental effects on clinical results, joint function, or stability. As constrained TKA implants are more costly than nonconstrained ones, if the adopted technique proves to be safe in the long term, it may cause a positive shift in value for hospitals and cost savings in the health care system. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Necessary optimality conditions for infinite dimensional state constrained control problems

    NASA Astrophysics Data System (ADS)

    Frankowska, H.; Marchini, E. M.; Mazzola, M.

    2018-06-01

    This paper is concerned with first order necessary optimality conditions for state constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of Pontryagin maximum principle, relying on infinite dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (as positivity of the solutions) are allowed.

  18. Flow range enhancement by secondary flow effect in low solidity circular cascade diffusers

    NASA Astrophysics Data System (ADS)

    Sakaguchi, Daisaku; Tun, Min Thaw; Mizokoshi, Kanata; Kishikawa, Daiki

    2014-08-01

    A high pressure ratio and a wide operating range are strongly required of compressors and blowers. The key design issue is to suppress flow separation at small flow rates without deteriorating the efficiency at the design flow rate. Numerical simulation is very effective in the design procedure; however, its cost is generally high in the practical design process, and it is difficult to confirm an optimal design that combines many parameters. Multi-objective optimization has been proposed for solving this problem in the practical design process. In this study, a Low Solidity circular cascade Diffuser (LSD) in a centrifugal blower is successfully designed by means of a multi-objective optimization technique. An optimization code with a meta-model assisted evolutionary algorithm is used with the commercial CFD code ANSYS-CFX. The optimization aims at improving the static pressure coefficient at the design point and at low flow rate while constraining the slope of the lift coefficient curve. Moreover, a small tip clearance of the LSD blade was applied in order to activate and stabilize the secondary flow effect at small flow rates. The optimized LSD blade has an extended operating range of 114% towards smaller flow rates as compared to the baseline design, without deteriorating the diffuser pressure recovery at the design point. The diffuser pressure rise and operating flow range of the optimized LSD blade are experimentally verified by an overall performance test. The detailed flow in the diffuser is also confirmed by means of a Particle Image Velocimeter (PIV). Secondary flow is clearly captured by PIV and spreads across the whole LSD blade pitch. The optimized LSD blade shows good improvement of the blade loading over the whole operating range, while at small flow rates the flow separation on the LSD blade is successfully suppressed by the secondary flow effect.

  19. Grid generation and adaptation via Monge-Kantorovich optimization in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Delzanno, Gian Luca; Chacon, Luis; Finn, John M.

    2008-11-01

    In a recent paper [1], Monge-Kantorovich (MK) optimization was proposed as a method of grid generation/adaptation in two dimensions (2D). The method is based on the minimization of the L2 norm of grid point displacement, constrained to producing a given positive-definite cell volume distribution (equidistribution constraint). The procedure gives rise to the Monge-Ampère (MA) equation: a single, non-linear scalar equation with no free parameters. The MA equation was solved in Ref. [1] with the Jacobian-Free Newton-Krylov technique, and several challenging test cases were presented in square domains in 2D. Here, we extend the work of Ref. [1]. We first formulate the MK approach in physical domains with curved boundary elements and in 3D. We then show the results of applying it to these more general cases. We show that MK optimization produces optimal grids in which the constraint is satisfied numerically to truncation error. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, submitted to Journal of Computational Physics (2008).
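
    Writing the adapted grid as the map x -> x + \nabla\phi, the equidistribution constraint described above takes the Monge-Ampère form (sketched from the description here, with \rho and \rho' denoting the initial and target cell-volume densities):

        \det\!\left( I + \nabla\nabla\phi \right) = \frac{\rho(x)}{\rho'\!\left( x + \nabla\phi \right)},

    a single scalar equation for \phi with no free parameters.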

  20. Cohesive phase-field fracture and a PDE constrained optimization approach to fracture inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tupek, Michael R.

    2016-06-30

    In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.
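
    The adjoint machinery referred to can be summarized in its generic discrete form (standard adjoint sensitivity analysis, not the report's specific implementation): with state equation R(u, p) = 0, objective J(u, p), and adjoint variable \lambda,

        \left( \frac{\partial R}{\partial u} \right)^{\!\top} \lambda = \left( \frac{\partial J}{\partial u} \right)^{\!\top},
        \qquad
        \frac{dJ}{dp} = \frac{\partial J}{\partial p} - \lambda^{\top} \frac{\partial R}{\partial p},

    so a single adjoint solve yields sensitivities with respect to all parameters at once; for explicit dynamics, the adjoint equation is integrated backward in time.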

  1. Extracting archaeal populations from iron oxidizing systems

    NASA Astrophysics Data System (ADS)

    Whitmore, L. M.; Hutchison, J.; Chrisler, W.; Jay, Z.; Moran, J.; Inskeep, W.; Kreuzer, H.

    2013-12-01

    Unique environments in Yellowstone National Park offer exceptional conditions for studying microorganisms in extreme and constrained systems. However, samples from some extreme systems often contain inorganic components that pose complications during microbial and molecular analysis. Several archaeal species are found in acidic, geothermal ferric-oxyhydroxide mats; these species have been shown to adhere to mineral surfaces in flocculated colonies. For optimal microbial analysis (microscopy, flow cytometry, genomic extractions, proteomic analysis, stable isotope analysis, and others), improved techniques are needed to better facilitate cell detachment and separation from mineral surfaces. As a requirement, these techniques must preserve cell structure while simultaneously minimizing organic carryover to downstream analysis. Several methods have been developed for removing sediments from mixed prokaryotic populations, including ultra-centrifugation, Nycodenz gradients, sucrose cushions, and cell straining. In this study we conduct a comparative analysis of mechanisms used to detach archaeal cell populations from the mineral interface. Specifically, we evaluated mechanical and chemical approaches for cell separation and homogenization. Methods were compared using confocal microscopy, flow cytometry analyses, and real-time PCR detection. The methodology and approaches identified will be used to optimize biomass collection from environmental specimens or isolates grown with solid phases.

  2. Constrained variational calculus for higher order classical field theories

    NASA Astrophysics Data System (ADS)

    Campos, Cédric M.; de León, Manuel; Martín de Diego, David

    2010-11-01

    We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.

  3. Using game theory for perceptual tuned rate control algorithm in video coding

    NASA Astrophysics Data System (ADS)

    Luo, Jiancong; Ahmad, Ishfaq

    2005-03-01

    This paper proposes a game-theoretical rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results confirm the algorithm's ability to achieve an accurate bit rate with good perceptual quality, and to maintain a stable buffer level.
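
    The sketch below illustrates the bargaining idea in a hedged, simplified form: an asymmetric Nash product over macroblock utilities u_i = log(1 + b_i), with hypothetical perceptual weights w_i acting as bargaining powers, maximized subject to a total bit budget. The weights, utilities, and problem sizes are illustrative stand-ins, not the paper's rate-distortion model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_mb, budget = 8, 400.0                 # macroblocks and total bit budget
    w = rng.uniform(0.5, 2.0, n_mb)         # hypothetical perceptual weights (bargaining powers)

    def neg_nash_product(b):
        u = np.log1p(b)                     # concave utility of b_i bits; disagreement point 0
        return -np.sum(w * np.log(u + 1e-12))   # asymmetric Nash product, in log form

    res = minimize(neg_nash_product,
                   x0=np.full(n_mb, budget / n_mb),
                   method="SLSQP",
                   bounds=[(1e-6, budget)] * n_mb,
                   constraints=[{"type": "eq", "fun": lambda b: b.sum() - budget}])
    print(np.round(res.x, 1))               # macroblocks with higher weight bargain for more bits
    ```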

  4. Investigation of optimization-based reconstruction with an image-total-variation constraint in PET

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Ye, Jinghan; Chen, Buxin; Perkins, Amy E.; Rose, Sean; Sidky, Emil Y.; Kao, Chien-Min; Xia, Dan; Tung, Chi-Hua; Pan, Xiaochuan

    2016-08-01

    Interest remains in reconstruction-algorithm research and development for possible improvement of image quality in current PET imaging and for enabling innovative PET systems to enhance existing, and facilitate new, preclinical and clinical applications. Optimization-based image reconstruction has been demonstrated in recent years to have potential utility for CT imaging applications. In this work, we investigate tailoring optimization-based techniques to image reconstruction for PET systems with standard and non-standard scan configurations. Specifically, given an image-total-variation (TV) constraint, we investigate how the selection of different data divergences and associated parameters impacts the optimization-based reconstruction of PET images. The reconstruction robustness was also explored with respect to different data conditions and activity uptakes of practical relevance. A study was conducted particularly for image reconstruction from data collected by use of a PET configuration with sparsely populated detectors. Overall, the study demonstrates the robustness of the TV-constrained, optimization-based reconstruction for considerably different data conditions in PET imaging, as well as its potential to enable PET configurations with reduced numbers of detectors. Insights gained in the study may be exploited for developing algorithms for PET-image reconstruction and for enabling PET-configuration designs of practical usefulness in preclinical and clinical applications.

  5. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  6. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.

  7. Dynamic optimization and its relation to classical and quantum constrained systems

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo

    2017-08-01

    We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a remnant of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the function S. For the right-hand side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
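
    A brief sketch of the substitution step mentioned above, assuming units with ħ = 1 and a standard kinetic-plus-potential Hamiltonian (which need not coincide with the paper's optimization Hamiltonian):

    ```latex
    % Substituting \Psi = e^{iS} into i\,\Psi_t = -\tfrac{1}{2m}\Psi_{xx} + V\Psi:
    \Psi_t = iS_t\,\Psi, \qquad \Psi_{xx} = \left(iS_{xx} - S_x^2\right)\Psi
    % Dividing out \Psi gives a non-linear PDE for S:
    -S_t = \frac{1}{2m}S_x^2 + V - \frac{i}{2m}S_{xx}
    % Dropping (or ordering away) the i S_{xx}/2m term leaves a
    % Hamilton-Jacobi(-Bellman)-type equation, with S playing the role
    % of the optimal value function.
    ```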

  8. State transformations and Hamiltonian structures for optimal control in discrete systems

    NASA Astrophysics Data System (ADS)

    Sieniutycz, S.

    2006-04-01

    Preserving the usual definition of the Hamiltonian H as the scalar product of rates and generalized momenta, we investigate two basic classes of discrete optimal control processes governed by difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of the optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of the optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is if and when Hamilton's canonical structures emerge in optimal discrete systems. For constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion, which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ, two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed, with particular attention paid to models nonlinear in the time interval θ.

  9. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    NASA Technical Reports Server (NTRS)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping, if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.

  10. MONSS: A multi-objective nonlinear simplex search approach

    NASA Astrophysics Data System (ADS)

    Zapotecas-Martínez, Saúl; Coello Coello, Carlos A.

    2016-01-01

    This article presents a novel methodology for dealing with continuous box-constrained multi-objective optimization problems (MOPs). The proposed algorithm adopts a nonlinear simplex search scheme in order to obtain multiple elements of the Pareto optimal set. The search is directed by a well-distributed set of weight vectors, each of which defines a scalarization problem that is solved by deforming a simplex according to the movements described by Nelder and Mead's method. Considering an MOP with n decision variables, the simplex is constructed using n+1 solutions which minimize different scalarization problems defined by n+1 neighbor weight vectors. All solutions found in the search are used to update a set of solutions considered to be the minima for each separate problem. In this way, the proposed algorithm collectively obtains multiple trade-offs among the different conflicting objectives, while maintaining a proper representation of the Pareto optimal front. In this article, it is shown that a well-designed strategy using just mathematical programming techniques can be competitive with respect to the state-of-the-art multi-objective evolutionary algorithms against which it was compared.
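
    A hedged, minimal sketch of the core decomposition idea (not the authors' MONSS code): each weight vector defines a scalarized subproblem, here a weighted Tchebycheff function on the classic Schaffer bi-objective problem, solved with SciPy's Nelder-Mead simplex; collecting the per-weight minimizers samples the Pareto front. The test problem, weights, and starting point are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Bi-objective test problem (Schaffer): f1 = x^2, f2 = (x - 2)^2
    def objectives(x):
        return np.array([x[0]**2, (x[0] - 2.0)**2])

    def tchebycheff(x, weights, ideal):
        # Weighted Tchebycheff scalarization of the two objectives
        return np.max(weights * np.abs(objectives(x) - ideal))

    ideal = np.zeros(2)                       # ideal point of this test problem
    front = []
    for w1 in np.linspace(0.05, 0.95, 19):    # well-distributed weight vectors
        w = np.array([w1, 1.0 - w1])
        res = minimize(tchebycheff, x0=np.array([0.5]), args=(w, ideal),
                       method="Nelder-Mead")
        front.append(objectives(res.x))
    print(np.array(front))                    # sampled approximation of the Pareto front
    ```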

  11. Analysis of explicit model predictive control for path-following control

    PubMed Central

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as a key approach for handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming (mp-QP) technique. The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in the optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, a Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080

  12. Analysis of explicit model predictive control for path-following control.

    PubMed

    Lee, Junho; Chang, Hyuk-Jun

    2018-01-01

    In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as a key approach for handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target applications to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming (mp-QP) technique. The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in the optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, a Linear-Quadratic Regulator (LQR), and a driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
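
    For orientation, here is a hedged sketch of the underlying receding-horizon optimization on a toy lateral-error model with a steering bound; explicit MPC would instead precompute this input map offline as a piecewise-affine function via mp-QP, whereas this sketch simply re-solves the constrained problem online. The model, weighting matrices, and horizon are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy lateral-error model: state [lateral error, error rate], input = steering angle
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q = np.diag([10.0, 1.0])          # state weighting matrix (a design choice)
    R = 0.1                           # input weight
    N = 10                            # prediction horizon
    u_max = 0.5                       # steering limit (rad): the constraint MPC handles

    def mpc_cost(u_seq, x0):
        x, cost = x0.copy(), 0.0
        for u in u_seq:
            cost += x @ Q @ x + R * u * u
            x = A @ x + B[:, 0] * u
        return cost + x @ Q @ x       # terminal penalty

    def mpc_step(x0):
        res = minimize(mpc_cost, np.zeros(N), args=(x0,),
                       bounds=[(-u_max, u_max)] * N, method="SLSQP")
        return res.x[0]               # apply only the first input (receding horizon)

    x = np.array([1.0, 0.0])          # 1 m initial lateral offset
    for _ in range(30):
        x = A @ x + B[:, 0] * mpc_step(x)
    print(x)                          # lateral error driven near zero under the input bound
    ```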

  13. Adaptive Multi-Agent Systems for Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.

  14. A Near-Optimal Distributed QoS Constrained Routing Algorithm for Multichannel Wireless Sensor Networks

    PubMed Central

    Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen

    2013-01-01

    One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance that includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years. This is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance application in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. In this paper, for the first time, the algorithm shows how to meet the delay QoS and at the same time how to achieve higher system throughput in stringently resource constrained WVSNs.
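
    The sketch below illustrates the Lagrangian Relaxation idea on a hypothetical five-node topology: the delay constraint is dualized into the link weights, a shortest path is computed on the relaxed metric, and a subgradient step updates the multiplier while the best feasible path is tracked. The graph, costs, and delays are made-up stand-ins, not the NODQC algorithm itself.

    ```python
    import heapq

    # Hypothetical topology: edge -> (energy cost, delay in ms); source "s", sink "d"
    graph = {"s": {"a": (1, 9), "b": (4, 2)},
             "a": {"b": (1, 1), "c": (1, 9)},
             "b": {"c": (4, 2), "d": (2, 3)},
             "c": {"d": (1, 8)},
             "d": {}}
    D_MAX = 12.0                                   # end-to-end delay bound (the QoS constraint)

    def relaxed_shortest_path(lam):
        """Dijkstra on the dualized metric cost + lam * delay."""
        dist, prev, pq = {"s": 0.0}, {}, [(0.0, "s")]
        while pq:
            du, u = heapq.heappop(pq)
            if du > dist.get(u, float("inf")):
                continue
            for v, (c, dl) in graph[u].items():
                nd = du + c + lam * dl
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path, node = ["d"], "d"
        while node != "s":
            node = prev[node]
            path.append(node)
        return path[::-1]

    lam, best = 0.0, None
    for it in range(50):
        path = relaxed_shortest_path(lam)
        edges = list(zip(path, path[1:]))
        cost = sum(graph[u][v][0] for u, v in edges)
        delay = sum(graph[u][v][1] for u, v in edges)
        if delay <= D_MAX and (best is None or cost < best[0]):
            best = (cost, path)                    # keep the best delay-feasible route
        lam = max(0.0, lam + 0.5 / (it + 1) * (delay - D_MAX))  # subgradient step
    print(best)                                    # near-optimal path satisfying the delay bound
    ```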

  15. Detecting intention to execute the next movement while performing current movement from EEG using global optimal constrained ICA.

    PubMed

    Eilbeigi, Elnaz; Setarehdan, Seyed Kamaledin

    2018-05-26

    Brain-computer interfaces (BCIs) are a promising tool in neurorehabilitation. The intention to perform a motor action can be detected from brain signals and used to control robotic devices. Most previous studies have focused on the start of movements from a resting state, while in daily-life activities motions occur continuously, and the neural activities correlated with the evolving movements are yet to be investigated. First, we investigate the existence of neural correlates of the intention to replace an object on the table during a holding phase. Next, we present a new method to extract the movement-related cortical potentials (MRCP) from single-trial EEG. A novel method called Global optimal constrained ICA (GocICA), implemented using Particle Swarm Optimization (PSO) and Charged System Search (CSS) techniques, is proposed to overcome the limitations of cICA. GocICA is then utilized for decoding the intention to grasp and lift and the intention to replace movements, and the results are compared. It was found that GocICA significantly improves intention detection performance. The best results in offline detection were obtained with CSS-cICA for both kinds of intentions. Furthermore, pseudo-online decoding showed that GocICA was able to predict both intentions before the onset of the related movements with the highest probability. Decoding of the next movement intention during the current movement is thus possible, which can be used to create more natural neuroprostheses. The results demonstrate that GocICA is a promising new algorithm for single-trial MRCP detection which can also be used for detecting other types of ERPs, such as P300. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. 2D Inviscid and Viscous Inverse Design Using Continuous Adjoint and Lax-Wendroff Formulation

    NASA Astrophysics Data System (ADS)

    Proctor, Camron Lisle

    The continuous adjoint (CA) technique for optimization and/or inverse design of aerodynamic components has seen nearly 30 years of documented success in academia. The benefits of using CA versus a direct sensitivity analysis are shown repeatedly in the literature. However, the use of CA in industry is relatively unheard of. The sparseness of industry contributions to the field may be attributed to the tediousness of the derivation and/or to the difficulties in implementation due to the lack of well-documented adjoint numerical methods. The focus of this work has been to thoroughly document the techniques required to build a two-dimensional CA inverse-design tool. To this end, this work begins with a short background on computational fluid dynamics (CFD) and the use of optimization tools in conjunction with CFD tools to solve aerodynamic optimization problems. A thorough derivation of the continuous adjoint equations and the accompanying gradient calculations for inviscid and viscous constraining equations follows the introduction. Next, the numerical techniques used for solving the partial differential equations (PDEs) governing the flow equations and the adjoint equations are described. Numerical techniques for the supplementary equations are discussed briefly. Subsequently, a verification of the efficacy of the inverse-design tool for the inviscid adjoint equations is presented, and possible numerical implementation pitfalls are discussed. The NACA0012 airfoil is used as the initial airfoil with the NACA16009 surface pressure distribution as the design target, and vice versa. Using a Savitzky-Golay gradient filter, convergence (defined as a cost function < 1E-5) is reached in approximately 220 design iterations using 121 design variables. The inverse-design results using the inviscid adjoint equations are followed by a discussion of the viscous inverse-design results and the techniques used to further the convergence of the optimizer. The relationship between limiting step size and convergence in a line-search optimization is shown to slightly decrease the final cost function at significant computational cost. A gradient damping technique is presented and shown to increase the convergence rate of the optimization in viscous problems at a negligible increase in computational cost, but it is insufficient to converge the solution. Systematically including adjacent surface vertices in the perturbation of a design variable, itself a surface vertex, is shown to affect the convergence capability of the viscous optimizer. Finally, a comparison of using inviscid adjoint equations, as opposed to viscous adjoint equations, on viscous flow is presented, and the inviscid adjoint paired with viscous flow is found to reduce the cost function further than the viscous adjoint for the presented problem.

  17. Parameterized LMI Based Diagonal Dominance Compensator Study for Polynomial Linear Parameter Varying System

    NASA Astrophysics Data System (ADS)

    Han, Xiaobao; Li, Huacong; Jia, Qiusheng

    2017-12-01

    For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs) by using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a standard convex optimization problem with standard linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved, which satisfies both robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.

  18. Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks

    Treesearch

    Bistra Dilkina; Rachel Houtman; Carla P. Gomes; Claire A. Montgomery; Kevin S. McKelvey; Katherine Kendall; Tabitha A. Graves; Richard Bernstein; Michael K. Schwartz

    2016-01-01

    Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a...

  19. Optical Correlation of Images With Signal-Dependent Noise Using Constrained-Modulation Filter Devices

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved for updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.

  20. An ICA-based method for the identification of optimal FMRI features and components using combined group-discriminative techniques

    PubMed Central

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.

    2013-01-01

    Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease is an attractive goal. In this paper, we introduce a novel feature-based framework, which is sensitive and accurate in detecting group differences (e.g., controls vs. patients), by proposing three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Secondly, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks which may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398

  1. Constraining the atmosphere of GJ 1214b using an optimal estimation technique

    NASA Astrophysics Data System (ADS)

    Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.; Fletcher, L. N.; Lee, J.-M.

    2013-09-01

    We explore cloudy, extended H2-He atmosphere scenarios for the warm super-Earth GJ 1214b using an optimal estimation retrieval technique. This planet, orbiting an M4.5 star only 13 pc from the Earth, is of particular interest because it lies between the Earth and Neptune in size and may be a member of a new class of planet that is neither terrestrial nor gas giant. Its relatively flat transmission spectrum has so far made atmospheric characterization difficult. The Non-linear optimal Estimator for MultivariatE spectral analySIS (NEMESIS) algorithm is used to explore the degenerate model parameter space for a cloudy, H2-He-dominated atmosphere scenario. Optimal estimation is a data-led approach that allows solutions beyond the range permitted by ab initio equilibrium model atmosphere calculations, and as such prevents restriction from prior expectations. We show that optimal estimation retrieval is a powerful tool for this kind of study, and present an exploration of the degenerate atmospheric scenarios for GJ 1214b. Whilst we find a family of solutions that provide a very good fit to the data, the quality and coverage of these data are insufficient for us to more precisely determine the abundances of cloud and trace gases given an H2-He atmosphere, and we also cannot rule out the possibility of a high molecular weight atmosphere. Future ground- and space-based observations will provide the opportunity to confirm or rule out an extended H2-He atmosphere, but more precise constraints will be limited by intrinsic degeneracies in the retrieval problem, such as variations in cloud top pressure and temperature.

  2. Using Geographic Information Systems and Spatial Analysis Methods to Assess Household Water Access and Sanitation Coverage in the SHINE Trial.

    PubMed

    Ntozini, Robert; Marks, Sara J; Mangwadu, Goldberg; Mbuya, Mduduzi N N; Gerema, Grace; Mutasa, Batsirai; Julian, Timothy R; Schwab, Kellogg J; Humphrey, Jean H; Zungu, Lindiwe I

    2015-12-15

    Access to water and sanitation are important determinants of behavioral responses to hygiene and sanitation interventions. We estimated cluster-specific water access and sanitation coverage to inform a constrained randomization technique in the SHINE trial. Technicians and engineers inspected all public-access water sources to ascertain seasonality, function, and geospatial coordinates. Households and water sources were mapped using open-source geospatial software. The distance from each household to the nearest perennial, functional, protected water source was calculated; for each cluster, we computed the median distance and the proportions of households within 500 m and beyond 1500 m of such a water source. Cluster-specific sanitation coverage was ascertained using a random sample of 13 households per cluster. These parameters were included as covariates in randomization to optimize balance in water and sanitation access across treatment arms at the start of the trial. The observed high variability between clusters in both parameters suggests that constraining on these factors was needed to reduce risk of bias. © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America.

  3. Analysis of Eddy Current Capabilities for the Detection of Outer Diameter Stress Corrosion Cracking in Small Bore Metallic Structures

    NASA Technical Reports Server (NTRS)

    Wincheski, Buzz; Williams, Phillip; Simpson, John

    2007-01-01

    The use of eddy current techniques for the detection of outer diameter damage in tubing and many complex aerospace structures often requires the use of an inner diameter probe due to a lack of access to the outside of the part. In small bore structures the probe size and orientation are constrained by the inner diameter of the part, complicating the optimization of the inspection technique. Detection of flaws through a significant remaining wall thickness becomes limited not only by the standard depth of penetration, but also geometrical aspects of the probe. Recently, an orthogonal eddy current probe was developed for detection of such flaws in Space Shuttle Primary Reaction Control System (PRCS) Thrusters. In this case, the detection of deeply buried stress corrosion cracking by an inner diameter eddy current probe was sought. Probe optimization was performed based upon the limiting spatial dimensions, flaw orientation, and required detection sensitivity. Analysis of the probe/flaw interaction was performed through the use of finite and boundary element modeling techniques. Experimental data for the flaw detection capabilities, including a probability of detection study, will be presented along with the simulation data. The results of this work have led to the successful deployment of an inspection system for the detection of stress corrosion cracking in Space Shuttle Primary Reaction Control System (PRCS) Thrusters.

  4. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
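
    A minimal SUMT sketch with an exterior quadratic penalty, for orientation; the abstract's concern is precisely the ill-conditioning such loops create as the penalty weight grows, and its singular-perturbation remedy is not reproduced here. The test problem and penalty schedule are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # min f(x) = (x1 - 1)^2 + (x2 - 2)^2   subject to   g(x) = x1 + x2 - 2 = 0
    f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2
    g = lambda x: x[0] + x[1] - 2.0

    x = np.zeros(2)
    for r in [1.0, 10.0, 1e2, 1e3, 1e4, 1e5]:       # increasing penalty weight
        phi = lambda x, r=r: f(x) + r * g(x)**2     # SUMT penalty function
        x = minimize(phi, x, method="BFGS").x       # warm-start each subproblem
        print(r, x, g(x))  # iterates approach the constrained optimum (0.5, 1.5)
    # The Hessian condition number grows roughly like r: the ill-conditioning
    # that motivates the singular-perturbation treatment discussed above.
    ```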

  5. A TV-constrained decomposition method for spectral CT

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing, and security inspection. Material decomposition is an important step in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of a general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem within the ADMM framework. Validation on both a numerical dental phantom in simulation and a real pig-leg phantom on a practical CT system using dual-energy imaging is performed. Both numerical and physical experiments yield visibly better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
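
    As a hedged illustration of the flavor of such a scheme, the sketch below solves a 1D TV-regularized least-squares problem with ADMM, using shrinkage on finite differences. The paper's method operates on multi-channel CT coefficient images with a constrained (rather than penalized) TV term, so this is only a structural analogue on made-up data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100
    x_true = np.concatenate([np.zeros(40), np.ones(30), 0.5 * np.ones(30)])  # piecewise constant
    A = rng.normal(size=(80, n)) / np.sqrt(80)    # underdetermined measurement operator
    b = A @ x_true + 0.01 * rng.normal(size=80)

    D = np.diff(np.eye(n), axis=0)                # finite-difference operator for TV
    mu, rho = 0.05, 1.0                           # TV weight and ADMM penalty
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    x, z, u = np.zeros(n), np.zeros(n - 1), np.zeros(n - 1)
    M = np.linalg.inv(A.T @ A + rho * D.T @ D)    # small problem: factor once
    for _ in range(200):
        x = M @ (A.T @ b + rho * D.T @ (z - u))   # quadratic x-update
        z = soft(D @ x + u, mu / rho)             # shrinkage on the gradient (TV term)
        u += D @ x - z                            # dual update
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # small relative error
    ```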

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es

    We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
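
    A minimal sketch of the linear programming formulation for a constrained discounted MDP, using occupancy-measure variables on a made-up two-state, two-action example (bounded costs, so the weight-function machinery of the paper is not needed). The optimal stationary policy is read off by normalizing the occupancy measure.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    nS, nA, gamma = 2, 2, 0.9
    P = np.zeros((nS, nA, nS))                   # transition kernel of a toy MDP
    P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
    P[1, 0] = [0.8, 0.2]; P[1, 1] = [0.1, 0.9]
    c = np.array([[1.0, 0.3], [0.5, 2.0]])       # cost to minimize
    d = np.array([[0.0, 1.0], [1.0, 0.0]])       # auxiliary cost, constrained below
    mu0 = np.array([1.0, 0.0])                   # initial distribution
    D = 2.0                                      # budget on discounted auxiliary cost

    # Variables: occupancy measure rho(s, a), flattened. Flow constraints:
    # sum_a rho(s', a) - gamma * sum_{s,a} P(s'|s,a) rho(s, a) = mu0(s')
    A_eq = np.zeros((nS, nS * nA))
    for sp in range(nS):
        for s in range(nS):
            for a in range(nA):
                A_eq[sp, s * nA + a] = (s == sp) - gamma * P[s, a, sp]

    res = linprog(c.ravel(), A_ub=[d.ravel()], b_ub=[D],
                  A_eq=A_eq, b_eq=mu0, bounds=(0, None))
    rho = res.x.reshape(nS, nA)
    policy = rho / rho.sum(axis=1, keepdims=True)   # optimal stationary policy
    print(policy)
    ```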

  7. Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DallAnese, Emiliano; Baker, Kyri; Summers, Tyler

    This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.

  8. Optimal lifting ascent trajectories for the space shuttle

    NASA Technical Reports Server (NTRS)

    Rau, T. R.; Elliott, J. R.

    1972-01-01

    The performance gains which are possible through the use of optimal trajectories for a particular space shuttle configuration are discussed. The spacecraft configurations and aerodynamic characteristics are described. Shuttle mission payload capability is examined with respect to the optimal orbit inclination for unconstrained, constrained, and nonlifting conditions. The effects of velocity loss and heating rate on the optimal ascent trajectory are investigated.

  9. Artifact reduction in short-scan CBCT by use of optimization-based reconstruction

    PubMed Central

    Zhang, Zheng; Han, Xiao; Pearson, Erik; Pelizzari, Charles; Sidky, Emil Y; Pan, Xiaochuan

    2017-01-01

    Increasing interest exists in optimization-based reconstruction in research on, and applications of, cone-beam computed tomography (CBCT), because it has been shown to have the potential to reduce artifacts observed in reconstructions obtained with the Feldkamp–Davis–Kress (FDK) algorithm (or its variants), which is used extensively for image reconstruction in current CBCT applications. In this work, we carried out a study on optimization-based reconstruction for possible reduction of artifacts in FDK reconstruction, specifically from short-scan CBCT data. The investigation includes a set of optimization programs such as image-total-variation (TV)-constrained data-divergence minimization, data-weighting matrices such as the Parker weighting matrix, and objects of practical interest for demonstrating and assessing the degree of artifact reduction. Results of the investigation reveal that appropriately designed optimization-based reconstruction, including the image-TV-constrained reconstruction, can reduce significant artifacts observed in FDK reconstruction in CBCT with a short-scan configuration. PMID:27046218

  10. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  11. Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Lee, Charles H.

    2012-01-01

    We developed a framework and the mathematical formulation for optimizing communication networks using mixed integer programming. The design yields a search space that is much smaller than that of the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching in a continuous space. We solve the constrained optimization problem in two stages: first using the heuristic Particle Swarm Optimization algorithm to get a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so problems with larger numbers of constraints and larger networks can be easily adapted and solved.
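
    The two-stage strategy can be sketched in a hedged form on a small constrained test problem: a penalized particle-swarm stage supplies a good starting point, which an SQP solve then polishes. The penalty weight, swarm coefficients, and test problem are illustrative assumptions, not the scheduling formulation of the paper.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Test problem: min (x1 - 1)^2 + (x2 - 2.5)^2  s.t.  x1 + 2*x2 <= 4,  x >= 0
    f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.5)**2
    g = lambda x: 4.0 - x[0] - 2.0 * x[1]            # >= 0 when feasible

    def pso(n_particles=30, iters=100, seed=0):
        """Stage 1: penalized particle swarm to find a good starting point."""
        rng = np.random.default_rng(seed)
        pen = lambda x: (f(x) + 100.0 * max(0.0, -g(x))**2
                         + 100.0 * np.sum(np.minimum(x, 0.0)**2))
        x = rng.uniform(0, 3, (n_particles, 2)); v = np.zeros_like(x)
        pbest, pval = x.copy(), np.array([pen(p) for p in x])
        gbest = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, 1))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = x + v
            val = np.array([pen(p) for p in x])
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            gbest = pbest[pval.argmin()].copy()
        return gbest

    x0 = pso()                                       # global heuristic stage
    res = minimize(f, x0, method="SLSQP",            # stage 2: SQP polish
                   bounds=[(0, None)] * 2,
                   constraints=[{"type": "ineq", "fun": g}])
    print(x0, "->", res.x)                           # optimum is (0.6, 1.7)
    ```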

  12. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  13. Program Aids Analysis And Optimization Of Design

    NASA Technical Reports Server (NTRS)

    Rogers, James L., Jr.; Lamarsh, William J., II

    1994-01-01

    NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.

  14. Highly Constrained Bicyclic Scaffolds for the Discovery of Protease-Stable Peptides via mRNA Display.

    PubMed

    Hacker, David E; Hoinka, Jan; Iqbal, Emil S; Przytycka, Teresa M; Hartman, Matthew C T

    2017-03-17

    Highly constrained peptides such as the knotted peptide natural products are promising medicinal agents because of their impressive biostability and potent activity. Yet, libraries of highly constrained peptides are challenging to prepare. Here, we present a method which utilizes two robust, orthogonal chemical steps to create highly constrained bicyclic peptide libraries. This technology was optimized to be compatible with in vitro selections by mRNA display. We performed side-by-side monocyclic and bicyclic selections against a model protein (streptavidin). Both selections resulted in peptides with mid-nanomolar affinity, and the bicyclic selection yielded a peptide with remarkable protease resistance.

  15. Do Vascular Networks Branch Optimally or Randomly across Spatial Scales?

    PubMed Central

    Newberry, Mitchell G.; Savage, Van M.

    2016-01-01

    Modern models that derive allometric relationships between metabolic rate and body mass are based on the architectural design of the cardiovascular system and presume sibling vessels are symmetric in terms of radius, length, flow rate, and pressure. Here, we study the cardiovascular structure of the human head and torso and of a mouse lung based on three-dimensional images processed via our software Angicart. In contrast to modern allometric theories, we find systematic patterns of asymmetry in vascular branching, potentially explaining previously documented mismatches between predictions (power-law or concave curvature) and observed empirical data (convex curvature) for the allometric scaling of metabolic rate. To examine why these systematic asymmetries in vascular branching might arise, we construct a mathematical framework to derive predictions based on local, junction-level optimality principles that have been proposed to be favored in the course of natural selection and development. The two most commonly used principles are material-cost optimizations (construction materials or blood volume) and optimization of efficient flow via minimization of power loss. We show that material-cost optimization solutions match with distributions for asymmetric branching across the whole network but do not match well for individual junctions. Consequently, we also explore random branching that is constrained at scales that range from local (junction-level) to global (whole network). We find that material-cost optimizations are the strongest predictor of vascular branching in the human head and torso, whereas locally or intermediately constrained random branching is comparable to material-cost optimizations for the mouse lung. These differences could be attributable to developmentally-programmed local branching for larger vessels and constrained random branching for smaller vessels. PMID:27902691

  16. Design of 2D time-varying vector fields.

    PubMed

    Chen, Guoning; Kwatra, Vivek; Wei, Li-Yi; Hansen, Charles D; Zhang, Eugene

    2012-10-01

    Design of time-varying vector fields, i.e., vector fields that can change over time, has a wide variety of important applications in computer graphics. Existing vector field design techniques do not address time-varying vector fields. In this paper, we present a framework for the design of time-varying vector fields, both for planar domains as well as manifold surfaces. Our system supports the creation and modification of various time-varying vector fields with desired spatial and temporal characteristics through several design metaphors, including streamlines, pathlines, singularity paths, and bifurcations. These design metaphors are integrated into an element-based design to generate the time-varying vector fields via a sequence of basis field summations or spatially constrained optimizations at the sampled times. The key-frame design and field deformation are also introduced to support other user design scenarios. Accordingly, a spatial-temporal constrained optimization and the time-varying transformation are employed to generate the desired fields for these two design scenarios, respectively. We apply the time-varying vector fields generated using our design system to a number of important computer graphics applications that require controllable dynamic effects, such as evolving surface appearance, dynamic scene design, steerable crowd movement, and painterly animation. Many of these are difficult or impossible to achieve via prior simulation-based methods. In these applications, the time-varying vector fields have been applied as either orientation fields or advection fields to control the instantaneous appearance or evolving trajectories of the dynamic effects.

  17. Constraining the braneworld with gravitational wave observations.

    PubMed

    McWilliams, Sean T

    2010-04-09

    Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, l, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining l via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain l at the approximately 1 micron level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of l ≤ 5 microns.

  18. Constraining the Braneworld with Gravitational Wave Observations

    NASA Technical Reports Server (NTRS)

    McWilliams, Sean T.

    2011-01-01

    Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, L, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining L via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain L at the approximately 1 micron level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of L less than or equal to 5 microns.

  19. A hierarchical transition state search algorithm

    NASA Astrophysics Data System (ADS)

    del Campo, Jorge M.; Köster, Andreas M.

    2008-07-01

    A hierarchical transition state search algorithm is developed and its implementation in the density functional theory program deMon2k is described. This search algorithm combines the double-ended saddle interpolation method with local uphill trust region optimization. A new formalism for the incorporation of the distance constraint in the saddle interpolation method is derived. The similarities between the constrained optimizations in the local trust region method and the saddle interpolation are highlighted. The saddle interpolation and local uphill trust region optimizations are validated on a test set of 28 representative reactions. The hierarchical transition state search algorithm is applied to an intramolecular Diels-Alder reaction with several internal rotors, which makes automatic transition state search rather challenging. The obtained reaction mechanism is discussed in the context of the experimentally observed product distribution.

  20. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing.

    PubMed

    Lin, Yu-Hsiu; Hu, Yu-Chen

    2018-04-27

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in the downstream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Moreover, for residential customers implementing DR, maintaining a balance between energy consumption cost and users’ comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can further be deployed with edge computing. In contrast with cloud computing, edge computing, an emerging trend that optimizes cloud computing technologies by pushing computing capability to the IoT edge of the Internet, addresses bandwidth-intensive content and latency-sensitive applications among sensors and central data centers through data analytics at or near the source of the data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of the physical characteristics of power-intensive home appliances from users’ life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users’ comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experiments reported in this paper show that the proposed method can re-shape appliance loads in response to DR signals. Moreover, a reduction in peak power consumption of 13.97% is achieved.
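
    To make the scheduling objective concrete, the toy sketch below chooses appliance start hours to minimize price-weighted energy cost subject to comfort windows, using exhaustive search in place of the paper's constrained PSO (the search space here is tiny). Prices, appliances, and windows are hypothetical.

    ```python
    import numpy as np
    from itertools import product

    # Hypothetical hourly tariff (cents/kWh) over a 24-hour day
    price = np.array([2, 2, 2, 3, 5, 8, 8, 5, 4, 4, 5, 6,
                      7, 7, 6, 5, 6, 9, 10, 8, 5, 4, 3, 2], float)

    # appliance: (power kW, run length h, earliest start, latest finish) -- comfort window
    appliances = {"washer": (0.5, 2, 8, 20),
                  "dishwasher": (1.2, 1, 18, 23),
                  "ev_charger": (3.3, 3, 0, 23)}

    def cost(starts):
        total = 0.0
        for (p, dur, lo, hi), s in zip(appliances.values(), starts):
            if s < lo or s + dur - 1 > hi:       # comfort-window constraint
                return np.inf
            total += p * price[s:s + dur].sum()  # price-weighted energy cost
        return total

    best = min(product(range(24), repeat=len(appliances)), key=cost)
    print(dict(zip(appliances, best)), cost(best))  # cheapest feasible start hours
    ```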

  1. Fast alternating projection methods for constrained tomographic reconstruction

    PubMed Central

    Liu, Li; Han, Yongxin

    2017-01-01

    The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction, is generally slow, and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error, and nonnegativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way rather than through empirical trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data, showing superior performance in reconstruction speed, image quality, and quantification. PMID:28253298
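
    A hedged sketch of sequential alternating projections on a made-up linear system: cyclic projection onto the data-consistency hyperplanes (ART/Kaczmarz sweeps) alternated with projection onto the nonnegativity set. The paper's FS-POCS additionally uses bounded-TV and bounded-error sets, which are omitted here for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    x_true = np.abs(rng.normal(size=n))          # nonnegative ground truth "image"
    A = rng.normal(size=(30, n))                 # underdetermined system (sparse-view-like)
    b = A @ x_true

    x = np.zeros(n)
    for sweep in range(200):
        for i in range(A.shape[0]):              # project onto each hyperplane a_i.x = b_i
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a
        x = np.maximum(x, 0.0)                   # project onto the nonnegativity set
    print(np.linalg.norm(A @ x - b))             # residual shrinks toward the intersection
    ```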

  2. Conformationally constrained farnesoid X receptor (FXR) agonists: alternative replacements of the stilbene.

    PubMed

    Akwabi-Ameyaw, Adwoa; Caravella, Justin A; Chen, Lihong; Creech, Katrina L; Deaton, David N; Madauss, Kevin P; Marr, Harry B; Miller, Aaron B; Navas, Frank; Parks, Derek J; Spearing, Paul K; Todd, Dan; Williams, Shawn P; Wisely, G Bruce

    2011-10-15

    To further explore the optimum placement of the acid moiety in conformationally constrained analogs of GW 4064 1a, a series of stilbene replacements were prepared. The benzothiophene 1f and the indole 1g display the optimal orientation of the carboxylate for enhanced FXR agonist potency. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Updated Magmatic Flux Rate Estimates for the Hawaii Plume

    NASA Astrophysics Data System (ADS)

    Wessel, P.

    2013-12-01

    Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.

  4. On optimal strategies in event-constrained differential games

    NASA Technical Reports Server (NTRS)

    Heymann, M.; Rajan, N.; Ardema, M.

    1985-01-01

    Combat games are formulated as zero-sum differential games with unilateral event constraints. An interior penalty function approach is employed to approximate optimal strategies for the players. The method is very attractive computationally and possesses suitable approximation and convergence properties.
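
    In its simplest static form, the interior penalty approach replaces a constrained minimization by a sequence of unconstrained ones with a shrinking log-barrier weight. A toy sketch, with an invented objective and constraint:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Minimize f(x) subject to g(x) <= 0 by minimizing
    # f(x) - mu * log(-g(x)) for decreasing barrier weight mu.
    f = lambda x: (x[0] - 2) ** 2 + (x[1] + 1) ** 2
    g = lambda x: x[0] + x[1] - 1          # constraint: x0 + x1 <= 1

    x = np.array([0.0, 0.0])               # strictly feasible start
    for mu in [1.0, 0.1, 0.01, 0.001]:
        obj = lambda x, mu=mu: (np.inf if g(x) >= 0
                                else f(x) - mu * np.log(-g(x)))
        x = minimize(obj, x, method="Nelder-Mead").x

    print("barrier solution:", x)          # approaches the constrained optimum
    ```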

  5. Constraining gross primary production and ecosystem respiration estimates for North America using atmospheric observations of carbonyl sulfide (OCS) and CO2

    NASA Astrophysics Data System (ADS)

    He, W.; Ju, W.; Chen, H.; Peters, W.; van der Velde, I.; Baker, I. T.; Andrews, A. E.; Zhang, Y.; Launois, T.; Campbell, J. E.; Suntharalingam, P.; Montzka, S. A.

    2016-12-01

    Carbonyl sulfide (OCS) is a promising novel atmospheric tracer for studying carbon cycle processes. OCS shares a similar uptake pathway with CO2 during photosynthesis but is not released through a respiration-like process, and thus can be used to partition Gross Primary Production (GPP) from Net Ecosystem-atmosphere CO2 Exchange (NEE). This study uses joint atmospheric observations of OCS and CO2 to constrain GPP and ecosystem respiration (Re). Flask data from tower and aircraft sites over North America are collected. We employ our recently developed CarbonTracker (CT)-Lagrange carbon assimilation system, which is based on the CT framework and the Weather Research and Forecasting - Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) model, and the Simple Biosphere model with simulated OCS (SiB3-OCS), which provides prior GPP, Re, and plant OCS uptake fluxes. Plant OCS fluxes derived from both a process model and a GPP-scaled model are tested in our inversion. To investigate the ability of OCS to constrain GPP, and to understand the uncertainty propagated from OCS modeling errors to the constrained fluxes in a dual-tracer system including OCS and CO2, two inversion schemes are implemented and compared: (1) a two-step scheme, which first optimizes GPP using OCS observations and then simultaneously optimizes GPP and Re using CO2 observations, with the OCS-constrained GPP from the first step as prior; and (2) a joint scheme, which simultaneously optimizes GPP and Re using OCS and CO2 observations. We evaluate the results using GPP estimated from space-borne solar-induced fluorescence observations and a data-driven GPP upscaled from FLUXNET data with a statistical model (Jung et al., 2011). Preliminary results for the year 2010 show that the joint inversion makes simulated mole fractions more consistent with observations for both OCS and CO2. However, the uncertainty of the OCS simulation is larger than that of CO2. The two-step and joint schemes perform similarly in improving the consistency with observations for OCS, indicating that OCS can provide an independent constraint in the joint inversion. Optimization yields lower total GPP and Re but higher NEE when tested with prior CO2 fluxes from two biosphere models. This study gives in-depth insight into the role of joint atmospheric OCS and CO2 observations in constraining CO2 fluxes.

  6. Constrained spectral clustering under a local proximity structure assumption

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Xu, Qianjun; des Jardins, Marie

    2005-01-01

    This work focuses on incorporating pairwise constraints into a spectral clustering algorithm. A new constrained spectral clustering method is proposed, as well as an active constraint acquisition technique and a heuristic for parameter selection. We demonstrate that our constrained spectral clustering method, CSC, works well when the data exhibits what we term local proximity structure.
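
    One common (simplified) way to impose pairwise constraints in spectral clustering, in the spirit of this record, is to edit the affinity matrix directly before the eigendecomposition; the data and constraint pairs below are synthetic and this is not the CSC algorithm itself.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(2)

    # Two noisy clusters on a line; affinity from a Gaussian kernel.
    pts = np.concatenate([rng.normal(0, 0.5, 10), rng.normal(4, 0.5, 10)])
    W = np.exp(-(pts[:, None] - pts[None, :]) ** 2)

    # Pairwise constraints written into the affinity matrix:
    # must-link pairs get weight 1, cannot-link pairs get 0.
    for i, j in [(0, 1), (10, 11)]:          # must-link
        W[i, j] = W[j, i] = 1.0
    for i, j in [(0, 10)]:                   # cannot-link
        W[i, j] = W[j, i] = 0.0

    # Normalized graph Laplacian; two-way split by sign of the Fiedler vector.
    d = W.sum(axis=1)
    L = np.eye(len(pts)) - W / np.sqrt(d[:, None] * d[None, :])
    _, vecs = eigh(L)
    labels = (vecs[:, 1] > 0).astype(int)
    print(labels)
    ```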

  7. Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites

    NASA Astrophysics Data System (ADS)

    Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin

    2015-11-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) using an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as the case study. A new kind of surrogate-based SEAR optimization, employing an ensemble surrogate (ES) model together with a genetic algorithm (GA), is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for minimum cost, with the developed ES model embedded in it as a constraint, and a GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications of remediation strategy optimization for DNAPL-contaminated aquifers.
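
    The ensemble-surrogate idea can be sketched with generic stand-ins: scipy's RBF interpolant in place of kriging, a polynomial in place of the second learner, and validation-error-based weights. The black-box function below is invented and is not the SEAR simulator.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(3)

    # Stand-in for an expensive simulation model mapping a design variable
    # (e.g., surfactant injection rate) to a removal rate.
    expensive_sim = lambda x: np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2

    X = rng.uniform(0, 2, (25, 1))            # training designs
    y = expensive_sim(X)

    # Two individual surrogates: an RBF interpolant and a cubic polynomial.
    rbf = RBFInterpolator(X, y)
    coef = np.polyfit(X[:, 0], y, 3)
    poly = lambda X: np.polyval(coef, X[:, 0])

    # Ensemble weights inversely proportional to held-out validation error.
    Xv = rng.uniform(0, 2, (10, 1))
    yv = expensive_sim(Xv)
    errs = np.array([np.abs(rbf(Xv) - yv).mean(), np.abs(poly(Xv) - yv).mean()])
    w = (1 / errs) / (1 / errs).sum()

    ensemble = lambda X: w[0] * rbf(X) + w[1] * poly(X)
    Xt = np.array([[0.7]])
    print("ensemble prediction:", ensemble(Xt), "truth:", expensive_sim(Xt))
    ```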

  8. Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites

    NASA Astrophysics Data System (ADS)

    Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.

    2015-12-01

    The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) using an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as the case study. A new kind of surrogate-based SEAR optimization, employing an ensemble surrogate (ES) model together with a genetic algorithm (GA), is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model was then compared with the four stand-alone surrogate models. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, a high approximation accuracy, indicating that the ES model provides more accurate predictions than the stand-alone surrogate models. A nonlinear optimization model was then formulated for minimum cost, with the developed ES model embedded in it as a constraint, and a GA was used to solve the optimization model and provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications of remediation strategy optimization for DNAPL-contaminated aquifers.

  9. Optimization of shared autonomy vehicle control architectures for swarm operations.

    PubMed

    Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R

    2010-08-01

    The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture as the amount of sensor capability is varied uncovers key insights on the required technology to enable successful shared autonomy for swarm operations.

  10. Live biospeckle laser imaging of root tissues.

    PubMed

    Braga, Roberto A; Dupuy, L; Pasqual, M; Cardoso, R R

    2009-06-01

    Live imaging is now a central component of the study of plant developmental processes. Currently, most techniques are extremely constraining: they rely on the marking of specific cellular structures, which generally applies only to model species because it requires genetic transformation. The biospeckle laser (BSL) system was evaluated as an instrument to measure biological activity in plant tissues. The system allows collecting biospeckle patterns from roots grown in gels. Laser illumination was optimized to obtain the images without undesirable specular reflections from the glass tube. Data on two different plant species were obtained, and the ability of three different methods to analyze the biospeckle patterns is presented. The results showed that the biospeckle could provide quantitative indicators of the molecular activity of roots grown in gel substrate in tissue culture. We also present a particular experimental configuration and the optimal approach to analyze the images. This may serve as a basis for further work on live BSL to study root development.

  11. Inverse design of near unity efficiency perfectly vertical grating couplers.

    PubMed

    Michaels, Andrew; Yablonovitch, Eli

    2018-02-19

    Efficient coupling between integrated optical waveguides and optical fibers is essential to the success of silicon photonics. While many solutions exist, perfectly vertical grating couplers that scatter light out of a waveguide in the direction normal to the waveguide's top surface are an ideal candidate due to their potential to reduce packaging complexity. Designing such couplers with high efficiencies, however, has proven difficult. In this paper, we use inverse electromagnetic design techniques to optimize a high efficiency two-layer perfectly vertical silicon grating coupler. Our base design achieves a chip-to-fiber coupling efficiency of 99.2% (-0.035 dB) at 1550 nm. Using this base design as a starting point, we run subsequent constrained optimizations to realize vertical couplers with coupling efficiencies over 96% and back reflections of less than -40 dB which can be fabricated using 65 nm-resolution lithography. These results demonstrate a new path forward for designing fabrication-tolerant ultra high efficiency grating couplers.

  12. Numerical Solution of the Electron Heat Transport Equation and Physics-Constrained Modeling of the Thermal Conductivity via Sequential Quadratic Programming Optimization in Nuclear Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Paloma, Cynthia S.

    The plasma electron temperature (Te) plays a critical role in a tokamak nuclear fusion reactor since temperatures on the order of 10^8 K are required to achieve fusion conditions. Many plasma properties in a tokamak nuclear fusion reactor are modeled by partial differential equations (PDEs) because they depend not only on time but also on space. In particular, the dynamics of the electron temperature is governed by a PDE referred to as the Electron Heat Transport Equation (EHTE). In this work, a numerical method is developed to solve the EHTE based on a custom finite-difference technique. The solution of the EHTE is compared to temperature profiles obtained by using TRANSP, a sophisticated plasma transport code, for specific discharges from the DIII-D tokamak, located at the DIII-D National Fusion Facility in San Diego, CA. The thermal conductivity (also called thermal diffusivity) of the electrons (Xe) is a plasma parameter that plays a critical role in the EHTE since it indicates how the electron temperature diffusion varies across the minor effective radius of the tokamak. TRANSP approximates Xe through a curve-fitting technique to match experimentally measured electron temperature profiles. While complex physics-based models have been proposed for Xe, a simple mathematical model of the thermal diffusivity that could be used for control design is lacking. In this work, a model for Xe is proposed based on a scaling law involving key plasma variables such as the electron temperature (Te), the electron density (ne), and the safety factor (q). An optimization algorithm is developed based on the Sequential Quadratic Programming (SQP) technique to optimize the scaling factors appearing in the proposed model so that the predicted electron temperature and magnetic flux profiles match predefined target profiles as closely as possible. A simulation study summarizing the outcomes of the optimization procedure is presented to illustrate the potential of the proposed modeling method.
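
    The scaling-law fit lends itself to an off-the-shelf SQP solver. A hedged sketch using scipy's SLSQP, with synthetic profile data and illustrative exponents rather than the values identified in the thesis:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)

    # Hypothetical scaling-law model chi_e = c0 * Te**a * ne**b * q**c.
    Te = rng.uniform(1, 5, 40)       # keV
    ne = rng.uniform(2, 8, 40)       # 10^19 m^-3
    q = rng.uniform(1, 4, 40)        # safety factor
    chi_obs = 0.3 * Te**1.5 * ne**-0.5 * q**1.0 * rng.lognormal(0, 0.05, 40)

    def residual(p):
        c0, a, b, c = p
        chi_model = c0 * Te**a * ne**b * q**c
        # Least squares in log space, natural for multiplicative scalings.
        return np.sum((np.log(chi_model) - np.log(chi_obs)) ** 2)

    # SQP with simple bounds keeping the scaling physically reasonable.
    res = minimize(residual, x0=[1.0, 1.0, 0.0, 0.5], method="SLSQP",
                   bounds=[(1e-3, 10), (-3, 3), (-3, 3), (-3, 3)])
    print("fitted [c0, a, b, c]:", np.round(res.x, 2))
    ```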

  13. Assessing and optimizing infrasound network performance: application to remote volcano monitoring

    NASA Astrophysics Data System (ADS)

    Tailpied, D.; LE Pichon, A.; Marchetti, E.; Kallel, M.; Ceranna, L.

    2014-12-01

    Infrasound is an efficient monitoring technique to remotely detect and characterize explosive sources such as volcanoes. Simulation methods incorporating realistic source and propagation effects have been developed to quantify the detection capability of any network. These methods can also be used to optimize the network configuration (number of stations, geographical location) in order to reduce the detection thresholds, taking into account seasonal effects in infrasound propagation. Recent studies have shown that remote infrasound observations can provide useful information about the eruption chronology and the released acoustic energy. Comparisons with near-field recordings allow evaluation of the potential of these observations to better constrain source parameters when other monitoring techniques (satellite, seismic, gas) are not available or cannot be used. Because of its regular activity, the well-instrumented Mount Etna is a unique natural repetitive source in Europe for testing and optimizing detection and simulation methods. The closest infrasound station that is part of the International Monitoring System is located in Tunisia (IS48). In summer, during the downwind season, it allows an unambiguous identification of signals associated with Etna eruptions. Under the European ARISE project (Atmospheric dynamics InfraStructure in Europe, FP7/2007-2013), experimental arrays have been installed in order to characterize infrasound propagation at different ranges of distance and direction. In addition, a small-aperture array, set up on the flank of the volcano by the University of Firenze, has been operating since 2007. Such an experimental setting offers an opportunity to address the societal benefits that can be achieved through routine infrasound monitoring.

  14. Active constrained clustering by examining spectral Eigenvectors

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri L.; desJardins, Marie; Xu, Qianjun

    2005-01-01

    This work focuses on the active selection of pairwise constraints for spectral clustering. We develop and analyze a technique for Active Constrained Clustering by Examining Spectral eigenvectorS (ACCESS) derived from a similarity matrix.

  15. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert while penalizing model size and roughness, giving solutions similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.

  16. Simulating carbon and water fluxes at Arctic and boreal ecosystems in Alaska by optimizing the modified BIOME-BGC with eddy covariance data

    NASA Astrophysics Data System (ADS)

    Ueyama, M.; Kondo, M.; Ichii, K.; Iwata, H.; Euskirchen, E. S.; Zona, D.; Rocha, A. V.; Harazono, Y.; Nakai, T.; Oechel, W. C.

    2013-12-01

    To better predict carbon and water cycles in Arctic ecosystems, we modified a process-based ecosystem model, BIOME-BGC, by introducing new processes: changes in active layer depth over permafrost and the phenology of tundra vegetation. The parameters of the modified BIOME-BGC were then calibrated with an optimization method. The model was constrained using gross primary productivity (GPP) and net ecosystem exchange (NEE) at 23 eddy covariance sites in Alaska, and vegetation/soil carbon from a literature survey. The model was used to simulate regional carbon and water fluxes of Alaska from 1900 to 2011. Simulated regional fluxes were validated against upscaled GPP, ecosystem respiration (RE), and NEE based on two methods: (1) a machine learning technique and (2) a top-down model. Our initial simulation suggests that the original BIOME-BGC with default ecophysiological parameters substantially underestimated GPP and RE for tundra and overestimated those fluxes for boreal forests, indicating that the incorporation of active layer depth and plant phenology processes is important when simulating carbon and water fluxes in Arctic ecosystems. We will discuss how optimization using the eddy covariance data impacts the historical simulation by comparing the new version of the model with results from the original BIOME-BGC with default ecophysiological parameters.

  17. Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation.

    PubMed

    Dmochowski, Jacek P; Koessler, Laurent; Norcia, Anthony M; Bikson, Marom; Parra, Lucas C

    2017-08-15

    To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4-7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
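
    The closed-form flavor of the targeting problem can be illustrated as a ridge-regularized least-squares fit of the injected field to a desired pattern, subject to the physical zero-net-current constraint, solved via its KKT system. The lead-field matrix and target below are random stand-ins, not a head model, and this is a generic sketch rather than the authors' derivation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical forward (lead-field) matrix A: rows are brain locations,
    # columns are scalp electrodes; d is the desired field pattern.
    n_src, n_elec = 200, 32
    A = rng.standard_normal((n_src, n_elec))
    d = A @ rng.standard_normal(n_elec)      # a reachable target pattern

    # Ridge-regularized least squares with the zero-net-current
    # constraint 1'x = 0, solved through its KKT system.
    lam = 1.0
    ones = np.ones((n_elec, 1))
    K = np.block([[A.T @ A + lam * np.eye(n_elec), ones],
                  [ones.T, np.zeros((1, 1))]])
    rhs = np.concatenate([A.T @ d, [0.0]])
    x = np.linalg.solve(K, rhs)[:n_elec]     # electrode currents

    print("net current:", x.sum())           # ~0 by construction
    print("relative field error:",
          np.linalg.norm(A @ x - d) / np.linalg.norm(d))
    ```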

  18. Optimal use of EEG recordings to target active brain areas with transcranial electrical stimulation

    PubMed Central

    Dmochowski, Jacek P.; Koessler, Laurent; Norcia, Anthony M.; Bikson, Marom; Parra, Lucas C.

    2018-01-01

    To demonstrate causal relationships between brain and behavior, investigators would like to guide brain stimulation using measurements of neural activity. Particularly promising in this context are electroencephalography (EEG) and transcranial electrical stimulation (TES), as they are linked by a reciprocity principle which, despite being known for decades, has not led to a formalism for relating EEG recordings to optimal stimulation parameters. Here we derive a closed-form expression for the TES configuration that optimally stimulates (i.e., targets) the sources of recorded EEG, without making assumptions about source location or distribution. We also derive a duality between TES targeting and EEG source localization, and demonstrate that in cases where source localization fails, so does the proposed targeting. Numerical simulations with multiple head models confirm these theoretical predictions and quantify the achieved stimulation in terms of focality and intensity. We show that constraining the stimulation currents automatically selects optimal montages that involve only a few (4–7) electrodes, with only incremental loss in performance when targeting focal activations. The proposed technique allows brain scientists and clinicians to rationally target the sources of observed EEG and thus overcomes a major obstacle to the realization of individualized or closed-loop brain stimulation. PMID:28578130

  19. Inverse Regional Modeling with Adjoint-Free Technique

    NASA Astrophysics Data System (ADS)

    Yaremchuk, M.; Martin, P.; Panteleev, G.; Beattie, C.

    2016-02-01

    The ongoing parallelization trend in computer technologies facilitates the use of ensemble methods in geophysical data assimilation. Of particular interest are ensemble techniques which do not require the development of tangent linear numerical models and their adjoints for optimization. These "adjoint-free" methods minimize the cost function within a sequence of subspaces spanned by carefully chosen sets of perturbations of the control variables. In this presentation, an adjoint-free variational technique (a4dVar) is demonstrated in an application estimating the initial conditions of two numerical models: the Navy Coastal Ocean Model (NCOM) and the surface wave model (WAM). With the NCOM, the performance of the adjoint and adjoint-free 4dVar data assimilation techniques is compared in application to the hydrographic surveys and velocity observations collected in the Adriatic Sea in 2006. Numerical experiments have shown that a4dVar is capable of providing forecast skill similar to that of conventional 4dVar at comparable computational expense, while being less susceptible to the excitation of ageostrophic modes that are not supported by observations. The adjoint-free technique constrained by the WAM model is tested in a series of data assimilation experiments with synthetic observations in the southern Chukchi Sea. The types of observations considered are directional spectra estimated from point measurements by stationary buoys, significant wave height (SWH) observations by coastal high-frequency radars, and along-track SWH observations by satellite altimeters. The a4dVar forecast skill is shown to be 30-40% better than the skill of the sequential assimilation method based on optimal interpolation which is currently used in operations. Prospects for further development of a4dVar methods in regional applications are discussed.

  20. PI controller design of a wind turbine: evaluation of the pole-placement method and tuning using constrained optimization

    NASA Astrophysics Data System (ADS)

    Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.

    2016-09-01

    PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and then extensive aeroelastic simulations are used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In the traditional tuning approaches, the properties of the different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. Then a constrained optimization setup is suggested to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2. Thereafter, the model is reduced with model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
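
    The tuning setup can be miniaturized: minimize the IAE of a disturbance step response subject to a peak-sensitivity bound Ms <= 2. The sketch below uses a generic first-order plant rather than the HAWCStab2 linearization, so all numbers are purely illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-in for the reduced turbine model: G(s) = 1 / (tau*s + 1).
    tau = 10.0

    def closed_loop_metrics(gains):
        kp, ki = gains
        # IAE of the output after a unit step disturbance at the plant
        # input, via simple Euler simulation of the closed loop.
        dt, T = 0.01, 200.0
        y, xi, iae = 0.0, 0.0, 0.0        # output, integrator state, IAE
        for _ in range(int(T / dt)):
            u = -(kp * y + ki * xi) + 1.0  # PI feedback + disturbance
            y += dt * (-y + u) / tau
            xi += dt * y
            iae += dt * abs(y)
        # Peak sensitivity Ms = max |1 / (1 + G(jw) C(jw))| on a grid.
        w = np.logspace(-3, 2, 400)
        G = 1.0 / (tau * 1j * w + 1.0)
        C = kp + ki / (1j * w)
        Ms = np.max(np.abs(1.0 / (1.0 + G * C)))
        return iae, Ms

    res = minimize(lambda g: closed_loop_metrics(g)[0], x0=[1.0, 0.1],
                   method="SLSQP", bounds=[(0.01, 50), (0.001, 10)],
                   constraints=[{"type": "ineq",
                                 "fun": lambda g: 2.0 - closed_loop_metrics(g)[1]}])
    print("kp, ki =", np.round(res.x, 3),
          "Ms =", round(closed_loop_metrics(res.x)[1], 2))
    ```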

  1. Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.

    PubMed

    Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews

    2015-03-01

    This article focuses on developing a novel noniterative fat water decomposition algorithm more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is only connected with few other nodes. Thanks to the low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.

  2. Reliable Thermoelectric Module Design under Opposing Requirements from Structural and Thermoelectric Considerations

    NASA Astrophysics Data System (ADS)

    Karri, Naveen K.; Mo, Changki

    2018-06-01

    Structural reliability of thermoelectric generation (TEG) systems still remains an issue, especially for applications such as large-scale industrial or automobile exhaust heat recovery, in which TEG systems are subject to dynamic loads and thermal cycling. Traditional thermoelectric (TE) system design and optimization techniques, focused on performance alone, could result in designs that may fail during operation as the geometric requirements for optimal performance (especially the power) are often in conflict with the requirements for mechanical reliability. This study focused on reducing the thermomechanical stresses in a TEG system without compromising the optimized system performance. Finite element simulations were carried out to study the effect of TE element (leg) geometry such as leg length and cross-sectional shape under constrained material volume requirements. Results indicated that the element length has a major influence on the element stresses whereas regular cross-sectional shapes have minor influence. The impact of TE element stresses on the mechanical reliability is evaluated using brittle material failure theory based on Weibull analysis. An alternate couple configuration that relies on the industry practice of redundant element design is investigated. Results showed that the alternate configuration considerably reduced the TE element and metallization stresses, thereby enhancing the structural reliability, with little trade-off in the optimized performance. The proposed alternate configuration could serve as a potential design modification for improving the reliability of systems optimized for thermoelectric performance.

  3. Robust Constrained Blackbox Optimization with Surrogates

    DTIC Science & Technology

    2015-05-21

  4. Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs

    NASA Astrophysics Data System (ADS)

    Chitsazan, N.; Tsai, F. T.

    2012-12-01

    Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been implemented to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, with the single best model, variances that stem from uncertainty in the model structure are ignored. Second, relying on a best model whose weight is not dominant may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desired reliability; however, with the single best model, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate was sensitive to the prediction variance. Also, the prediction variance changed with the extraction rate: a very high extraction rate causes the prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA models are used.
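
    For Gaussian uncertainty, a chance constraint has a well-known deterministic equivalent, which is the basic mechanism behind CC designs of this kind. A minimal sketch with invented coefficients, not the synthetic-aquifer model itself:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Maximize pumping x while keeping a predicted concentration a'x below a
    # limit b with reliability alpha, i.e. P(a'x <= b) >= alpha. With
    # a ~ N(mu, Sigma) this becomes mu'x + z_alpha * sqrt(x'Sigma x) <= b.
    mu = np.array([0.8, 0.5])                 # mean response coefficients
    Sigma = np.array([[0.04, 0.01], [0.01, 0.09]])
    b, alpha = 10.0, 0.95
    z = norm.ppf(alpha)

    cons = {"type": "ineq",
            "fun": lambda x: b - mu @ x - z * np.sqrt(x @ Sigma @ x)}
    res = minimize(lambda x: -(x[0] + x[1]),  # maximize total pumpage
                   x0=[1.0, 1.0], method="SLSQP",
                   bounds=[(0, 20), (0, 20)], constraints=[cons])
    print("optimal rates:", np.round(res.x, 2))
    ```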

  5. Optimal Control of Evolution Mixed Variational Inclusions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alduncin, Gonzalo, E-mail: alduncin@geofisica.unam.mx

    2013-12-15

    Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms to mixed optimality conditions are further presented. Applications to nonlinear diffusion constrained problems as well as quasistatic elastoviscoplastic bilateral contact problems exemplify the theory.

  6. Groundwater management under uncertainty using a stochastic multi-cell model

    NASA Astrophysics Data System (ADS)

    Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.

    2017-08-01

    The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.

  7. A robust, efficient equidistribution 2D grid generation method

    NASA Astrophysics Data System (ADS)

    Chacon, Luis; Delzanno, Gian Luca; Finn, John; Chung, Jeojin; Lapenta, Giovanni

    2007-11-01

    We present a new cell-area equidistribution method for two-dimensional grid adaptation [1]. The method is able to satisfy the equidistribution constraint to arbitrary precision while optimizing desired grid properties (such as isotropy and smoothness). The method is based on the minimization of the grid smoothness integral, constrained to producing a given positive-definite cell volume distribution. The procedure gives rise to a single, non-linear scalar equation with no free parameters. We solve this equation numerically with the Newton-Krylov technique. The ellipticity property of the linearized scalar equation allows multigrid preconditioning techniques to be used effectively. We demonstrate that a solution exists and is unique. Therefore, once the solution is found, the adapted grid cannot be folded, due to the positivity of the constraint on the cell volumes. We present several challenging tests to show that our new method produces optimal grids in which the constraint is satisfied numerically to arbitrary precision. We also compare the new method to the deformation method [2] and show that our new method produces better quality grids. [1] G.L. Delzanno, L. Chacón, J.M. Finn, Y. Chung, G. Lapenta, A new, robust equidistribution method for two-dimensional grid generation, in preparation. [2] G. Liao and D. Anderson, A new approach to grid generation, Appl. Anal. 44, 285-297 (1992).

  8. Number-unconstrained quantum sensing

    NASA Astrophysics Data System (ADS)

    Mitchell, Morgan W.

    2017-12-01

    Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.

  9. Women in Transition: Experiences of Health and Health Care for Recently Incarcerated Women Living in Community Corrections Facilities.

    PubMed

    Colbert, Alison M; Goshin, Lorie S; Durand, Vanessa; Zoucha, Rick; Sekula, L Kathleen

    2016-12-01

    Health priorities of women after incarceration remain poorly understood, constraining development of interventions targeted at their health during that time. We explored the experience of health and health care after incarceration in a focused ethnography of 28 women who had been released from prison or jail within the past year and were living in community corrections facilities. The women's outlook on health was rooted in a newfound core optimism, but this was constrained by their pressing health-related issues; stress and uncertainty; and the pressures of the criminal justice system. These external forces threatened to cause collapse of women's core optimism. Findings support interventions that capitalize on women's optimism and address barriers specific to criminal justice contexts. © 2016 Wiley Periodicals, Inc.

  10. SU-E-T-574: Novel Chance-Constrained Optimization in Intensity-Modulated Proton Therapy Planning to Account for Range and Patient Setup Uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Y; Liang, J; Liu, W

    2015-06-15

    Purpose: We propose to apply a probabilistic framework, namely chance-constrained optimization, to intensity-modulated proton therapy (IMPT) planning subject to range and patient setup uncertainties. The purpose is to hedge against the influence of uncertainties and improve the robustness of treatment plans. Methods: IMPT plans were generated for a typical prostate patient. Nine dose distributions are computed: the nominal one, one each for ±5 mm setup uncertainties along the three cardinal axes, and one each for ±3.5% range uncertainty. These nine dose distributions are supplied to the solver CPLEX as chance constraints to explicitly control plan robustness under these representative uncertainty scenarios with a certain probability. This probability is determined by the tolerance level. We make the chance-constrained model tractable by converting it to a mixed integer optimization problem. The quality of plans derived from this method is evaluated using dose-volume histogram (DVH) indices such as tumor dose homogeneity (D5% - D95%) and coverage (D95%) and normal tissue sparing such as V70 of rectum and V65 and V40 of bladder. We also compare the results from this novel method with the conventional PTV-based method to further demonstrate its effectiveness. Results: Our model can yield clinically acceptable plans within 50 seconds. The chance-constrained optimization produces IMPT plans with comparable target coverage, better target dose homogeneity, and better normal tissue sparing compared to the PTV-based optimization [D95% CTV: 67.9 vs 68.7 (Gy), D5% - D95% CTV: 11.9 vs 18 (Gy), V70 rectum: 0.0% vs 0.33%, V65 bladder: 2.17% vs 9.33%, V40 bladder: 8.83% vs 21.83%]. It also simultaneously makes the plan more robust [width of the DVH band at D50%: 2.0 vs 10.0 (Gy)]. The tolerance level may be varied to control the tradeoff between plan robustness and quality. Conclusion: The chance-constrained optimization generates superior IMPT plans compared to the PTV-based optimization, with explicit control of plan robustness. NIH/NCI K25CA168984, Eagles Cancer Research Career Development, The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, Mayo ASU Seed Grant, and The Kemper Marley Foundation.

  11. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Treesearch

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is the subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  12. Taking the Pulse of Plants

    NASA Astrophysics Data System (ADS)

    Jensen, Kaare H.; Beecher, Sierra; Holbrook, N. Michele; Knoblauch, Michael

    2014-11-01

    Many biological systems use complex networks of vascular conduits to distribute energy over great distances. Examples include sugar transport in the phloem tissue of vascular plants and cytoplasmic streaming in some slime molds. Detailed knowledge of transport patterns in these systems is important for our fundamental understanding of energy distribution during development and for engineering of more efficient crops. Current techniques for quantifying transport in these microfluidic systems, however, only allow for the determination of either the flow speed or the concentration of material. Here we demonstrate a new method, based on confocal microscopy, which allows us to simultaneously determine velocity and solute concentration by tracking the dispersion of a tracer dye. We attempt to rationalize the observed transport patterns through consideration of constrained optimization problems.

  13. Experimental design approach to the process parameter optimization for laser welding of martensitic stainless steels in a constrained overlap configuration

    NASA Astrophysics Data System (ADS)

    Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.

    2011-02-01

    This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models were developed relating the welding parameters to each of the weld characteristics. These were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were the minimization of weld width and the maximization of weld penetration depth, resistance length and shearing force. Laser power and welding speed in the range 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm were identified as the optimal set of process parameters. However, the laser power and welding speed can be reduced to 800-840 W and increased to 4.75-5.37 m/min, respectively, to obtain stronger and better welds.

  14. Hybrid Stochastic Search Technique based Suboptimal AGC Regulator Design for Power System using Constrained Feedback Control Strategy

    NASA Astrophysics Data System (ADS)

    Ibraheem, Omveer, Hasan, N.

    2010-10-01

    A new hybrid stochastic search technique is proposed for the design of a suboptimal AGC regulator for a two-area interconnected non-reheat thermal power system incorporating a DC link in parallel with the AC tie-line. The technique is a hybrid of a genetic algorithm (GA) and simulated annealing (SA). GASA has been successfully applied to constrained feedback control problems where other PI-based techniques have often failed. The main idea in this scheme is to seek a feasible PI-based suboptimal solution at each sampling time; the feasible solution decreases the cost function rather than fully minimizing it.
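
    A generic GA/SA hybrid of the kind described can be sketched by adding a Metropolis acceptance test to the GA replacement step. The quadratic cost below is a stand-in for the AGC performance index, not the two-area power-system model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Stand-in cost: distance of the gain vector from an (invented) optimum.
    cost = lambda g: np.sum((g - np.array([0.3, -1.2, 0.7])) ** 2)

    pop = rng.uniform(-2, 2, (20, 3))   # population of candidate gain vectors
    T = 1.0                             # annealing temperature
    for gen in range(300):
        for i in range(len(pop)):
            j = rng.integers(len(pop))
            # GA variation: blend crossover plus Gaussian mutation.
            child = 0.5 * (pop[i] + pop[j]) + rng.normal(0, 0.1, 3)
            d = cost(child) - cost(pop[i])
            # SA-style Metropolis test decides whether the child replaces
            # its parent, allowing occasional uphill moves early on.
            if d < 0 or rng.random() < np.exp(-d / T):
                pop[i] = child
        T *= 0.99                       # cooling schedule

    best = pop[np.argmin([cost(p) for p in pop])]
    print("best gains:", np.round(best, 3))
    ```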

  15. Evolutionary optimality applied to Drosophila experiments: hypothesis of constrained reproductive efficiency.

    PubMed

    Novoseltsev, V N; Arking, R; Novoseltseva, J A; Yashin, A I

    2002-06-01

    The general purpose of the paper is to test evolutionary optimality theories with experimental data on reproduction, energy consumption, and longevity in a particular Drosophila genotype. We describe the resource allocation in Drosophila females in terms of the oxygen consumption rates devoted to reproduction and to maintenance. The maximum ratio of the component spent on reproduction to the total rate of oxygen consumption, which can be realized by the female reproductive machinery, is called metabolic reproductive efficiency (MRE). We regard MRE as an evolutionary constraint. We demonstrate that MRE may be evaluated for a particular Drosophila phenotype given the fecundity pattern, the age-related pattern of oxygen consumption rate, and the longevity. We use a homeostatic model of aging to simulate a life history of a representative female fly, which describes the control strain in the long-term experiments with the Wayne State Drosophila genotype. We evaluate the theoretically optimal trade-offs in this genotype. Then we apply the Van Noordwijk-de Jong resource acquisition and allocation model, Kirkwood's disposable soma theory, and the Partridge-Barton optimality approach to test whether the experimentally observed trade-offs may be regarded as close to the theoretically optimal ones. We demonstrate that the two approaches by Partridge-Barton and Kirkwood allow a positive answer to the question, whereas the Van Noordwijk-de Jong approach may be used to illustrate the optimality. We discuss the prospects of applying the proposed technique to various Drosophila experiments, in particular those including manipulations affecting fecundity.

  16. Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation

    NASA Astrophysics Data System (ADS)

    Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.

    2017-06-01

    Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages online learning advances to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol blends the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
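
    The core mechanism behind such schemes is a stochastic update of the Lagrange multiplier, which acts as a price (or virtual queue) for the long-term constraint. A minimal sketch with an invented cost model and demand process, not the paper's learn-and-adapt protocol itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Online allocation: per-slot cost c*x**2, long-term constraint that
    # average service x meets average demand d. The multiplier lam prices
    # the constraint and is updated by stochastic dual (sub)gradient steps.
    c, x_max, mu = 0.5, 5.0, 0.05
    lam, total_backlog = 0.0, 0.0

    for t in range(10000):
        d = rng.uniform(0, 3)                   # random demand this slot
        # Primal step: minimize c*x**2 - lam*x over 0 <= x <= x_max.
        x = np.clip(lam / (2 * c), 0, x_max)
        # Dual step: lam <- [lam + mu * (demand - service)]_+
        lam = max(0.0, lam + mu * (d - x))
        total_backlog += d - x

    print("multiplier:", round(lam, 3),
          "avg unmet demand:", round(total_backlog / 10000, 4))
    ```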

  17. An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1995-01-01

    This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.

  18. Optimization of constrained density functional theory

    NASA Astrophysics Data System (ADS)

    O'Regan, David D.; Teobaldi, Gilberto

    2016-07-01

    Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.

  19. Climate change in fish: effects of respiratory constraints on optimal life history and behaviour.

    PubMed

    Holt, Rebecca E; Jørgensen, Christian

    2015-02-01

    The difference between maximum metabolic rate and standard metabolic rate is referred to as aerobic scope, and because it constrains performance it is suggested to constitute a key limiting process prescribing how fish may cope with or adapt to climate warming. We use an evolutionary bioenergetics model for Atlantic cod (Gadus morhua) to predict optimal life histories and behaviours at different temperatures. The model assumes common trade-offs and predicts that optimal temperatures for growth and fitness lie below that for aerobic scope; aerobic scope is thus a poor predictor of fitness at high temperatures. Initially, warming expands aerobic scope, allowing for faster growth and increased reproduction. Beyond the optimal temperature for fitness, increased metabolic requirements intensify foraging and reduce survival; oxygen budgeting conflicts thus constrain successful completion of the life cycle. The model illustrates how physiological adaptations are part of a suite of traits that have coevolved. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  20. A formulation and analysis of combat games

    NASA Technical Reports Server (NTRS)

    Heymann, M.; Ardema, M. D.; Rajan, N.

    1984-01-01

    Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. A target set is associated with each opponent in the event space, in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously, or in neither, a joint capture or a draw, respectively, occurs. Resolution of the encounter is formulated as a combat game, that is, as a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from a resulting constrained zero-sum differential game. Otherwise the optimal strategies are computed from a resulting nonzero-sum game. Since optimal combat strategies may frequently not exist, approximate or delta combat games are also formulated, leading to approximate or delta-optimal strategies. The turret game is used to illustrate combat games. This game is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.

  1. Optimization of beam orientation in radiotherapy using planar geometry

    NASA Astrophysics Data System (ADS)

    Haas, O. C. L.; Burnham, K. J.; Mills, J. A.

    1998-08-01

    This paper proposes a new geometrical formulation of the coplanar beam orientation problem combined with a hybrid multiobjective genetic algorithm. The approach is demonstrated by optimizing the beam orientation in two dimensions, with the objectives being formulated using planar geometry. The traditional formulation of the objectives associated with the organs at risk has been modified to account for the use of complex dose delivery techniques such as beam intensity modulation. The new algorithm attempts to replicate the approach of a treatment planner whilst reducing the amount of computation required. Hybrid genetic search operators have been developed to improve the performance of the genetic algorithm by exploiting problem-specific features. The multiobjective genetic algorithm is formulated around the concept of Pareto optimality which enables the algorithm to search in parallel for different objectives. When the approach is applied without constraining the number of beams, the solution produces an indication of the minimum number of beams required. It is also possible to obtain non-dominated solutions for various numbers of beams, thereby giving the clinicians a choice in terms of the number of beams as well as in the orientation of these beams.
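
    As a minimal sketch of the Pareto-optimality machinery such a multiobjective genetic algorithm searches with, the following non-domination filter extracts the non-dominated solutions from a population's objective values. The two objectives and the random population are hypothetical, not the paper's beam-orientation objectives.

    ```python
    import numpy as np

    def pareto_front(F):
        """Indices of non-dominated rows of objective matrix F (all objectives minimized)."""
        n = F.shape[0]
        keep = []
        for i in range(n):
            dominated = any(
                np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                for j in range(n) if j != i
            )
            if not dominated:
                keep.append(i)
        return keep

    rng = np.random.default_rng(0)
    F = rng.random((50, 2))     # e.g. (target underdose, organ-at-risk dose), made up
    print("non-dominated solutions:", pareto_front(F))
    ```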

  2. Optimization of Dynamic Aperture of PEP-X Baseline Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Min-Huey; Cai, Yunhai

    2010-08-23

    SLAC is developing a long-range plan to transfer the evolving scientific programs at SSRL from the SPEAR3 light source to a much higher performing photon source. A storage ring design that would be housed in the 2.2-km PEP-II tunnel is one of the possibilities. The design goal of the PEP-X storage ring is to approach an optimal light source design with horizontal emittance less than 100 pm and vertical emittance of 8 pm to reach the diffraction limit of 1-Å x-rays. The low emittance design requires a lattice with strong focusing, leading to high natural chromaticity and therefore to strong sextupoles. The latter cause a reduction of the dynamic aperture. The dynamic aperture requirement for horizontal injection at the injection point is about 10 mm. In order to achieve the desired dynamic aperture, the transverse non-linearity of PEP-X is studied. The program LEGO is used to simulate the particle motion. The technique of frequency map analysis is used to analyze the nonlinear behavior. The effect of the non-linearity is minimized subject to the constraints of the limited space. The details and results of the dynamic aperture optimization are discussed in this paper.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J; Lindsay, P; University of Toronto, Toronto

    Purpose: Recent progress in small animal radiotherapy systems has provided the foundation for delivering the heterogeneous, millimeter scale dose distributions demanded by preclinical radiobiology investigations. Despite advances in preclinical dose planning, delivery of highly heterogeneous dose distributions is constrained by the fixed collimation systems and large x-ray focal spot common in small animal radiotherapy systems. This work proposes a dual focal spot dose optimization and delivery method with a large x-ray focal spot used to deliver homogeneous dose regions and a small focal spot to paint spatially heterogeneous dose regions. Methods: Two-dimensional dose kernels were measured for a 1 mm circular collimator with radiochromic film at 10 mm depth in a solid water phantom for the small and large x-ray focal spots on a recently developed small animal microirradiator. These kernels were used in an optimization framework which segmented a desired dose distribution into low- and high-spatial frequency regions for delivery by the large and small focal spot, respectively. For each region, the method determined an optimal set of stage positions and beam-on times. The method was demonstrated by optimizing a bullseye pattern consisting of a 0.75 mm radius circular target and 0.5 and 1.0 mm wide rings alternating between 0 and 2 Gy. Results: Compared to a large focal spot technique, the dual focal spot technique improved the optimized dose distribution: 69.2% of the optimized dose was within 0.5 Gy of the intended dose for the large focal spot, compared to 80.6% for the dual focal spot method. The dual focal spot design required 14.0 minutes of optimization, and will require 178.3 minutes for automated delivery. Conclusion: The dual focal spot optimization and delivery framework is a novel option for delivering conformal and heterogeneous dose distributions at the preclinical level and provides a new experimental option for unique radiobiological investigations. Funding Support: this work is supported by funding from the Natural Sciences and Engineering Research Council of Canada, and a Mitacs-Accelerate fellowship. Conflict of Interest: Dr. Lindsay and Dr. Jaffray are listed as inventors of the small animal microirradiator described herein. This system has been licensed for commercial development.
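
    A one-dimensional toy of the beam-on-time optimization step described above: given dose kernels for a large and a small focal spot centred at candidate stage positions, non-negative beam-on times reproducing a target profile can be found with non-negative least squares. The Gaussian kernels and the 1-D target are assumptions for illustration, not the measured film kernels.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    x = np.linspace(-5, 5, 201)
    target = np.where(np.abs(x) < 0.75, 2.0, 0.0)   # 2 Gy disc, 0 Gy elsewhere

    centers = np.linspace(-4, 4, 41)                # candidate stage positions

    def kernel(width):
        # columns: dose profile of one spot position (hypothetical Gaussians)
        return np.array([np.exp(-(x - c) ** 2 / (2 * width ** 2)) for c in centers]).T

    A = np.hstack([kernel(1.0), kernel(0.25)])      # large-spot + small-spot columns
    t, resid = nnls(A, target)                      # beam-on times constrained >= 0
    print("fit residual:", resid, " total beam-on time:", t.sum())
    ```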

  4. Advanced Nuclear Fuel Cycle Transitions: Optimization, Modeling Choices, and Disruptions

    NASA Astrophysics Data System (ADS)

    Carlsen, Robert W.

    Many nuclear fuel cycle simulators have evolved over time to help understand the nuclear industry/ecosystem at a macroscopic level. Cyclus is one of the first fuel cycle simulators to accommodate larger-scale analysis with its liberal open-source licensing and first-class Linux support. Cyclus also has features that uniquely enable investigating the effects of modeling choices on fuel cycle simulators and scenarios. This work is divided into three experiments focusing on optimization, effects of modeling choices, and fuel cycle uncertainty. Effective optimization techniques are developed for automatically determining desirable facility deployment schedules with Cyclus. A novel method for mapping optimization variables to deployment schedules is developed. This allows relationships between reactor types and scenario constraints to be represented implicitly in the variable definitions, enabling the use of optimizers lacking constraint support. It also prevents wasting computational resources evaluating infeasible deployment schedules. Deployed power capacity over time and deployment of non-reactor facilities are also included as optimization variables. There are many fuel cycle simulators built with different combinations of modeling choices. Comparing results between them is often difficult. Cyclus' flexibility allows comparing the effects of many such modeling choices. Reactor refueling cycle synchronization and inter-facility competition, among other effects, are compared in four cases, each using combinations of fleet-based or individually modeled reactors with 1-month or 3-month time steps. There are noticeable differences in results for the different cases. The largest differences occur during periods of constrained reactor fuel availability. This and similar work can help improve the quality of fuel cycle analysis generally. There is significant uncertainty associated with deploying new nuclear technologies, such as time-frames for technology availability and the cost of building advanced reactors. Historically, fuel cycle analysis has focused on answering questions of fuel cycle feasibility and optimality. However, there has not been much work done to address uncertainty in fuel cycle analysis, helping answer questions of fuel cycle robustness. This work develops and demonstrates a methodology for evaluating deployment strategies while accounting for uncertainty. Techniques are developed for measuring the hedging properties of deployment strategies under uncertainty. Additionally, methods for using optimization to automatically find good hedging strategies are demonstrated.

  5. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  6. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    DOE PAGES

    Thomas, R. Quinn; Brooks, Evan B.; Jersild, Annika L.; ...

    2017-07-26

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model–data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.

  7. Leveraging 35 years of Pinus taeda research in the southeastern US to constrain forest carbon cycle predictions: regional data assimilation using ecosystem experiments

    NASA Astrophysics Data System (ADS)

    Quinn Thomas, R.; Brooks, Evan B.; Jersild, Annika L.; Ward, Eric J.; Wynne, Randolph H.; Albaugh, Timothy J.; Dinon-Aldridge, Heather; Burkhart, Harold E.; Domec, Jean-Christophe; Fox, Thomas R.; Gonzalez-Benecke, Carlos A.; Martin, Timothy A.; Noormets, Asko; Sampson, David A.; Teskey, Robert O.

    2017-07-01

    Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for the development of future predictions of forest productivity for natural resource managers that leverage a rich dataset of integrated ecosystem observations across a region.
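
    A toy illustration of the Bayesian calibration idea behind approaches like DAPPER, reduced to one parameter: a Metropolis sampler constrains a growth-model parameter with synthetic observations and reports posterior uncertainty. The one-parameter "model" and data are invented; the actual framework is hierarchical and assimilates many parameters and data streams.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    ages = np.arange(1, 26)

    def model(theta):                      # hypothetical stem-biomass growth curve
        return 150.0 * (1.0 - np.exp(-theta * ages))

    theta_true, sigma = 0.12, 5.0
    obs = model(theta_true) + rng.normal(0, sigma, ages.size)   # synthetic plots

    def log_post(theta):
        if not 0.0 < theta < 1.0:          # uniform prior on (0, 1)
            return -np.inf
        return -0.5 * np.sum((obs - model(theta)) ** 2) / sigma ** 2

    samples, theta = [], 0.5
    lp = log_post(theta)
    for _ in range(20000):                 # random-walk Metropolis
        prop = theta + rng.normal(0, 0.02)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)

    post = np.array(samples[5000:])        # discard burn-in
    print("posterior mean:", post.mean(),
          " 95% interval:", np.percentile(post, [2.5, 97.5]))
    ```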

  8. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models; it is an essential step in simulation analysis and stochastic optimization. We adopt a constrained maximum...

  9. Computer object segmentation by nonlinear image enhancement, multidimensional clustering, and geometrically constrained contour optimization

    NASA Astrophysics Data System (ADS)

    Bruynooghe, Michel M.

    1998-04-01

    In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering with geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has a favorable theoretical worst-case complexity. Candidate objects are then extracted and initially delineated by an optimized region merging algorithm that is based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Nonlinear constrained optimization has been used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters that is required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moire image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.

  10. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow for bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.

  11. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on the expectation of a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
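
    A highly simplified sketch of the dualization idea, under invented stage data: per-stage risks bound the joint chance constraint (risk allocation), the risk budget is dualized with a multiplier, and each stage is then minimized independently; bisection on the multiplier recovers a feasible policy. This is not the CEMAT implementation, and a small duality gap is possible in such discrete problems.

    ```python
    import numpy as np

    costs = np.array([[5.0, 2.0],
                      [4.0, 1.5],
                      [6.0, 2.5],
                      [3.0, 1.0]])        # per stage: [safe action, risky action]
    risks = np.array([[0.001, 0.030],
                      [0.002, 0.020],
                      [0.001, 0.015],
                      [0.002, 0.025]])    # failure probability per action
    Delta = 0.05                          # overall risk bound

    def solve(lam):
        """Unconstrained per-stage minimization of cost + lam * risk."""
        idx = np.argmin(costs + lam * risks, axis=1)
        stages = np.arange(len(costs))
        return idx, costs[stages, idx].sum(), risks[stages, idx].sum()

    lo, hi = 0.0, 1e4                     # bisect the dual multiplier
    for _ in range(60):
        lam = 0.5 * (lo + hi)
        if solve(lam)[2] <= Delta:
            hi = lam                      # feasible: try a smaller risk penalty
        else:
            lo = lam

    actions, cost, risk = solve(hi)
    print("actions:", actions, " cost:", cost, " risk used:", round(risk, 4))
    ```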

  12. SU-E-I-23: A General KV Constrained Optimization of CNR for CT Abdominal Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weir, V; Zhang, J

    Purpose: While tube current modulation has been well accepted for CT dose reduction, kV adjustment in clinical settings is still at an early stage. This is mainly due to the limited kV options of most current CT scanners. kV adjustment can potentially reduce radiation dose and optimize image quality. This study optimizes CT abdomen imaging acquisition under the assumption of a continuously adjustable kV, with the goal of providing the best contrast to noise ratio (CNR). Methods: For a given dose (CTDIvol) level, the CNRs at different kV and pitch settings were measured with an ACR GAMMEX phantom. The phantom was scanned in a Siemens Sensation 64 scanner and a GE VCT 64 scanner. A constrained mathematical optimization was used to find the kV which led to the highest CNR for the anatomy and pitch setting. Parametric equations were obtained from polynomial fitting of plots of kV vs CNR. A suitable constraint region for the optimization was chosen. Subsequent optimization yielded a peak CNR at a particular kV for each collimation and pitch setting. Results: The constrained mathematical optimization approach yields kV values of 114.83 and 113.46, with CNRs of 1.27 and 1.11 at pitches of 1.2 and 1.4, respectively, for the Siemens Sensation 64 scanner with a collimation of 32 x 0.625 mm. An optimized kV of 134.25 with a CNR of 1.51 is obtained for the GE VCT 64 slice scanner with a collimation of 32 x 0.625 mm and a pitch of 0.969. At a pitch of 0.516 and 32 x 0.625 mm collimation, an optimized kV of 133.75 and a CNR of 1.14 were found for the GE VCT 64 slice scanner. Conclusion: CNR in CT image acquisition can be further optimized with a continuous kV option instead of the current discrete or fixed kV settings. A continuous kV option is key for individualized CT protocols.
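
    A minimal sketch of the fit-then-maximize step described above: fit a polynomial to CNR-vs-kV points and maximize it on a constrained kV interval. The sample CNR values below are invented, not the phantom measurements from the study.

    ```python
    import numpy as np

    kv = np.array([80.0, 100.0, 120.0, 140.0])
    cnr = np.array([0.82, 1.10, 1.26, 1.21])        # hypothetical phantom CNRs

    p = np.polynomial.Polynomial.fit(kv, cnr, deg=2)  # parametric CNR(kV) curve

    kv_lo, kv_hi = 80.0, 140.0                      # constraint region for kV
    crit = [float(r) for r in p.deriv().roots() if kv_lo <= r <= kv_hi]
    kv_opt = max([kv_lo, kv_hi] + crit, key=p)      # best of interior and endpoints
    print(f"optimal kV = {kv_opt:.2f}, predicted CNR = {p(kv_opt):.3f}")
    ```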

  13. A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement

    NASA Astrophysics Data System (ADS)

    Koner, P.; Battaglia, A.; Simmer, C.

    2009-04-01

    The retrieval of rain rate from attenuated radar measurements (e.g., the Cloud Profiling Radar on board CloudSat, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need for additional information (such as path-integrated attenuation (PIA) derived from surface reference techniques, or precipitation water path estimated from a co-located passive microwave radiometer) to constrain the retrieval. It is generally argued, on the basis of optimal estimation theory, that there is no solution without constraining the problem in a case of visible attenuation, because there is not enough information content to solve the problem. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises the spontaneous question: is all the information enclosed in this additional measurement? This also appears to contradict information theory, because one measurement can introduce only one degree of freedom into the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be answered using the estimation and information theories of OEM. On the other hand, Koner and Drummond [2] argued that the OEM is basically a regularization method, where the a priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a priori and error covariance matrices. The regularization is required to reduce the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces into the state space in an ill-posed inversion. In this work, the above-mentioned question is discussed on the basis of regularization theory, error mitigation and eigenvalue mathematics. References: 1. L'Ecuyer TS and Stephens G. An estimation based precipitation retrieval algorithm for attenuating radar. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.
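
    A synthetic demonstration of the regularization argument above: an ill-conditioned Jacobian lets measurement noise explode in the state space, and a Tikhonov term (playing the role of the a priori covariance in OEM) tames the condition number. The matrices and noise level are made up; this is not a radar retrieval.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    U, _ = np.linalg.qr(rng.normal(size=(50, 50)))
    V, _ = np.linalg.qr(rng.normal(size=(4, 4)))
    # Jacobian with rapidly decaying singular values -> ill-posed inversion
    K = U[:, :4] @ np.diag([1.0, 0.3, 1e-3, 1e-5]) @ V.T

    x_true = np.array([1.0, -2.0, 0.5, 3.0])
    y = K @ x_true + 1e-3 * rng.normal(size=50)     # noisy measurements

    x_ls = np.linalg.lstsq(K, y, rcond=None)[0]     # unregularized: noise-amplified
    gamma = 1e-2                                    # regularization strength
    x_reg = np.linalg.solve(K.T @ K + gamma * np.eye(4), K.T @ y)

    print("cond(K):", np.linalg.cond(K))
    print("least-squares error:", np.linalg.norm(x_ls - x_true))
    print("regularized error:  ", np.linalg.norm(x_reg - x_true))
    ```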

  14. Optimality conditions for the numerical solution of optimization problems with PDE constraints :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro; Ridzal, Denis

    2014-03-01

    A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.

  15. CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.

    PubMed

    Zahery, Mahsa; Maes, Hermine H; Neale, Michael C

    2017-08-01

    We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method. R package version, 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a very popular FORTRAN implementation of the SQP method (Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2). Stanford, CA: Stanford University Systems Optimization Laboratory)), and SLSQP (another SQP implementation, available as part of the NLOPT collection (Johnson, 2014, The NLopt nonlinear-optimization package. Retrieved from http://ab-initio.mit.edu/nlopt)) are the three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool, called Massif, on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
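
    For readers unfamiliar with SQP-style solvers, here is a small nonlinearly constrained problem of the kind these optimizers target, solved with SciPy's SLSQP driver (a different front end from the OpenMx builds compared in the paper; the problem itself is invented).

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

    constraints = [
        {"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.0},   # x0 * x1 >= 1
        {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},   # x0 + x1 = 3
    ]

    res = minimize(objective, x0=[2.0, 1.0], method="SLSQP",
                   constraints=constraints)
    print("x* =", res.x, " objective =", res.fun, " status:", res.message)
    ```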

  16. Constrained Bayesian Active Learning of Interference Channels in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Tsakmalis, Anestis; Chatzinotas, Symeon; Ottersten, Bjorn

    2018-02-01

    In this paper, a sequential probing method for interference constraint learning is proposed to allow a centralized Cognitive Radio Network (CRN) accessing the frequency band of a Primary User (PU) in an underlay cognitive scenario with a designed PU protection specification. The main idea is that the CRN probes the PU and subsequently eavesdrops on the reverse PU link to acquire the binary ACK/NACK packet. This feedback indicates whether the probing-induced interference is harmful or not and can be used to learn the PU interference constraint. The cognitive part of this sequential probing process is the selection of the power levels of the Secondary Users (SUs), which aims to learn the PU interference constraint with a minimum number of probing attempts while setting a limit on the number of harmful probing-induced interference events, or equivalently of NACK packet observations, over a time window. This constrained design problem is studied within the Active Learning (AL) framework and an optimal solution is derived and implemented with a sophisticated, accurate and fast Bayesian Learning method, the Expectation Propagation (EP). The performance of this solution is also demonstrated through numerical simulations and compared with modified versions of AL techniques we developed in earlier work.

  17. Benchmarking optimization software with COPS 3.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolan, E. D.; More, J. J.; Munson, T. S.

    2004-05-24

    The authors describe version 3.0 of the COPS set of nonlinearly constrained optimization problems. They have added new problems, as well as streamlined and improved most of the problems. They also provide a comparison of the FILTER, KNITRO, LOQO, MINOS, and SNOPT solvers on these problems.
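
    Solver comparisons on benchmark sets like COPS are commonly summarized with performance profiles (introduced by Dolan and Moré): for each solver, the fraction of problems it solves within a factor tau of the best solver. A minimal computation on invented timings is sketched below; the timing table is not from the report.

    ```python
    import numpy as np

    t = np.array([                       # rows: problems, cols: solvers
        [1.0, 1.4, 9.0],
        [2.0, 1.0, 2.5],
        [np.inf, 3.0, 3.1],              # inf = solver failed on that problem
        [0.5, 0.6, 0.4],
    ])

    ratios = t / t.min(axis=1, keepdims=True)   # performance ratio per problem

    def profile(tau):
        """Fraction of problems each solver handles within factor tau of the best."""
        return np.mean(ratios <= tau, axis=0)

    for tau in [1.0, 2.0, 5.0, 10.0]:
        print(f"tau = {tau:5.1f}  rho(tau) per solver:", profile(tau))
    ```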

  18. Evaluating data worth for ground-water management under uncertainty

    USGS Publications Warehouse

    Wagner, B.J.

    1999-01-01

    A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models, a chance-constrained ground-water management model and an integer-programming sampling network design model, to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) The optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information, i.e., the projected reduction in management costs, with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.

  19. Experimental and simulation studies of multivariable adaptive optimization of continuous bioreactors using bilevel forgetting factors.

    PubMed

    Chang, Y K; Lim, H C

    1989-08-20

    A multivariable on-line adaptive optimization algorithm using a bilevel forgetting factor method was developed and applied to a continuous baker's yeast culture in simulation and experimental studies to maximize the cellular productivity by manipulating the dilution rate and the temperature. The algorithm showed good optimization speed, adaptability, and reoptimization capability. It was able to stably maintain the process around the optimum point for an extended period of time. Two cases were investigated: an unconstrained and a constrained optimization. In the constrained optimization, the ethanol concentration was used as an index for the baking quality of the yeast cells. An equality constraint with a quadratic penalty was imposed on the ethanol concentration to keep its level close to a hypothetical "optimum" value. The developed algorithm was experimentally applied to a baker's yeast culture to demonstrate its validity. Only the unconstrained optimization was carried out experimentally. A set of tuning parameter values was suggested after evaluating the results from several experimental runs. With those tuning parameter values the optimization took 50-90 h. At the attained steady state, the dilution rate was 0.310 h(-1), the temperature 32.8 degrees C, and the cellular productivity 1.50 g/L/h.
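
    The basic ingredient of such on-line adaptive optimizers is recursive least squares with a forgetting factor: old data are discounted so the fitted local model tracks a drifting process, and the model's optimum gives the next operating point. A single-factor sketch on a hypothetical process follows (the paper layers two forgetting factors, which this toy does not reproduce).

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    lam = 0.97                           # forgetting factor (< 1 discounts old data)
    theta = np.zeros(2)                  # coefficients of local model a*u + b*u^2
    P = np.eye(2) * 100.0                # inverse-information matrix

    def measure(u):                      # hypothetical noisy process response
        return 1.5 * u - 0.8 * u ** 2 + 0.01 * rng.normal()

    for k in range(200):
        u = rng.uniform(0.0, 1.5)        # probing input
        phi = np.array([u, u ** 2])      # regressors
        y = measure(u)
        g = P @ phi / (lam + phi @ P @ phi)
        theta = theta + g * (y - phi @ theta)       # RLS update
        P = (P - np.outer(g, phi @ P)) / lam

    a, b = theta                         # maximize a*u + b*u^2  (b < 0): u* = -a / (2b)
    print("estimated optimum input:", -a / (2 * b))
    ```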

  20. A subgradient approach for constrained binary optimization via quantum adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Karimi, Sahar; Ronagh, Pooya

    2017-08-01

    An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This should be an efficient prescription for solving the Lagrangian dual problem in the presence of an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor succeed more often in solving these problems than the usual penalty-term approaches.
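
    A rough sketch of the subgradient scheme on the Lagrangian dual of a constrained binary problem, with invented data: the relaxed binary subproblem is solved by brute force here, standing in for the quantum annealer, and the multiplier is updated along the constraint residual with a diminishing step.

    ```python
    import numpy as np
    from itertools import product

    Q = np.array([[-3.0,  2.0,  0.0],
                  [ 2.0, -2.0,  1.0],
                  [ 0.0,  1.0, -4.0]])      # hypothetical quadratic objective
    A = np.array([[1.0, 1.0, 1.0]])         # constraint: x0 + x1 + x2 = 1
    b = np.array([1.0])

    lam = np.zeros(1)
    best = np.zeros(3)
    for it in range(100):
        # inner step: minimize x^T Q x + lam^T (A x - b) over x in {0, 1}^3
        # (brute force here; the paper hands this relaxed problem to an annealer)
        best = min((np.array(x, dtype=float) for x in product([0, 1], repeat=3)),
                   key=lambda x: x @ Q @ x + lam @ (A @ x - b))
        g = A @ best - b                    # subgradient of the dual function
        lam += g / (1 + it)                 # diminishing step size

    print("x =", best, " residual =", A @ best - b, " lam =", lam)
    ```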

  1. An intelligent emissions controller for fuel lean gas reburn in coal-fired power plants.

    PubMed

    Reifman, J; Feldman, E E; Wei, T Y; Glickert, R W

    2000-02-01

    The application of artificial intelligence techniques for performance optimization of the fuel lean gas reburn (FLGR) system is investigated. A multilayer, feedforward artificial neural network is applied to model static nonlinear relationships between the distribution of injected natural gas into the upper region of the furnace of a coal-fired boiler and the corresponding oxides of nitrogen (NOx) emissions exiting the furnace. Based on this model, optimal distributions of injected gas are determined such that the largest NOx reduction is achieved for each value of total injected gas. This optimization is accomplished through the development of a new optimization method based on neural networks. This new optimal control algorithm, which can be used as an alternative generic tool for solving multidimensional nonlinear constrained optimization problems, is described and its results are successfully validated against an off-the-shelf tool for solving mathematical programming problems. Encouraging results obtained using plant data from one of Commonwealth Edison's coal-fired electric power plants demonstrate the feasibility of the overall approach. Preliminary results show that the use of this intelligent controller will also enable the determination of the most cost-effective operating conditions of the FLGR system by considering, along with the optimal distribution of the injected gas, the cost differential between natural gas and coal and the open-market price of NOx emission credits. Further study, however, is necessary, including the construction of a more comprehensive database, needed to develop high-fidelity process models and to add carbon monoxide (CO) emissions to the model of the gas reburn system.

  2. D-Optimal Experimental Design for Contaminant Source Identification

    NASA Astrophysics Data System (ADS)

    Sai Baba, A. K.; Alexanderian, A.

    2016-12-01

    Contaminant source identification seeks to estimate the release history of a conservative solute given point concentration measurements at some time after the release. This can be mathematically expressed as an inverse problem, with a linear observation operator or parameter-to-observation map, which we tackle using a Bayesian approach. Acquisition of experimental data can be laborious and expensive. The goal is to control the experimental parameters, in our case the sparsity of the sensors, to maximize the information gain subject to physical or budget constraints. This is known as optimal experimental design (OED). D-optimal experimental design seeks to maximize the expected information gain and has long been considered the gold standard in the statistics community. Our goal is to develop scalable methods for D-optimal experimental designs involving large-scale PDE-constrained problems with high-dimensional parameter fields. A major challenge for OED is that a nonlinear optimization algorithm for the D-optimality criterion requires repeated evaluations of the objective function and gradient, each involving the determinant of large, dense matrices; this cost can be prohibitively expensive for applications of interest. We propose novel randomized matrix techniques that bring down the computational costs of the objective function and gradient evaluations by several orders of magnitude compared to the naive approach. The effect of randomized estimators on the accuracy and the convergence of the optimization solver will be discussed. The features and benefits of our new approach will be demonstrated on a challenging model problem from contaminant source identification involving the inference of the initial condition from spatio-temporal observations in a time-dependent advection-diffusion problem.
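
    A toy version of the D-optimality objective, under strong simplifications: for a linear Gaussian model, D-optimal sensor selection maximizes the log-determinant of the posterior information matrix, here by greedy selection on a synthetic sensitivity matrix. At this scale plain log-det evaluations suffice; the paper's randomized estimators address the large-scale case this sketch does not reach.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    m, p = 200, 8                           # candidate sensors, parameters
    G = rng.normal(size=(m, p))             # row i: sensitivity of sensor i (synthetic)
    noise, prior = 0.1, 1.0

    chosen, M = [], np.eye(p) / prior ** 2  # start from the prior precision
    for _ in range(10):                     # budget: 10 sensors
        best_i, best_logdet = None, -np.inf
        for i in set(range(m)) - set(chosen):
            Mi = M + np.outer(G[i], G[i]) / noise ** 2   # rank-one information update
            logdet = np.linalg.slogdet(Mi)[1]
            if logdet > best_logdet:
                best_i, best_logdet = i, logdet
        chosen.append(best_i)
        M = M + np.outer(G[best_i], G[best_i]) / noise ** 2

    print("selected sensors:", sorted(chosen))
    ```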

  3. Rapid Slewing of Flexible Space Structures

    DTIC Science & Technology

    2015-09-01

    ...axis gimbal with elastic joints. The performance of the system can be enhanced by designing antenna maneuvers in which the flexible effects are properly constrained, thus... the effects of the nonlinearities so the vibrational motion can be constrained for a time-optimal slew. It is shown that by constructing an...

  4. Depletion mapping and constrained optimization to support managing groundwater extraction

    USGS Publications Warehouse

    Fienen, Michael N.; Bradbury, Kenneth R.; Kniffin, Maribeth; Barlow, Paul M.

    2018-01-01

    Groundwater models often serve as management tools to evaluate competing water uses including ecosystems, irrigated agriculture, industry, municipal supply, and others. Depletion potential mapping—showing the model-calculated potential impacts that wells have on stream baseflow—can form the basis for multiple potential management approaches in an oversubscribed basin. Specific management approaches can include scenarios proposed by stakeholders, systematic changes in well pumping based on depletion potential, and formal constrained optimization, which can be used to quantify the tradeoff between water use and stream baseflow. Variables such as the maximum amount of reduction allowed in each well and various groupings of wells using, for example, K-means clustering considering spatial proximity and depletion potential are considered. These approaches provide a potential starting point and guidance for resource managers and stakeholders to make decisions about groundwater management in a basin, spreading responsibility in different ways. We illustrate these approaches in the Little Plover River basin in central Wisconsin, United States—home to a rich agricultural tradition, with farmland and urban areas both in close proximity to a groundwater-dependent trout stream. Groundwater withdrawals have reduced baseflow supplying the Little Plover River below a legally established minimum. The techniques in this work were developed in response to engaged stakeholders with various interests and goals for the basin. They sought to develop a collaborative management plan at a watershed scale that restores the flow rate in the river in a manner that incorporates principles of shared governance and results in effective and minimally disruptive changes in groundwater extraction practices.

  5. Depletion Mapping and Constrained Optimization to Support Managing Groundwater Extraction.

    PubMed

    Fienen, Michael N; Bradbury, Kenneth R; Kniffin, Maribeth; Barlow, Paul M

    2018-01-01

    Groundwater models often serve as management tools to evaluate competing water uses including ecosystems, irrigated agriculture, industry, municipal supply, and others. Depletion potential mapping-showing the model-calculated potential impacts that wells have on stream baseflow-can form the basis for multiple potential management approaches in an oversubscribed basin. Specific management approaches can include scenarios proposed by stakeholders, systematic changes in well pumping based on depletion potential, and formal constrained optimization, which can be used to quantify the tradeoff between water use and stream baseflow. Variables such as the maximum amount of reduction allowed in each well and various groupings of wells using, for example, K-means clustering considering spatial proximity and depletion potential are considered. These approaches provide a potential starting point and guidance for resource managers and stakeholders to make decisions about groundwater management in a basin, spreading responsibility in different ways. We illustrate these approaches in the Little Plover River basin in central Wisconsin, United States-home to a rich agricultural tradition, with farmland and urban areas both in close proximity to a groundwater-dependent trout stream. Groundwater withdrawals have reduced baseflow supplying the Little Plover River below a legally established minimum. The techniques in this work were developed in response to engaged stakeholders with various interests and goals for the basin. They sought to develop a collaborative management plan at a watershed scale that restores the flow rate in the river in a manner that incorporates principles of shared governance and results in effective and minimally disruptive changes in groundwater extraction practices. © 2017, National Ground Water Association.
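
    As a sketch of the well-grouping step mentioned in the two records above, the snippet below clusters wells on (x, y, depletion potential) with a plain NumPy k-means so that pumping reductions could be assigned per group. The well coordinates and depletion potentials are synthetic, and this is not the authors' workflow.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    wells = np.column_stack([
        rng.uniform(0, 10, 60),                 # x (km)
        rng.uniform(0, 10, 60),                 # y (km)
        rng.uniform(0, 1, 60),                  # model-derived depletion potential
    ])
    X = (wells - wells.mean(0)) / wells.std(0)  # put features on a common scale

    k = 4
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(50):                         # Lloyd's iterations
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])

    for j in range(k):
        grp = wells[labels == j]
        print(f"group {j}: {len(grp)} wells, "
              f"mean depletion potential {grp[:, 2].mean():.2f}")
    ```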

  6. Pyrrolidine-constrained phenethylamines: The design of potent, selective, and pharmacologically efficacious dipeptidyl peptidase IV (DPP4) inhibitors from a lead-like screening hit.

    PubMed

    Backes, Bradley J; Longenecker, Kenton; Hamilton, Gregory L; Stewart, Kent; Lai, Chunqiu; Kopecka, Hana; von Geldern, Thomas W; Madar, David J; Pei, Zhonghua; Lubben, Thomas H; Zinker, Bradley A; Tian, Zhenping; Ballaron, Stephen J; Stashko, Michael A; Mika, Amanda K; Beno, David W A; Kempf-Grote, Anita J; Black-Schaefer, Candace; Sham, Hing L; Trevillyan, James M

    2007-04-01

    A novel series of pyrrolidine-constrained phenethylamines were developed as dipeptidyl peptidase IV (DPP4) inhibitors for the treatment of type 2 diabetes. The cyclohexene ring of lead-like screening hit 5 was replaced with a pyrrolidine to enable parallel chemistry, and protein co-crystal structural data guided the optimization of N-substituents. Employing this strategy, a >400x improvement in potency over the initial hit was realized in rapid fashion. Optimized compounds are potent and selective inhibitors with excellent pharmacokinetic profiles. Compound 30 was efficacious in vivo, lowering blood glucose in ZDF rats that were allowed to feed freely on a mixed meal.

  7. Statistical mechanics of budget-constrained auctions

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.

  8. Two tradeoffs between economy and reliability in loss of load probability constrained unit commitment

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Wang, Mingqiang; Ning, Xingyao

    2018-02-01

    Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of loss of load probability (LOLP), many probabilistic methods use simplified formulations of LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, two tradeoffs, a primary tradeoff and a secondary tradeoff between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model, are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.

  9. Economic optimization of the energy transport component of a large distributed solar power plant

    NASA Technical Reports Server (NTRS)

    Turner, R. H.

    1976-01-01

    A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to the power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system with minimized cost, constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter, pipe thickness, and insulation thickness distributions associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated as operating expenses and affect the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of any diseconomy of scale.
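
    A grid-search sketch of the tradeoff structure described above: pick a pipe diameter and insulation thickness minimizing capital cost plus the lifetime cost of pump work and heat leak. Every coefficient below is an invented placeholder; only the qualitative structure (capital rising, pumping cost falling steeply with diameter, heat leak falling with insulation) reflects the paper's problem.

    ```python
    import numpy as np

    L, flow, dT = 1000.0, 50.0, 250.0          # pipe length (m), kg/s, K above ambient
    years, kwh_price = 20, 0.05                # plant life, $/kWh

    diam = np.linspace(0.1, 0.6, 51)           # pipe diameter (m)
    ins = np.linspace(0.01, 0.3, 30)           # insulation thickness (m)
    D, T = np.meshgrid(diam, ins, indexing="ij")

    capital = L * (800.0 * D + 1500.0 * T)                  # $, crude material proxy
    pump_kw = 5e-6 * flow ** 3 / D ** 5 * L                 # friction power ~ 1/D^5
    heat_kw = (2.0 * np.pi * 0.05 * dT * L                  # radial conduction loss
               / np.log((D / 2 + T) / (D / 2)) / 1e3)
    operating = (pump_kw + heat_kw) * 8760 * years * kwh_price

    total = capital + operating
    i, j = np.unravel_index(np.argmin(total), total.shape)
    print(f"best diameter {diam[i]:.2f} m, insulation {ins[j]:.2f} m, "
          f"lifetime cost ${total[i, j]:,.0f}")
    ```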

  10. Reinforcement learning solution for HJB equation arising in constrained optimal control problem.

    PubMed

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong

    2015-11-01

    The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Application’s Method of Quadratic Programming for Optimization of Portfolio Selection

    NASA Astrophysics Data System (ADS)

    Kawamoto, Shigeru; Takamoto, Masanori; Kobayashi, Yasuhiro

    Investors and fund managers face the optimization of portfolio selection, that is, determining the kinds and quantities of investments among several stocks. We have developed a method that obtains an optimal stock portfolio two to three times more rapidly than the conventional method by means of efficient universal optimization. The method is characterized by dividing the quadratic matrix of the utility function and the constraint matrices into several sub-matrices, exploiting the structure of these matrices.
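
    For context, a standard mean-variance portfolio QP of the kind this method accelerates is sketched below with SciPy's SLSQP: minimize risk minus a return bonus, subject to full investment and no short selling. The covariance matrix and expected returns are made-up numbers, and this generic solver does not reproduce the paper's block-structured speedup.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.08, 0.12, 0.10, 0.05])        # expected returns (hypothetical)
    Sigma = np.array([[0.10, 0.02, 0.01, 0.00],
                      [0.02, 0.18, 0.03, 0.01],
                      [0.01, 0.03, 0.12, 0.02],
                      [0.00, 0.01, 0.02, 0.05]])   # covariance (hypothetical)
    q = 0.5                                        # risk tolerance

    res = minimize(
        lambda x: x @ Sigma @ x - q * mu @ x,      # quadratic utility
        x0=np.full(4, 0.25),
        bounds=[(0.0, 1.0)] * 4,                   # no short selling
        constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
        method="SLSQP",
    )
    print("weights:", res.x.round(3), " expected return:", mu @ res.x)
    ```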

  12. Robust fuel- and time-optimal control of uncertain flexible space structures

    NASA Technical Reports Server (NTRS)

    Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken

    1993-01-01

    The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.

  13. SU-E-T-551: Monitor Unit Optimization in Stereotactic Body Radiation Therapy for Stage I Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, B-T; Lu, J-Y

    2015-06-15

    Purpose: The study aims to reduce the monitor units (MUs) in stereotactic body radiation therapy (SBRT) treatment for lung cancer by adjusting the optimization parameters. Methods: Fourteen patients with stage I non-small cell lung cancer (NSCLC) were enrolled. Three groups of parameters were adjusted to investigate their effects on MU numbers and organ-at-risk (OAR) sparing: (1) the upper objective of the planning target volume (UOPTV); (2) the strength setting in the MU constraining objective; (3) the max MU setting in the MU constraining objective. Results: We found that the parameters in the optimizer influenced the MU numbers in a priority-, strength- and max-MU-dependent manner. MU numbers showed a decreasing trend with increasing UOPTV priority. MU numbers with low, medium and high priority for the UOPTV were 428±54, 312±48 and 258±31 MU/Gy, respectively. High priority for the UOPTV also spared the heart, cord and lung while maintaining PTV coverage comparable to the low and medium priority groups. MU numbers tended to decrease with increasing strength and decreasing max MU setting. With maximum strength, the MU numbers reached their minimum while maintaining comparable or improved dose to the normal tissues. The MU numbers continued to decline at 85% and 75% max MU settings but no longer decreased at 50% and 25%. Combined with high priority for the UOPTV and the MU constraining objectives, the MU numbers could be decreased to as low as 223±26 MU/Gy. Conclusion: The priority of the UOPTV and the MU constraining objective in the optimizer affect the MU numbers in SBRT treatment for lung cancer. Giving high priority to the UOPTV, setting the strength to the maximum value and the max MU to 50% in the MU objective achieves the lowest MU numbers while maintaining comparable or improved OAR sparing.

  14. Optimization-based channel constrained data aggregation routing algorithms in multi-radio wireless sensor networks.

    PubMed

    Yen, Hong-Hsu

    2009-01-01

    In wireless sensor networks, data aggregation routing can reduce the number of data transmissions so as to achieve energy-efficient transmission. However, data aggregation introduces data retransmission caused by co-channel interference from neighboring sensor nodes. This kind of co-channel interference can result in extra energy consumption and significant latency from retransmission, which would jeopardize the benefits of data aggregation. One possible solution to circumvent data retransmission caused by co-channel interference is to assign different channels to all sensor nodes that are within each other's interference range on the data aggregation tree. By associating each radio with a different channel, a sensor node can receive data from all of its children nodes on the data aggregation tree simultaneously. This can reduce the latency from the data source nodes back to the sink so as to meet the user's delay QoS. Since the number of radios on each sensor node and the number of non-overlapping channels are both limited resources in wireless sensor networks, a challenging problem is to minimize the total transmission cost under a limited number of non-overlapping channels in multi-radio wireless sensor networks. This channel constrained data aggregation routing problem in multi-radio wireless sensor networks is NP-hard. I first model this problem as a mixed integer linear programming problem where the objective is to minimize the total transmission cost subject to the data aggregation routing, channel and radio resource constraints. The solution approach is based on the Lagrangean relaxation technique: some constraints are relaxed into the objective function to derive a set of independent subproblems. Optimally solving these subproblems not only yields a lower bound on the original primal problem but also provides useful information for obtaining primal feasible solutions. By incorporating the Lagrangean multipliers as link arc weights, optimization-based heuristics are proposed to obtain energy-efficient data aggregation trees with better resource (channel and radio) utilization. In the computational experiments, the proposed optimization-based approach is superior to existing heuristics in all tested cases.

  15. Method and apparatus for dispensing small quantities of mercury from evacuated and sealed glass capsules

    DOEpatents

    Grossman, Mark W.; George, William A.; Pai, Robert Y.

    1985-01-01

    A technique for opening an evacuated and sealed glass capsule containing a material to be dispensed that has a relatively high vapor pressure, such as mercury. The capsule is typically disposed in a discharge tube envelope. The technique involves the use of a first light source imaged along the capsule and a second light source imaged across the capsule, substantially transversely to the imaging of the first light source. Means are provided for constraining a segment of the capsule along its length, with the constraining means positioned to correspond with the imaging of the second light source. These light sources are preferably incandescent projection lamps. The constraining means is preferably a multiple-looped wire support.

  16. Solving LP Relaxations of Large-Scale Precedence Constrained Problems

    NASA Astrophysics Data System (ADS)

    Bienstock, Daniel; Zuckerberg, Mark

    We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.
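
    As a schematic of the kind of linear programming relaxation involved, the sketch below relaxes a tiny precedence constrained block-selection problem (in the mining spirit of the paper) to a bound-constrained LP; the instance data and the single capacity constraint are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Block values (negative = waste) and precedence pairs (pred, succ):
    # block `succ` may only be taken if `pred` is taken, i.e. x_succ - x_pred <= 0.
    value = np.array([-1.0, -1.0, 4.0, 3.0, -2.0])
    precedence = [(0, 2), (1, 2), (1, 3), (2, 4)]
    capacity = 3.0                     # at most 3 blocks extracted in the period

    n = len(value)
    A_ub, b_ub = [], []
    for pred, succ in precedence:
        row = np.zeros(n)
        row[succ], row[pred] = 1.0, -1.0
        A_ub.append(row); b_ub.append(0.0)
    A_ub.append(np.ones(n)); b_ub.append(capacity)

    # linprog minimizes, so negate the value vector; relax x in {0,1} to [0,1].
    res = linprog(-value, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    print("LP relaxation bound:", -res.fun, "x:", np.round(res.x, 3))
    ```

    The paper's contribution is solving this class of LPs at the scale of millions of variables, where off-the-shelf solvers like the one above become impractical.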

  17. APPLICATION OF A BIP CONSTRAINED OPTIMIZATION MODEL COMBINED WITH NASA's ATLAS MODEL TO OPTIMIZE THE SOCIETAL BENEFITS OF THE USA's INTERNATIONAL SPACE EXPLORATION AND UTILIZATION INITIATIVE OF 1/14/04

    NASA Technical Reports Server (NTRS)

    Morgenthaler, George W.; Glover, Fred W.; Woodcock, Gordon R.; Laguna, Manuel

    2005-01-01

    The 1/14/04 USA Space Exploration/Utilization Initiative invites all Space-faring Nations, all Space User Groups in Science, Space Entrepreneuring, Advocates of Robotic and Human Space Exploration, Space Tourism and Colonization Promoters, etc., to join an International Space Partnership. With more Space-faring Nations and Space User Groups each year, such a Partnership would require Multi-year (35 yr.-45 yr.) Space Mission Planning. With each Nation and Space User Group demanding priority for its missions, one needs a methodology for objectively selecting the best mission sequences to be added annually to this 45 yr. Moving Space Mission Plan. How can this be done? Planners have suggested building a Reusable, Sustainable, Space Transportation Infrastructure (RSSTI) to increase Mission synergism, reduce cost, and increase scientific and societal returns from this Space Initiative. Morgenthaler and Woodcock presented a Paper at the 55th IAC, Vancouver B.C., Canada, entitled Constrained Optimization Models for Optimizing Multi-Year Space Programs. This Paper showed that a Binary Integer Programming (BIP) Constrained Optimization Model combined with the NASA ATLAS Cost and Space System Operational Parameter Estimating Model has the theoretical capability to solve such problems. IAA Commission III, Space Technology and Space System Development, in its ACADEMY DAY meeting at Vancouver, requested that the Authors and NASA experts find several Space Exploration Architectures (SEAs), apply the combined BIP/ATLAS Models, and report the results at the 56th Fukuoka IAC. While the mathematical Model is given in Ref. [2], this Paper presents the Application saga of that effort.

  18. Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2002-01-01

    This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and for multiple-processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.

  19. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

    In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show the application of our proposed scheme. PMID:27136550

  20. Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations

    PubMed Central

    Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad

    2013-01-01

    Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput to network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
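
    The parametric form mentioned above is commonly handled with Dinkelbach-style iteration: repeatedly maximize f(p) - q·g(p) and update q to the achieved ratio f/g. The sketch below applies that generic recipe to an assumed toy throughput/power model, not the paper's exact network setup.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy energy-efficiency problem: maximize sum(log(1 + h*p)) / (P_c + sum(p))
    # over per-channel powers 0 <= p <= p_max, via Dinkelbach's parametric method.
    h = np.array([2.0, 1.0, 0.5])      # assumed channel gains
    p_max, P_c = 4.0, 1.0              # assumed power cap and circuit power

    f = lambda p: np.sum(np.log1p(h * p))     # throughput (nats)
    g = lambda p: P_c + np.sum(p)             # total consumed power

    q, p = 0.0, np.full(3, 1.0)
    for _ in range(30):
        # inner problem: maximize f(p) - q*g(p); concave, so L-BFGS-B suffices
        res = minimize(lambda p: -(f(p) - q * g(p)), p,
                       bounds=[(0.0, p_max)] * 3, method="L-BFGS-B")
        p = res.x
        q_new = f(p) / g(p)                   # parametric update
        if abs(q_new - q) < 1e-10:
            break
        q = q_new

    print("energy efficiency (nats/joule):", q)
    print("power allocation:", np.round(p, 4))
    ```

    The fixed point of the q-update is exactly the maximal ratio, which is why the iteration is ε-optimal after finitely many steps.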

  1. A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352

    2015-09-01

    In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus imposes an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane; then we design a constrained full waveform inversion problem to prevent the optimization search from entering velocity regions where FGA is not accurate; last, we solve the constrained optimization problem by MLPSO employing FGA solvers of different fidelity. The performance of the proposed method is demonstrated on a two-dimensional full-waveform inversion example with the smoothed Marmousi model.
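
    To make the swarm mechanics concrete, the sketch below implements a plain single-level PSO on a standard multimodal test function; the multi-level structure and the FGA forward solver of the paper are well beyond a toy, so everything here (coefficients, test function) is an illustrative assumption.

    ```python
    import numpy as np

    # Minimal particle swarm optimizer on the multimodal Rastrigin function.
    def rastrigin(x):
        return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

    rng = np.random.default_rng(1)
    n_particles, dim = 40, 5
    lo, hi = -5.12, 5.12
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros_like(x)                            # velocities
    p_best, p_val = x.copy(), rastrigin(x)          # per-particle bests
    g_best = p_best[np.argmin(p_val)]               # swarm best

    w, c1, c2 = 0.72, 1.49, 1.49                    # common PSO coefficients
    for _ in range(300):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = rastrigin(x)
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[np.argmin(p_val)]

    print("best value found:", p_val.min())
    ```

    The multi-level idea in the paper then lets most particles evaluate a cheap low-fidelity objective while a few use the expensive high-fidelity one.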

  2. Improving Free-Piston Stirling Engine Specific Power

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell Henry

    2014-01-01

    This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling Engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.

  3. Improving Free-Piston Stirling Engine Specific Power

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell H.

    2015-01-01

    This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.

  4. Design of Launch Vehicle Flight Control Systems Using Ascent Vehicle Stability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Jang, Jiann-Woei; Alaniz, Abran; Hall, Robert; Bedossian, Nazareth; Hall, Charles; Jackson, Mark

    2011-01-01

    A launch vehicle represents a complicated flex-body structural environment for flight control system design. The Ascent-vehicle Stability Analysis Tool (ASAT) is developed to address the complicity in design and analysis of a launch vehicle. The design objective for the flight control system of a launch vehicle is to best follow guidance commands while robustly maintaining system stability. A constrained optimization approach takes the advantage of modern computational control techniques to simultaneously design multiple control systems in compliance with required design specs. "Tower Clearance" and "Load Relief" designs have been achieved for liftoff and max dynamic pressure flight regions, respectively, in the presence of large wind disturbances. The robustness of the flight control system designs has been verified in the frequency domain Monte Carlo analysis using ASAT.

  5. Structural optimization via a design space hierarchy

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.

    1976-01-01

    Mathematical programming techniques provide a general approach to automated structural design. An iterative method is proposed in which design is treated as a hierarchy of subproblems, one being locally constrained and the other being locally unconstrained. It is assumed that the design space is locally convex in the case of good initial designs and that the objective and constraint functions are continuous, with continuous first derivatives. A general design algorithm is outlined for finding a move direction which will decrease the value of the objective function while maintaining a feasible design. The case of one-dimensional search in a two-variable design space is discussed. Possible applications are discussed. A major feature of the proposed algorithm is its application to problems which are inherently ill-conditioned, such as design of structures for optimum geometry.

  6. Packing optimization for automated generation of complex system's initial configurations for molecular dynamics and docking.

    PubMed

    Martínez, José Mario; Martínez, Leandro

    2003-05-01

    Molecular Dynamics is a powerful methodology for the comprehension at the molecular level of many chemical and biochemical systems. The theories and techniques developed for structural and thermodynamic analyses are well established, and many software packages are available. However, designing starting configurations for dynamics can be cumbersome. Easily generated regular lattices can be used when simple liquids or mixtures are studied. However, for complex mixtures, polymer solutions or liquids adsorbed on solids (for example), this approach is inefficient, and it turns out to be very hard to obtain an adequate coordinate file. In this article, the problem of obtaining an adequate initial configuration is treated as a "packing" problem and solved by an optimization procedure. The initial configuration is chosen in such a way that the minimum distance between atoms of different molecules is greater than a fixed tolerance. The optimization uses a well-known algorithm for box-constrained minimization. Applications are given for biomolecule solvation, many-component mixtures, and interfaces. This approach can reduce the work of designing starting configurations from days or weeks to a few minutes or hours, in an automated fashion. Packing optimization is also shown to be a powerful methodology for space search in docking of small ligands to proteins. This is demonstrated by docking of the thyroid hormone to its nuclear receptor. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 819-825, 2003
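
    The "packing" idea, minimizing a smooth penalty on pairwise overlaps under simple box bounds, can be sketched with a generic box-constrained optimizer. The penalty form and the L-BFGS-B solver below are assumptions for illustration; the paper's own objective and minimizer differ in detail.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Place N points in a box so no two are closer than `tol`, cast as
    # box-constrained minimization of a smooth pairwise-overlap penalty.
    N, tol, box = 20, 1.0, 5.0

    def penalty(flat):
        pts = flat.reshape(N, 3)
        d2 = np.sum((pts[:, None, :] - pts[None, :, :])**2, axis=-1)
        viol = np.maximum(0.0, tol**2 - d2)   # positive only when pairs overlap
        iu = np.triu_indices(N, k=1)          # count each pair once
        return np.sum(viol[iu]**2)

    rng = np.random.default_rng(2)
    x0 = rng.uniform(0.0, box, N * 3)
    res = minimize(penalty, x0, method="L-BFGS-B",
                   bounds=[(0.0, box)] * (N * 3))
    print("residual overlap penalty:", res.fun)   # ~0 means the packing succeeded
    ```

    A zero objective certifies that every minimum-distance constraint holds, which is exactly the feasibility notion the abstract describes.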

  7. A Simple Method for Amplifying RNA Targets (SMART)

    PubMed Central

    McCalla, Stephanie E.; Ong, Carmichael; Sarma, Aartik; Opal, Steven M.; Artenstein, Andrew W.; Tripathi, Anubhav

    2012-01-01

    We present a novel and simple method for amplifying RNA targets (named by its acronym, SMART), and for detection, using engineered amplification probes that overcome existing limitations of current RNA-based technologies. This system amplifies and detects optimal engineered ssDNA probes that hybridize to target RNA. The amplifiable probe-target RNA complex is captured on magnetic beads using a sequence-specific capture probe and is separated from unbound probe using a novel microfluidic technique. Hybridization sequences are not constrained as they are in conventional target-amplification reactions such as nucleic acid sequence amplification (NASBA). Our engineered ssDNA probe was amplified both off-chip and in a microchip reservoir at the end of the separation microchannel using isothermal NASBA. Optimal solution conditions for ssDNA amplification were investigated. Although KCl and MgCl2 are typically found in NASBA reactions, replacing 70 mmol/L of the 82 mmol/L total chloride ions with acetate resulted in optimal reaction conditions, particularly for low but clinically relevant probe concentrations (≤100 fmol/L). With the optimal probe design and solution conditions, we also successfully removed the initial heating step of NASBA, thus achieving a true isothermal reaction. The SMART assay using a synthetic model influenza DNA target sequence served as a fundamental demonstration of the efficacy of the capture and microfluidic separation system, thus bridging our system to a clinically relevant detection problem. PMID:22691910

  8. Non Linear Programming (NLP) Formulation for Quantitative Modeling of Protein Signal Transduction Pathways

    PubMed Central

    Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms. PMID:23226239

  9. Adaptive sparsest narrow-band decomposition method and its applications to rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao

    2017-02-01

    Inspired by the ASTFA method, an adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is first established; the parameters of the filter are determined by solving a nonlinear optimization problem. A regulated differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed with the aim of solving problems that exist in ASTFA. The Gauss-Newton-type method applied to solve the optimization problem in ASTFA is irreplaceable and very sensitive to initial values, whereas a more appropriate optimization method such as a genetic algorithm (GA) can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.

  10. Non Linear Programming (NLP) formulation for quantitative modeling of protein signal transduction pathways.

    PubMed

    Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G

    2012-01-01

    Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical formalisms based on logic are relatively simple but can describe how signals propagate from one protein to the next, and have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of the specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell type specific pathways in normal and transformed hepatocytes using medium and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state of the art optimization algorithms.

  11. Dynamic Resource Management for Parallel Tasks in an Oversubscribed Energy-Constrained Heterogeneous Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imam, Neena; Koenig, Gregory A; Machovec, Dylan

    2016-01-01

    The worth of completing parallel tasks is modeled using utility functions, which monotonically decrease with time and represent the importance and urgency of a task. These functions define the utility earned by a task at the time of its completion. The performance of such a system is measured as the total utility earned by all completed tasks over some interval of time (e.g., 24 hours). To maximize system performance when scheduling dynamically arriving parallel tasks onto a high performance computing (HPC) system that is oversubscribed and energy-constrained, we have designed, analyzed, and compared different heuristic techniques. Four utility-aware heuristics (i.e., Max Utility, Max Utility-per-Time, Max Utility-per-Resource, and Max Utility-per-Energy), three FCFS-based heuristics (Conservative Backfilling, EASY Backfilling, and FCFS with Multiple Queues), and a Random heuristic were examined in this study. A technique that is often used with the FCFS-based heuristics is the concept of a permanent reservation. We compare the performance of permanent reservations with temporary place-holders to demonstrate the advantages that place-holders can provide. We also present a novel energy filtering technique that constrains the maximum energy-per-resource used by each task. We conducted a simulation study to evaluate the performance of these heuristics and techniques in an energy-constrained oversubscribed HPC environment. With place-holders, energy filtering, and dropping of tasks with low potential utility, our utility-aware heuristics are able to significantly outperform the existing FCFS-based techniques.

  12. A general-purpose optimization program for engineering design

    NASA Technical Reports Server (NTRS)

    Vanderplaats, G. N.; Sugimoto, H.

    1986-01-01

    A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.
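
    As a schematic of the combination the abstract names, an Augmented Lagrange Multiplier outer loop wrapped around a BFGS inner minimization, the sketch below solves an assumed two-variable equality-constrained toy problem; it is not ADS itself, and all constants are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # minimize (x0-2)^2 + (x1-1)^2  subject to  h(x) = x0 + x1 - 1 = 0;
    # the analytic solution is x* = (1, 0).
    f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
    h = lambda x: x[0] + x[1] - 1.0

    lam, rho, x = 0.0, 1.0, np.zeros(2)
    for _ in range(15):
        # augmented Lagrangian for the current multiplier and penalty
        L = lambda x: f(x) + lam * h(x) + 0.5 * rho * h(x)**2
        x = minimize(L, x, method="BFGS").x      # inner unconstrained BFGS solve
        lam += rho * h(x)                        # first-order multiplier update
        rho *= 2.0                               # tighten the penalty

    print("x* ~", np.round(x, 4), " constraint residual:", h(x))
    ```

    The multiplier update avoids the ill-conditioning of pure penalty methods, which is why ALM pairs naturally with an unconstrained minimizer such as BFGS.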

  13. A chance constraint estimation approach to optimizing resource management under uncertainty

    Treesearch

    Michael Bevers

    2007-01-01

    Chance-constrained optimization is an important method for managing risk arising from random variations in natural resource systems, but the probabilistic formulations often pose mathematical programming problems that cannot be solved with exact methods. A heuristic estimation method for these problems is presented that combines a formulation for order statistic...
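
    The abstract is truncated, so only the general device it names can be illustrated: estimating a chance constraint from Monte Carlo samples, with an order statistic serving as an empirical quantile. The harvest-style toy model below is an assumption, not the paper's formulation.

    ```python
    import numpy as np

    # Check P(g(x, xi) <= 0) >= 1 - alpha using the ceil((1-alpha)*n)-th
    # order statistic of sampled constraint values as a quantile estimate.
    rng = np.random.default_rng(3)
    xi = rng.normal(loc=1.0, scale=0.3, size=10_000)   # random yield factor

    def chance_feasible(x, alpha=0.1):
        g = x * xi - 5.0                  # constraint: realized harvest x*xi <= 5
        k = int(np.ceil((1 - alpha) * xi.size))
        return np.sort(g)[k - 1] <= 0.0   # empirical (1-alpha) quantile <= 0

    levels = np.linspace(0.0, 6.0, 241)
    feasible = [x for x in levels if chance_feasible(x)]
    print("largest level meeting the 90% chance constraint:",
          round(max(feasible), 3))
    ```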

  14. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.

  15. A DAG Scheduling Scheme on Heterogeneous Computing Systems Using Tuple-Based Chemical Reaction Optimization

    PubMed Central

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule for accelerating convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real world problems. PMID:25143977

  16. A DAG scheduling scheme on heterogeneous computing systems using tuple-based chemical reaction optimization.

    PubMed

    Jiang, Yuyi; Shao, Zhiqing; Guo, Yi

    2014-01-01

    A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule for accelerating convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real world problems.

  17. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The Particle Swarm Optimization (PSO) algorithm resolves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to solve the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can be easily constrained, according to external information, for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic, real Audio-Magnetotelluric (AMT) and Long Period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.

  18. Microgrid Optimal Scheduling With Chance-Constrained Islanding Capability

    DOE PAGES

    Liu, Guodong; Starke, Michael R.; Xiao, B.; ...

    2017-01-13

    To facilitate the integration of variable renewable generation and improve the resilience of electricity supply in a microgrid, this paper proposes an optimal scheduling strategy for microgrid operation considering constraints on islanding capability. A new concept, the probability of successful islanding (PSI), indicating the probability that a microgrid maintains enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation after instantaneously islanding from the main grid, is developed. The PSI is formulated as a mixed-integer linear program using a multi-interval approximation that takes into account the probability distributions of forecast errors of wind, PV and load. With the goal of minimizing the total operating cost while preserving a user-specified PSI, a chance-constrained optimization problem is formulated for the optimal scheduling of microgrids and solved by mixed integer linear programming (MILP). Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling strategy. Lastly, we verify the relationship between PSI and various factors.
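
    For a single Gaussian-modeled forecast error, the quantile idea underlying such chance constraints reduces to a closed-form reserve requirement. The numbers below are assumptions; the paper's PSI couples up- and down-reserves and is approximated piecewise-linearly inside the MILP.

    ```python
    from scipy.stats import norm

    # Requiring P(reserve >= net forecast error) >= p_target under a Gaussian
    # error model is equivalent to: reserve >= mu + z_{p_target} * sigma.
    mu_err, sigma_err = 0.0, 2.5      # assumed error statistics, MW (wind+PV+load)
    p_target = 0.95
    reserve = mu_err + norm.ppf(p_target) * sigma_err
    print(f"spinning reserve for {p_target:.0%} successful islanding: {reserve:.2f} MW")
    ```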

  19. Trajectory optimization for the National aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1993-01-01

    While continuing the application of the inverse dynamics approach in obtaining optimal numerical solutions, the research during the past six months has focused on the formulation and derivation of closed-form solutions for constrained hypersonic flight trajectories. Since it was found in the first year of research that a dominant portion of the optimal ascent trajectory of the aerospace plane is constrained by dynamic pressure and heating constraints, the application of the analytical solutions significantly enhances the efficiency of trajectory optimization, provides better insight into the trajectory, and conceivably has great potential for guidance of the vehicle. Work in this period has been reported in four technical papers. Two of the papers were presented at the AIAA Guidance, Navigation, and Control Conference (Hilton Head, SC, August, 1992) and the Fourth International Aerospace Planes Conference (Orlando, FL, December, 1992). The other two papers have been accepted for publication by the Journal of Guidance, Control, and Dynamics, and will appear in 1993. This report briefly summarizes the work done in the past six months and the work currently underway.

  20. Joint Chance-Constrained Dynamic Programming

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob

    2012-01-01

    This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.

  1. A formulation and analysis of combat games

    NASA Technical Reports Server (NTRS)

    Heymann, M.; Ardema, M. D.; Rajan, N.

    1985-01-01

    Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. With each opponent is associated a target in the event space in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously or in neither, a joint capture or a draw, respectively, is said to occur. Resolution of the encounter is formulated as a combat game; namely, as a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from a resulting constrained zero-sum differential game. Otherwise the optimal strategies are computed from a resulting non-zero-sum game. Since optimal combat strategies frequently may not exist, approximate or delta-combat games are also formulated, leading to approximate or delta-optimal strategies. To illustrate combat games, an example, called the turret game, is considered. This game may be thought of as a highly simplified model of air combat, yet it is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.

  2. Nonlinear model predictive control of a wave energy converter based on differential flatness parameterisation

    NASA Astrophysics Data System (ADS)

    Li, Guang

    2017-01-01

    This paper presents a fast constrained optimization approach, which is tailored for nonlinear model predictive control of wave energy converters (WEC). The advantage of this approach relies on its exploitation of the differential flatness of the WEC model. This can reduce the dimension of the resulting nonlinear programming problem (NLP) derived from the continuous constrained optimal control of WEC using pseudospectral method. The alleviation of computational burden using this approach helps to promote an economic implementation of nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.

  3. Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.

    PubMed

    Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei

    2015-08-01

    In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed by incorporating the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.

  4. Effect of leading-edge load constraints on the design and performance of supersonic wings

    NASA Technical Reports Server (NTRS)

    Darden, C. M.

    1985-01-01

    A theoretical and experimental investigation was conducted to assess the effect of leading-edge load constraints on supersonic wing design and performance. In an effort to delay flow separation and the formation of leading-edge vortices, two constrained, linear-theory optimization approaches were used to limit the loadings on the leading edge of a variable-sweep planform design. Experimental force and moment tests were made on two constrained-camber wings, a flat uncambered wing, and an optimum design with no constraints. Results indicate that vortex strength and separation regions were mildest on the severely and moderately constrained wings.

  5. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
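
    A minimal version of this formulation can be written directly in a convex modeling language. The sketch below (assuming the cvxpy package and a simplified torque model tau = J·alpha) fits a symmetric inertia matrix under LMI bounds; the paper's regressor built from rotation test data is more involved.

    ```python
    import numpy as np
    import cvxpy as cp

    # Constrained least squares: fit symmetric J to noisy torque observations
    # subject to explicit LMI bounds  0.5*I << J << 10*I.
    rng = np.random.default_rng(4)
    J_true = np.array([[4.0, 0.3, 0.1],
                       [0.3, 3.0, 0.2],
                       [0.1, 0.2, 2.0]])
    alphas = rng.normal(size=(50, 3))                        # angular accelerations
    taus = alphas @ J_true + 0.05 * rng.normal(size=(50, 3))  # noisy torques

    J = cp.Variable((3, 3), symmetric=True)
    objective = cp.Minimize(cp.sum_squares(alphas @ J - taus))
    constraints = [J >> 0.5 * np.eye(3),      # lower LMI bound
                   J << 10.0 * np.eye(3)]     # upper LMI bound
    cp.Problem(objective, constraints).solve()
    print("estimated inertia matrix:\n", np.round(J.value, 3))
    ```

    Because the problem is a semidefinite program, any feasible solver returns the global optimum, which is the guarantee the abstract highlights over unconstrained fitting.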

  6. Analysis of polarization in hydrogen bonded complexes: An asymptotic projection approach

    NASA Astrophysics Data System (ADS)

    Drici, Nedjoua

    2018-03-01

    The asymptotic projection (AP) technique is used to investigate the polarization effect that arises from the interaction between the relaxed and frozen monomeric charge densities of a set of neutral and charged hydrogen bonded complexes. The AP technique, based on the resolution of the original Kohn-Sham equations, can give an acceptable qualitative description of the polarization effect in neutral complexes. The significant overlap of the electron densities in charged and π-conjugated complexes imposes the further development of a new functional, describing the coupling between constrained and non-constrained electron densities within the AP technique, to provide an accurate representation of the polarization effect.

  7. A New Satellite Aerosol Retrieval Using High Spectral Resolution Oxygen A-Band Measurements

    NASA Astrophysics Data System (ADS)

    Winker, D. M.; Zhai, P.

    2014-12-01

    Efforts to advance current satellite aerosol retrieval capabilities have mostly focused on polarimetric techniques. While there has been much interest in recent decades in the use of the oxygen A-band for retrievals of cloud height or surface pressure, these techniques are mostly based on A-band measurements with relatively low spectral resolution. We report here on a new aerosol retrieval technique based on high-resolution A-band spectra. Our goal is the development of a technique to retrieve aerosol absorption, one of the critical parameters affecting the global radiation budget and one which is currently poorly constrained by satellite measurements. Our approach relies on two key factors: 1) the use of high spectral resolution measurements which resolve the A-band line structure, and 2) the use of co-located lidar profile measurements to constrain the vertical distribution of scatterers. The OCO-2 satellite, launched in July this year and now flying in formation with the CALIPSO satellite, carries an oxygen A-band spectrometer with a spectral resolution of 21,000:1. This is sufficient to resolve the A-band line structure, which contains information on atmospheric photon path lengths. Combining channels with oxygen absorption ranging from weak to strong allows the separation of atmospheric and surface scattering. An optimal estimation algorithm for simultaneous retrieval of aerosol optical depth, aerosol absorption, and surface albedo has been developed. Lidar profile data is used for scene identification and to provide constraints on the vertical distribution of scatterers. As calibrated OCO-2 data is not expected until the end of this year, the algorithm has been developed and tested using simulated OCO-2 spectra. The simulations show that AOD and surface albedo can be retrieved with high accuracy. Retrievals of aerosol single scatter albedo are encouraging, showing good performance when AOD is larger than about 0.15. Retrieval performance improves as the albedo of the underlying surface increases. Thus, the technique shows great promise for retrieving the absorption optical depth of aerosols located above clouds. This presentation will discuss the basis of the approach and results of the A-band/lidar retrievals based on simulated data.

  8. Offshore seismicity in the southeastern sea of Korea

    NASA Astrophysics Data System (ADS)

    Park, H.; Kang, T. S.

    2017-12-01

    The offshore southeastern sea area of Korea appears to have slightly higher seismicity compared to the rest of the Korean Peninsula. According to the earthquake reports of the Korea Meteorological Administration (KMA), earthquakes over ML 3 have persistently occurred more than once a year during the last ten years. In this study, we used 33 events in the KMA catalog, which occurred offshore of Ulsan (35.0°N-35.85°N, 129.45°E-130.75°E) from April 2007 to June 2017, as mother earthquakes. The waveform matching filter technique was used to precisely detect microearthquakes (child earthquakes) that occurred after the mother earthquakes; it is the optimal linear filter for maximizing the signal-to-noise ratio in the presence of additive stochastic noise. Initially, we used the continuous seismic waveforms available from KMA and the Korea Institute of Geoscience and Mineral Resources. We added the data of F-net to increase the reliability of the results. The detected events were located using P- and S-wave arrival times. The hypocentral depths were constrained by an iterative optimal solution technique which has proven effective under poorly known velocity structure. Focal mechanism solutions were obtained from the analysis of P-wave first-motion polarities. Seismicity patterns of the microearthquakes and their focal mechanisms were analyzed to understand their seismogenic characteristics and their relationship to subsea seismotectonic structures.

  9. WE-FG-207B-05: Iterative Reconstruction Via Prior Image Constrained Total Generalized Variation for Spectral CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, S; Zhang, Y; Ma, J

    Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem, which balances the data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods applied to images with the first-order derivative, the higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error of the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT. We have also developed an alternating optimization algorithm and numerically demonstrated the merits of the approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.

  10. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to the balance between supply and demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal-cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the solution obtained by PSO, as sketched below.
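
    A compact sketch of the PSOGA hybrid on a tiny transportation instance follows; the penalty weight, swarm coefficients, and mutation rate are illustrative assumptions, and the balance constraints are enforced softly by a penalty rather than by the exact methods named above.

    ```python
    import numpy as np

    # Toy PSOGA for a 2x3 transportation problem: shipments x[i][j] >= 0;
    # supply/demand balance enforced with a quadratic penalty so particles
    # can roam; GA-style mutation perturbs a few particles for diversity.
    rng = np.random.default_rng(5)
    cost = np.array([[4.0, 6.0, 9.0],
                     [5.0, 3.0, 7.0]])
    supply, demand = np.array([30.0, 40.0]), np.array([20.0, 25.0, 25.0])

    def fitness(x):                      # x has shape (..., 2, 3)
        pen = (np.sum((x.sum(-1) - supply)**2, -1)
               + np.sum((x.sum(-2) - demand)**2, -1))
        return np.sum(cost * x, axis=(-2, -1)) + 100.0 * pen

    P = 60
    x = rng.uniform(0, 30, (P, 2, 3)); v = np.zeros_like(x)
    pb, pv = x.copy(), fitness(x); gb = pb[np.argmin(pv)]
    for _ in range(500):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.72 * v + 1.49 * r1 * (pb - x) + 1.49 * r2 * (gb - x)
        x = np.clip(x + v, 0.0, None)
        mut = rng.random(P) < 0.1                      # GA mutation operator
        x[mut] += rng.normal(0.0, 2.0, (mut.sum(), 2, 3))
        x = np.clip(x, 0.0, None)
        f = fitness(x)
        better = f < pv
        pb[better], pv[better] = x[better], f[better]
        gb = pb[np.argmin(pv)]

    print("best cost:", fitness(gb), "\nshipments:\n", np.round(gb, 2))
    ```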

  11. Face verification with balanced thresholds.

    PubMed

    Yan, Shuicheng; Xu, Dong; Tang, Xiaoou

    2007-01-01

    The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.

  12. Using prior information to separate the temperature response to greenhouse gas forcing from that of aerosols - Estimating the transient climate response

    NASA Astrophysics Data System (ADS)

    Schurer, Andrew; Hegerl, Gabriele

    2016-04-01

    The evaluation of the transient climate response (TCR) is of critical importance to policy makers, as it can be used to calculate a simple estimate of the expected warming given predicted greenhouse gas emissions. Previous studies using optimal detection techniques have been able to estimate a TCR value from the historical record using simulations from some of the models which took part in the Coupled Model Intercomparison Project Phase 5 (CMIP5), but have found that others give unconstrained results. This is at least partly due to degeneracy between the greenhouse gas and aerosol signals, which makes separation of the temperature response to these forcings problematic. Here we re-visit this important topic by using an adapted optimal detection analysis within a Bayesian framework. We account for observational uncertainty by the use of an ensemble of instrumental observations, and for model uncertainty by combining the results from several different models. This framework allows the use of prior information, which is found to help separate the response to the different forcings, leading to a more constrained estimate of TCR.

  13. Sensitivity of Space Station alpha joint robust controller to structural modal parameter variations

    NASA Technical Reports Server (NTRS)

    Kumar, Renjith R.; Cooper, Paul A.; Lim, Tae W.

    1991-01-01

    The photovoltaic array sun tracking control system of Space Station Freedom is described. A synthesis procedure for determining optimized values of the design variables of the control system is developed using a constrained optimization technique. The synthesis is performed to provide a given level of stability margin, to achieve the most responsive tracking performance, and to meet other design requirements. Performance of the baseline design, which is synthesized using predicted structural characteristics, is discussed, and the sensitivity of the stability margin is examined for variations of the frequencies, mode shapes and damping ratios of the dominant structural modes. The design provides enough robustness to tolerate a sizeable error in the predicted modal parameters. A study was made of the sensitivity of performance indicators as the modal parameters of the dominant modes vary. The design variables are resynthesized for varying modal parameters in order to achieve the most responsive tracking performance while satisfying the design requirements. This procedure of reoptimizing the design parameters would be useful in improving control system performance if accurate model data are provided.

  14. A real-time approximate optimal guidance law for flight in a plane

    NASA Technical Reports Server (NTRS)

    Feeley, Timothy S.; Speyer, Jason L.

    1990-01-01

    A real-time guidance scheme is presented for the problem of maximizing the payload into orbit subject to the equations of motion of a rocket over a nonrotating spherical earth. The flight is constrained to a path in the equatorial plane while reaching an orbital altitude at orbital injection speeds. The dynamics of the problem can be separated into primary and perturbation effects by a small parameter, epsilon, which is the ratio of the atmospheric scale height to the radius of the earth. The Hamilton-Jacobi-Bellman or dynamic programming equation is expanded in an asymptotic series where the zeroth-order term (epsilon = 0) can be obtained in closed form. The neglected perturbation terms are included in the higher-order terms of the expansion, which are determined from the solution of first-order linear partial differential equations requiring only integrations which are quadratures. The quadratures can be performed rapidly with emerging computer capability, so that real-time approximate optimization can be used to construct the launch guidance law. The application of this technique to flight in three-dimensions is made apparent from the solution presented.

  15. An efficient method of reducing glass dispersion tolerance sensitivity

    NASA Astrophysics Data System (ADS)

    Sparrold, Scott W.; Shepard, R. Hamilton

    2014-12-01

    Constraining the Seidel aberrations of optical surfaces is a common technique for relaxing tolerance sensitivities in the optimization process. We offer an observation that a lens's Abbe number tolerance is directly related to the magnitude by which its longitudinal and transverse color are permitted to vary in production. Based on this observation, we propose a computationally efficient and easy-to-use merit function constraint for relaxing dispersion tolerance sensitivity. Using the relationship between an element's chromatic aberration and dispersion sensitivity, we derive a fundamental limit for lens scale and power that is capable of achieving high production yield for a given performance specification, which provides insight on the point at which lens splitting or melt fitting becomes necessary. The theory is validated by comparing its predictions to a formal tolerance analysis of a Cooke Triplet, and then applied to the design of a 1.5x visible linescan lens to illustrate optimization for reduced dispersion sensitivity. A selection of lenses in high volume production is then used to corroborate the proposed method of dispersion tolerance allocation.

  16. Optimal and fast E/B separation with a dual messenger field

    NASA Astrophysics Data System (ADS)

    Kodi Ramanah, Doogesh; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-05-01

    We adapt our recently proposed dual messenger algorithm for spin field reconstruction and showcase its efficiency and effectiveness in Wiener filtering polarized cosmic microwave background (CMB) maps. Unlike conventional preconditioned conjugate gradient (PCG) solvers, our preconditioner-free technique can deal with high-resolution joint temperature and polarization maps with inhomogeneous noise distributions and arbitrary mask geometries with relative ease. Various convergence diagnostics illustrate the high quality of the dual messenger reconstruction. In contrast, the PCG implementation fails to converge to a reasonable solution for the specific problem considered. The implementation of the dual messenger method is straightforward and guarantees numerical stability and convergence. We show how the algorithm can be modified to generate fluctuation maps, which, combined with the Wiener filter solution, yield unbiased constrained signal realizations, consistent with observed data. This algorithm presents a pathway to exact global analyses of high-resolution and high-sensitivity CMB data for a statistically optimal separation of E and B modes. It is therefore relevant for current and next-generation CMB experiments, in the quest for the elusive primordial B-mode signal.

  17. An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization

    DTIC Science & Technology

    2012-08-17

    Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly, but not exclusively, total variation minimization), significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].

  18. Method and apparatus for dispensing small quantities of mercury from evacuated and sealed glass capsules

    DOEpatents

    Grossman, M.W.; George, W.A.; Pai, R.Y.

    1985-08-13

    A technique is disclosed for opening an evacuated and sealed glass capsule containing a material to be dispensed that has a relatively high vapor pressure, such as mercury. The capsule is typically disposed in a discharge tube envelope. The technique involves the use of a first light source imaged along the capsule and a second light source imaged across the capsule, substantially transversely to the imaging of the first light source. Means are provided for constraining a segment of the capsule along its length, with the constraining means positioned to correspond with the imaging of the second light source. These light sources are preferably incandescent projection lamps. The constraining means is preferably a multiple-looped wire support. 6 figs.

  19. Optimization of spatiotemporally fractionated radiotherapy treatments with bounds on the achievable benefit

    NASA Astrophysics Data System (ADS)

    Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid

    2018-01-01

    Spatiotemporal fractionation schemes, that is, treatments delivering different dose distributions in different fractions, can potentially lower treatment side effects without compromising tumor control. This can be achieved by hypofractionating parts of the tumor while delivering approximately uniformly fractionated doses to the surrounding tissue. Plan optimization for such treatments is based on biologically effective dose (BED); however, this leads to computationally challenging nonconvex optimization problems. Optimization methods that are in current use yield only locally optimal solutions, and it has hitherto been unclear whether these plans are close to the global optimum. We present an optimization framework to compute rigorous bounds on the maximum achievable normal tissue BED reduction for spatiotemporal plans. The approach is demonstrated on liver tumors, where the primary goal is to reduce mean liver BED without compromising any other treatment objective. The BED-based treatment plan optimization problems are formulated as quadratically constrained quadratic programming (QCQP) problems. First, a conventional, uniformly fractionated reference plan is computed using convex optimization. Then, a second, nonconvex, QCQP model is solved to local optimality to compute a spatiotemporally fractionated plan that minimizes mean liver BED, subject to the constraints that the plan is no worse than the reference plan with respect to all other planning goals. Finally, we derive a convex relaxation of the second model in the form of a semidefinite programming problem, which provides a rigorous lower bound on the lowest achievable mean liver BED. The method is presented on five cases with distinct geometries. The computed spatiotemporal plans achieve 12-35% mean liver BED reduction over the optimal uniformly fractionated plans. This reduction corresponds to 79-97% of the gap between the mean liver BED of the uniform reference plans and our lower bounds on the lowest achievable mean liver BED. The results indicate that spatiotemporal treatments can achieve substantial reductions in normal tissue dose and BED, and that local optimization techniques provide high-quality plans that are close to realizing the maximum potential normal tissue dose reduction.
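
    The bounding idea can be reproduced in miniature: for the toy nonconvex QCQP of minimizing x^T Q x subject to x^T x = 1, the semidefinite relaxation yields a rigorous lower bound (here the relaxation is tight and equals the smallest eigenvalue of Q). A sketch using cvxpy, assuming that package is available; this is far smaller than the paper's treatment-planning models.

      import numpy as np
      import cvxpy as cp  # assumed available; SCS (bundled with cvxpy) can solve the SDP

      # Toy nonconvex QCQP:  min x^T Q x  s.t.  x^T x = 1.  Lifting X = x x^T
      # and dropping rank(X) = 1 gives the semidefinite relaxation
      #   min tr(Q X)  s.t.  tr(X) = 1, X >> 0,
      # whose value is a rigorous lower bound (here tight: lambda_min(Q)).
      rng = np.random.default_rng(1)
      M = rng.standard_normal((4, 4))
      Q = M + M.T

      X = cp.Variable((4, 4), symmetric=True)
      prob = cp.Problem(cp.Minimize(cp.trace(Q @ X)),
                        [cp.trace(X) == 1, X >> 0])
      prob.solve()
      print("SDP lower bound:", prob.value)
      print("lambda_min(Q)  :", np.linalg.eigvalsh(Q).min())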

  20. Parametric study of a canard-configured transport using conceptual design optimization

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.

    1985-01-01

    Constrained-parameter optimization is used to perform optimal conceptual design of both canard and conventional configurations of a medium-range transport. A number of design constants and design constraints are systematically varied to compare the sensitivities of canard and conventional configurations to a variety of technology assumptions. Main-landing-gear location and canard surface high-lift performance are identified as critical design parameters for a statically stable, subsonic, canard-configured transport.

  1. A robust approach to chance constrained optimal power flow with renewable generation

    DOE PAGES

    Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.

    2016-09-01

    Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume that the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to be within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.

  2. Spacecraft Mission Design for the Mitigation of the 2017 PDC Hypothetical Asteroid Threat

    NASA Technical Reports Server (NTRS)

    Barbee, Brent W.; Sarli, Bruno V.; Lyzhoft, Josh; Chodas, Paul W.; Englander, Jacob A.

    2017-01-01

    This paper presents detailed mission design analysis results for the 2017 Planetary Defense Conference (PDC) Hypothetical Asteroid Impact Scenario, documented at https://cneos.jpl.nasa.gov/pdc/s/pdc17. The mission design includes campaigns for both reconnaissance (flyby or rendezvous) of the asteroid (to characterize it and the nature of the threat it poses to Earth) and mitigation of the asteroid, via kinetic impactor deflection, nuclear explosive device (NED) deflection, or NED disruption. Relevant scenario parameters are varied to assess the sensitivity of the design outcome, such as asteroid bulk density, asteroid diameter, momentum enhancement factor, spacecraft launch vehicle, and mitigation system type. Different trajectory types are evaluated in the mission design process, from purely ballistic to those involving optimal midcourse maneuvers, planetary gravity assists, and/or low-thrust solar electric propulsion. The trajectory optimization is targeted around peak deflection points that were found through a novel linear numerical technique. The optimization process includes constraint parameters such as Earth departure date, launch declination, spacecraft-asteroid relative velocity and solar phase angle, spacecraft dry mass, minimum/maximum spacecraft distances from the Sun and Earth, and Earth-spacecraft communications line of sight. Results show that one of the best options for the 2017 PDC deflection is a solar-electric-propulsion rendezvous mission with a single spacecraft using an NED for the deflection.

  3. System Synthesis in Preliminary Aircraft Design using Statistical Methods

    NASA Technical Reports Server (NTRS)

    DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.

    1996-01-01

    This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point-design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher-fidelity disciplinary analyses to conceptual aircraft synthesis and to provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
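
    A minimal illustration of the response-surface step, with a made-up function standing in for an expensive aerodynamic analysis; the sample size and the quadratic model form are arbitrary.

      import numpy as np

      # Response-surface step: fit a full quadratic in two design variables to
      # a batch of "experiments"; the cheap polynomial then replaces the
      # expensive analysis inside the optimizer.
      rng = np.random.default_rng(8)
      X = rng.uniform(-1, 1, size=(30, 2))               # DOE sample points
      x1, x2 = X[:, 0], X[:, 1]
      y = 1 + 2*x1 - x2 + 0.5*x1*x2 + x2**2 \
          + 0.01 * rng.standard_normal(30)               # simulated responses

      A = np.column_stack([np.ones(30), x1, x2, x1*x2, x1**2, x2**2])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      print("response surface coefficients:", np.round(coef, 3))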

  4. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…

  5. Constrained Optimization Problems in Cost and Managerial Accounting--Spreadsheet Tools

    ERIC Educational Resources Information Center

    Amlie, Thomas T.

    2009-01-01

    A common problem addressed in Managerial and Cost Accounting classes is that of selecting an optimal production mix given scarce resources. That is, if a firm produces a number of different products, and is faced with scarce resources (e.g., limitations on labor, materials, or machine time), what combination of products yields the greatest profit…
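
    A minimal spreadsheet-style product-mix example, using scipy.optimize.linprog; the profits and resource limits are hypothetical.

      from scipy.optimize import linprog

      # Hypothetical product mix: profits 30 and 50 per unit; two scarce
      # resources.  linprog minimizes, so negate profits to maximize.
      c = [-30.0, -50.0]                  # maximize 30 x1 + 50 x2
      A_ub = [[2.0, 4.0],                 # labor:   2 x1 + 4 x2 <= 100
              [3.0, 2.0]]                 # machine: 3 x1 + 2 x2 <= 90
      b_ub = [100.0, 90.0]
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
      print("optimal mix:", res.x, "max profit:", -res.fun)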

  6. Distance Metric Learning via Iterated Support Vector Machines.

    PubMed

    Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei

    2017-07-11

    Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.

  7. Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms

    DOE PAGES

    Roald, Line Alnaes; Andersson, Goran

    2017-08-29

    Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. Here, in this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. Additionally, we first discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterativemore » solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample based reformulations that do not require any approximation or relaxation of the AC power flow equations.« less

  8. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of analytical models, which describe the corresponding optimization problem, with an exact global optimization code named IBBA, developed by the second author, to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.

  9. Exact and explicit optimal solutions for trajectory planning and control of single-link flexible-joint manipulators

    NASA Technical Reports Server (NTRS)

    Chen, Guanrong

    1991-01-01

    An optimal trajectory planning problem for a single-link, flexible joint manipulator is studied. A global feedback-linearization is first applied to formulate the nonlinear inequality-constrained optimization problem in a suitable way. Then, an exact and explicit structural formula for the optimal solution of the problem is derived and the solution is shown to be unique. It turns out that the optimal trajectory planning and control can be done off-line, so that the proposed method is applicable to both theoretical analysis and real time tele-robotics control engineering.

  10. Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks

    NASA Technical Reports Server (NTRS)

    Lee, Charles H.; Cheung, Kar-Ming

    2012-01-01

    In this paper, we propose to solve the constrained optimization problem in two phases. The first phase uses heuristic methods such as the ant colony method, particle swarming optimization, and genetic algorithm to seek a near optimal solution among a list of feasible initial populations. The final optimal solution can be found by using the solution of the first phase as the initial condition to the SQP algorithm. We demonstrate the above problem formulation and optimization schemes with a large-scale network that includes the DSN ground stations and a number of spacecraft of deep space missions.
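
    A toy two-phase illustration in the spirit of this abstract (not the DSN scheduling code): a crude random-population search stands in for the heuristic phase and seeds scipy's SLSQP, an SQP implementation; the objective, constraint, and population size are made up.

      import numpy as np
      from scipy.optimize import minimize

      # Phase 1: crude global search over random candidates (standing in for
      # a GA / ant colony / particle swarm); phase 2: polish with SQP (SLSQP).
      def objective(x):
          return (x[0] - 1.0)**2 + 10.0 * np.sin(3.0 * x[1])**2 + x[1]**2

      cons = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] - x[1]}]  # x0 + x1 <= 4
      bounds = [(-5, 5), (-5, 5)]

      rng = np.random.default_rng(2)
      pop = rng.uniform(-5, 5, size=(200, 2))
      feasible = pop[pop.sum(axis=1) <= 4.0]
      x0 = feasible[np.argmin([objective(x) for x in feasible])]   # best seed

      res = minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=cons)
      print("seed:", x0, "-> polished optimum:", res.x, res.fun)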

  11. Design Optimization Toolkit: Users' Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro

    The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this inherent flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.

  12. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    PubMed Central

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916

  13. A constrained robust least squares approach for contaminant release history identification

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.

    2006-04-01

    Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solvers. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows for additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, CRLS is found to give much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
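
    The value of the nonnegativity constraint alone can be seen on a small ill-conditioned deconvolution toy; the robust, uncertainty-set part of CRLS is not sketched here, and the response matrix and source are hypothetical.

      import numpy as np
      from scipy.optimize import nnls

      # Ill-conditioned source recovery: nonnegativity stabilizes the estimate
      # relative to plain least squares.
      rng = np.random.default_rng(3)
      t = np.linspace(0, 1, 40)
      centers = np.linspace(0, 1, 30)
      A = np.exp(-np.subtract.outer(t, centers)**2 / 0.01)   # smooth, near-singular response matrix
      x_true = np.zeros(30)
      x_true[[8, 20]] = [1.0, 0.5]                           # sparse nonnegative source
      b = A @ x_true + 1e-3 * rng.standard_normal(40)

      x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)           # unconstrained: oscillatory
      x_nn, _ = nnls(A, b)                                   # nonnegative: stable
      print("LS solution range  :", x_ls.min(), x_ls.max())
      print("NNLS solution range:", x_nn.min(), x_nn.max())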

  14. Planarity constrained multi-view depth map reconstruction for urban scenes

    NASA Astrophysics Data System (ADS)

    Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie

    2018-05-01

    Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes, where man-made regular shapes are often present. To address this challenge, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms the popular multi-view depth map reconstruction, with accuracy twice as good on the aerial datasets, and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity of piecewise flat structures in urban scenes and restore the edges in depth-discontinuous areas.

  15. Constrained clusters of gene expression profiles with pathological features.

    PubMed

    Sese, Jun; Kurokawa, Yukinori; Monden, Morito; Kato, Kikuya; Morishita, Shinichi

    2004-11-22

    Gene expression profiles should be useful in distinguishing variations in disease, since they reflect accurately the status of cells. The primary clustering of gene expression reveals the genotypes that are responsible for the proximity of members within each cluster, while further clustering elucidates the pathological features of the individual members of each cluster. However, since the first clustering process and the second classification step, in which the features are associated with clusters, are performed independently, the initial set of clusters may omit genes that are associated with pathologically meaningful features. Therefore, it is important to devise a way of identifying gene expression clusters that are associated with pathological features. We present the novel technique of 'itemset constrained clustering' (IC-Clustering), which computes the optimal cluster that maximizes the interclass variance of gene expression between groups, which are divided according to the restriction that only divisions that can be expressed using common features are allowed. This constraint automatically labels each cluster with a set of pathological features which characterize that cluster. When applied to liver cancer datasets, IC-Clustering revealed informative gene expression clusters, which could be annotated with various pathological features, such as 'tumor' and 'man', or 'except tumor' and 'normal liver function'. In contrast, the k-means method overlooked these clusters.

  16. Wavelet evolutionary network for complex-constrained portfolio rebalancing

    NASA Astrophysics Data System (ADS)

    Suganya, N. C.; Vijayalakshmi Pai, G. A.

    2012-07-01

    The portfolio rebalancing problem deals with resetting the proportion of different assets in a portfolio in response to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class, and proportional transaction cost constraints. In this study, a new heuristic algorithm named wavelet evolutionary network (WEN) is proposed for the solution of the complex-constrained portfolio rebalancing problem. Initially, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Secondly, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, the WEN strategy with logical procedures is employed to find the initial proportion of investment in the portfolio of assets and also to rebalance them after a certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002-March 2007) data sets. The results obtained using WEN are compared with its only existing counterpart, the Hopfield evolutionary network (HEN) strategy, and verify that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are carried out to prove the robustness and efficiency of WEN over the HEN strategy.

  17. Estimating free-body modal parameters from tests of a constrained structure

    NASA Technical Reports Server (NTRS)

    Cooley, Victor M.

    1993-01-01

    Hardware advances in suspension technology for ground tests of large space structures provide near on-orbit boundary conditions for modal testing. Further advances in determining free-body modal properties of constrained large space structures have been made, on the analysis side, by using time domain parameter estimation and perturbing the stiffness of the constraints over multiple sub-tests. In this manner, passive suspension constraint forces, which are fully correlated and therefore not usable for spectral averaging techniques, are made effectively uncorrelated. The technique is demonstrated with simulated test data.

  18. Quantum dynamics by the constrained adiabatic trajectory method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclerc, A.; Jolicard, G.; Guerin, S.

    2011-03-15

    We develop the constrained adiabatic trajectory method (CATM), which allows one to solve the time-dependent Schroedinger equation constraining the dynamics to a single Floquet eigenstate, as if it were adiabatic. This constrained Floquet state (CFS) is determined from the Hamiltonian modified by an artificial time-dependent absorbing potential whose forms are derived according to the initial conditions. The main advantage of this technique for practical implementation is that the CFS is easy to determine even for large systems since its corresponding eigenvalue is well isolated from the others through its imaginary part. The properties and limitations of the CATM are explored through simple examples.

  19. Helicopter Control Energy Reduction Using Moving Horizontal Tail

    PubMed Central

    Oktay, Tugrul; Sal, Firat

    2015-01-01

    The helicopter moving horizontal tail (MHT) strategy is applied in order to save helicopter flight control system (FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. Equations of the MHT are integrated into these models and they are linearized together around the straight level flight condition. A specific variance constrained control strategy, namely output variance constrained control (OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (SPSA). Closed-loop analyses are performed in order to observe the improvement over classical controls. PMID:26180841

  20. Type-Separated Bytecode - Its Construction and Evaluation

    NASA Astrophysics Data System (ADS)

    Adler, Philipp; Amme, Wolfram

    A lot of constrained systems still use interpreters to run mobile applications written in Java. These interpreters demand only modest resources. On the other hand, it is difficult to apply optimizations at application runtime. Annotations could be used to achieve a simpler and faster code analysis, which would allow optimizations even for interpreters on constrained devices. Unfortunately, there is no viable way of transporting annotations to, and verifying them at, the code consumer. In this paper we present type-separated bytecode as an intermediate representation which allows annotations to be safely transported as type extensions. We have implemented several versions of this system and show that it is possible to obtain performance comparable to Java bytecode, even though we use a type-separated system with annotations.

  1. A chance-constrained stochastic approach to intermodal container routing problems.

    PubMed

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost.
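
    For a single normally distributed travel time, a chance constraint of this kind has a familiar deterministic equivalent, sketched below with hypothetical numbers; the paper's binary integer model and hybrid heuristic are not reproduced.

      from scipy.stats import norm

      # Deterministic equivalent of one chance constraint: if total travel
      # time T ~ Normal(mu, sigma^2), then
      #     P(T <= deadline) >= p   <=>   mu + z_p * sigma <= deadline,
      # where z_p is the standard normal p-quantile.
      mu, sigma = 52.0, 4.0          # hypothetical mean and std of hours en route
      deadline, p = 60.0, 0.95
      z_p = norm.ppf(p)
      ok = mu + z_p * sigma <= deadline
      print(f"on-time with prob >= {p}: {ok} (need {mu + z_p * sigma:.1f} <= {deadline})")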

  2. A chance-constrained stochastic approach to intermodal container routing problems

    PubMed Central

    Zhao, Yi; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of stochastic variables and chance-constraints on the optimal solution and total cost. PMID:29438389

  3. Optimization by nonhierarchical asynchronous decomposition

    NASA Technical Reports Server (NTRS)

    Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.

    1992-01-01

    Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.

  4. An Optimization Framework for Dynamic, Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara

    2003-01-01

    This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments, utility, and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.

  5. Simple summation rule for optimal fixation selection in visual search.

    PubMed

    Najemnik, Jiri; Geisler, Wilson S

    2009-06-01

    When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3):4, 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher, and produces fixation statistics similar to humans.
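
    The ELM rule itself is essentially a one-liner; here is a sketch with made-up posterior and detectability maps.

      import numpy as np
      from scipy.signal import fftconvolve

      # ELM-style fixation selection: the next fixation maximizes the current
      # posterior over target location convolved with the squared
      # detectability map.  Both maps below are invented for illustration.
      rng = np.random.default_rng(4)
      posterior = rng.random((64, 64))
      posterior /= posterior.sum()                     # normalize to a distribution

      y, x = np.mgrid[0:64, 0:64]
      d_map = np.exp(-((x - 32)**2 + (y - 32)**2) / (2 * 15.0**2))  # falls off with eccentricity

      score = fftconvolve(posterior, d_map**2, mode="same")
      next_fix = np.unravel_index(np.argmax(score), score.shape)
      print("next fixation (row, col):", next_fix)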

  6. An effective parameter optimization with radiation balance constraints in the CAM5

    NASA Astrophysics Data System (ADS)

    Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.

    2017-12-01

    Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimization, so the simulation results obtained with the optimal parameters may not satisfy conditions that the model must maintain. In this study, the radiation balance constraint is taken as an example and incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this optimization problem with constraints. In our experiment, we use the CAM5 atmosphere model in a 5-yr AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take the synthesized metrics using global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously treat the conditions that FLUT and FSNTOA should satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are set to be approximately equal to 240 Wm-2 in CAM5. Experimental results show that the synthesized metrics are 13.6% better than the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied, which is clearly better than the annual-average FLUT obtained with the default parameters. The FSNTOA has a slight deviation from the observed value, but the relative error is less than 7.7‰.

  7. Matrix crack extension at a frictionally constrained fiber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Selvadurai, A.P.S.

    1994-07-01

    The paper presents the application of a boundary element scheme to the study of the behavior of a penny-shaped matrix crack which occurs at an isolated fiber that is frictionally constrained. An incremental technique is used to examine the progression of self-similar extension of the matrix crack due to the axial straining of the composite region. The extension of the crack occurs at the attainment of the critical stress intensity factor in the crack opening mode. Iterative techniques are used to determine the extent of crack enlargement and the occurrence of slip and locked regions in the frictional fiber-matrix interface. The studies illustrate the role of fiber-matrix interface friction on the development of stable cracks in such frictionally constrained zones. The methodologies are applied to typical isolated fiber configurations of interest to fragmentation tests.

  8. Optimization-based power management of hybrid power systems with applications in advanced hybrid electric vehicles and wind farms with battery storage

    NASA Astrophysics Data System (ADS)

    Borhan, Hoseinali

    Modern hybrid electric vehicles and many stationary renewable power generation systems combine multiple power generating and energy storage devices to achieve an overall system-level efficiency and flexibility which is higher than their individual components. The power or energy management control, "brain" of these "hybrid" systems, determines adaptively and based on the power demand the power split between multiple subsystems and plays a critical role in overall system-level efficiency. This dissertation proposes that a receding horizon optimal control (aka Model Predictive Control) approach can be a natural and systematic framework for formulating this type of power management controls. More importantly the dissertation develops new results based on the classical theory of optimal control that allow solving the resulting optimal control problem in real-time, in spite of the complexities that arise due to several system nonlinearities and constraints. The dissertation focus is on two classes of hybrid systems: hybrid electric vehicles in the first part and wind farms with battery storage in the second part. The first part of the dissertation proposes and fully develops a real-time optimization-based power management strategy for hybrid electric vehicles. Current industry practice uses rule-based control techniques with "else-then-if" logic and look-up maps and tables in the power management of production hybrid vehicles. These algorithms are not guaranteed to result in the best possible fuel economy and there exists a gap between their performance and a minimum possible fuel economy benchmark. Furthermore, considerable time and effort are spent calibrating the control system in the vehicle development phase, and there is little flexibility in real-time handling of constraints and re-optimization of the system operation in the event of changing operating conditions and varying parameters. In addition, a proliferation of different powertrain configurations may result in the need for repeated control system redesign. To address these shortcomings, we formulate the power management problem as a nonlinear and constrained optimal control problem. Solution of this optimal control problem in real-time on chronometric- and memory-constrained automotive microcontrollers is quite challenging; this computational complexity is due to the highly nonlinear dynamics of the powertrain subsystems, mixed-integer switching modes of their operation, and time-varying and nonlinear hard constraints that system variables should satisfy. The main contribution of the first part of the dissertation is that it establishes methods for systematic and step-by step improvements in fuel economy while maintaining the algorithmic computational requirements in a real-time implementable framework. More specifically a linear time-varying model predictive control approach is employed first which uses sequential quadratic programming to find sub-optimal solutions to the power management problem. Next the objective function is further refined and broken into a short and a long horizon segments; the latter approximated as a function of the state using the connection between the Pontryagin minimum principle and Hamilton-Jacobi-Bellman equations. The power management problem is then solved using a nonlinear MPC framework with a dynamic programming solver and the fuel economy is further improved. 
Typical simplifying academic assumptions are minimal throughout this work, thanks to close collaboration with research scientists at Ford research labs and their stringent requirement that the proposed solutions be tested on high-fidelity production models. Simulation results on a high-fidelity model of a hybrid electric vehicle over multiple standard driving cycles reveal the potential for substantial fuel economy gains. To address the control calibration challenges, we also present a novel and fast calibration technique utilizing parallel computing techniques. The second part of this dissertation presents an optimization-based control strategy for the power management of a wind farm with battery storage. The strategy seeks to minimize the error between the power delivered by the wind farm with battery storage and the power demand from an operator. In addition, the strategy attempts to maximize battery life. The control strategy has two main stages. The first stage produces a family of control solutions that minimize the power error subject to the battery constraints over an optimization horizon. These solutions are parameterized by a given value for the state of charge at the end of the optimization horizon. The second stage screens the family of control solutions to select one attaining an optimal balance between power error and battery life. The battery life model used in this stage is a weighted Amp-hour (Ah) throughput model. The control strategy is modular, allowing for more sophisticated optimization models in the first stage, or more elaborate battery life models in the second stage. The strategy is implemented in real-time in the framework of Model Predictive Control (MPC).
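
    A heavily simplified sketch of the first-stage problem for the wind-farm case, using cvxpy (assumed available); the horizon, forecasts, and battery limits are hypothetical, and the battery-life screening of stage two is omitted.

      import numpy as np
      import cvxpy as cp  # assumed available

      # Stage one over a single horizon: choose battery power to track demand
      # subject to state-of-charge limits.
      H = 24
      wind = 50 + 20 * np.sin(np.linspace(0, 3 * np.pi, H))   # forecast wind power
      demand = np.full(H, 60.0)                               # operator demand

      p = cp.Variable(H)            # battery power (+ discharge, - charge)
      soc = cp.Variable(H + 1)      # state of charge
      constraints = [soc[0] == 50,
                     soc[1:] == soc[:-1] - p,                 # simple SOC dynamics
                     soc >= 10, soc <= 90,
                     p >= -20, p <= 20]
      err = wind + p - demand
      prob = cp.Problem(cp.Minimize(cp.sum_squares(err)), constraints)
      prob.solve()
      print("rms power error:", np.sqrt(prob.value / H))

    In a receding-horizon (MPC) implementation, only the first move p[0] would be applied before re-solving with updated forecasts.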

  9. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As the sample sizes available to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also through fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used and that are too complex to make inferences about the underlying process.

  10. BATSE imaging survey of the Galactic plane

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Barret, D.; Bloser, P. F.; Zhang, S. N.; Robinson, C.; Harmon, B. A.

    1997-01-01

    The Burst and Transient Source Experiment (BATSE) onboard the Compton Gamma Ray Observatory (CGRO) provides all-sky monitoring capability, occultation analysis and occultation imaging, which enables new and fainter sources to be searched for in relatively crowded fields. The occultation imaging technique is used in combination with an automated BATSE image scanner, allowing an analysis of large data sets of occultation images for detections of candidate sources and for the construction of source catalogs and databases. This automated image scanner system is being tested on archival data in order to optimize the search and detection thresholds. The image search system, its calibration results, and preliminary survey results on archival data are reported. The aim of the survey is to identify a complete sample of black hole candidates in the Galaxy and constrain the number of black hole systems and neutron star systems.

  11. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
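
    A compact instance of the three ingredients (MM surrogate, penalty continuation, easy projections): projecting a point onto the intersection of the unit ball and a halfspace. The sets, starting point, and penalty schedule are made up.

      import numpy as np

      # Project y onto the intersection of the unit ball and the halfspace
      # {x : x[0] >= 0.5}.  The MM surrogate replaces dist(x, C_i)^2 with
      # ||x - P_i(x_k)||^2, giving a closed-form update; the penalty rho is
      # increased as in the classical penalty method.
      def P_ball(x):
          return x / max(1.0, np.linalg.norm(x))

      def P_half(x):
          z = x.copy()
          z[0] = max(z[0], 0.5)
          return z

      y = np.array([2.0, 2.0])
      x, rho = y.copy(), 1.0
      for _ in range(200):
          proj_sum = P_ball(x) + P_half(x)
          x = (y + rho * proj_sum) / (1.0 + 2.0 * rho)   # argmin of the surrogate
          rho *= 1.05                                    # penalty continuation
      print("projection ~", x, " norm:", np.linalg.norm(x))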

  12. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  13. Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism

    NASA Astrophysics Data System (ADS)

    Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.

    2018-05-01

    The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting both through constant and energy-dependent cross-sections, although this is only true for sufficiently dilute systems in the latter case. We find that the optimal fitting technique consists in simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
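
    A self-contained sketch of the fitting procedure on a synthetic time series; the relative-error window criterion is simplified to a fixed lag window here, and all parameters are made up.

      import numpy as np
      from scipy.optimize import curve_fit

      # Estimate an autocorrelation function from a synthetic AR(1) series,
      # fix the intercept at C(0), and fit a decaying exponential.  The
      # Green-Kubo integral is then proportional to C(0) * tau.
      rng = np.random.default_rng(5)
      n, tau_nom = 20000, 12.0
      x = np.zeros(n)
      for i in range(1, n):
          x[i] = (1 - 1/tau_nom) * x[i-1] + rng.standard_normal()

      def acf(x, maxlag):
          x = x - x.mean()
          return np.array([np.dot(x[:n-l], x[l:]) / (n - l) for l in range(maxlag)])

      C = acf(x, 60)
      lags = np.arange(60, dtype=float)
      model = lambda t, tau: C[0] * np.exp(-t / tau)    # intercept fixed at C(0)
      (tau_fit,), _ = curve_fit(model, lags, C, p0=[5.0])
      print("fitted relaxation time:", tau_fit, " Green-Kubo integral ~", C[0] * tau_fit)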

  14. An Examination of Alternative Multidimensional Scaling Techniques

    ERIC Educational Resources Information Center

    Papazoglou, Sofia; Mylonas, Kostas

    2017-01-01

    The purpose of this study is to compare alternative multidimensional scaling (MDS) methods for constraining the stimuli on the circumference of a circle and on the surface of a sphere. Specifically, the existing MDS-T method for plotting the stimuli on the circumference of a circle is applied, and its extension is proposed for constraining the…

  15. Two-Phase Item Selection Procedure for Flexible Content Balancing in CAT

    ERIC Educational Resources Information Center

    Cheng, Ying; Chang, Hua-Hua; Yi, Qing

    2007-01-01

    Content balancing is an important issue in the design and implementation of computerized adaptive testing (CAT). Content-balancing techniques that have been applied in fixed content balancing, where the number of items from each content area is fixed, include constrained CAT (CCAT), the modified multinomial model (MMM), modified constrained CAT…

  16. A sequential solution for anisotropic total variation image denoising with interval constraints

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Noo, Frédéric

    2017-09-01

    We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV-penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here, uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent the linear attenuation coefficient in the patient body. Our results are simple yet do not appear to be widely known; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
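
    A 1-D sketch of the sequential solution (in 1-D the anisotropic and isotropic TV coincide), assuming scikit-image is available for the unconstrained TV step; the signal, noise level, and TV weight are made up.

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle  # assumed available

      # Sequential solution: unconstrained TV denoising first, then clip to
      # the uniform interval [lo, hi].
      rng = np.random.default_rng(6)
      clean = np.repeat([0.0, 1.0, 0.4, 0.9], 50)           # piecewise-constant signal
      noisy = clean + 0.15 * rng.standard_normal(clean.size)

      u = denoise_tv_chambolle(noisy, weight=0.2)           # unconstrained TV solution
      lo, hi = 0.0, 1.0
      u_constrained = np.clip(u, lo, hi)                    # thresholding step
      print("range before clip:", u.min(), u.max())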

  17. Estimation of Regional Carbon Balance from Atmospheric Observations

    NASA Astrophysics Data System (ADS)

    Denning, S.; Uliasz, M.; Skidmore, J.

    2002-12-01

    Variations in the concentration of CO2 and other trace gases in time and space contain information about sources and sinks at regional scales. Several methods have been developed to quantitatively extract this information from atmospheric measurements. Mass-balance techniques depend on the ability to repeatedly sample the same mass of air, which involves careful attention to airmass trajectories. Inverse and adjoint techniques rely on decomposition of the source field into quasi-independent "basis functions" that are propagated through transport models and then used to synthesize optimal linear combinations that best match observations. A recently proposed method for regional flux estimation from continuous measurements at tall towers relies on time-mean vertical gradients, and requires careful trajectory analysis to map the estimates onto regional ecosystems. Each of these techniques is likely to be applied to measurements made during the North American Carbon Program. We have also explored the use of Bayesian synthesis inversion at regional scales, using a Lagrangian particle dispersion model driven by mesoscale transport fields. Influence functions were calculated for each hypothetical observation in a realistic diurnally-varying flow. These influence functions were then treated as basis functions for the purpose of separate inversions for daytime photosynthesis and 24-hour mean ecosystem respiration. Our results highlight the importance of estimating CO2 fluxes through the lateral boundaries of the model. Respiration fluxes were well constrained by one or two hypothetical towers, regardless of inflow fluxes. Time-varying assimilation fluxes were less well constrained, and much more dependent on knowledge of inflow fluxes. The small net difference between respiration and photosynthesis was the most difficult to determine, being extremely sensitive to knowledge of inflow fluxes. Finally, we explored the feasibility of directly incorporating mid-day concentration values measured at surface-layer flux towers in global inversions for regional surface fluxes. We found that such data would substantially improve the observational constraint on current carbon cycle models, especially if applied selectively to a well-designed subset of the current network of flux towers.

  18. OPUS: Optimal Projection for Uncertain Systems. Volume 1

    DTIC Science & Technology

    1991-09-01

    ...unified control-design methodology that directly addresses these technology issues. In particular, optimal projection theory addresses the need for...effects, and limited identification accuracy in a 1-g environment. The principal contribution of OPUS is a unified design methodology that...characterizing solutions to constrained control-design problems. Transforming OPUS into a practical design methodology requires the development of...

  19. A novel constrained H2 optimization algorithm for mechatronics design in flexure-linked biaxial gantry.

    PubMed

    Ma, Jun; Chen, Si-Lu; Kamaldin, Nazir; Teo, Chek Sing; Tay, Arthur; Mamun, Abdullah Al; Tan, Kok Kiong

    2017-11-01

    The biaxial gantry is widely used in many industrial processes that require high precision Cartesian motion. The conventional rigid-link version suffers from breaking down of joints if any de-synchronization between the two carriages occurs. To prevent this potential risk, a flexure-linked biaxial gantry is designed to allow a small rotation angle of the cross-arm. Nevertheless, the chattering of control signals and inappropriate design of the flexure joint will possibly induce resonant modes of the end-effector. Thus, in this work, the design requirements in terms of tracking accuracy, biaxial synchronization, and resonant mode suppression are achieved by integrated optimization of the stiffness of the flexures and the PID controller parameters for a class of point-to-point reference trajectories with the same dynamics but different steps. From here, an H2 optimization problem with defined constraints is formulated, and an efficient iterative solver is proposed by hybridizing direct computation of the constrained projection gradient and line search for the optimal step. Comparative experimental results obtained on the testbed are presented to verify the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. High-Fidelity Multidisciplinary Design Optimization of Aircraft Configurations

    NASA Technical Reports Server (NTRS)

    Martins, Joaquim R. R. A.; Kenway, Gaetan K. W.; Burdette, David; Jonsson, Eirikur; Kennedy, Graeme J.

    2017-01-01

    To evaluate new airframe technologies we need design tools based on high-fidelity models that consider multidisciplinary interactions early in the design process. The overarching goal of this NRA is to develop tools that enable high-fidelity multidisciplinary design optimization of aircraft configurations, and to apply these tools to the design of high aspect ratio flexible wings. We develop a geometry engine that is capable of quickly generating conventional and unconventional aircraft configurations including the internal structure. This geometry engine features adjoint derivative computation for efficient gradient-based optimization. We also added overset capability to a computational fluid dynamics solver, complete with an adjoint implementation and semiautomatic mesh generation. We also developed an approach to constraining buffet and started the development of an approach for constraining flutter. On the applications side, we developed a new common high-fidelity model for aeroelastic studies of high aspect ratio wings. We performed optimal design trade-offs between fuel burn and aircraft weight for metal, conventional composite, and carbon nanotube composite wings. We also assessed a continuous morphing trailing edge technology applied to high aspect ratio wings. This research resulted in the publication of 26 manuscripts so far, and the developed methodologies were used in two other NRAs.

  1. Reinforcement-Learning-Based Robust Controller Design for Continuous-Time Uncertain Nonlinear Systems Subject to Input Constraints.

    PubMed

    Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai

    2015-07-01

    The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The constrained input, coupled with the inability to accurately identify the uncertainties, motivates the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to the constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control that is often indispensable in RL, no special requirement is imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee the uncertain nonlinear system to be stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.

  2. Constrained Optimization of Average Arrival Time via a Probabilistic Approach to Transport Reliability

    PubMed Central

    Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam

    2015-01-01

    To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
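
    As a toy illustration of the constrained formulation above, the sketch below models transit time with a truncated normal distribution and finds the smallest schedule buffer that meets a punctuality target. The distribution parameters, the target level, and the implicit assumption that cost grows with the buffer are all hypothetical, not values from the paper.

        import numpy as np
        from scipy.stats import truncnorm
        from scipy.optimize import brentq

        mu, sigma, lo, hi = 30.0, 5.0, 20.0, 60.0   # transit time, minutes (made up)
        a, b = (lo - mu) / sigma, (hi - mu) / sigma
        transit = truncnorm(a, b, loc=mu, scale=sigma)

        target = 0.95   # required probability of on-time arrival

        def punctuality_gap(buffer_minutes):
            # P(transit <= mu + buffer) minus the target punctuality level.
            return transit.cdf(mu + buffer_minutes) - target

        # Smallest buffer achieving the target; with cost increasing in the
        # buffer, the constrained optimum sits exactly on this boundary.
        buffer_opt = brentq(punctuality_gap, 0.0, hi - mu)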

  3. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

    Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinants, matrix adjoints, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough that the approximation sin(theta) ≈ theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement averaging technique which reduces the n-measurement case to the two-measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to 3 or fewer measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests which compare the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.
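
    A minimal sketch of the unconstrained idea follows: estimate the attitude matrix by plain least squares (numpy's lstsq, which uses a stable factorization internally) and then check how far the result is from orthogonality. The rotation, reference vectors, and noise level are synthetic, chosen only to mimic the small-angle, low-noise regime the paper describes.

        import numpy as np

        rng = np.random.default_rng(0)

        def rot_z(theta):
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

        A_true = rot_z(0.05)                      # small rotation, about 3 degrees
        refs = rng.standard_normal((3, 6))        # six reference directions
        refs /= np.linalg.norm(refs, axis=0)      # unit vectors
        meas = A_true @ refs + 1e-3 * rng.standard_normal((3, 6))  # noisy measurements

        # Solve refs.T @ A.T ~= meas.T for A with no orthogonality constraint.
        A_est = np.linalg.lstsq(refs.T, meas.T, rcond=None)[0].T

        # Near-orthogonality check: A_est @ A_est.T should be close to identity.
        ortho_err = np.linalg.norm(A_est @ A_est.T - np.eye(3))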

  4. Improving approximate-optimized effective potentials by imposing exact conditions: Theory and applications to electronic statics and dynamics

    NASA Astrophysics Data System (ADS)

    Kurzweil, Yair; Head-Gordon, Martin

    2009-07-01

    We develop a method that can constrain any local exchange-correlation potential to preserve basic exact conditions. Using the method of Lagrange multipliers, we calculate for each set of given Kohn-Sham orbitals a constraint-preserving potential which is closest to the given exchange-correlation potential. The method is applicable to both the time-dependent (TD) and time-independent cases. The exact conditions that are enforced for the time-independent case are Galilean covariance, zero net force and torque, and the Levy-Perdew virial theorem. For the time-dependent case we enforce translational covariance, zero net force, the Levy-Perdew virial theorem, and energy balance. We test our method on the exchange (only) Krieger-Li-Iafrate (xKLI) approximate-optimized effective potential for both cases. For the time-independent case, we calculated the ground state properties of some hydrogen chains and small sodium clusters for some constrained xKLI potentials and Hartree-Fock (HF) exchange. The results (total energy, Kohn-Sham eigenvalues, polarizability, and hyperpolarizability) indicate that enforcing the exact conditions is not important for these cases. On the other hand, in the time-dependent case, constraining both energy balance and zero net force yields improved results relative to TDHF calculations. We explored the electron dynamics in small sodium clusters driven by cw laser pulses. For each laser pulse we compared calculations from TD constrained xKLI, TD partially constrained xKLI, and TDHF. We found that electron dynamics such as electron ionization and moment of inertia dynamics for the constrained xKLI are most similar to the TDHF results. Also, energy conservation is better by at least one order of magnitude with respect to the unconstrained xKLI. We also discuss the problems that arise in satisfying constraints in the TD case with a non-cw driving force.

  5. Improving approximate-optimized effective potentials by imposing exact conditions: Theory and applications to electronic statics and dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzweil, Yair; Head-Gordon, Martin

    2009-07-15

    We develop a method that can constrain any local exchange-correlation potential to preserve basic exact conditions. Using the method of Lagrange multipliers, we calculate for each set of given Kohn-Sham orbitals a constraint-preserving potential which is closest to the given exchange-correlation potential. The method is applicable to both the time-dependent (TD) and time-independent cases. The exact conditions that are enforced for the time-independent case are Galilean covariance, zero net force and torque, and the Levy-Perdew virial theorem. For the time-dependent case we enforce translational covariance, zero net force, the Levy-Perdew virial theorem, and energy balance. We test our method on the exchange (only) Krieger-Li-Iafrate (xKLI) approximate-optimized effective potential for both cases. For the time-independent case, we calculated the ground state properties of some hydrogen chains and small sodium clusters for some constrained xKLI potentials and Hartree-Fock (HF) exchange. The results (total energy, Kohn-Sham eigenvalues, polarizability, and hyperpolarizability) indicate that enforcing the exact conditions is not important for these cases. On the other hand, in the time-dependent case, constraining both energy balance and zero net force yields improved results relative to TDHF calculations. We explored the electron dynamics in small sodium clusters driven by cw laser pulses. For each laser pulse we compared calculations from TD constrained xKLI, TD partially constrained xKLI, and TDHF. We found that electron dynamics such as electron ionization and moment of inertia dynamics for the constrained xKLI are most similar to the TDHF results. Also, energy conservation is better by at least one order of magnitude with respect to the unconstrained xKLI. We also discuss the problems that arise in satisfying constraints in the TD case with a non-cw driving force.

  6. Guidance and Control Architecture Design and Demonstration for Low Ballistic Coefficient Atmospheric Entry

    NASA Technical Reports Server (NTRS)

    Swei, Sean

    2014-01-01

    We propose to develop a robust guidance and control system for the ADEPT (Adaptable Deployable Entry and Placement Technology) entry vehicle. A control-centric model of ADEPT will be developed to quantify the performance of candidate guidance and control architectures for both aerocapture and precision landing missions. The evaluation will be based on recent breakthroughs in constrained controllability/reachability analysis of control systems and constraint-based energy-minimum trajectory optimization for guidance development in complex environments.

  7. Mode selective generation of guided waves by systematic optimization of the interfacial shear stress profile

    NASA Astrophysics Data System (ADS)

    Yazdanpanah Moghadam, Peyman; Quaegebeur, Nicolas; Masson, Patrice

    2015-01-01

    Piezoelectric transducers are commonly used in structural health monitoring systems to generate and measure ultrasonic guided waves (GWs) by applying interfacial shear and normal stresses to the host structure. In most cases, in order to perform damage detection, advanced signal processing techniques are required, since a minimum of two dispersive modes are propagating in the host structure. In this paper, a systematic approach for mode selection is proposed by optimizing the interfacial shear stress profile applied to the host structure, representing the first step of a global optimization of selective mode actuator design. This approach has the potential of reducing the complexity of signal processing tools, as the number of propagating modes could be reduced. Using the superposition principle, an analytical method is first developed for GW excitation by a finite number of uniform segments, each contributing a given elementary shear stress profile. Based on this, cost functions are defined in order to minimize the undesired modes and amplify the selected mode, and the optimization problem is solved with a parallel genetic algorithm optimization framework. Advantages of this method over more conventional transducer tuning approaches are that (1) the shear stress can be explicitly optimized to both excite one mode and suppress other undesired modes, (2) the size of the excitation area is not constrained and mode-selective excitation is still possible even if the excitation width is smaller than all excited wavelengths, and (3) the selectivity is increased and the bandwidth extended. The complexity of the optimal shear stress profile obtained is shown considering two cost functions with various optimal excitation widths and numbers of segments. Results illustrate that the desired mode (A0 or S0) can be excited dominantly over other modes up to a wave power ratio of 10^10 using an optimal shear stress profile.
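
    To illustrate the evolutionary-search step only, here is a toy sketch that optimizes the amplitudes of a handful of uniform stress segments against a cost of the "undesired minus desired mode amplitude" form. The mode-response surrogate below is invented for illustration and does not implement the paper's analytical GW excitation model; segment count, population size, and mutation scale are arbitrary.

        import numpy as np

        rng = np.random.default_rng(5)

        def cost(amps):
            # Toy surrogate for modal amplitudes as linear functionals of the
            # segment amplitudes; minimize undesired, maximize desired.
            desired = np.abs(np.sum(amps * np.sin(np.arange(amps.size))))
            undesired = np.abs(np.sum(amps * np.cos(np.arange(amps.size))))
            return undesired - desired

        pop = rng.uniform(-1, 1, size=(40, 8))   # 40 candidates, 8 segments
        for _ in range(100):
            scores = np.array([cost(p) for p in pop])
            parents = pop[np.argsort(scores)[:20]]             # truncation selection
            children = parents[rng.integers(0, 20, 20)] + \
                       0.1 * rng.standard_normal((20, 8))      # mutation
            pop = np.vstack([parents, children])
        best = pop[np.argmin([cost(p) for p in pop])]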

  8. Cement-in-cement acetabular revision with a constrained tripolar component.

    PubMed

    Leonidou, Andreas; Pagkalos, Joseph; Luscombe, Jonathan

    2012-02-17

    Dislocation of a total hip replacement (THR) is common following total hip arthroplasty (THA). When nonoperative management fails to maintain reduction, revision surgery is considered. The use of constrained acetabular liners has been extensively described. Complete removal of the old cement mantle during revision THA can be challenging and is associated with significant complications. Cement-in-cement revision is an established technique. However, the available clinical and experimental studies focus on femoral stem revision. The purpose of this study was to present a case of cement-in-cement acetabular revision with a constrained component for recurrent dislocations and to investigate the current best evidence for this technique. This article describes the case of a 74-year-old woman who underwent revision of a Charnley THR for recurrent low-energy dislocations. A tripolar constrained acetabular component was cemented over the primary cement mantle following removal of the original liner by reaming, roughening the surface, and thoroughly irrigating and drying the primary cement. Clinical and radiological results were good, with the Oxford Hip Score improving from 11 preoperatively to 24 at 6 months postoperatively. The good short-term results of this case and the current clinical and biomechanical data encourage the use of the cement-in-cement technique for acetabular revision. Careful irrigation, drying, and roughening of the primary surface are necessary. Copyright 2012, SLACK Incorporated.

  9. Evaluating an image-fusion algorithm with synthetic-image-generation tools

    NASA Astrophysics Data System (ADS)

    Gross, Harry N.; Schott, John R.

    1996-06-01

    An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared: unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG-developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
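
    A minimal sketch of the fully constrained mixing model for a single pixel follows: the endmember fractions are required to be non-negative and to sum to one, which is a small constrained nonlinear program. The endmember spectra and pixel values are made-up numbers, and SLSQP is used here simply as one convenient constrained solver, not as the paper's method.

        import numpy as np
        from scipy.optimize import minimize

        E = np.array([[0.10, 0.60, 0.30],        # bands x endmembers (synthetic)
                      [0.20, 0.55, 0.35],
                      [0.35, 0.40, 0.45],
                      [0.50, 0.30, 0.60]])
        pixel = E @ np.array([0.2, 0.5, 0.3])    # synthetic mixed pixel

        obj = lambda f: np.sum((E @ f - pixel) ** 2)
        res = minimize(obj, x0=np.full(3, 1 / 3), method="SLSQP",
                       bounds=[(0.0, 1.0)] * 3,          # fully constrained: 0 <= f <= 1
                       constraints={"type": "eq",        # sum-to-one constraint
                                    "fun": lambda f: f.sum() - 1.0})
        fractions = res.x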

  10. Minimum energy control and optimal-satisfactory control of Boolean control network

    NASA Astrophysics Data System (ADS)

    Li, Fangfei; Lu, Xiwen

    2013-12-01

    In the literature, the expenditure of energy required to transfer a Boolean control network from an initial state to a desired state has rarely been considered. Motivated by this, this Letter investigates the minimum energy control and optimal-satisfactory control of Boolean control networks. Based on the semi-tensor product of matrices and Floyd's algorithm, minimum energy, constrained minimum energy, and optimal-satisfactory control designs for Boolean control networks are given, respectively. A numerical example is presented to illustrate the efficiency of the obtained results.
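
    The Floyd's-algorithm step can be sketched directly: treat the network states as graph nodes, let admissible one-step transitions carry an energy cost, and take the all-pairs shortest path as the minimum-energy transfer. The 4-state cost matrix below is purely illustrative, not derived from any particular Boolean network.

        import numpy as np

        INF = np.inf
        cost = np.array([[0, 1, INF, INF],       # one-step transition energies
                         [INF, 0, 2, 5],
                         [1, INF, 0, 1],
                         [INF, INF, INF, 0]], dtype=float)

        n = cost.shape[0]
        dist = cost.copy()
        for k in range(n):                        # classic Floyd-Warshall relaxation
            for i in range(n):
                for j in range(n):
                    dist[i, j] = min(dist[i, j], dist[i, k] + dist[k, j])

        # dist[s, t] is now the minimum control energy to drive state s to t.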

  11. Optimal dynamic control of invasions: applying a systematic conservation approach.

    PubMed

    Adams, Vanessa M; Setterfield, Samantha A

    2015-06-01

    The social, economic, and environmental impacts of invasive plants are well recognized. However, these variable impacts are rarely accounted for in the spatial prioritization of funding for weed management. We examine how current spatially explicit prioritization methods can be extended to identify optimal budget allocations to both eradication and control measures of invasive species to minimize the costs and likelihood of invasion. Our framework extends recent approaches to systematic prioritization of weed management to account for multiple values that are threatened by weed invasions with a multi-year dynamic prioritization approach. We apply our method to the northern portion of the Daly catchment in the Northern Territory, which has significant conservation values that are threatened by gamba grass (Andropogon gayanus), a highly invasive species recognized by the Australian government as a Weed of National Significance (WONS). We interface Marxan, a widely applied conservation planning tool, with a dynamic biophysical model of gamba grass to optimally allocate funds to eradication and control programs under two budget scenarios, comparing maximizing gain (MaxGain) and minimizing loss (MinLoss) optimization approaches. The prioritizations support previous findings that a MinLoss approach is a better strategy when threats are more spatially variable than conservation values. Over a 10-year simulation period, we find that a MinLoss approach reduces future infestations by ~8% compared to MaxGain in the constrained budget scenarios and ~12% in the unlimited budget scenarios. We find that, due to the extensive current invasion and rapid rate of spread, allocating the annual budget to control efforts is more efficient than funding eradication efforts when there is a constrained budget. Under a constrained budget, applying the most efficient optimization scenario (control, MinLoss) reduces spread by ~27% compared to no control. Conversely, if the budget is unlimited, it is more efficient to fund eradication efforts, which reduces spread by ~65% compared to no control.

  12. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.

  13. An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax-type performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.

  14. MIDAS - Mission design and analysis software for the optimization of ballistic interplanetary trajectories

    NASA Technical Reports Server (NTRS)

    Sauer, Carl G., Jr.

    1989-01-01

    MIDAS, a patched-conic trajectory optimization program, is described; it was developed to investigate a wide variety of complex ballistic heliocentric transfer trajectories. MIDAS includes the capability of optimizing trajectory event times such as departure date, arrival date, and intermediate planetary flyby dates, and is able to both add and delete deep-space maneuvers when dictated by the optimization process. Both powered and unpowered flyby or gravity-assist trajectories past intermediate bodies can be handled, and capability is included to optimize trajectories having a rendezvous with an intermediate body, such as for a sample return mission. Capability is included in the optimization process to constrain launch energy and launch vehicle parking orbit parameters.

  15. Comparison of Optimal Small Spacecraft Micro Electric Propulsion Technologies for Mission Opportunities

    NASA Technical Reports Server (NTRS)

    Spangelo, Sara

    2015-01-01

    The goal of this paper is to explore the mission opportunities that are uniquely enabled by U-class solar electric propulsion (SEP) technologies. Small SEP thrusters offer significant advantages relative to existing technologies and will revolutionize the class of mission architectures that small spacecraft can accomplish by enabling trajectory maneuvers with significant change-in-velocity requirements and reaction-wheel-free attitude control. This paper aims to develop and apply a common system-level modeling framework to evaluate these thrusters for relevant upcoming mission scenarios, taking into account the mass, power, volume, and operational constraints of small, highly constrained missions. We identify the optimal technology for broad classes of mission applications and different U-class spacecraft sizes, and provide insights into what constrains system performance in order to identify technology areas where improvements are needed.

  16. An expert system for choosing the best combination of options in a general purpose program for automated design synthesis

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Barthelemy, J.-F. M.

    1986-01-01

    An expert system called EXADS has been developed to aid users of the Automated Design Synthesis (ADS) general-purpose optimization program. ADS has approximately 100 combinations of strategy, optimizer, and one-dimensional search options from which to choose, and it is difficult for a nonexpert to make this choice. This expert system aids the user in choosing the best combination of options based on the user's knowledge of the problem and the expert knowledge stored in the knowledge base. The knowledge base is divided into three categories: constrained problems, unconstrained problems, and constrained problems being treated as unconstrained problems. The inference engine and its approximately 200 rules are written in LISP and execute on DEC-VAX (with Franz-LISP) and IBM PC (with IQ-LISP) computers.

  17. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators derived from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
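
    The alternating-projection core of such methods can be sketched with the classical POCS iteration: project onto each convex set in turn until a point in the intersection is reached. The halfspace and box below are toy stand-ins for the proximity operators of the TV term and the constraint in the actual PAPA algorithm; no preconditioning is shown.

        import numpy as np

        def proj_halfspace(x, a, b):
            # Projection onto the halfspace {x : a.x <= b}.
            v = a @ x - b
            return x if v <= 0 else x - (v / (a @ a)) * a

        def proj_box(x, lo, hi):
            # Projection onto simple bound constraints.
            return np.clip(x, lo, hi)

        a, b = np.array([1.0, 1.0]), 1.0
        x = np.array([3.0, 4.0])
        for _ in range(100):
            x = proj_box(proj_halfspace(x, a, b), 0.0, 2.0)
        # x now lies (approximately) in the intersection of the two sets.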

  18. A comparison of optimization algorithms for localized in vivo B0 shimming.

    PubMed

    Nassirpour, Sahar; Chang, Paul; Fillmer, Ariane; Henning, Anke

    2018-02-01

    To compare several different optimization algorithms currently used for localized in vivo B0 shimming, and to introduce a novel, fast, and robust constrained regularized algorithm (ConsTru) for this purpose. Ten different optimization algorithms (including samples from both generic and dedicated least-squares solvers, and a novel constrained regularized inversion method) were implemented and compared for shimming in five different shimming volumes on 66 in vivo data sets from both 7 T and 9.4 T. The best algorithm was chosen to perform single-voxel spectroscopy at 9.4 T in the frontal cortex of the brain on 10 volunteers. The performance tests showed that a shimming algorithm is prone to unstable solutions if it depends on the value of its starting point and is not regularized to handle ill-conditioned problems. The ConsTru algorithm proved to be the most robust, fast, and efficient of all the algorithms considered. It enabled acquisition of spectra of reproducible high quality in the frontal cortex at 9.4 T. For localized in vivo B0 shimming, the use of a dedicated linear least-squares solver instead of a generic nonlinear one is highly recommended. Among all of the linear solvers, the constrained regularized method (ConsTru) was found to be both fast and most robust. Magn Reson Med 79:1145-1156, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
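
    A constrained, regularized shim inversion of the general kind described above can be sketched as a bounded Tikhonov least-squares problem: minimize ||A c + b0||^2 + lam ||c||^2 subject to per-channel current limits, where A maps shim-coil currents to field offsets over the region of interest. The matrices, regularization weight, and limits below are synthetic stand-ins for real calibration data, and this is not the published ConsTru implementation.

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(1)
        A = rng.standard_normal((200, 8))    # field-map basis: voxels x shim channels
        b0 = rng.standard_normal(200)        # measured B0 inhomogeneity per voxel
        lam = 0.1                            # Tikhonov regularization weight

        # Stack the regularizer into an augmented least-squares problem.
        A_aug = np.vstack([A, np.sqrt(lam) * np.eye(8)])
        b_aug = np.concatenate([-b0, np.zeros(8)])
        res = lsq_linear(A_aug, b_aug, bounds=(-1.0, 1.0))  # hardware current limits
        currents = res.x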

  19. Communication: phase transitions, criticality, and three-phase coexistence in constrained cell models.

    PubMed

    Nayhouse, Michael; Kwon, Joseph Sang-Il; Orkoulas, G

    2012-05-28

    In simulation studies of fluid-solid transitions, the solid phase is usually modeled as a constrained system in which each particle is confined to move in a single Wigner-Seitz cell. The constrained cell model has been used in the determination of fluid-solid coexistence via thermodynamic integration and other techniques. In the present work, the phase diagram of such a constrained system of Lennard-Jones particles is determined from constant-pressure simulations. The pressure-density isotherms exhibit inflection points which are interpreted as the mechanical stability limit of the solid phase. The phase diagram of the constrained system contains a critical and a triple point. The temperature and pressure at the critical and the triple point are both higher than those of the unconstrained system due to the reduction in the entropy caused by the single occupancy constraint.

  20. Constrained Quantum Mechanics: Chaos in Non-Planar Billiards

    ERIC Educational Resources Information Center

    Salazar, R.; Tellez, G.

    2012-01-01

    We illustrate some of the techniques to identify chaos signatures at the quantum level using as guiding examples some systems where a particle is constrained to move on a radial symmetric, but non-planar, surface. In particular, two systems are studied: the case of a cone with an arbitrary contour or "dunce hat billiard" and the rectangular…

  1. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    PubMed

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, i.e., a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed by the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  2. Image coding using entropy-constrained residual vector quantization

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.

    1993-01-01

    The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.

  3. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    NASA Astrophysics Data System (ADS)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone. Bayesian inversion is then applied to assign scaling factors that align the surface fluxes with the CO2 time series. Our project demonstrates how bottom-up and top-down techniques can be reconciled to arrive at a more robust and balanced spatial carbon budget. We will show how to evaluate existing flux products through regionally representative atmospheric observations, i.e. how well the underlying model assumptions represent processes on the regional scale. Adapting process model parameterization sets for, e.g., sub-regions, disturbance regimes, or land cover classes in order to optimize the agreement between surface fluxes and atmospheric observations can lead to improved understanding of the underlying flux mechanisms, and reduces uncertainties in the regional carbon budgets.
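
    The Bayesian scaling-factor step can be illustrated with the standard normal-equations update for a linear observation operator: given footprint sensitivities H mapping regional flux scaling factors to concentrations, the posterior factors follow in closed form. Everything below (H, observations, covariances) is synthetic; this is a generic Bayesian linear inversion, not the ORCA2 system.

        import numpy as np

        rng = np.random.default_rng(6)
        n_obs, n_reg = 50, 4
        H = rng.uniform(0.0, 1.0, size=(n_obs, n_reg))   # footprint sensitivities
        s_true = np.array([1.2, 0.8, 1.0, 1.5])          # "true" scaling factors
        y = H @ s_true + 0.05 * rng.standard_normal(n_obs)

        s_prior = np.ones(n_reg)          # prior: fluxes unscaled
        S_prior = 0.25 * np.eye(n_reg)    # prior covariance of scaling factors
        R = 0.05 ** 2 * np.eye(n_obs)     # observation error covariance

        # Posterior mean via the Kalman/normal-equations form.
        G = S_prior @ H.T @ np.linalg.inv(H @ S_prior @ H.T + R)
        s_post = s_prior + G @ (y - H @ s_prior)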

  4. Inexact fuzzy-stochastic mixed-integer programming approach for long-term planning of waste management--Part A: methodology.

    PubMed

    Guo, P; Huang, G H

    2009-01-01

    In this study, an inexact fuzzy chance-constrained two-stage mixed-integer linear programming (IFCTIP) approach is proposed for supporting long-term planning of waste-management systems under multiple uncertainties in the City of Regina, Canada. The method improves upon the existing inexact two-stage programming and mixed-integer linear programming techniques by incorporating multiple uncertainties expressed as intervals and dual probability distributions within a general optimization framework. The developed method can provide an effective linkage between the predefined environmental policies and the associated economic implications. Four special characteristics of the proposed method make it unique compared with other optimization techniques that deal with uncertainties. Firstly, it provides a linkage to predefined policies that have to be respected when a modeling effort is undertaken; secondly, it is useful for tackling uncertainties presented as intervals, probabilities, fuzzy sets, and their combinations; thirdly, it facilitates dynamic analysis for decisions of facility-expansion planning and waste-flow allocation within a multi-facility, multi-period, multi-level, and multi-option context; fourthly, the penalties are exercised with recourse against any infeasibility, which permits in-depth analyses of various policy scenarios that are associated with different levels of economic consequences when the promised solid waste-generation rates are violated. In a companion paper, the developed method is applied to a real case for the long-term planning of waste management in the City of Regina, Canada.

  5. Dynamic replanning on demand of UAS constellations performing ISR missions

    NASA Astrophysics Data System (ADS)

    Stouch, Daniel W.; Zeidman, Ernest; Callahan, William; McGraw, Kirk

    2011-05-01

    Unmanned aerial systems (UAS) have proven themselves to be indispensable in providing intelligence, surveillance, and reconnaissance (ISR) over the battlefield. Constellations of heterogeneous, multi-purpose UAS are being tasked to provide ISR in an unpredictable environment. This necessitates the dynamic replanning of critical missions as weather conditions change, new observation targets are identified, aircraft are lost or equipment malfunctions, and new airspace restrictions are introduced. We present a method to generate coordinated mission plans for constellations of UAS with multiple flight goals and potentially competing objectives, and to update them on demand as the operational situation changes. We use a fast evolutionary-algorithm-based, multi-objective optimization technique. The updated flight routes maintain continuity by considering where the ISR assets have already flown and where they still need to go. Both the initial planning and replanning take into account factors such as area of analysis coverage, restricted operating zones, maximum control station range, adverse weather effects, military terrain value, and sensor performance. Our results demonstrate that by constraining the space of potential solutions using an intelligently formed air maneuver network with a subset of potential airspace corridors and navigational waypoints, we can ensure global optimization for multiple objectives considering the situation both before and after the replanning is initiated. We employ sophisticated visualization techniques using a geographic information system to help the user "look under the hood" of the algorithms to understand the effectiveness and viability of the generated ISR mission plans and identify potential gaps in coverage.

  6. Correction method for stripe nonuniformity.

    PubMed

    Qian, Weixian; Chen, Qian; Gu, Guohua; Guan, Zhiqiang

    2010-04-01

    Stripe nonuniformity is typical of linear infrared focal plane arrays (IR-FPAs) and uncooled staring IR-FPAs. In this paper, the mechanism of stripe nonuniformity is analyzed, and gray-scale co-occurrence matrix theory and optimization theory are applied. Through these efforts, the stripe nonuniformity correction problem is translated into an optimization problem whose goal is to minimize the energy of the image's line-direction gradient. After solving the constrained nonlinear optimization equation, the stripe nonuniformity correction parameters are obtained and the correction is achieved. The experiments indicate that this algorithm is effective and efficient.
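
    A simplified stand-in for this optimization is sketched below. Minimizing the raw line-direction gradient energy with respect to per-column offsets drives the offsets toward the column means, which would also flatten genuine scene structure; subtracting only the high-frequency residual of the column means removes stripes while preserving the scene trend. The window size and synthetic data are illustrative, and this is not the paper's constrained nonlinear solver.

        import numpy as np

        def destripe(img, win=15):
            col_mean = img.mean(axis=0)                          # per-column level
            kernel = np.ones(win) / win
            trend = np.convolve(col_mean, kernel, mode="same")   # smooth scene trend
            offsets = col_mean - trend                           # stripe estimate
            return img - offsets[None, :]

        # Example on a synthetic striped image.
        rng = np.random.default_rng(2)
        scene = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((48, 1))
        stripes = rng.normal(0.0, 0.2, size=64)[None, :]
        clean = destripe(scene + stripes)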

  7. The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Wu, Changshan

    2013-12-01

    Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
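
    The least-squares-solution (LSS) idea for deriving endmember spectra can be sketched in a few lines: given sample pixels with known abundances, solve the linear mixing model for the endmember signatures instead of the abundances. All data below are synthetic, and the Dirichlet draws simply enforce sum-to-one abundances for illustration.

        import numpy as np

        rng = np.random.default_rng(3)
        m, bands, n = 3, 7, 100
        E_true = rng.uniform(0.0, 1.0, size=(m, bands))   # endmembers x bands
        F = rng.dirichlet(np.ones(m), size=n)             # known abundances, sum to 1
        X = F @ E_true + 0.01 * rng.standard_normal((n, bands))  # sample pixels

        # Least-squares estimate of the endmember matrix from F @ E ~= X.
        E_est = np.linalg.lstsq(F, X, rcond=None)[0]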

  8. Shape and Spatially-Varying Reflectance Estimation from Virtual Exemplars.

    PubMed

    Hui, Zhuo; Sankaranarayanan, Aswin C

    2017-10-01

    This paper addresses the problem of estimating the shape of objects that exhibit spatially-varying reflectance. We assume that multiple images of the object are obtained under a fixed view-point and varying illumination, i.e., the setting of photometric stereo. At the core of our techniques is the assumption that the BRDF at each pixel lies in the non-negative span of a known BRDF dictionary. This assumption enables a per-pixel surface normal and BRDF estimation framework that is computationally tractable and requires no initialization in spite of the underlying problem being non-convex. Our estimation framework first solves for the surface normal at each pixel using a variant of example-based photometric stereo. We design an efficient multi-scale search strategy for estimating the surface normal and subsequently, refine this estimate using a gradient descent procedure. Given the surface normal estimate, we solve for the spatially-varying BRDF by constraining the BRDF at each pixel to be in the span of the BRDF dictionary; here, we use additional priors to further regularize the solution. A hallmark of our approach is that it does not require iterative optimization techniques nor the need for careful initialization, both of which are endemic to most state-of-the-art techniques. We showcase the performance of our technique on a wide range of simulated and real scenes where we outperform competing methods.

  9. An interval chance-constrained fuzzy modeling approach for supporting land-use planning and eco-environment planning at a watershed level.

    PubMed

    Ou, Guoliang; Tan, Shukui; Zhou, Min; Lu, Shasha; Tao, Yinghui; Zhang, Zuo; Zhang, Lu; Yan, Danping; Guan, Xingliang; Wu, Gang

    2017-12-15

    An interval chance-constrained fuzzy land-use allocation (ICCF-LUA) model is proposed in this study to support solving land resource management problems associated with various environmental and ecological constraints at the watershed level. The ICCF-LUA model is based on the ICCF (interval chance-constrained fuzzy) model, which couples an interval mathematical model, a chance-constrained programming model, and a fuzzy linear programming model, and can be used to deal with uncertainties expressed as intervals, probabilities, and fuzzy sets. Therefore, the ICCF-LUA model can reflect both the tradeoff between decision makers and land stakeholders and the tradeoff between economic benefits and eco-environmental demands. The ICCF-LUA model has been applied to the land-use allocation of the Wujiang watershed, Guizhou Province, China. The results indicate that under highly suitable land conditions, the optimized areas of cultivated land, forest land, grass land, construction land, water land, unused land and landfill in the Wujiang watershed will be [5015, 5648] hm^2, [7841, 7965] hm^2, [1980, 2056] hm^2, [914, 1423] hm^2, [70, 90] hm^2, [50, 70] hm^2 and [3.2, 4.3] hm^2, respectively, and the corresponding system economic benefit will be between 6831 and 7219 billion yuan. Consequently, the ICCF-LUA model can effectively support optimized land-use allocation under complicated conditions involving uncertainties, risks, economic objectives and eco-environmental constraints. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Top-down constraints on global N2O emissions at optimal resolution: application of a new dimension reduction technique

    NASA Astrophysics Data System (ADS)

    Wells, Kelley C.; Millet, Dylan B.; Bousserez, Nicolas; Henze, Daven K.; Griffis, Timothy J.; Chaliyakunnel, Sreelekha; Dlugokencky, Edward J.; Saikawa, Eri; Xiang, Gao; Prinn, Ronald G.; O'Doherty, Simon; Young, Dickon; Weiss, Ray F.; Dutton, Geoff S.; Elkins, James W.; Krummel, Paul B.; Langenfelds, Ray; Steele, L. Paul

    2018-01-01

    We present top-down constraints on global monthly N2O emissions for 2011 from a multi-inversion approach and an ensemble of surface observations. The inversions employ the GEOS-Chem adjoint and an array of aggregation strategies to test how well current observations can constrain the spatial distribution of global N2O emissions. The strategies include (1) a standard 4D-Var inversion at native model resolution (4° × 5°), (2) an inversion for six continental and three ocean regions, and (3) a fast 4D-Var inversion based on a novel dimension reduction technique employing randomized singular value decomposition (SVD). The optimized global flux ranges from 15.9 Tg N yr^-1 (SVD-based inversion) to 17.5-17.7 Tg N yr^-1 (continental-scale, standard 4D-Var inversions), with the former better capturing the extratropical N2O background measured during the HIAPER Pole-to-Pole Observations (HIPPO) airborne campaigns. We find that the tropics provide a greater contribution to the global N2O flux than is predicted by the prior bottom-up inventories, likely due to underestimated agricultural and oceanic emissions. We infer an overestimate of natural soil emissions in the extratropics and find that predicted emissions are seasonally biased in northern midlatitudes. Here, optimized fluxes exhibit a springtime peak consistent with the timing of spring fertilizer and manure application, soil thawing, and elevated soil moisture. Finally, the inversions reveal a major emission underestimate in the US Corn Belt in the bottom-up inventory used here. We extensively test the impact of initial conditions on the analysis and recommend formally optimizing the initial N2O distribution to avoid biasing the inferred fluxes. We find that the SVD-based approach provides a powerful framework for deriving emission information from N2O observations: by defining the optimal resolution of the solution based on the information content of the inversion, it provides spatial information that is lost when aggregating to political or geographic regions, while also providing more temporal information than a standard 4D-Var inversion.
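
    Randomized SVD, the dimension-reduction device behind the fast inversion described above, can be sketched compactly: project the matrix onto a small random subspace, orthonormalize, and recover the leading singular triplets from the much smaller projected problem. The matrix below is synthetic and the rank and oversampling values are arbitrary; this is the generic algorithm, not the paper's inversion code.

        import numpy as np

        def randomized_svd(A, rank, oversample=10, rng=None):
            rng = np.random.default_rng(rng)
            Y = A @ rng.standard_normal((A.shape[1], rank + oversample))
            Q, _ = np.linalg.qr(Y)               # orthonormal range approximation
            U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
            return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

        A = np.random.default_rng(4).standard_normal((500, 300))
        U, s, Vt = randomized_svd(A, rank=20)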

  11. Valuing hydrological alteration in multi-objective water resources management

    NASA Astrophysics Data System (ADS)

    Bizzi, Simone; Pianosi, Francesca; Soncini-Sessa, Rodolfo

    2012-11-01

    The management of water through the impoundment of rivers by dams and reservoirs is necessary to support key human activities such as hydropower production, agriculture and flood risk mitigation. Advances in multi-objective optimization techniques and ever-growing computing power make it possible to design reservoir operating policies that represent Pareto-optimal tradeoffs between multiple interests. On the one hand, such optimization methods can enhance performance on commonly targeted objectives (such as hydropower production or water supply); on the other hand, they risk strongly penalizing all the interests not directly (i.e. mathematically) included in the optimization algorithm. The alteration of the downstream hydrological regime is a well established cause of ecological degradation, and its evaluation and rehabilitation is commonly required by recent legislation (such as the Water Framework Directive in Europe). However, it is rarely embedded in reservoir optimization routines and, even when explicitly considered, the criteria adopted for its evaluation are doubted and not commonly trusted, undermining the possibility of real implementation of environmentally friendly policies. The main challenges in defining and assessing hydrological alterations are: how to define a reference state (referencing); how to define criteria upon which to build mathematical indicators of alteration (measuring); and finally how to aggregate the indicators in a single evaluation index (valuing) that can serve as an objective function in the optimization problem. This paper aims to address these issues by: (i) discussing the benefits and constraints of different approaches to referencing, measuring and valuing hydrological alteration; (ii) testing two alternative indices of hydrological alteration, one based on the established framework of Indicators of Hydrological Alteration (Richter et al., 1996), and one satisfying the mathematical properties required by widely used optimization methods based on dynamic programming; (iii) demonstrating and discussing these indices by application to the River Ticino in Italy; (iv) providing a framework to effectively include hydrological alteration within reservoir operation optimization.

  12. Towards improving searches for optimal phylogenies.

    PubMed

    Ford, Eric; St John, Katherine; Wheeler, Ward C

    2015-01-01

    Finding the optimal evolutionary history for a set of taxa is a challenging computational problem, even when restricting possible solutions to be "tree-like" and focusing on the maximum-parsimony optimality criterion. This has led to much work on using heuristic tree searches to find approximate solutions. We present an approach for finding exact optimal solutions that employs and complements the current heuristic methods for finding optimal trees. Given a set of taxa and a set of aligned sequences of characters, there may be subsets of characters that are compatible, and for each such subset there is an associated (possibly partially resolved) phylogeny with edges corresponding to each character state change. These perfect phylogenies serve as anchor trees for our constrained search space. We show that, for sequences with compatible sites, the parsimony score of any tree T is at least the parsimony score of the anchor trees plus the number of inferred changes between T and the anchor trees. As the maximum-parsimony optimality score is additive, the sum of the lower bounds on compatible character partitions provides a lower bound on the complete alignment of characters. This yields a region in the space of trees within which the best tree is guaranteed to be found; limiting the search for the optimal tree to this region can significantly reduce the number of trees that must be examined in a search of the space of trees. We analyze this method empirically using four different biological data sets as well as surveying 400 data sets from the TreeBASE repository, demonstrating the effectiveness of our technique in reducing the number of steps in exact heuristic searches for trees under the maximum-parsimony optimality criterion. © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
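
    The model-selection step can be illustrated with a toy maximum-likelihood fit of candidate renewal-time distributions ranked by AIC. The deposit ages below are hypothetical, and the age-dating uncertainty and open time intervals that the paper's methods treat explicitly are ignored here for brevity.

        import numpy as np
        from scipy import stats

        ages_ka = np.array([12.0, 19.5, 31.0, 44.2, 58.9, 70.3])  # deposit ages (made up)
        dt = np.diff(ages_ka)                                      # inter-event return times

        candidates = {
            "exponential": stats.expon,    # Poisson-process return times
            "lognormal": stats.lognorm,    # quasiperiodic alternative
        }
        for name, dist in candidates.items():
            params = dist.fit(dt, floc=0.0)              # location pinned at zero
            loglik = dist.logpdf(dt, *params).sum()
            k = len(params) - 1                          # floc fixed, not estimated
            print(name, "AIC =", 2 * k - 2 * loglik)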

  14. Multi-Mbar Ramp Compression of Copper

    NASA Astrophysics Data System (ADS)

    Kraus, Rick; Davis, Jean-Paul; Seagle, Christopher; Fratanduono, Dayne; Swift, Damian; Eggert, Jon; Collins, Gilbert

    2015-06-01

    The cold curve is a critical component of equation of state models. Diamond anvil cell measurements can be used to determine isotherms, but these have generally been limited to pressures below 1 Mbar. The cold curve can also be extracted from Hugoniot data, but only with assumptions about the thermal pressure. As the National Ignition Facility will be using copper as an ablator material at pressures in excess of 10 Mbar, we need a better understanding of the high-density equation of state. Here we present ramp-wave compression experiments at the Sandia Z-Machine that we have used to constrain the isentrope of copper to a stress state of nearly 5 Mbar. We use the iterative Lagrangian analysis technique, developed by Rothman and Maw, to determine the stress-strain path. We also present a new iterative forward analysis (IFA) technique coupled to the ARES hydrocode that performs a non-linear optimization over the pressure drive and equation of state in order to match the free surface velocities. The IFA technique is an advantage over iterative Lagrangian analysis for experiments with growing shocks or systems with time dependent strength, which violate the assumptions of iterative Lagrangian analysis. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  15. Multi-objective vs. single-objective calibration of a hydrologic model using single- and multi-objective screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Shafii, Mahyar; Zink, Matthias; Schäfer, David; Thober, Stephan; Samaniego, Luis; Tolson, Bryan

    2016-04-01

    Hydrologic models are traditionally calibrated against observed streamflow. Recent studies have shown, however, that only a few global model parameters are constrained using this kind of integral signal. They can be identified using prior screening techniques. Since different objectives might constrain different parameters, it is advisable to use multiple sources of information to calibrate those models. One common approach is to combine these multiple objectives (MO) into one single objective (SO) function and allow the use of an SO optimization algorithm. Another strategy is to consider the different objectives separately and apply an MO Pareto optimization algorithm. In this study, two major research questions will be addressed: 1) How do multi-objective calibrations compare with corresponding single-objective calibrations? 2) How much do calibration results deteriorate when the number of calibrated parameters is reduced by a prior screening technique? The hydrologic model employed in this study is a distributed hydrologic model (mHM) with 52 model parameters, i.e. transfer coefficients. The model uses grid cells as a primary hydrologic unit, and accounts for processes like snow accumulation and melting, soil moisture dynamics, infiltration, surface runoff, evapotranspiration, subsurface storage and discharge generation. The model is applied in three distinct catchments over Europe. The SO calibrations are performed using the Dynamically Dimensioned Search (DDS) algorithm with a fixed budget, while the MO calibrations are achieved using the Pareto Dynamically Dimensioned Search (PA-DDS) algorithm allowing for the same budget. The two objectives used here are the Nash-Sutcliffe Efficiency (NSE) of the simulated streamflow and the NSE of its logarithmic transformation. It is shown that the SO DDS results are located close to the edges of the Pareto fronts of the PA-DDS. The MO calibrations are hence preferable because they supply multiple equivalent solutions from which the user can choose according to specific needs. The sequential single-objective parameter screening was employed prior to the calibrations, reducing the number of parameters by at least 50% in the different catchments and for the different single objectives. The single-objective calibrations led to faster convergence of the objectives and are hence beneficial when using DDS on single objectives. The above-mentioned parameter screening technique is generalized for multi-objectives and applied before calibration using the PA-DDS algorithm. Two different alternatives of this MO screening are tested. The comparison of the calibration results using all parameters and using only screened parameters shows for both alternatives that the PA-DDS algorithm does not profit in terms of trade-off size and function evaluations required to achieve converged Pareto fronts. This is because the PA-DDS algorithm automatically reduces the search space as the calibration run progresses. This automatic reduction may differ for other search algorithms. It is therefore hypothesized that prior screening can, but need not, be beneficial for parameter estimation, depending on the chosen optimization algorithm.

  16. A Novel Optimal Joint Resource Allocation Method in Cooperative Multicarrier Networks: Theory and Practice

    PubMed Central

    Gao, Yuan; Zhou, Weigui; Ao, Hong; Chu, Jian; Zhou, Quan; Zhou, Bo; Wang, Kang; Li, Yi; Xue, Peng

    2016-01-01

    With increasing demands for higher transmission speed and robust quality of service (QoS), the capacity-constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in the Internet of Things (IoT) scenario in the joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation under a capacity-constrained backhaul in uplink cooperative wireless networks, where two single-antenna base stations (BSs) serve multiple single-antenna users via a multi-carrier transmission mode. We propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed-integer programming problem. To maximize system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping, and backhaul resource sharing among the different pairs (subcarriers for users). A robust and efficient centralized algorithm based on an alternating optimization strategy and perfect mapping is proposed. Simulations show that the method improves system capacity significantly under the backhaul resource constraint compared with blind alternatives. PMID:27077865
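
    The alternating-optimization idea can be sketched generically: fix one block of variables, optimize the other, and alternate until the objective stops improving. The coupled quadratic below is a made-up stand-in for the paper's capacity expression and backhaul-sharing variables.

    ```python
    # Generic alternating (block-coordinate) optimization sketch; the coupled
    # quadratic objective is a made-up stand-in for a capacity maximization.
    import numpy as np
    from scipy.optimize import minimize_scalar

    def objective(a, b):
        # Toy coupled objective in two scalar "blocks" a and b.
        return (a - 1.0) ** 2 + (b + 2.0) ** 2 + 0.5 * a * b

    a, b = 0.0, 0.0
    for _ in range(50):
        # Optimize block a with block b held fixed, then vice versa.
        a = minimize_scalar(lambda x: objective(x, b)).x
        b = minimize_scalar(lambda x: objective(a, x)).x

    print(a, b, objective(a, b))
    ```

    In the paper's setting one block would be the discrete pairing/mapping decisions and the other the continuous backhaul shares; each sub-problem is easier than the joint mixed-integer program.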

  17. An approximation function for frequency constrained structural optimization

    NASA Technical Reports Server (NTRS)

    Canfield, R. A.

    1989-01-01

    The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations for frequency constraints of high enough quality to facilitate efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to some minimum (or maximum) allowable frequency, and perhaps subject to other constraints such as stress, displacement, and gage size as well. One reason for constraining natural frequencies during design is to avoid potential resonant frequencies due to machinery or actuators on the structure. Another is to satisfy the requirements of an aircraft's or spacecraft's control law. Whatever equipment the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations, among others, may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.
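
    To make the underlying quantity concrete: for a discretized structure with stiffness matrix K and mass matrix M, the Rayleigh quotient R(v) = (v'Kv)/(v'Mv) bounds the fundamental eigenvalue omega^2 from above for any trial vector v, and holding a mode shape fixed while K and M change with the design gives an inexpensive frequency approximation. The sketch below uses toy matrices, not data from the cited work, and is a generic Rayleigh-quotient update rather than the paper's exact RQA formulation.

    ```python
    # Sketch of a Rayleigh quotient frequency estimate for a discretized
    # structure; K and M below are toy matrices, not from the cited work.
    import numpy as np

    def rayleigh_quotient(K, M, v):
        """R(v) = (v^T K v) / (v^T M v), an upper bound on the smallest
        eigenvalue omega^2 of the generalized problem K v = omega^2 M v."""
        return (v @ K @ v) / (v @ M @ v)

    # Toy 2-DOF spring-mass system.
    K = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])   # stiffness matrix
    M = np.eye(2)                 # mass matrix (identity, so M^-1 K = K)

    v = np.linalg.eigh(K)[1][:, 0]  # baseline fundamental mode shape
    omega = np.sqrt(rayleigh_quotient(K, M, v))
    print("baseline fundamental frequency:", omega)

    # Frequency approximation after a design change: reuse the frozen mode
    # shape v with the updated stiffness, avoiding a new eigensolution.
    K_new = 1.1 * K
    print("Rayleigh quotient estimate:", np.sqrt(rayleigh_quotient(K_new, M, v)))
    ```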

  18. Classification-Assisted Memetic Algorithms for Equality-Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Handoko, Stephanus Daniel; Kwoh, Chee Keong; Ong, Yew Soon

    Regression has successfully been incorporated into memetic algorithms (MAs) to build surrogate models of the objective or constraint landscape of optimization problems. This helps alleviate the need for expensive fitness function evaluations by performing local refinements on the approximated landscape. Classification can alternatively be used to assist an MA in choosing the individuals that undergo refinement. Support-vector-assisted MAs were recently proposed to reduce the number of function evaluations in inequality-constrained optimization problems by distinguishing regions of feasible solutions from infeasible ones based on past solutions, so that search effort can be focused on promising regions only. For problems with equality constraints, however, the feasible space is extremely small, and it is thus extremely difficult for the global search component of the MA to produce feasible solutions; the classification of feasible and infeasible space becomes ineffective. In this paper, a novel strategy to overcome this limitation is proposed, particularly for problems with exactly one equality constraint: the raw constraint value of an individual, instead of its feasibility class, is utilized.
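
    A hedged sketch of the core idea, assuming a regression surrogate of the raw constraint value g(x) is used to rank candidates by predicted |g(x)| before spending real evaluations; the quadratic constraint and the SVR surrogate are illustrative choices, not the authors' exact setup.

    ```python
    # Sketch: rank candidate individuals by a surrogate of the RAW equality
    # constraint value g(x), rather than a feasible/infeasible class label.
    # The constraint g and the SVR surrogate are illustrative assumptions.
    import numpy as np
    from sklearn.svm import SVR

    def g(x):
        # Toy equality constraint: g(x) = 0 defines the feasible set.
        return np.sum(x ** 2, axis=1) - 1.0

    rng = np.random.default_rng(0)
    archive_x = rng.uniform(-2.0, 2.0, size=(200, 3))  # past evaluated solutions
    archive_g = g(archive_x)

    surrogate = SVR(kernel="rbf").fit(archive_x, archive_g)

    # Score fresh candidates: small predicted |g| means close to the equality
    # constraint, hence worth an expensive local refinement.
    candidates = rng.uniform(-2.0, 2.0, size=(50, 3))
    scores = np.abs(surrogate.predict(candidates))
    best = candidates[np.argsort(scores)[:5]]
    print("candidates selected for refinement:\n", best)
    ```

    The advantage over a binary feasibility classifier is visible here: with a vanishingly thin feasible set, nearly all training labels would be "infeasible", whereas the raw value still indicates which candidates are nearly feasible.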

  19. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important task in UAV mission planning. Starting from the artificial potential field (APF) path planning method, the problem is recast as a constrained optimisation problem by introducing an additional control force. This constrained optimisation problem is translated into an unconstrained one with the help of slack variables. The functional optimisation method is then applied to reformulate it as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. The path planning problem is then solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of the method. Finally, the simulation results show that the improved method is more effective for path planning: in the planning space, the calculated path is shorter and smoother than that of the traditional APF method, and the improved method solves the dead point problem effectively.
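
    For reference, a minimal sketch of the baseline APF step the paper builds on: an attractive pull toward the goal plus a repulsive push from obstacles, with the waypoint following the negative potential gradient. The gains, influence radius, and obstacle layout are assumptions for illustration.

    ```python
    # Minimal sketch of the baseline artificial potential field (APF) step:
    # attractive force toward the goal plus repulsive force from obstacles.
    # Gains, ranges, and geometry are illustrative assumptions.
    import numpy as np

    def apf_force(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
        force = k_att * (goal - pos)  # attractive: proportional to goal offset
        for obs in obstacles:
            diff = pos - obs
            d = np.linalg.norm(diff)
            if 1e-9 < d < d0:  # repulsion acts only inside influence radius d0
                force += k_rep * (1.0 / d - 1.0 / d0) * diff / d ** 3
        return force

    pos = np.array([0.0, 0.0])
    goal = np.array([10.0, 10.0])
    obstacles = [np.array([5.0, 5.2])]

    path = [pos.copy()]
    for _ in range(500):
        pos = pos + 0.05 * apf_force(pos, goal, obstacles)  # gradient-descent step
        path.append(pos.copy())
        if np.linalg.norm(goal - pos) < 0.1:
            break
    print("steps:", len(path), "final position:", pos)
    ```

    The paper's contribution addresses precisely the weakness of this basic scheme: configurations where attraction and repulsion cancel (dead points) trap the gradient step, which the added control force and optimal control reformulation are designed to escape.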

  20. Spacecraft Mission Design for the Mitigation of the 2017 PDC Hypothetical Asteroid Threat

    NASA Technical Reports Server (NTRS)

    Barbee, Brent W.; Sarli, Bruno V.; Lyzhoft, Joshua; Chodas, Paul W.; Englander, Jacob A.

    2017-01-01

    This paper presents detailed mission design analysis results for the 2017 Planetary Defense Conference (PDC) Hypothetical Asteroid Impact Scenario, documented at https://cneos.jpl.nasa.gov/pd/cs/pdc17/. The mission design includes campaigns for both reconnaissance (flyby or rendezvous) of the asteroid, to characterize it and the nature of the threat it poses to Earth, and mitigation of the asteroid via kinetic impactor deflection, nuclear explosive device (NED) deflection, or NED disruption. Relevant scenario parameters, such as asteroid bulk density, asteroid diameter, momentum enhancement factor, spacecraft launch vehicle, and mitigation system type, are varied to assess the sensitivity of the design outcome. Different trajectory types are evaluated in the mission design process, from purely ballistic to those involving optimal midcourse maneuvers, planetary gravity assists, and/or low-thrust solar electric propulsion. The trajectory optimization is targeted around peak deflection points that were found through a novel linear numerical technique. The optimization process includes constraint parameters such as Earth departure date, launch declination, spacecraft/asteroid relative velocity and solar phase angle, spacecraft dry mass, minimum/maximum spacecraft distances from the Sun and Earth, and Earth/spacecraft communications line of sight. Results show that one of the best options for the 2017 PDC deflection is a solar-electric-propulsion rendezvous mission with a single spacecraft using an NED for the deflection.
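
    To ground the "momentum enhancement factor" mentioned above: for a kinetic impactor, a common first-order estimate of the velocity change imparted to the asteroid is delta_v ~ beta * m * v_rel / M, where beta is the momentum enhancement factor from impact ejecta, m the spacecraft mass, v_rel the relative impact speed, and M the asteroid mass. The numbers below are illustrative assumptions, not values from the PDC scenario.

    ```python
    # First-order kinetic-impactor deflection estimate:
    #   delta_v ~= beta * m_sc * v_rel / M_ast
    # All numbers below are illustrative assumptions, not 2017 PDC values.
    import math

    beta = 2.0            # momentum enhancement factor (ejecta contribution)
    m_sc = 1000.0         # spacecraft mass at impact [kg]
    v_rel = 10_000.0      # relative impact speed [m/s]
    diameter = 200.0      # asteroid diameter [m]
    density = 2000.0      # asteroid bulk density [kg/m^3]

    M_ast = density * (math.pi / 6.0) * diameter ** 3   # spherical mass [kg]
    delta_v = beta * m_sc * v_rel / M_ast               # imparted speed change

    print(f"asteroid mass ~ {M_ast:.3e} kg")
    print(f"delta-v ~ {delta_v * 1000:.4f} mm/s")
    ```

    Applied years before the predicted impact, a millimetre-per-second change accumulates into an along-track displacement large enough to move the impact point off Earth, which is why targeting the optimization around peak deflection points matters.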
