NASA Astrophysics Data System (ADS)
Rocha, Ana Maria A. C.; Costa, M. Fernanda P.; Fernandes, Edite M. G. P.
2016-12-01
This article presents a shifted hyperbolic penalty function and proposes an augmented Lagrangian-based algorithm for non-convex constrained global optimization problems. Convergence to an ε-global minimizer is proved. At each iteration k, the algorithm requires the ε(k)-global minimization of a bound constrained optimization subproblem, where ε(k) → ε. The subproblems are solved by a stochastic population-based metaheuristic that relies on the artificial fish swarm paradigm and a two-swarm strategy. To enhance the speed of convergence, the algorithm invokes the Nelder-Mead local search with a dynamically defined probability. Numerical experiments with benchmark functions and engineering design problems are presented. The results show that the proposed shifted hyperbolic augmented Lagrangian compares favorably with other deterministic and stochastic penalty-based methods.
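To make the outer loop of such a method concrete, the sketch below pairs a smooth hyperbolic-style penalty for one inequality constraint with an augmented-Lagrangian-like multiplier/smoothing update, on a toy two-variable problem. The penalty form, the toy objective and constraint, and the update rules are illustrative assumptions (the paper's shifted hyperbolic penalty and its ε(k)-global fish-swarm subproblem solver are not reproduced); a local bound-constrained solver stands in for the global metaheuristic.

```python
import numpy as np
from scipy.optimize import minimize

def hyperbolic_penalty(g, lam, tau):
    # Smooth penalty for an inequality constraint g(x) <= 0; as tau -> 0
    # it approaches an exact exterior penalty (Xavier-style hyperbolic form).
    return lam * g + np.sqrt((lam * g) ** 2 + tau ** 2)

def augmented(x, lam, tau):
    f = (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2   # toy objective (assumption)
    g = x[0] + x[1] - 2.0                       # toy constraint g(x) <= 0
    return f + hyperbolic_penalty(g, lam, tau)

x, lam, tau = np.zeros(2), 1.0, 1.0
for k in range(20):
    # Bound-constrained subproblem; the paper solves this step with a
    # global fish-swarm metaheuristic, a local solver stands in here.
    x = minimize(augmented, x, args=(lam, tau), bounds=[(-5, 5)] * 2).x
    if x[0] + x[1] - 2.0 > 1e-8:
        lam *= 2.0    # still infeasible: steepen the penalty
    tau *= 0.5        # sharpen the smoothing each outer iteration
print(x)              # approaches the constrained minimizer (0.5, 1.5)
```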
Order-Constrained Solutions in K-Means Clustering: Even Better than Being Globally Optimal
ERIC Educational Resources Information Center
Steinley, Douglas; Hubert, Lawrence
2008-01-01
This paper proposes an order-constrained K-means cluster analysis strategy, and implements that strategy through an auxiliary quadratic assignment optimization heuristic that identifies an initial object order. A subsequent dynamic programming recursion is applied to optimally subdivide the object set subject to the order constraint. We show that…
NASA Astrophysics Data System (ADS)
Chandra, Rishabh
Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDE) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm exists that guarantees a global minimum for PDECCO problems. A new mapping has been developed that transforms PDECCO problems with only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a globally optimal solution for the original PDECCO problem.
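The QUBO target of such a mapping is easy to state concretely. The sketch below builds a tiny, hypothetical QUBO matrix and minimizes it by exhaustive enumeration, which stands in for the adiabatic quantum optimizer; in the actual mapping, the linear PDE constraint would be folded into Q as quadratic penalty terms.

```python
import numpy as np
from itertools import product

# Hypothetical 3-variable QUBO: minimize x^T Q x over binary x. In the
# PDECCO mapping, the linear-PDE constraint would enter Q as quadratic
# penalty terms; here Q is just an illustrative coupling matrix.
Q = np.array([[-1.0, 2.0, 0.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0,  0.0, -1.0]])

best_x, best_e = None, np.inf
for bits in product([0, 1], repeat=3):   # exhaustive search stands in
    x = np.array(bits, dtype=float)      # for the adiabatic optimizer
    e = x @ Q @ x
    if e < best_e:
        best_x, best_e = x, e
print(best_x, best_e)                    # -> [1. 0. 1.], energy -2.0
```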
Acceleration techniques in the univariate Lipschitz global optimization
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.; De Franco, Angela
2016-10-01
Univariate box-constrained Lipschitz global optimization problems are considered in this contribution. Geometric and information statistical approaches are presented. Novel, powerful local tuning and local improvement techniques are described, together with the traditional ways to estimate the Lipschitz constant. The advantages of the presented local tuning and local improvement techniques are demonstrated using the operational characteristics approach for comparing deterministic global optimization algorithms on a class of 100 widely used test functions.
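As a concrete example of the geometric approach, here is a minimal Piyavskii-Shubert sketch: given a Lipschitz constant L, it repeatedly evaluates the function at the minimizer of the piecewise-linear (sawtooth) lower bound. The test function is a standard benchmark and the L value is an assumed safe overestimate; the paper's local tuning and local improvement accelerations are not included.

```python
import numpy as np

def piyavskii_shubert(f, a, b, L, n_iter=50):
    """Minimize f on [a, b] given a Lipschitz constant L by repeatedly
    sampling the minimizer of the sawtooth (cone) lower bound."""
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(n_iter):
        pts = sorted(zip(xs, ys))
        best_lb, best_x = np.inf, None
        for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
            # Apex of the two Lipschitz cones over [x1, x2].
            xm = 0.5 * (x1 + x2) + (y1 - y2) / (2.0 * L)
            lb = 0.5 * (y1 + y2) - 0.5 * L * (x2 - x1)
            if lb < best_lb:
                best_lb, best_x = lb, xm
        xs.append(best_x)
        ys.append(f(best_x))
    i = int(np.argmin(ys))
    return xs[i], ys[i]

# Standard test problem: global minimum near x ~ 5.146 with f ~ -1.90.
f = lambda x: np.sin(x) + np.sin(10.0 * x / 3.0)
print(piyavskii_shubert(f, 2.7, 7.5, L=4.5))
```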
Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand
2007-07-01
This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on analytical models describing the corresponding optimization problem, combined with an exact global optimization code, named IBBA and developed by the second author, to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.
Wind Farm Turbine Type and Placement Optimization
NASA Astrophysics Data System (ADS)
Graf, Peter; Dykes, Katherine; Scott, George; Fields, Jason; Lunacek, Monte; Quick, Julian; Rethore, Pierre-Elouan
2016-09-01
The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.
A feasible DY conjugate gradient method for linear equality constraints
NASA Astrophysics Data System (ADS)
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method extends the Dai-Yuan (DY) conjugate gradient method to the linear equality constrained setting. Owing to its low storage requirements, it can be applied to large problems. An attractive property of the method is that every generated direction is both feasible and a descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments showing the efficiency of the method are also given.
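A minimal sketch of the idea, assuming the usual null-space construction: projecting gradients onto null(A) keeps every iterate feasible for Ax = b, and the Dai-Yuan formula supplies the conjugacy parameter. The backtracking line search and the toy quadratic problem are simplifications of the paper's setup (which uses exact line search).

```python
import numpy as np

def feasible_dy_cg(f, grad, A, b, x0, iters=200, tol=1e-8):
    """Dai-Yuan CG restricted to {x : Ax = b}: gradients are projected
    onto null(A), so every iterate stays feasible if x0 is."""
    assert np.allclose(A @ x0, b)        # require a feasible start
    P = np.eye(A.shape[1]) - A.T @ np.linalg.solve(A @ A.T, A)
    x = x0
    g = P @ grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        t = 1.0                          # backtracking stands in for the
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):   # exact search
            t *= 0.5
        x_new = x + t * d
        g_new = P @ grad(x_new)
        beta = (g_new @ g_new) / (d @ (g_new - g) + 1e-16)  # DY formula
        x, g, d = x_new, g_new, -g_new + beta * d
    return x

# Example: minimize ||x||^2 subject to x1 + x2 + x3 = 1.
A, b = np.array([[1.0, 1.0, 1.0]]), np.array([1.0])
f, grad = lambda x: x @ x, lambda x: 2.0 * x
print(feasible_dy_cg(f, grad, A, b, np.array([1.0, 0.0, 0.0])))  # ~[1/3]*3
```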
Fat water decomposition using globally optimal surface estimation (GOOSE) algorithm.
Cui, Chen; Wu, Xiaodong; Newell, John D; Jacob, Mathews
2015-03-01
This article focuses on developing a novel noniterative fat water decomposition algorithm that is more robust to fat water swaps and related ambiguities. Field map estimation is reformulated as a constrained surface estimation problem to exploit the spatial smoothness of the field, thus minimizing the ambiguities in the recovery. Specifically, the differences in the field map-induced frequency shift between adjacent voxels are constrained to be in a finite range. The discretization of the above problem yields a graph optimization scheme, where each node of the graph is connected to only a few other nodes. Thanks to this low graph connectivity, the problem is solved efficiently using a noniterative graph cut algorithm. The global minimum of the constrained optimization problem is guaranteed. The performance of the algorithm is compared with that of state-of-the-art schemes. Quantitative comparisons are also made against reference data. The proposed algorithm is observed to yield more robust fat water estimates with fewer fat water swaps and better quantitative results than other state-of-the-art algorithms in a range of challenging applications. The proposed algorithm is capable of considerably reducing the swaps in challenging fat water decomposition problems. The experiments demonstrate the benefit of using explicit smoothness constraints in field map estimation and solving the problem using a globally convergent graph-cut optimization algorithm. © 2014 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints, as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.
Constrained minimization of smooth functions using a genetic algorithm
NASA Technical Reports Server (NTRS)
Moerder, Daniel D.; Pamadi, Bandu N.
1994-01-01
The use of genetic algorithms for minimization of differentiable functions that are subject to differentiable constraints is considered. A technique is demonstrated for converting the solution of the necessary conditions for a constrained minimum into an unconstrained function minimization. This technique is extended as a global constrained optimization algorithm. The theory is applied to calculating minimum-fuel ascent control settings for an energy state model of an aerospace plane.
Liu, Qingshan; Wang, Jun
2011-04-01
This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David
2016-01-01
Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.
Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen
2018-01-01
The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity, but its local exploitation capability is relatively poor, which affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome this deficiency. In BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion of simulated annealing. The redesigned F decreases adaptively as the number of iterations increases and introduces no extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms on thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrate that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635
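The Metropolis ingredient can be sketched generically. In the snippet below, a hypothetical amplitude factor is damped by the Metropolis acceptance probability exp(-Δ/T) under a decaying temperature, so step sizes shrink as iterations proceed; the paper's exact redesign of F within BSA is not reproduced, and the fitness changes and annealing schedule are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_amplitude(delta, T):
    """Hypothetical amplitude factor F: improving moves keep full
    amplitude; worsening moves are damped by exp(-delta / T), so F
    shrinks as the temperature decays with the iteration count."""
    return 1.0 if delta <= 0 else float(np.exp(-delta / T))

T = 1.0
for it in range(1, 101):
    delta = rng.random() - 0.3          # placeholder fitness change
    F = metropolis_amplitude(delta, T)
    # A BSA-style mutation would then use:
    # mutant = population + F * (historical_population - population)
    T *= 0.95                           # annealing schedule (assumed)
print(F)                                # amplitude late in the run
```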
A globally convergent LCL method for nonlinear optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedlander, M. P.; Saunders, M. A.; Mathematics and Computer Science
2005-01-01
For optimization problems with nonlinear constraints, linearly constrained Lagrangian (LCL) methods solve a sequence of subproblems of the form 'minimize an augmented Lagrangian function subject to linearized constraints.' Such methods converge rapidly near a solution but may not be reliable from arbitrary starting points. Nevertheless, the well-known software package MINOS has proved effective on many large problems. Its success motivates us to derive a related LCL algorithm that possesses three important properties: it is globally convergent, the subproblem constraints are always feasible, and the subproblems may be solved inexactly. The new algorithm has been implemented in Matlab, with an option to use either MINOS or SNOPT (Fortran codes) to solve the linearly constrained subproblems. Only first derivatives are required. We present numerical results on a subset of the COPS, HS, and CUTE test problems, which include many large examples. The results demonstrate the robustness and efficiency of the stabilized LCL procedure.
Pareto-Optimal Estimates of California Precipitation Change
NASA Astrophysics Data System (ADS)
Langenbrunner, Baird; Neelin, J. David
2017-12-01
In seeking constraints on global climate model projections under global warming, one commonly finds that different subsets of models perform well under different objective functions, and these trade-offs are difficult to weigh. Here a multiobjective approach is applied to a large set of subensembles generated from the Climate Model Intercomparison Project phase 5 ensemble. We use observations and reanalyses to constrain tropical Pacific sea surface temperatures, upper level zonal winds in the midlatitude Pacific, and California precipitation. An evolutionary algorithm identifies the set of Pareto-optimal subensembles across these three measures, and these subensembles are used to constrain end-of-century California wet season precipitation change. This methodology narrows the range of projections throughout California, increasing confidence in estimates of positive mean precipitation change. Finally, we show how this technique complements and generalizes emergent constraint approaches for restricting uncertainty in end-of-century projections within multimodel ensembles using multiple criteria for observational constraints.
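Identifying the Pareto-optimal set over three error measures reduces to non-dominated filtering, sketched below on placeholder data; the evolutionary algorithm the authors use to generate candidate subensembles is omitted, and the random error table is an illustrative stand-in for the three observational targets.

```python
import numpy as np

def pareto_front(errors):
    """Indices of non-dominated rows, all objectives to be minimized.
    Here each row would hold one subensemble's error against the three
    observational targets (SST, zonal wind, precipitation)."""
    keep = []
    for i, e in enumerate(errors):
        others = np.delete(errors, i, axis=0)
        dominated = np.any(np.all(others <= e, axis=1) &
                           np.any(others < e, axis=1))
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(1)
errs = rng.random((200, 3))          # placeholder subensemble errors
front = pareto_front(errs)
print(len(front), "Pareto-optimal subensembles")
```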
Global Optimization of Interplanetary Trajectories in the Presence of Realistic Mission Constraints
NASA Technical Reports Server (NTRS)
Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren
2015-01-01
Interplanetary missions are often subject to difficult constraints, such as the solar phase angle upon arrival at the destination, the velocity at arrival, and the altitudes of flybys. Preliminary design of such missions is often conducted by solving the unconstrained problem and then filtering away solutions which do not naturally satisfy the constraints. However, this can bias the search into non-advantageous regions of the solution space, so it can be better to conduct preliminary design with the full set of constraints imposed. In this work, two stochastic global search methods are developed which are well suited to the constrained global interplanetary trajectory optimization problem.
Castillo, Edward; Castillo, Richard; Fuentes, David; Guerrero, Thomas
2014-01-01
Purpose: Block matching is a well-known strategy for estimating corresponding voxel locations between a pair of images according to an image similarity metric. Though robust to issues such as image noise and large magnitude voxel displacements, the estimated point matches are not guaranteed to be spatially accurate. However, the underlying optimization problem solved by the block matching procedure is similar in structure to the class of optimization problems associated with B-spline based registration methods. By exploiting this relationship, the authors derive a numerical method for computing a global minimizer to a constrained B-spline registration problem that incorporates the robustness of block matching with the global smoothness properties inherent to B-spline parameterization. Methods: The method reformulates the traditional B-spline registration problem as a basis pursuit problem describing the minimal l1-perturbation to block match pairs required to produce a B-spline fitting error within a given tolerance. The sparsity pattern of the optimal perturbation then defines a voxel point cloud subset on which the B-spline fit is a global minimizer to a constrained variant of the B-spline registration problem. As opposed to traditional B-spline algorithms, the optimization step involving the actual image data is addressed by block matching. Results: The performance of the method is measured in terms of spatial accuracy using ten inhale/exhale thoracic CT image pairs (available for download at www.dir-lab.com) obtained from the COPDgene dataset and corresponding sets of expert-determined landmark point pairs. The results of the validation procedure demonstrate that the method can achieve a high spatial accuracy on a significantly complex image set. Conclusions: The proposed methodology is demonstrated to achieve a high spatial accuracy and is generalizable in that it can employ any displacement field parameterization described as a least squares fit to block match generated estimates. Thus, the framework allows for a wide range of image similarity block match metric and physical modeling combinations. PMID:24694135
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behboodi, Sahand; Chassin, David P.; Djilali, Ned
This study describes a new approach for solving the multi-area electricity resource allocation problem when considering both intermittent renewables and demand response. The method determines the hourly inter-area export/import set that maximizes the interconnection (global) surplus while satisfying transmission, generation, and load constraints. The optimal inter-area transfer set effectively makes the electricity price uniform over the interconnection apart from constrained areas, which overall increases the consumer surplus more than it decreases the producer surplus. The method is computationally efficient and suitable for use in simulations that depend on optimal scheduling models. The method is demonstrated on a system that represents the North America Western Interconnection for the planning year of 2024. Simulation results indicate that effective use of interties reduces the system operation cost substantially. Excluding demand response, the unconstrained and the constrained scheduling solutions decrease the global production cost (and equivalently increase the global economic surplus) by 12.30B and 10.67B per year, respectively, when compared to the standalone case in which each control area relies only on its local supply resources. This cost saving is equal to 25% and 22% of the annual production cost. Including 5% demand response, the constrained solution decreases the annual production cost by 10.70B while increasing the annual surplus by 9.32B in comparison to the standalone case.
A trust region-based approach to optimize triple response systems
NASA Astrophysics Data System (ADS)
Fan, Shu-Kai S.; Fan, Chihhao; Huang, Chia-Fen
2014-05-01
This article presents a new computing procedure for the global optimization of the triple response system (TRS) where the response functions are non-convex quadratics and the input factors satisfy a radial constrained region of interest. The TRS arising from response surface modelling can be approximated using a nonlinear mathematical program that considers one primary objective function and two secondary constraint functions. An optimization algorithm named the triple response surface algorithm (TRSALG) is proposed to determine the global optimum for the non-degenerate TRS. In TRSALG, the Lagrange multipliers of the secondary functions are determined using the Hooke-Jeeves search method and the Lagrange multiplier of the radial constraint is located using the trust region method within the global optimality space. The proposed algorithm is illustrated in terms of three examples appearing in the quality-control literature. The results of TRSALG compared to a gradient-based method are also presented.
OpenMDAO: Framework for Flexible Multidisciplinary Design, Analysis and Optimization Methods
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Gray, Justin S.
2012-01-01
The OpenMDAO project is underway at NASA to develop a framework which simplifies the implementation of state-of-the-art tools and methods for multidisciplinary design, analysis and optimization. Foremost, OpenMDAO has been designed to handle variable problem formulations, encourage reconfigurability, and promote model reuse. This work demonstrates the concept of iteration hierarchies in OpenMDAO to achieve a flexible environment for supporting advanced optimization methods which include adaptive sampling and surrogate modeling techniques. In this effort, two efficient global optimization methods were applied to solve a constrained single-objective and a constrained multi-objective version of a joint aircraft/engine sizing problem. The aircraft model, NASA's next-generation advanced single-aisle civil transport, is being studied as part of the Subsonic Fixed Wing project to help meet simultaneous program goals for reduced fuel burn, emissions, and noise. This analysis serves as a realistic test problem to demonstrate the flexibility and reconfigurability offered by OpenMDAO.
An effective parameter optimization with radiation balance constraints in the CAM5
NASA Astrophysics Data System (ADS)
Wu, L.; Zhang, T.; Qin, Y.; Lin, Y.; Xue, W.; Zhang, M.
2017-12-01
Uncertain parameters in the physical parameterizations of General Circulation Models (GCMs) greatly impact model performance. Traditional parameter tuning methods are mostly unconstrained optimizations, so the simulation results obtained with the optimal parameters may not satisfy conditions that the models must maintain. In this study, the radiation balance constraint is taken as an example and incorporated into the automatic parameter optimization procedure. The Lagrangian multiplier method is used to solve this constrained optimization problem. In our experiment, we run the CAM5 atmosphere model in a 5-year AMIP simulation with prescribed seasonal climatology of SST and sea ice. We take a synthesized metric using global means of radiation, precipitation, relative humidity, and temperature as the goal of optimization, and simultaneously impose the conditions that FLUT and FSNTOA must satisfy as constraints. The global averages of the output variables FLUT and FSNTOA are required to be approximately equal to 240 W m-2 in CAM5. Experiment results show that the synthesized metric is 13.6% better than the control run. At the same time, both FLUT and FSNTOA are close to the constrained conditions. The FLUT condition is well satisfied and is clearly better than the annual average FLUT obtained with the default parameters. FSNTOA deviates slightly from the observed value, but the relative error is less than 7.7‰.
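A constrained tuning step of this kind can be sketched with synthetic stand-ins: minimize a skill metric subject to an equality constraint holding a radiation-balance proxy at 240 W m-2. SciPy's SLSQP, which solves the underlying Lagrangian (KKT) system internally, plays the role of the Lagrange multiplier method here; the skill and FLUT functions are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in for tuning two model parameters p against a skill metric while
# holding a radiation-balance proxy at its target (here 240 W m^-2).
target = 240.0
skill = lambda p: (p[0] - 1.0) ** 2 + 0.5 * (p[1] + 2.0) ** 2  # synthetic
flut = lambda p: 238.0 + p[0] ** 2 + 0.1 * p[1]                # synthetic

res = minimize(skill, x0=[0.0, 0.0], method="SLSQP",
               constraints=[{"type": "eq",
                             "fun": lambda p: flut(p) - target}])
print(res.x, flut(res.x))   # tuned parameters with FLUT held at 240
```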
Global Optimization of N-Maneuver, High-Thrust Trajectories Using Direct Multiple Shooting
NASA Technical Reports Server (NTRS)
Vavrina, Matthew A.; Englander, Jacob A.; Ellison, Donald H.
2016-01-01
The performance of impulsive, gravity-assist trajectories often improves with the inclusion of one or more maneuvers between flybys. However, grid-based scans over the entire design space can become computationally intractable for even one deep-space maneuver, and few global search routines are capable of an arbitrary number of maneuvers. To address this difficulty a trajectory transcription allowing for any number of maneuvers is developed within a multi-objective, global optimization framework for constrained, multiple gravity-assist trajectories. The formulation exploits a robust shooting scheme and analytic derivatives for computational efficiency. The approach is applied to several complex, interplanetary problems, achieving notable performance without a user-supplied initial guess.
NASA Astrophysics Data System (ADS)
Li, Wei; Ciais, Philippe; Wang, Yilong; Yin, Yi; Peng, Shushi; Zhu, Zaichun; Bastos, Ana; Yue, Chao; Ballantyne, Ashley P.; Broquet, Grégoire; Canadell, Josep G.; Cescatti, Alessandro; Chen, Chi; Cooper, Leila; Friedlingstein, Pierre; Le Quéré, Corinne; Myneni, Ranga B.; Piao, Shilong
2018-01-01
To assess global carbon cycle variability, we decompose the net land carbon sink into the sum of gross primary productivity (GPP), terrestrial ecosystem respiration (TER), and fire emissions and apply a Bayesian framework to constrain these fluxes between 1980 and 2014. The constrained GPP and TER fluxes show an increasing trend of only half of the prior trend simulated by models. From the optimization, we infer that TER increased in parallel with GPP from 1980 to 1990, but then stalled during the cooler periods, in 1990-1994 coincident with the Pinatubo eruption, and during the recent warming hiatus period. After each of these TER stalling periods, TER is found to increase faster than GPP, explaining a relative reduction of the net land sink. These results shed light on decadal variations of GPP and TER and suggest that they exhibit different responses to temperature anomalies over the last 35 years.
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
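A minimal sketch of this formulation, assuming cvxpy with its bundled conic solver: Euler's equation tau = J wdot + w x (J w) is linear in the entries of J, so the fit is a least squares objective, and the bound J ⪰ εI is the LMI. The simulated rigid-body data and the specific bound are illustrative, not the paper's test data.

```python
import numpy as np
import cvxpy as cp

def skew(w):
    # Matrix such that skew(w) @ v == np.cross(w, v).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

rng = np.random.default_rng(0)
J_true = np.diag([2.0, 3.0, 4.0])

data = []
for _ in range(50):
    w, wdot = rng.normal(size=3), rng.normal(size=3)
    tau = J_true @ wdot + np.cross(w, J_true @ w) + 0.01 * rng.normal(size=3)
    data.append((w, wdot, tau))

# Least squares residuals of Euler's equation, linear in J; the PSD
# lower bound on J is the explicit LMI constraint.
J = cp.Variable((3, 3), symmetric=True)
cost = sum(cp.sum_squares(J @ wd + skew(w) @ (J @ w) - tau)
           for w, wd, tau in data)
prob = cp.Problem(cp.Minimize(cost), [J >> 1e-3 * np.eye(3)])
prob.solve()
print(np.round(J.value, 3))   # close to diag(2, 3, 4)
```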
NASA Technical Reports Server (NTRS)
Chen, Guanrong
1991-01-01
An optimal trajectory planning problem for a single-link, flexible joint manipulator is studied. A global feedback-linearization is first applied to formulate the nonlinear inequality-constrained optimization problem in a suitable way. Then, an exact and explicit structural formula for the optimal solution of the problem is derived and the solution is shown to be unique. It turns out that the optimal trajectory planning and control can be done off-line, so that the proposed method is applicable to both theoretical analysis and real time tele-robotics control engineering.
NASA Astrophysics Data System (ADS)
Zaouche, Abdelouahib; Dayoub, Iyad; Rouvaen, Jean Michel; Tatkeu, Charles
2008-12-01
We propose a globally convergent baud-spaced blind equalization method in this paper. This method is based on the application of both generalized pattern optimization and channel surfing reinitialization. The unimodal cost function used relies on higher-order statistics, and its optimization is achieved using a pattern search algorithm. Since convergence to the global minimum is not unconditionally guaranteed, we make use of a channel surfing reinitialization (CSR) strategy to find the right global minimum. The proposed algorithm is analyzed, and simulation results using a severe frequency selective propagation channel are given. Detailed comparisons with the constant modulus algorithm (CMA) are highlighted. The proposed algorithm's performance is evaluated in terms of intersymbol interference, normalized received signal constellations, and root mean square error vector magnitude. In the case of nonconstant modulus input signals, our algorithm significantly outperforms the CMA algorithm with full channel surfing reinitialization strategy. However, comparable performances are obtained for constant modulus signals.
Improving Robot Locomotion Through Learning Methods for Expensive Black-Box Systems
2013-11-01
Excerpt: the report surveys "gradient free" optimization techniques, including local approaches such as the Nelder-Mead simplex search and global approaches such as the Non-dominated Sorting Genetic Algorithm.
Evolutionary optimization methods for accelerator design
NASA Astrophysics Data System (ADS)
Poklonskiy, Alexey A.
Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe in detail GATool, the evolutionary algorithm and software package used in this work. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package of the COSY Infinity scientific computing package. We design a model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EAs with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EAs. Based on the obtained results, particularly the outstanding performance of REPA on a test problem that presents significant difficulty for the other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for constrained optimization with EAs. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization, which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential. The developed optimization scenarios and tools can be used to approach similar problems.
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
Hybrid Motion Planning with Multiple Destinations
NASA Technical Reports Server (NTRS)
Clouse, Jeffery
1998-01-01
In our initial proposal, we laid plans for developing a hybrid motion planning system that combines the concepts of visibility-based motion planning, artificial potential field based motion planning, evolutionary constrained optimization, and reinforcement learning. Our goal was, and still is, to produce a hybrid motion planning system that outperforms the best traditional motion planning systems on problems with dynamic environments. The proposed hybrid system will be in two parts: the first is a global motion planning system and the second is a local motion planning system. The global system will take global information about the environment, such as the placement of the obstacles and goals, and produce feasible paths through those obstacles. We envision a system that combines evolutionary-based optimization and visibility-based motion planning to achieve this end.
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to lead to relaxed step acceptance conditions and improved convergence performance. It can also avoid the choice of the upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
On the utilization of engineering knowledge in design optimization
NASA Technical Reports Server (NTRS)
Papalambros, P.
1984-01-01
Some current research work conducted at the University of Michigan is described to illustrate efforts for incorporating knowledge in optimization in a nontraditional way. The incorporation of available knowledge in a logic structure is examined in two circumstances. The first examines the possibility of introducing global design information in a local active set strategy implemented during the iterations of projection-type algorithms for nonlinearly constrained problems. The technique used combines global and local monotonicity analysis of the objective and constraint functions. The second examines a knowledge-based program which aids the user to create configurations that are most desirable from the manufacturing assembly viewpoint. The data bank used is the classification scheme suggested by Boothroyd. The important aspect of this program is that it is an aid for synthesis intended for use in the design concept phase, in a way similar to the so-called idea-triggers in creativity-enhancement techniques like brainstorming. The idea generation, however, is not random but is driven by the goal of achieving the best acceptable configuration.
NASA Astrophysics Data System (ADS)
Alimohammadi, Shahrouz; Cavaglieri, Daniele; Beyhaghi, Pooriya; Bewley, Thomas R.
2016-11-01
This work applies a recently developed derivative-free optimization algorithm to derive a new mixed implicit-explicit (IMEX) time integration scheme for Computational Fluid Dynamics (CFD) simulations. This algorithm allows imposing a specified order of accuracy for the time integration and other important stability properties in the form of nonlinear constraints within the optimization problem. In this procedure, the coefficients of the IMEX scheme must satisfy a set of constraints simultaneously. Therefore, the optimization process, at each iteration, estimates the location of the optimal coefficients using a set of global surrogates, for both the objective and constraint functions, as well as a model of the uncertainty function of these surrogates based on the concept of Delaunay triangulation. This procedure has been proven to converge to the global minimum of the constrained optimization problem provided the constraints and objective functions are twice differentiable. As a result, a new third-order, low-storage IMEX Runge-Kutta time integration scheme is obtained with remarkably fast convergence. Numerical tests are then performed leveraging turbulent channel flow simulations to validate the theoretical order of accuracy and stability properties of the new scheme.
Implementation and verification of global optimization benchmark problems
NASA Astrophysics Data System (ADS)
Posypkin, Mikhail; Usov, Alexander
2017-12-01
The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the process of generating the value of a function and its gradient at a given point, as well as the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that the literature sources contain mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
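The single-description idea can be miniaturized in a few lines: one function definition is evaluated both on points and on boxes via operator overloading. The toy Interval class below supports only addition and multiplication and is an illustrative assumption; the actual C++ library also produces gradients and interval gradients.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        ps = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(ps), max(ps))

# One description evaluated two ways: on points and on boxes, mirroring
# the dual point/interval evaluation the C++ library automates.
def f(x):                           # works for both floats and Intervals
    return x * x + x * x * x       # f(x) = x^2 + x^3

print(f(2.0))                      # point value: 12.0
print(f(Interval(1.0, 2.0)))       # enclosure of f over [1, 2]: [2, 12]
```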
NASA Astrophysics Data System (ADS)
Jones, S. I.; Uritsky, V. M.; Davila, J. M.
2017-12-01
In the absence of reliable coronal magnetic field measurements, solar physicists have worked for several decades to develop techniques for extrapolating photospheric magnetic field measurements into the solar corona and/or heliosphere. The products of these efforts tend to be very sensitive to variation in the photospheric measurements, such that the uncertainty in the photospheric measurements introduces significant uncertainty into the coronal and heliospheric models needed to predict such things as solar wind speed, IMF polarity at Earth, and CME propagation. Ultimately, the reason for the sensitivity of the model to the boundary conditions is that the model is trying to extract a great deal of information from a relatively small amount of data. In recent years we have published work on a new method for using morphological information gleaned from coronagraph images to constrain models of the global coronal magnetic field. In our approach, we treat the photospheric measurements as approximations and use an optimization algorithm to iteratively find a global coronal model that best matches both the photospheric measurements and quasi-linear features observed in polarization brightness coronagraph images. Here we summarize the approach we have developed and present recent progress in optimizing PFSS models based on GONG magnetograms and MLSO K-Cor images.
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
Medial-based deformable models in nonconvex shape-spaces for medical image segmentation.
McIntosh, Chris; Hamarneh, Ghassan
2012-01-01
We explore the application of genetic algorithms (GA) to deformable models through the proposition of a novel method for medical image segmentation that combines GA with nonconvex, localized, medial-based shape statistics. We replace the more typical gradient descent optimizer used in deformable models with GA, and the convex, implicit, global shape statistics with nonconvex, explicit, localized ones. Specifically, we propose GA to reduce typical deformable model weaknesses pertaining to model initialization, pose estimation and local minima, through the simultaneous evolution of a large number of models. Furthermore, we constrain the evolution, and thus reduce the size of the search-space, by using statistically-based deformable models whose deformations are intuitive (stretch, bulge, bend) and are driven in terms of localized principal modes of variation, instead of modes of variation across the entire shape that often fail to capture localized shape changes. Although GA are not guaranteed to achieve the global optima, our method compares favorably to the prevalent optimization techniques, convex/nonconvex gradient-based optimizers and to globally optimal graph-theoretic combinatorial optimization techniques, when applied to the task of corpus callosum segmentation in 50 mid-sagittal brain magnetic resonance images.
Distance Metric Learning via Iterated Support Vector Machines.
Zuo, Wangmeng; Wang, Faqiang; Zhang, David; Lin, Liang; Huang, Yuchi; Meng, Deyu; Zhang, Lei
2017-07-11
Distance metric learning aims to learn from the given training data a valid distance metric, with which the similarity between data samples can be more effectively evaluated for classification. Metric learning is often formulated as a convex or nonconvex optimization problem, while most existing methods are based on customized optimizers and become inefficient for large scale problems. In this paper, we formulate metric learning as a kernel classification problem with the positive semi-definite constraint, and solve it by iterated training of support vector machines (SVMs). The new formulation is easy to implement and efficient in training with the off-the-shelf SVM solvers. Two novel metric learning models, namely Positive-semidefinite Constrained Metric Learning (PCML) and Nonnegative-coefficient Constrained Metric Learning (NCML), are developed. Both PCML and NCML can guarantee the global optimality of their solutions. Experiments are conducted on general classification, face verification and person re-identification to evaluate our methods. Compared with the state-of-the-art approaches, our methods can achieve comparable classification accuracy and are efficient in training.
A frozen Gaussian approximation-based multi-level particle swarm optimization for seismic inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jinglai, E-mail: jinglaili@sjtu.edu.cn; Lin, Guang, E-mail: lin491@purdue.edu; Computational Sciences and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA 99352
2015-09-01
In this paper, we propose a frozen Gaussian approximation (FGA)-based multi-level particle swarm optimization (MLPSO) method for seismic inversion of high-frequency wave data. The method addresses two challenges: First, the optimization problem is highly non-convex, which makes it hard for gradient-based methods to reach global minima. This is tackled by MLPSO, which can escape from undesired local minima. Second, the high-frequency character of seismic waves requires a large number of grid points in direct computational methods, and thus places an extremely high computational demand on the simulation of each sample in MLPSO. We overcome this difficulty in three steps: First, we use FGA to compute high-frequency wave propagation based on asymptotic analysis on the phase plane. Then we design a constrained full waveform inversion problem to prevent the optimization search from entering regions of velocity where FGA is not accurate. Last, we solve the constrained optimization problem by MLPSO, which employs FGA solvers with different fidelity. The performance of the proposed method is demonstrated by a two-dimensional full-waveform inversion example of the smoothed Marmousi model.
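A sketch of the multi-level idea, under stated assumptions: a plain global-best PSO is run first on a cheap low-fidelity objective, and its swarm state warm-starts a second run on the full objective. A quadratic surrogate of the Rastrigin function stands in for the FGA solver hierarchy, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim, n=30, iters=60, x=None, v=None):
    """Plain global-best PSO; returns the swarm state so that a
    higher-fidelity level can warm-start from it (multi-level idea)."""
    if x is None:
        x, v = rng.uniform(-5, 5, (n, dim)), np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    for _ in range(iters):
        g = pbest[np.argmin(pval)]
        v = (0.7 * v + 1.5 * rng.random((n, dim)) * (pbest - x)
                     + 1.5 * rng.random((n, dim)) * (g - x))
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
    return x, v, pbest[np.argmin(pval)]

# Two fidelity levels of the "same" objective: a cheap smooth surrogate
# first, then the full multimodal function, warm-started.
coarse = lambda z: z @ z                                   # cheap surrogate
full = lambda z: 10 * len(z) + z @ z - 10 * np.cos(2 * np.pi * z).sum()
x, v, _ = pso(coarse, dim=2)               # level 1: low fidelity
_, _, best = pso(full, dim=2, x=x, v=v)    # level 2: high fidelity
print(best)                                # near the global minimum at 0
```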
NASA Astrophysics Data System (ADS)
Peylin, P. P.; Bacour, C.; MacBean, N.; Maignan, F.; Bastrikov, V.; Chevallier, F.
2017-12-01
Predicting the fate of carbon stocks and their sensitivity to climate change and land use/management strongly relies on our ability to accurately model net and gross carbon fluxes. However, simulated carbon and water fluxes remain subject to large uncertainties, partly because of unknown or poorly calibrated parameters. Over the past ten years, the carbon cycle data assimilation system at the Laboratoire des Sciences du Climat et de l'Environnement has investigated the benefit of assimilating multiple carbon cycle data streams into the ORCHIDEE LSM, the land surface component of the Institut Pierre Simon Laplace Earth System Model. These datasets have included FLUXNET eddy covariance data (net CO2 flux and latent heat flux) to constrain hourly to seasonal time-scale carbon cycle processes, remote sensing of the vegetation activity (MODIS NDVI) to constrain the leaf phenology, biomass data to constrain "slow" (yearly to decadal) processes of carbon allocation, and atmospheric CO2 concentrations to provide overall large scale constraints on the land carbon sink. Furthermore, we have investigated technical issues related to multiple data stream assimilation and choice of optimization algorithm. This has provided a wide-ranging perspective on the challenges we face in constraining model parameters and thus better quantifying, and reducing, model uncertainty in projections of the future global carbon sink. We review our past studies in terms of the impact of the optimization on key characteristics of the carbon cycle, e.g. the partition of the northern latitudes vs tropical land carbon sink, and compare to the classic atmospheric flux inversion approach. Throughout, we discuss our work in context of the abovementioned challenges, and propose solutions for the community going forward, including the potential of new observations such as atmospheric COS concentrations and satellite-derived Solar Induced Fluorescence to constrain the gross carbon fluxes of the ORCHIDEE model.
Do Vascular Networks Branch Optimally or Randomly across Spatial Scales?
Newberry, Mitchell G.; Savage, Van M.
2016-01-01
Modern models that derive allometric relationships between metabolic rate and body mass are based on the architectural design of the cardiovascular system and presume sibling vessels are symmetric in terms of radius, length, flow rate, and pressure. Here, we study the cardiovascular structure of the human head and torso and of a mouse lung based on three-dimensional images processed via our software Angicart. In contrast to modern allometric theories, we find systematic patterns of asymmetry in vascular branching, potentially explaining previously documented mismatches between predictions (power-law or concave curvature) and observed empirical data (convex curvature) for the allometric scaling of metabolic rate. To examine why these systematic asymmetries in vascular branching might arise, we construct a mathematical framework to derive predictions based on local, junction-level optimality principles that have been proposed to be favored in the course of natural selection and development. The two most commonly used principles are material-cost optimizations (construction materials or blood volume) and optimization of efficient flow via minimization of power loss. We show that material-cost optimization solutions match with distributions for asymmetric branching across the whole network but do not match well for individual junctions. Consequently, we also explore random branching that is constrained at scales that range from local (junction-level) to global (whole network). We find that material-cost optimizations are the strongest predictor of vascular branching in the human head and torso, whereas locally or intermediately constrained random branching is comparable to material-cost optimizations for the mouse lung. These differences could be attributable to developmentally-programmed local branching for larger vessels and constrained random branching for smaller vessels. PMID:27902691
An efficient and practical approach to obtain a better optimum solution for structural optimization
NASA Astrophysics Data System (ADS)
Chen, Ting-Yu; Huang, Jyun-Hao
2013-08-01
For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, it is used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP sometimes are even better than the solution found using a hybrid global search method with approximate structural analyses.
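The space-reduction step can be sketched with a simple rank-based stand-in for the data mining activities: sample the design space, keep the best points, bound a box around them, and hand that box to an SQP solver (SciPy's SLSQP). The multimodal test function and the 2.5% cutoff are invented for illustration; the paper's classification, association, and clustering machinery is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
f = lambda x: (x @ x) / 20.0 + np.sin(3.0 * x).sum()  # many local minima

# Step 1 (data-mining stand-in): sample the design space, keep the best
# points, and bound a reduced search region around them.
X = rng.uniform(-10, 10, (2000, 2))
F = np.apply_along_axis(f, 1, X)
good = X[np.argsort(F)[:50]]                  # best 2.5% of samples
lo, hi = good.min(axis=0), good.max(axis=0)   # reduced box

# Step 2: gradient-based search (SLSQP, an SQP variant) inside the box.
res = minimize(f, good[0], method="SLSQP", bounds=list(zip(lo, hi)))
print(res.x, res.fun)
```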
Globally optimal superconducting magnets part I: minimum stored energy (MSE) current density map.
Tieng, Quang M; Vegh, Viktor; Brereton, Ian M
2009-01-01
An optimal current density map is crucial in magnet design to provide the initial values within search spaces in an optimization process for determining the final coil arrangement of the magnet. A strategy for obtaining globally optimal current density maps for the purpose of designing magnets with coaxial cylindrical coils in which the stored energy is minimized within a constrained domain is outlined. The current density maps obtained utilising the proposed method suggests that peak current densities occur around the perimeter of the magnet domain, where the adjacent peaks have alternating current directions for the most compact designs. As the dimensions of the domain are increased, the current density maps yield traditional magnet designs of positive current alone. These unique current density maps are obtained by minimizing the stored magnetic energy cost function and therefore suggest magnet coil designs of minimal system energy. Current density maps are provided for a number of different domain arrangements to illustrate the flexibility of the method and the quality of the achievable designs.
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
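One illustrative reading of this pixel-level formulation is a constrained least squares in which the fused image stays close to the color image while its luminance is pinned to the monochrome image; that constraint admits a closed-form per-pixel projection, which is what makes the computation embarrassingly parallel. The luminance weights and images below are assumptions for the sketch, not the paper's exact model.

```python
# A minimal sketch of pixel-level fusion as constrained least squares:
# keep the fused RGB as close as possible to the (upsampled) color image
# while forcing its luminance to match the high-resolution monochrome image.
import numpy as np

def fuse(color, mono, w=np.array([0.299, 0.587, 0.114])):
    """color: (H, W, 3) upsampled color image; mono: (H, W) monochrome image.
    Solves, per pixel, min ||y - c||^2 subject to w . y = m (a projection)."""
    residual = mono - color @ w                   # constraint violation per pixel
    return color + residual[..., None] * w / (w @ w)

# Usage with random stand-in images of matching size.
rng = np.random.default_rng(1)
fused = fuse(rng.random((64, 64, 3)), rng.random((64, 64)))
```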
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and the sequential quadratic programming techniques (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolving operations such as recombination, mutation and selection, the GA creates successive generations of solutions that will evolve and take on the positive characteristics of their parents and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of the genetic algorithm into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert the constrained optimization problem into an unconstrained optimization problem by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with their own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
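The penalty conversion that the study analyzes can be sketched as follows: the GA never sees the constraints directly, only a fitness in which violations are penalized. The quadratic static penalty, toy objective, constraints, and micro-GA operators here are illustrative choices, not COMETBOARDS' implementation or any specific penalty function from the literature.

```python
# A minimal sketch of converting a constrained problem to an unconstrained
# one via a static quadratic penalty, then searching with a tiny GA.
import numpy as np

def objective(x):            # hypothetical design objective
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

def constraints(x):          # g(x) <= 0 feasibility; hypothetical constraints
    return np.array([x[0] + x[1] - 3.0, -x[0], -x[1]])

def penalized_fitness(x, r=1e3):
    violation = np.maximum(constraints(x), 0.0)
    return objective(x) + r * np.sum(violation**2)

rng = np.random.default_rng(0)
pop = rng.uniform(0.0, 3.0, size=(50, 2))
for _ in range(200):                                  # generations
    fit = np.array([penalized_fitness(x) for x in pop])
    parents = pop[np.argsort(fit)[:25]]               # truncation selection
    mates = parents[rng.permutation(25)]
    children = 0.5 * (parents + mates)                # arithmetic recombination
    children += rng.normal(0.0, 0.05, children.shape) # mutation
    pop = np.vstack([parents, children])
print(pop[np.argmin([penalized_fitness(x) for x in pop])])
```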
A Model-Data Fusion Approach for Constraining Modeled GPP at Global Scales Using GOME2 SIF Data
NASA Astrophysics Data System (ADS)
MacBean, N.; Maignan, F.; Lewis, P.; Guanter, L.; Koehler, P.; Bacour, C.; Peylin, P.; Gomez-Dans, J.; Disney, M.; Chevallier, F.
2015-12-01
Predicting the fate of ecosystem carbon (C) stocks and their sensitivity to climate change relies heavily on our ability to accurately model the gross carbon fluxes, i.e. photosynthesis and respiration. However, there are large differences in the Gross Primary Productivity (GPP) simulated by different land surface models (LSMs), not only in terms of mean value, but also in terms of phase and amplitude when compared to independent data-based estimates. This strongly limits our ability to provide accurate predictions of carbon-climate feedbacks. One possible source of this uncertainty is inaccurate parameter values resulting from incomplete model calibration. Solar Induced Fluorescence (SIF) has been shown to have a linear relationship with GPP at the typical spatio-temporal scales used in LSMs (Guanter et al., 2011). New satellite-derived SIF datasets have the potential to constrain LSM parameters related to C uptake at global scales due to their coverage. Here we use SIF data derived from the GOME2 instrument (Köhler et al., 2014) to optimize parameters related to photosynthesis and leaf phenology of the ORCHIDEE LSM, as well as the linear relationship between SIF and GPP. We use a multi-site approach that combines many model grid cells covering a wide spatial distribution within the same optimization (e.g. Kuppel et al., 2014). The parameters are constrained per Plant Functional Type, as the linear relationship described above varies depending on vegetation structural properties. The relative skill of the optimization is compared to a case where only satellite-derived vegetation index data are used to constrain the model, and to a case where both data streams are used. We evaluate the results using an independent data-driven estimate derived from FLUXNET data (Jung et al., 2011) and with a new atmospheric tracer, carbonyl sulphide (OCS), following the approach of Launois et al. (ACPD, in review). We show that the optimization reduces the strong positive bias of the ORCHIDEE model and increases the correlation compared to independent estimates. Differences in spatial patterns and gradients between simulated GPP and observed SIF remain largely unchanged, however, suggesting that the underlying representation of vegetation type and/or structure and functioning in the model requires further investigation.
A Global Analysis of Light and Charge Yields in Liquid Xenon
Lenardo, Brian; Kazkaz, Kareem; Manalaysay, Aaron; ...
2015-11-04
Here, we present an updated model of light and charge yields from nuclear recoils in liquid xenon with a simultaneously constrained parameter set. A global analysis is performed using measurements of electron and photon yields compiled from all available historical data, as well as measurements of the ratio of the two. These data sweep over a wide range of recoil energies (keV) and external applied electric fields (V/cm). The model is constrained by constructing global cost functions and using a simulated annealing algorithm and a Markov Chain Monte Carlo approach to optimize and find confidence intervals on all free parameters in the model. This analysis contrasts with previous work in that we do not unnecessarily exclude datasets nor impose artificially conservative assumptions, do not use spline functions, and reduce the number of parameters used in NEST v0.98. Here, we report our results and the calculated best-fit charge and light yields. These quantities are crucial to understanding the response of liquid xenon detectors in the energy regime important for rare event searches such as the direct detection of dark matter particles.
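A toy version of this two-stage global fit, with a made-up one-component yield model standing in for the full parameter set, might look like the following: the annealer locates the global minimum of the cost and a simple Metropolis chain then maps confidence intervals around it.

```python
# A minimal sketch of "global cost function + simulated annealing + MCMC".
# The power-law "model" and data are toy stand-ins for the real yield model.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(0)
energies = np.linspace(1.0, 50.0, 30)                 # toy recoil energies (keV)
data = 10.0 * energies**0.7 + rng.normal(0, 3, 30)    # toy yield measurements

def cost(theta):
    a, b = theta
    return np.sum((data - a * energies**b)**2 / 9.0)  # chi^2 with sigma = 3

best = dual_annealing(cost, bounds=[(0.1, 50.0), (0.1, 2.0)])

# Metropolis sampling around the optimum for confidence intervals.
theta = best.x.copy(); chain = []
for _ in range(5000):
    prop = theta + rng.normal(0, [0.2, 0.01])
    if np.log(rng.random()) < 0.5 * (cost(theta) - cost(prop)):
        theta = prop
    chain.append(theta)
chain = np.array(chain)
print("best fit:", best.x, "68% intervals:", np.percentile(chain, [16, 84], axis=0))
```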
Classification-Assisted Memetic Algorithms for Equality-Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Handoko, Stephanus Daniel; Kwoh, Chee Keong; Ong, Yew Soon
Regression has successfully been incorporated into memetic algorithms (MAs) to build surrogate models for the objective or constraint landscape of optimization problems. This helps to alleviate the need for expensive fitness function evaluations by performing local refinements on the approximated landscape. Classification can alternatively be used to assist an MA in the choice of individuals that would undergo refinement. Support-vector-assisted MAs were recently proposed to reduce the number of function evaluations in inequality-constrained optimization problems by distinguishing regions of feasible solutions from those of infeasible ones, based on some past solutions, such that search efforts can be focused on a few potential regions only. For problems having equality constraints, however, the feasible space would obviously be extremely small. It is thus extremely difficult for the global search component of the MA to produce feasible solutions, and the classification of feasible and infeasible space would become ineffective. In this paper, a novel strategy to overcome such a limitation is proposed, particularly for problems having one and only one equality constraint. The raw constraint value of an individual, instead of its feasibility class, is utilized in this work.
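A minimal sketch of the raw-constraint-value idea: fit a surrogate to h(x) over past evaluations and send to local refinement only those candidates predicted to lie near the h(x) = 0 manifold. The constraint, surrogate choice, and threshold below are assumptions for illustration, not the paper's implementation.

```python
# Surrogate on the raw equality-constraint value, used to pre-screen
# candidates for (expensive) local refinement.
import numpy as np
from sklearn.svm import SVR

def h(x):                       # single equality constraint h(x) = 0
    return x[0]**2 + x[1]**2 - 1.0

rng = np.random.default_rng(0)
archive = rng.uniform(-2, 2, size=(200, 2))          # past evaluated solutions
surrogate = SVR().fit(archive, [h(x) for x in archive])

candidates = rng.uniform(-2, 2, size=(50, 2))        # new offspring
near_feasible = candidates[np.abs(surrogate.predict(candidates)) < 0.1]
# Only the `near_feasible` individuals would undergo local refinement.
print(len(near_feasible), "of", len(candidates), "candidates selected")
```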
NASA Astrophysics Data System (ADS)
Yang, E. G.; Kort, E. A.; Ware, J.; Ye, X.; Lauvaux, T.; Wu, D.; Lin, J. C.; Oda, T.
2017-12-01
Anthropogenic carbon dioxide (CO2) emissions are greatly perturbing the Earth's carbon cycle. Rising emissions from the developing world are increasing uncertainties in global CO2 emissions. With the rapid urbanization of developing regions, methods of constraining urban CO2 emissions in these areas can address critical uncertainties in the global carbon budget. In this study, we work toward constraining urban CO2 emissions in the Middle East by comparing top-down observations and bottom-up simulations of total column CO2 (XCO2) in four cities (Riyadh, Cairo, Baghdad, and Doha), both separately and in aggregate. This comparison involves quantifying the relationship for all available data in the period of September 2014 until March 2016 between observations of XCO2 from the Orbiting Carbon Observatory-2 (OCO-2) satellite and simulations of XCO2 using the Stochastic Time-Inverted Lagrangian Transport (STILT) model coupled with Global Data Assimilation System (GDAS) reanalysis products and multiple CO2 emissions inventories. We discuss the extent to which our observation/model framework can distinguish between the different emissions representations and determine optimized emissions estimates for this domain. We also highlight the implications of our comparisons on the fidelity of the bottom-up inventories used, and how these implications may inform the use of OCO-2 data for urban regions around the world.
Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying
2017-08-01
Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
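One way to picture the constrained ℓ1 step is as a linear program: split the weights into positive and negative parts and ask for the smallest ℓ1 norm consistent with the estimated optimality condition up to a tolerance. The returns, tolerance, and the exact condition below are illustrative, not the authors' estimator.

```python
# A minimal sketch: find sparse weights w with Sigma w ≈ mu up to lam,
# minimizing ||w||_1, via the standard LP split w = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 20                                       # number of assets
R = rng.normal(0.001, 0.02, size=(250, n))   # hypothetical daily returns
mu, Sigma = R.mean(axis=0), np.cov(R.T)
lam = 5e-4                                   # relaxation tolerance

# minimize sum(u + v)  s.t.  -lam <= Sigma (u - v) - mu <= lam
c = np.ones(2 * n)
A = np.block([[Sigma, -Sigma], [-Sigma, Sigma]])
b = np.concatenate([lam + mu, lam - mu])
res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * n), method="highs")
w = res.x[:n] - res.x[n:]
print("nonzero weights:", np.sum(np.abs(w) > 1e-8))
```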
NASA Astrophysics Data System (ADS)
Stavrakou, T.; Muller, J.; de Smedt, I.; van Roozendael, M.; Vrekoussis, M.; Wittrock, F.; Richter, A.; Burrows, J.
2008-12-01
Formaldehyde (HCHO) and glyoxal (CHOCHO) are carbonyls formed in the oxidation of volatile organic compounds (VOCs) emitted by plants, anthropogenic activities, and biomass burning. They are also directly emitted by fires. Although this primary production represents only a small part of the global source for both species, it can be locally important during intense fire events. Simultaneous observations of formaldehyde and glyoxal retrieved from the SCIAMACHY satellite instrument in 2005, provided by BIRA/IASB and the Bremen group, respectively, are compared with the corresponding columns simulated with the IMAGESv2 global CTM. The chemical mechanism has been optimized with respect to HCHO and CHOCHO production from pyrogenically emitted NMVOCs, based on the Master Chemical Mechanism (MCM) and on an explicit profile for biomass burning emissions. Gas-to-particle conversion of glyoxal in clouds and in aqueous aerosols is considered in the model. In this study we provide top-down estimates for fire emissions of HCHO and CHOCHO precursors by performing a two-compound inversion of emissions using the adjoint of the IMAGES model. The pyrogenic fluxes are optimized at the model resolution. The two-compound inversion offers the advantage that the information gained from measurements of one species constrains the sources of both compounds, due to the existence of common precursors. In a first inversion, only the burnt biomass amounts are optimized. In subsequent simulations, the emission factors for key individual NMVOC compounds are also varied.
NASA Astrophysics Data System (ADS)
Cai, X.; Zhang, X.; Zhu, T.
2014-12-01
Global food security is constrained by local and regional land and water availability, as well as by other agricultural input limitations and inappropriate national and global regulations. In a theoretical context, this study assumes that optimal water and land use in local food production, maximizing food security and social welfare at the global level, can be driven by global trade. It follows the context of "virtual resources trade", i.e., utilizing international trade of agricultural commodities to reduce dependency on local resources and to achieve land and water savings in the world. An optimization model based on the partial equilibrium of agriculture is developed for the analysis, including local commodity production, land and water resources constraints, demand by country, and the global food market. Through the model, the marginal values (MVs) of social welfare for water and land at the level of so-called food production units (i.e., sub-basins with similar agricultural production conditions) are derived and mapped for the world. In this presentation, we will introduce the model structure, explain the meaning of MVs at the local level and their distribution around the world, and discuss the policy implications for global communities to enhance global food security. In particular, we will examine the economic values of water and land under different world targets of food security (e.g., the number of malnourished people or children in a future year). In addition, we will also discuss opportunities to improve such global modeling exercises with better data.
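The marginal values the study maps are, in LP terms, the shadow prices of the binding land and water constraints; a toy crop-allocation program makes the idea concrete. All crops, coefficients, and resource totals below are invented for illustration.

```python
# A minimal sketch of recovering marginal values (shadow prices) of land
# and water from a toy crop-allocation linear program.
import numpy as np
from scipy.optimize import linprog

profit = np.array([300.0, 500.0, 400.0])      # net profit per ha of each crop
land_use = np.array([1.0, 1.0, 1.0])          # ha of land per ha planted
water_use = np.array([4.0, 9.0, 6.0])         # ML of water per ha planted

# maximize profit . x  s.t.  land . x <= 100,  water . x <= 600,  x >= 0
res = linprog(-profit,
              A_ub=np.vstack([land_use, water_use]),
              b_ub=np.array([100.0, 600.0]),
              bounds=[(0, None)] * 3, method="highs")

# With the HiGHS backend, dual values of the resource constraints are exposed
# as marginals; their negatives are the shadow prices of land and water.
print("allocation (ha):", res.x)
print("marginal value of land, water:", -res.ineqlin.marginals)
```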
Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space
2015-05-01
ARL-TR-7294 • MAY 2015 • US Army Research Laboratory: Smooth Constrained Heuristic Optimization of a Combinatorial Chemical Space, by Berend Christopher...
NASA Technical Reports Server (NTRS)
Lewis, Robert Michael; Torczon, Virginia
1998-01-01
We give a pattern search adaptation of an augmented Lagrangian method due to Conn, Gould, and Toint. The algorithm proceeds by successive bound constrained minimization of an augmented Lagrangian. In the pattern search adaptation we solve this subproblem approximately using a bound constrained pattern search method. The stopping criterion proposed by Conn, Gould, and Toint for the solution of this subproblem requires explicit knowledge of derivatives. Such information is presumed absent in pattern search methods; however, we show how we can replace this with a stopping criterion based on the pattern size in a way that preserves the convergence properties of the original algorithm. In this way we proceed by successive, inexact, bound constrained minimization without knowing exactly how inexact the minimization is. So far as we know, this is the first provably convergent direct search method for general nonlinear programming.
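The overall structure can be sketched compactly: an outer loop updates the multiplier while the bound-constrained subproblem is solved inexactly by a derivative-free compass search whose only stopping test is the pattern size. Tolerances, the test problem, and the simple first-order multiplier update below are illustrative; this is not the authors' algorithm verbatim.

```python
# A minimal sketch of an augmented Lagrangian outer loop with a compass
# (pattern) search inner solver stopped purely on the pattern size.
import numpy as np

def f(x):        # objective
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def c(x):        # single equality constraint c(x) = 0
    return x[0] + x[1] - 2.0

def aug_lagrangian(x, lam, mu):
    return f(x) + lam * c(x) + 0.5 * mu * c(x)**2

def pattern_search(phi, x, lo, hi, step=0.5, tol=1e-3):
    """Compass search on phi within bounds; stop when the pattern size <= tol."""
    while step > tol:
        improved = False
        for d in np.vstack([np.eye(len(x)), -np.eye(len(x))]):
            trial = np.clip(x + step * d, lo, hi)
            if phi(trial) < phi(x):
                x, improved = trial, True
        if not improved:
            step *= 0.5            # shrink the pattern only on failure
    return x

x, lam, mu = np.zeros(2), 0.0, 10.0
lo, hi = -5.0 * np.ones(2), 5.0 * np.ones(2)
for _ in range(20):                            # outer AL iterations
    x = pattern_search(lambda z: aug_lagrangian(z, lam, mu), x, lo, hi)
    lam += mu * c(x)                           # first-order multiplier update
print(x, c(x))
```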
Automated parameter tuning applied to sea ice in a global climate model
NASA Astrophysics Data System (ADS)
Roach, Lettie A.; Tett, Simon F. B.; Mineter, Michael J.; Yamazaki, Kuniko; Rae, Cameron D.
2018-01-01
This study investigates the hypothesis that a significant portion of spread in climate model projections of sea ice is due to poorly-constrained model parameters. New automated methods for optimization are applied to historical sea ice in a global coupled climate model (HadCM3) in order to calculate the combination of parameters required to reduce the difference between simulation and observations to within the range of model noise. The optimized parameters result in a simulated sea-ice time series which is more consistent with Arctic observations throughout the satellite record (1980-present), particularly in the September minimum, than the standard configuration of HadCM3. Divergence from observed Antarctic trends and mean regional sea ice distribution reflects broader structural uncertainty in the climate model. We also find that the optimized parameters do not cause adverse effects on the model climatology. This simple approach provides evidence for the contribution of parameter uncertainty to spread in sea ice extent trends and could be customized to investigate uncertainties in other climate variables.
NASA Astrophysics Data System (ADS)
Pasquier, B.; Holzer, M.; Frants, M.
2016-02-01
We construct a data-constrained mechanistic inverse model of the ocean's coupled phosphorus and iron cycles. The nutrient cycling is embedded in a data-assimilated steady global circulation. Biological nutrient uptake is parameterized in terms of nutrient, light, and temperature limitations on growth for two classes of phytoplankton that are not transported explicitly. A matrix formulation of the discretized nutrient tracer equations allows for efficient numerical solutions, which facilitates the objective optimization of the key biogeochemical parameters. The optimization minimizes the misfit between the modelled and observed nutrient fields of the current climate. We systematically assess the nonlinear response of the biological pump to changes in the aeolian iron supply for a variety of scenarios. Specifically, Green-function techniques are employed to quantify in detail the pathways and timescales with which those perturbations are propagated throughout the world oceans, determining the global teleconnections that mediate the response of the global ocean ecosystem. We confirm previous findings from idealized studies that increased iron fertilization decreases biological production in the subtropical gyres and we quantify the counterintuitive and asymmetric response of global productivity to increases and decreases in the aeolian iron supply.
NASA Astrophysics Data System (ADS)
Zhang, Shupeng; Yi, Xue; Zheng, Xiaogu; Chen, Zhuoqi; Dan, Bo; Zhang, Xuanze
2014-11-01
In this paper, a global carbon assimilation system (GCAS) is developed for optimizing the global land surface carbon flux at 1° resolution using multiple ecosystem models. In GCAS, three ecosystem models, the Boreal Ecosystem Productivity Simulator, the Carnegie-Ames-Stanford Approach, and the Community Atmosphere Biosphere Land Exchange, produce the prior fluxes, and an atmospheric transport model, the Model for OZone And Related chemical Tracers, is used to calculate atmospheric CO2 concentrations resulting from these prior fluxes. A local ensemble Kalman filter is developed to assimilate atmospheric CO2 data observed at 92 stations to optimize the carbon flux for six land regions, and the Bayesian model averaging method is implemented in GCAS to calculate the weighted average of the optimized fluxes based on the individual ecosystem models. The weights for the models are found according to the closeness of their forecasted CO2 concentrations to observations. Results of this study show that the model weights vary in time and space, allowing for an optimal utilization of the different strengths of the different ecosystem models. It is also demonstrated that spatial localization is an effective technique to avoid spurious optimization results for regions that are not well constrained by the atmospheric data. Based on the multimodel optimized flux from GCAS, we find that the average global terrestrial carbon sink over the 2002-2008 period is 2.97 ± 1.1 PgC yr-1, and the sinks are 0.88 ± 0.52, 0.27 ± 0.33, 0.67 ± 0.39, 0.90 ± 0.68, 0.21 ± 0.31, and 0.04 ± 0.08 PgC yr-1 for North America, South America, Africa, Eurasia, Tropical Asia, and Australia, respectively. This multimodel GCAS can be used to improve global carbon cycle estimation.
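The Bayesian-model-averaging step admits a very small sketch: each model's weight is proportional to the Gaussian likelihood of its forecasted concentrations against the observations. The numbers and the error scale below are made up for illustration.

```python
# A minimal sketch of likelihood-based model-averaging weights, with the
# three ecosystem models named in the abstract and toy CO2 values.
import numpy as np

obs = np.array([395.1, 396.4, 397.0])             # observed CO2 (ppm)
forecasts = {                                      # per-model forecasted CO2
    "BEPS":  np.array([394.8, 396.9, 397.4]),
    "CASA":  np.array([396.2, 397.5, 398.1]),
    "CABLE": np.array([395.0, 396.1, 396.8]),
}
sigma = 0.5                                        # assumed observation error (ppm)

loglik = {m: -0.5 * np.sum(((fc - obs) / sigma)**2) for m, fc in forecasts.items()}
shift = max(loglik.values())                       # for numerical stability
weights = {m: np.exp(v - shift) for m, v in loglik.items()}
total = sum(weights.values())
weights = {m: w / total for m, w in weights.items()}
print(weights)   # these weight the per-model optimized fluxes in the average
```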
Global optimization framework for solar building design
NASA Astrophysics Data System (ADS)
Silva, N.; Alves, N.; Pascoal-Faria, P.
2017-07-01
The generative modeling paradigm is a shift from static models to flexible models. It describes a modeling process using functions, methods and operators. The result is an algorithmic description of the construction process. Each evaluation of such an algorithm creates a model instance, which depends on its input parameters (width, height, volume, roof angle, orientation, location). These values are normally chosen according to aesthetic aspects and style. In this study, the model's parameters are instead generated automatically according to an objective function. A generative model can be optimized with respect to its parameters; in this way, the best solution for a constrained problem is determined. Besides the establishment of an overall framework design, this work consists of the identification of different building shapes and their main parameters, the creation of an algorithmic description for these main shapes, and the formulation of the objective function with respect to a building's energy consumption (solar energy, heating and insulation). Additionally, the conception of an optimization pipeline, combining an energy calculation tool with a geometric scripting engine, is presented. The methods developed lead to automated and optimized 3D shape generation for the projected building, based on the desired conditions and according to specific constraints. The proposed approach will help in the construction of real buildings that consume less energy, contributing to a more sustainable world.
Optimized ECC Implementation for Secure Communication between Heterogeneous IoT Devices.
Marin, Leandro; Pawlowski, Marcin Piotr; Jara, Antonio
2015-08-28
The Internet of Things is integrating information systems, places, users and billions of constrained devices into one global network. This network requires secure and private means of communications. The building blocks of the Internet of Things are devices manufactured by various producers and designed to fulfil different needs. There would be no common hardware platform that could be applied in every scenario. In such a heterogeneous environment, there is a strong need for the optimization of interoperable security. We present optimized Elliptic Curve Cryptography algorithms that address the security issues in heterogeneous IoT networks. We have combined cryptographic algorithms for the NXP/Jennic 5148- and MSP430-based IoT devices and used them to create a novel key negotiation protocol.
Precision constraints on the top-quark effective field theory at future lepton colliders
NASA Astrophysics Data System (ADS)
Durieux, G.
We examine the constraints that future lepton colliders would impose on the effective field theory describing modifications of top-quark interactions beyond the standard model, through measurements of the $e^+e^-\\to bW^+\\:\\bar bW^-$ process. Statistically optimal observables are exploited to constrain simultaneously and efficiently all relevant operators. Their constraining power is sufficient for quadratic effective-field-theory contributions to have negligible impact on limits which are therefore basis independent. This is contrasted with the measurements of cross sections and forward-backward asymmetries. An overall measure of constraints strength, the global determinant parameter, is used to determine which run parameters impose the strongest restriction on the multidimensional effective-field-theory parameter space.
NASA Astrophysics Data System (ADS)
Lesmana, E.; Chaerani, D.; Khansa, H. N.
2018-03-01
Energy-Saving Generation Dispatch (ESGD) is a scheme made by the Chinese Government in an attempt to minimize the CO2 emissions produced by power plants. The scheme is motivated by global warming, which is primarily caused by excess CO2 in the Earth's atmosphere; while the need for electricity is absolute, the power plants producing it are mostly thermal power plants, which emit large amounts of CO2. Several approaches to fulfilling this scheme have been made; one of them comes through Minimum Cost Flow, which results in a Quadratically Constrained Quadratic Programming (QCQP) form. In this paper, the ESGD problem with Minimum Cost Flow in QCQP form is solved using Lagrange's multiplier method.
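A toy version of the Lagrange-multiplier treatment, assuming the quadratic emission cap is active at the optimum, is to write the stationarity conditions of the Lagrangian and solve the resulting nonlinear system; all matrices and the cap value below are illustrative, not the paper's dispatch data.

```python
# A minimal sketch of solving a small QCQP with Lagrange multipliers:
# minimize x' P x + q' x subject to x' A x = r (cap assumed active).
import numpy as np
from scipy.optimize import fsolve

P = np.diag([2.0, 1.0])          # generation cost curvature
q = np.array([1.0, 0.5])         # linear cost coefficients
A = np.diag([1.0, 3.0])          # quadratic emission coefficients
r = 4.0                          # emission cap

def kkt(z):
    x, lam = z[:2], z[2]
    stationarity = 2 * P @ x + q + lam * (2 * A @ x)
    feasibility = x @ A @ x - r
    return np.append(stationarity, feasibility)

# Starting guess near the active-cap branch; a real solve would try several.
sol = fsolve(kkt, np.array([-2.0, 0.1, -1.7]))
print("dispatch:", sol[:2], "multiplier:", sol[2])
```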
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Belcher, AH; Wiersma, R
Purpose: In radiation therapy optimization, constraints can be either hard constraints, which must be satisfied, or soft constraints, which are included but do not need to be satisfied exactly. Currently, voxel dose constraints are viewed as soft constraints, included as part of the objective function, and the problem is approximated as unconstrained. However, in some treatment planning cases the constraints should be specified as hard constraints and handled by constrained optimization. The goal of this work is to present a computationally efficient graph-form alternating direction method of multipliers (ADMM) algorithm for constrained quadratic treatment planning optimization and compare it with several commonly used algorithms/toolboxes. Method: ADMM can be viewed as an attempt to blend the benefits of dual decomposition and augmented Lagrangian methods for constrained optimization. Various proximal operators were first constructed as applicable to quadratic IMRT constrained optimization, and the problem was formulated in a graph form of ADMM. A pre-iteration operation for the projection of a point onto a graph was also proposed to further accelerate the computation. Result: The graph-form ADMM algorithm was tested on the Common Optimization for Radiation Therapy (CORT) dataset, including TG119, prostate, liver, and head & neck cases. Both unconstrained and constrained optimization problems were formulated for comparison purposes. All optimizations were solved by LBFGS, IPOPT, the Matlab built-in toolbox, CVX (implementing SeDuMi) and Mosek solvers. For unconstrained optimization, it was found that LBFGS performs the best, and it was 3-5 times faster than graph-form ADMM. However, for constrained optimization, graph-form ADMM was 8-100 times faster than the other solvers. Conclusion: A graph-form ADMM can be applied to constrained quadratic IMRT optimization. It is more computationally efficient than several other commercial and noncommercial optimizers, and it also uses significantly less computer memory.
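A stripped-down graph-form ADMM for a toy constrained quadratic plan might look like the following: the objective prox acts on the dose variable, a box projection enforces hard beamlet bounds, and a pre-factorized projection onto the graph y = Ax ties them together. The matrices are random stand-ins, and this is a generic graph-form recipe rather than the authors' solver.

```python
# A minimal sketch of graph-form ADMM: minimize 0.5||y - d||^2 over the
# graph y = A x with hard bounds 0 <= x <= xmax on the beamlet weights.
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 20
A = rng.random((m, n))                 # toy dose-influence matrix
d = rng.random(m) * A.sum(axis=1).mean()
xmax, rho = 1.0, 1.0

# Pre-factorize the graph-projection system (I + A^T A) x = c + A^T e.
K = np.linalg.cholesky(np.eye(n) + A.T @ A)
def project_graph(c, e):
    x = np.linalg.solve(K.T, np.linalg.solve(K, c + A.T @ e))
    return x, A @ x

x = np.zeros(n); y = A @ x
xt = np.zeros(n); yt = np.zeros(m)     # scaled dual variables
for _ in range(200):
    x_half = np.clip(x - xt, 0.0, xmax)          # prox of the box indicator
    y_half = (d + rho * (y - yt)) / (1.0 + rho)  # prox of 0.5||y - d||^2
    x, y = project_graph(x_half + xt, y_half + yt)
    xt += x_half - x
    yt += y_half - y
print("objective:", 0.5 * np.linalg.norm(A @ x - d)**2)
```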
Zhang, Bo; Duan, Haibin
2017-01-01
Three-dimensional path planning of an uninhabited combat aerial vehicle (UCAV) is a complicated optimization problem, which mainly focuses on optimizing the flight route considering different types of constraints in a complex combat environment. A novel predator-prey pigeon-inspired optimization (PPPIO) is proposed to solve the UCAV three-dimensional path planning problem in a dynamic environment. Pigeon-inspired optimization (PIO) is a new bio-inspired optimization algorithm. In this algorithm, a map-and-compass operator model and a landmark operator model are used to search for the best result of a function. The predator-prey concept is adopted to improve global best properties and enhance the convergence speed. The characteristics of the optimal path are presented in the form of a cost function. The comparative simulation results show that our proposed PPPIO algorithm is more efficient than the basic PIO, particle swarm optimization (PSO), and differential evolution (DE) in solving UCAV three-dimensional path planning problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Kuo -Ling; Mehrotra, Sanjay
We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).
Global Optimization of Emergency Evacuation Assignments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, Lee; Yuan, Fang; Chin, Shih-Miao
2006-01-01
Conventional emergency evacuation plans often assign evacuees to fixed routes or destinations based mainly on geographic proximity. Such approaches can be inefficient if the roads are congested, blocked, or otherwise dangerous because of the emergency. By not constraining evacuees to prespecified destinations, a one-destination evacuation approach provides flexibility in the optimization process. We present a framework for the simultaneous optimization of evacuation-traffic distribution and assignment. Based on the one-destination evacuation concept, we can obtain the optimal destination and route assignment by solving a one-destination traffic-assignment problem on a modified network representation. In a county-wide, large-scale evacuation case study, the one-destination model yields substantial improvement over the conventional approach, with the overall evacuation time reduced by more than 60 percent. More importantly, emergency planners can easily implement this framework by instructing evacuees to go to destinations that the one-destination optimization process selects.
The design of multirate digital control systems
NASA Technical Reports Server (NTRS)
Berg, M. C.
1986-01-01
The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are then compared in the context of the two design problems.
Zhao, Meng; Ding, Baocang
2015-03-01
This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into each local optimization. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication between each subsystem and its neighbors is relatively light: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Using Coronal Hole Maps to Constrain MHD Models
NASA Astrophysics Data System (ADS)
Caplan, Ronald M.; Downs, Cooper; Linker, Jon A.; Mikic, Zoran
2017-08-01
In this presentation, we explore the use of coronal hole maps (CHMs) as a constraint for thermodynamic MHD models of the solar corona. Using our EUV2CHM software suite (predsci.com/chd), we construct CHMs from SDO/AIA 193Å and STEREO-A/EUVI 195Å images for multiple Carrington rotations leading up to the August 21st, 2017 total solar eclipse. We then construct synoptic CHMs from synthetic EUV images generated from global thermodynamic MHD simulations of the corona for each rotation. Comparisons of apparent coronal hole boundaries and estimates of the net open flux are used to benchmark and constrain our MHD model leading up to the eclipse. Specifically, the comparisons are used to find optimal parameterizations of our wave turbulence dissipation (WTD) coronal heating model.
Balancing building and maintenance costs in growing transport networks
NASA Astrophysics Data System (ADS)
Bottinelli, Arianna; Louf, Rémi; Gherardi, Marco
2017-09-01
The costs associated to the length of links impose unavoidable constraints to the growth of natural and artificial transport networks. When future network developments cannot be predicted, the costs of building and maintaining connections cannot be minimized simultaneously, requiring competing optimization mechanisms. Here, we study a one-parameter nonequilibrium model driven by an optimization functional, defined as the convex combination of building cost and maintenance cost. By varying the coefficient of the combination, the model interpolates between global and local length minimization, i.e., between minimum spanning trees and a local version known as dynamical minimum spanning trees. We show that cost balance within this ensemble of dynamical networks is a sufficient ingredient for the emergence of tradeoffs between the network's total length and transport efficiency, and of optimal strategies of construction. At the transition between two qualitatively different regimes, the dynamics builds up power-law distributed waiting times between global rearrangements, indicating a point of nonoptimality. Finally, we use our model as a framework to analyze empirical ant trail networks, showing its relevance as a null model for cost-constrained network formation.
Plessow, Philipp N
2018-02-13
This work explores how constrained linear combinations of bond lengths can be used to optimize transition states in periodic structures. Scanning of constrained coordinates is a standard approach for molecular codes with localized basis functions, where a full set of internal coordinates is used for optimization. Common plane-wave codes for periodic boundary conditions rely almost exclusively on Cartesian coordinates. An implementation of constrained linear combinations of bond lengths with Cartesian coordinates is described. Along with an optimization of the value of the constrained coordinate toward the transition state, this allows transition-state optimization within a single calculation. The approach is suitable for transition states that can be well described in terms of broken and formed bonds. In particular, the implementation is shown to be effective and efficient in the optimization of transition states in zeolite-catalyzed reactions, which have high relevance in industrial processes.
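The scanning flavor of this idea is easy to sketch on a two-variable toy surface: fix the linear combination of the breaking and forming bond lengths at a series of values, relax the rest, and take the energy maximum along the profile as a crude transition-state estimate. The surface and coordinates below are made up; a real application would drive a periodic DFT code instead.

```python
# A minimal sketch of a constrained scan over a linear combination of bond
# lengths, with a toy two-variable energy surface standing in for DFT.
import numpy as np
from scipy.optimize import minimize

def energy(r):
    r1, r2 = r                                   # breaking / forming bond lengths
    return (r1 - 1.0)**2 * (r1 - 2.0)**2 + 0.5 * (r2 - 1.2)**2 + 0.3 * r1 * r2

profile = []
for c in np.linspace(-1.0, 1.0, 21):             # scanned value of r1 - r2
    con = {"type": "eq", "fun": lambda r, c=c: (r[0] - r[1]) - c}
    res = minimize(energy, x0=np.array([1.5, 1.2]), constraints=[con],
                   method="SLSQP")
    profile.append((c, res.fun))

c_ts, e_ts = max(profile, key=lambda t: t[1])    # crude transition-state estimate
print("TS near r1 - r2 =", c_ts, "with energy", e_ts)
```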
Minimal complexity control law synthesis
NASA Technical Reports Server (NTRS)
Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.
1989-01-01
A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.
Eilbeigi, Elnaz; Setarehdan, Seyed Kamaledin
2018-05-26
Brain-computer interfaces (BCIs) are a promising tool in neurorehabilitation. The intention to perform a motor action can be detected from brain signals and used to control robotic devices. Most previous studies have focused on the starting of movements from a resting state, while in daily life activities motions occur continuously, and the neural activities correlated with the evolving movements are yet to be investigated. First, we investigate the existence of neural correlates of the intention to replace an object on the table during a holding phase. Next, we present a new method to extract the movement-related cortical potentials (MRCP) from a single-trial EEG. A novel method called Global optimal constrained ICA (GocICA), implemented using Particle Swarm Optimization (PSO) and Charged System Search (CSS) techniques, is proposed to overcome the limitations of cICA. GocICA is then utilized for decoding the intention to grasp and lift and the intention to replace movements, and the results are compared. It was found that GocICA significantly improves the intention detection performance. The best results in offline detection were obtained with CSS-cICA for both kinds of intentions. Furthermore, pseudo-online decoding showed that GocICA was able to predict both intentions before the onset of the related movements with the highest probability. Decoding of the next movement intention during the current movement is thus possible, which can be used to create more natural neuroprostheses. The results demonstrate that GocICA is a promising new algorithm for single-trial MRCP detection which can also be used for detecting other types of ERPs, such as P300. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping
2012-04-01
Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for the combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in the Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint, constrained by aerosol optical depth (AOD) that is derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance with aerosol single scattering properties consistent with GEOS-Chem. The adjoint inverse modeling yields an overall 51% decrease in prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD and MODIS Deep Blue AOD over the dust source region and with surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data for constraining dust emissions at various temporal and spatial scales, and hence improving the quantification of dust effects on climate, air quality, and human health.
Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah
2014-01-01
This paper presents annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. Optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using the epsilon constrained and nondominated sorting genetic algorithms, two representatives of the state of the art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369
Estimation of Atmospheric Methane Surface Fluxes Using a Global 3-D Chemical Transport Model
NASA Astrophysics Data System (ADS)
Chen, Y.; Prinn, R.
2003-12-01
Accurate determination of atmospheric methane surface fluxes is an important and challenging problem in global biogeochemical cycles. We use inverse modeling to estimate annual, seasonal, and interannual CH4 fluxes between 1996 and 2001. The fluxes include 7 time-varying seasonal (3 wetland, rice, and 3 biomass burning) and 3 steady aseasonal (animals/waste, coal, and gas) global processes. To simulate atmospheric methane, we use the 3-D chemical transport model MATCH driven by NCEP reanalyzed observed winds at a resolution of T42 (~2.8° × 2.8°) in the horizontal and 28 levels (1000-3 mb) in the vertical. By combining existing datasets of individual processes, we construct a reference emissions field that represents our prior guess of the total CH4 surface flux. For the methane sink, we use a prescribed, annually-repeating OH field scaled to fit methyl chloroform observations. MATCH is used to produce both the reference run from the reference emissions, and the time-dependent sensitivities that relate individual emission processes to observations. The observational data include CH4 time-series from ~15 high-frequency (in-situ) and ~50 low-frequency (flask) observing sites. Most of the high-frequency data, at a time resolution of 40-60 minutes, have not previously been used in global scale inversions. In the inversion, the high-frequency data generally have greater weight than the weekly flask data because they better define the observational monthly means. The Kalman filter is used as the optimal inversion technique to solve for emissions between 1996-2001. At each step in the inversion, new monthly observations are utilized and new emissions estimates are produced. The optimized emissions represent deviations from the reference emissions that lead to a better fit to the observations. The seasonal processes are optimized for each month, and contain the methane seasonality and interannual variability. The aseasonal processes, which are less variable, are solved as constant emissions over the entire time period. The Kalman filter also produces emission uncertainties which quantify the ability of the observing network to constrain different processes. The sensitivity of the inversion to different observing sites and model sampling strategies is also tested. In general, the inversion reduces coal and gas emissions, and increases rice and biomass burning emissions relative to the reference case. Increases in both tropical and northern wetland emissions are found to have dominated the strong atmospheric methane increase in 1998. Northern wetlands are the best constrained processes, while tropical regions are poorly constrained and will require additional observations in the future for significant uncertainty reduction. The results of this study also suggest that interannually varying transport like NCEP and high-frequency measurements should be used when solving for methane emissions at monthly time resolution. Better estimates of global OH fluctuations are also necessary to fully describe the interannual behavior of methane observations.
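A single monthly update of such a Kalman-filter inversion can be sketched as follows, with the state holding emission-process scalings and the sensitivity matrix playing the role of the MATCH-derived sensitivities; all dimensions and error settings are illustrative.

```python
# A minimal sketch of one Kalman-filter update for an emissions inversion:
# state = emission-process scalings, observations = monthly-mean CH4.
import numpy as np

rng = np.random.default_rng(0)
n_proc, n_site = 10, 65
x = np.ones(n_proc)                      # prior emission scalings (reference run)
P = np.diag(np.full(n_proc, 0.5**2))     # prior emission uncertainty
H = rng.random((n_site, n_proc))         # model sensitivities d(conc)/d(emis)
R = np.diag(np.full(n_site, 5.0**2))     # observational error (ppb^2)
y = H @ (x + rng.normal(0, 0.2, n_proc)) # synthetic monthly-mean observations

# One monthly update: new observations pull the emissions off the reference.
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
x = x + K @ (y - H @ x)
P = (np.eye(n_proc) - K @ H) @ P         # reduced (posterior) uncertainty
print("optimized scalings:", x.round(2))
```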
Material Distribution Optimization for the Shell Aircraft Composite Structure
NASA Astrophysics Data System (ADS)
Shevtsov, S.; Zhilyaev, I.; Oganesyan, P.; Axenov, V.
2016-09-01
One of the main goals in aircraft structure design is decreasing weight while increasing stiffness. Composite structures have recently become popular in aircraft because of their mechanical properties and wide range of optimization possibilities. Weight distribution and lay-up are keys to creating lightweight, stiff structures. In this paper we discuss the optimization of a specific structure that undergoes non-uniform air pressure at different flight conditions, reducing the level of noise caused by airflow-induced vibrations at a constrained weight of the part. The initial model was created with the CAD tool Siemens NX; finite element analysis and post-processing were performed with COMSOL Multiphysics® and MATLAB®. Numerical solutions of the Reynolds-averaged Navier-Stokes (RANS) equations supplemented by the k-ω turbulence model provide the spatial distributions of air pressure applied to the shell surface. In the formulation of the optimization problem, the global strain energy calculated within the optimized shell was taken as the objective. Wall thickness was changed using a parametric approach by introducing an auxiliary sphere with varied radius and center coordinates, which were the design variables. To avoid local stress concentration, the wall thickness increment was defined as a smooth function on the shell surface dependent on the auxiliary sphere's position and size. Our study consists of multiple steps: CAD/CAE transformation of the model, determining wind pressure for different flow angles, optimizing the wall thickness distribution for specific flow angles, and designing a lay-up for the optimal material distribution. The studied structure was improved in terms of maximum and average strain energy at the constrained expense of weight growth. The developed methods and tools can be applied to a wide range of shell-like structures made of multilayered quasi-isotropic laminates.
Bayesian Optimization Under Mixed Constraints with A Slack-Variable Augmented Lagrangian
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picheny, Victor; Gramacy, Robert B.; Wild, Stefan M.
An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers. Recently, surrogate-based Bayesian optimization (BO) sub-solvers have been successfully deployed in the AL framework for a more global search in the presence of inequality constraints; however, a drawback was that expected improvement (EI) evaluations relied on Monte Carlo. Here we introduce an alternative slack variable AL, and show that in this formulation the EI may be evaluated with library routines. The slack variables furthermore facilitate equality as well as inequality constraints, and mixtures thereof. We show our new slack "ALBO" compares favorably to the original. Its superiority over conventional alternatives is reinforced on several mixed constraint examples.
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. Good initialization schemes can improve the convergence speed and influence whether or not a global minimum is found and whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
Luo, Biao; Liu, Derong; Wu, Huai-Ning
2018-06-01
Reinforcement learning has proved to be a powerful tool for solving optimal control problems over the past few years. However, the data-based constrained optimal control problem for nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed using value iteration-based Q-learning (VIQL) with a critic-only structure. Most existing constrained control methods require the use of a certain performance index and suit only linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, a system transformation is first introduced with a general performance index. Then, the constrained optimal control problem is converted into an unconstrained optimal control problem. By introducing the action-state value function, i.e., the Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
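A tabular stand-in for the critic-only VIQL scheme, with a lookup table replacing the neural-network critic and a toy offline dataset, might look like the following; dynamics, costs, and the discount factor are assumptions for illustration.

```python
# A minimal sketch of value-iteration Q-learning from recorded data on a
# toy discrete system (a table stands in for the neural-network critic).
import numpy as np

n_states, n_actions, gamma = 6, 3, 0.9
rng = np.random.default_rng(0)

# Recorded transitions (s, a, cost, s') gathered offline from the system.
S = rng.integers(0, n_states, 5000)
A = rng.integers(0, n_actions, 5000)
S2 = (S + A) % n_states                     # toy deterministic dynamics
C = (S2 != 0).astype(float)                 # unit cost unless at the target 0

Q = np.zeros((n_states, n_actions))         # easy-to-realize initialization
for _ in range(100):                        # value-iteration sweeps on the data
    targets = C + gamma * Q[S2].min(axis=1)
    Qnew = np.zeros_like(Q)
    counts = np.zeros_like(Q)
    np.add.at(Qnew, (S, A), targets)
    np.add.at(counts, (S, A), 1.0)
    Q = Qnew / np.maximum(counts, 1.0)      # empirical Bellman update
policy = Q.argmin(axis=1)                   # greedy (cost-minimizing) policy
print(policy)
```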
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
Using palaeoclimate data to improve models of the Antarctic Ice Sheet
NASA Astrophysics Data System (ADS)
Phipps, Steven; King, Matt; Roberts, Jason; White, Duanne
2017-04-01
Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modelling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how palaeoclimate data can improve our ability to predict the future evolution of the AIS. A 50-member perturbed-physics ensemble is generated, spanning uncertainty in the parameterisations of three key physical processes within the model: (i) the stress balance within the ice sheet, (ii) basal sliding and (iii) calving of ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (~21,000 years ago) to present. Palaeoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.
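A minimal Latin hypercube sampler illustrating the ensemble design step; the parameter count and bounds below are placeholders, not PISM's.

```python
import numpy as np

def latin_hypercube(n_samples, bounds, seed=0):
    """One sample per equal-probability stratum in each dimension, randomly paired."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # stratified uniforms in [0,1): shuffled stratum indices plus jitter
    u = (rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
         + rng.random((n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# e.g. a 50-member design over three hypothetical parameter ranges
ensemble = latin_hypercube(50, [(0.1, 1.0), (1e-3, 1e-1), (0.5, 2.0)])
```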
Using paleoclimate data to improve models of the Antarctic Ice Sheet
NASA Astrophysics Data System (ADS)
King, M. A.; Phipps, S. J.; Roberts, J. L.; White, D.
2016-12-01
Ice sheet models are the most descriptive tools available to simulate the future evolution of the Antarctic Ice Sheet (AIS), including its contribution towards changes in global sea level. However, our knowledge of the dynamics of the coupled ice-ocean-lithosphere system is inevitably limited, in part due to a lack of observations. Furthermore, to build computationally efficient models that can be run for multiple millennia, it is necessary to use simplified descriptions of ice dynamics. Ice sheet modeling is therefore an inherently uncertain exercise. The past evolution of the AIS provides an opportunity to constrain the description of physical processes within ice sheet models and, therefore, to constrain our understanding of the role of the AIS in driving changes in global sea level. We use the Parallel Ice Sheet Model (PISM) to demonstrate how paleoclimate data can improve our ability to predict the future evolution of the AIS. A large, perturbed-physics ensemble is generated, spanning uncertainty in the parameterizations of four key physical processes within ice sheet models: ice rheology, ice shelf calving, and the stress balances within ice sheets and ice shelves. A Latin hypercube approach is used to optimally sample the range of uncertainty in parameter values. This perturbed-physics ensemble is used to simulate the evolution of the AIS from the Last Glacial Maximum (~21,000 years ago) to present. Paleoclimate records are then used to determine which ensemble members are the most realistic. This allows us to use data on past climates to directly constrain our understanding of the past contribution of the AIS towards changes in global sea level. Critically, it also allows us to determine which ensemble members are likely to generate the most realistic projections of the future evolution of the AIS.
Effective Teaching of Economics: A Constrained Optimization Problem?
ERIC Educational Resources Information Center
Hultberg, Patrik T.; Calonge, David Santandreu
2017-01-01
One of the fundamental tenets of economics is that decisions are often the result of optimization problems subject to resource constraints. Consumers optimize utility, subject to constraints imposed by prices and income. As economics faculty, instructors attempt to maximize student learning while being constrained by their own and students'…
Variable-Metric Algorithm For Constrained Optimization
NASA Technical Reports Server (NTRS)
Frick, James D.
1989-01-01
Variable Metric Algorithm for Constrained Optimization (VMACO) is nonlinear computer program developed to calculate least value of function of n variables subject to general constraints, both equality and inequality. First set of constraints equality and remaining constraints inequalities. Program utilizes iterative method in seeking optimal solution. Written in ANSI Standard FORTRAN 77.
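VMACO itself is a FORTRAN 77 variable-metric code and is not reproduced here; the same problem shape, the least value of a function subject to equality and inequality constraints, can be sketched with SciPy's SLSQP solver.

```python
import numpy as np
from scipy.optimize import minimize

# least value of f(x, y) subject to one equality and one inequality constraint
f = lambda v: (v[0] - 1.0)**2 + (v[1] + 0.5)**2
cons = [
    {"type": "eq",   "fun": lambda v: v[0] + v[1] - 1.0},   # equality constraint first,
    {"type": "ineq", "fun": lambda v: 2.0 - v[0]**2},       # then inequalities (>= 0)
]
res = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=cons)
print(res.x, res.fun)
```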
NASA Astrophysics Data System (ADS)
Shevtsov, S.; Zhilyaev, I.; Oganesyan, P.; Axenov, V.
2017-01-01
Glass/carbon fiber composites are widely used in the design of various aircraft and rotorcraft components, such as fairings and cowlings, which have a predominantly shell-like geometry and are made of quasi-isotropic laminates. The main requirements for such composite parts are a specified mechanical stiffness, to withstand the non-uniform air pressure at different flight conditions and to reduce the noise caused by airflow-induced vibrations, at a constrained part weight. The main objective of the present study is the optimization of the wall thickness and lay-up of a composite shell-like cowling. The present approach converts the CAD model of the cowling surface to a finite element (FE) representation and then simulates wind tunnel testing at different airflow orientations to find the most stressed flight mode. Numerical solutions of the Reynolds-averaged Navier-Stokes (RANS) equations, supplemented by the k-ω turbulence model, provide the spatial distributions of air pressure applied to the shell surface. In the formulation of the optimization problem, the global strain energy calculated within the optimized shell was taken as the objective. The wall thickness of the shell was allowed to vary over its surface to minimize the objective at constrained weight. We used a parameterization that introduces an auxiliary sphere whose radius and center coordinates are the design variables. The curve formed by the intersection of the shell with the sphere defines the boundary of the area to be reinforced by locally thickening the shell wall. To eliminate local stress concentration, this thickness increment was defined as a smooth function on the shell surface. As a result of the structural optimization, we obtained a wall thickness distribution that was then used to design the draping and lay-up of the composite prepreg layers. The global strain energy in the optimized cowling was reduced by a factor of 2.5 at a weight increase of up to 15%, whereas the eigenfrequencies of the first 6 natural vibration modes increased by 5-15%. The present approach and the developed programming tools, which demonstrated good efficiency and stability at acceptable computational cost, can be used to optimize a wide range of shell-like structures made of quasi-isotropic laminates.
A multi-frequency receiver function inversion approach for crustal velocity structure
NASA Astrophysics Data System (ADS)
Li, Xuelei; Li, Zhiwei; Hao, Tianyao; Wang, Sheng; Xing, Jian
2017-05-01
To better constrain crustal velocity structures, we developed a new nonlinear inversion approach based on multi-frequency receiver function waveforms. With the global optimization algorithm of Differential Evolution (DE), low-frequency receiver function waveforms primarily constrain large-scale velocity structures, while high-frequency receiver function waveforms are advantageous for recovering small-scale velocity structures. Based on synthetic tests with multi-frequency receiver function waveforms, the proposed approach can constrain both long- and short-wavelength characteristics of the crustal velocity structures simultaneously. Inversions with real data are also conducted for the seismic stations KMNB in southeast China and HYB on the Indian continent, where crustal structures have been well studied by previous researchers. Comparison of the velocity models inverted in previous studies and ours shows good consistency, but our approach achieves a better waveform fit with fewer model parameters. Comprehensive tests with synthetic and real data suggest that the proposed multi-frequency receiver function inversion approach is effective and robust for inverting crustal velocity structures.
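A toy illustration of the global search step using SciPy's differential evolution; the forward operator, frequencies, and "velocities" below are placeholders for the receiver-function synthesis, which is not reproduced.

```python
import numpy as np
from scipy.optimize import differential_evolution

def forward(model, freqs):
    """Placeholder forward operator standing in for receiver-function synthesis."""
    return np.outer(freqs, model).ravel()

freqs = np.array([0.5, 1.0, 2.0])           # multiple filter frequencies
true_model = np.array([2.5, 3.4, 4.0])      # hypothetical layer velocities (km/s)
data = forward(true_model, freqs)

def misfit(model):
    # summing misfits across frequency bands couples large- and small-scale constraints
    return np.sum((forward(model, freqs) - data)**2)

result = differential_evolution(misfit, bounds=[(1.5, 5.0)] * 3, seed=1)
print(result.x)
```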
Fracture characterization from near-offset VSP inversion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horne, S.; MacBeth, C.; Queen, J.
1997-01-01
A global optimization method incorporating a ray-tracing scheme is used to invert observations of shear-wave splitting from two near-offset VSPs recorded at the Conoco Borehole Test Facility, Kay County, Oklahoma. Inversion results suggest that the seismic anisotropy is due to a non-vertical fracture system. This interpretation is constrained by the VSP acquisition geometry for which two sources are employed along near diametrically opposite azimuths about the well heads. A correlation is noted between the time-delay variations between the fast and slow split shear waves and the sandstone formations.
Osten, Friedrich Burkhard von der; Kirley, Michael; Miller, Tim
2017-05-23
The sustainable use of common pool resources has become a significant global challenge. It is now widely accepted that specific mechanisms such as community-based management strategies, institutional responses such as resource privatization, information availability and emergent social norms can be used to constrain individual 'harvesting' to socially optimal levels. However, there is a paucity of research focused specifically on aligning profitability and sustainability goals. In this paper, an integrated mathematical model of a common pool resource game is developed to explore the nexus between the underlying costs and benefits of harvesting decisions and the sustainable level of a shared, dynamic resource. We derive optimal harvesting efforts analytically and then use numerical simulations to show that individuals in a group can learn to make harvesting decisions that lead to the globally optimal levels. Individual agents make their decision based on signals received and a trade-off between economic and ecological sustainability. When the balance is weighted towards profitability, acceptable economic and social outcomes emerge. However, if individual agents are solely driven by profit, the shared resource is depleted in the long run - sustainability is possible despite some greed, but too much will lead to over-exploitation.
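A minimal simulation in the spirit of the model: a logistic common-pool resource harvested by a group of agents, with all parameter values illustrative rather than the paper's.

```python
import numpy as np

def simulate(efforts, r=0.5, K=1.0, q=0.3, price=1.0, cost=0.2, T=500):
    """Shared logistic resource harvested by a group; returns final stock and profits."""
    efforts = np.asarray(efforts, dtype=float)
    x = K
    profits = np.zeros(len(efforts))
    for _ in range(T):
        harvest = q * efforts * x                  # each agent's catch
        x += r * x * (1 - x / K) - harvest.sum()   # resource dynamics
        x = max(x, 0.0)
        profits += price * harvest - cost * efforts
    return x, profits

# moderate aggregate effort sustains the stock; pure profit-seeking depletes it
print(simulate([0.2] * 5))   # some greed, sustainable
print(simulate([0.8] * 5))   # too much greed, over-exploitation
```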
NASA Astrophysics Data System (ADS)
Qu, Z.; Henze, D. K.; Wang, J.; Xu, X.; Wang, Y.
2017-12-01
Quantifying emissions trends of nitrogen oxides (NOx) and sulfur dioxide (SO2) is important for improving understanding of air pollution and the effectiveness of emission control strategies. We estimate long-term (2005-2016) global (2° x 2.5° resolution) and regional (North America and East Asia at 0.5° x 0.667° resolution) NOx emissions using a recently developed hybrid (mass-balance / 4D-Var) method with GEOS-Chem. NASA standard product and DOMINO retrievals of NO2 column are both used to constrain emissions; comparison of these results provides insight into regions where trends are most robust with respect to retrieval uncertainties, and highlights regions where seemingly significant trends are retrieval-specific. To incorporate chemical interactions among species, we extend our hybrid method to assimilate NO2 and SO2 observations and optimize NOx and SO2 emissions simultaneously. Due to chemical interactions, inclusion of SO2 observations leads to 30% grid-scale differences in posterior NOx emissions compared to those constrained only by NO2 observations. When assimilating and optimizing both species in pseudo observation tests, the sum of the normalized mean squared error (compared to the true emissions) of NOx and SO2 posterior emissions are 54-63% smaller than when observing/constraining a single species. NOx and SO2 emissions are also correlated through the amount of fuel combustion. To incorporate this correlation into the inversion, we optimize seven sector-specific emission scaling factors, including industry, energy, residential, aviation, transportation, shipping and agriculture. We compare posterior emissions from inversions optimizing only species' emissions, only sector-based emissions, and both species' and sector-based emissions. In situ measurements of NOx and SO2 are applied to evaluate the performance of these inversions. The impacts of the inversion on PM2.5 and O3 concentrations and premature deaths are also evaluated.
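A toy sketch of the joint-species variational idea: a two-component cost function with a coupled observation operator, minimized with a gradient (adjoint-like) method. The operator and numbers are invented for illustration; the actual inversions use the GEOS-Chem adjoint.

```python
import numpy as np
from scipy.optimize import minimize

# toy 4D-Var-style inversion for emission scaling factors of two coupled species
H = np.array([[1.0, 0.3],     # NO2-like observations respond partly to SO2 emissions
              [0.2, 1.0]])    # and vice versa, mimicking chemical interaction
x_b = np.array([1.0, 1.0])                  # prior scaling factors
B_inv = np.diag([1 / 0.5**2, 1 / 0.5**2])   # inverse prior error covariance
R_inv = np.diag([1 / 0.1**2, 1 / 0.1**2])   # inverse observation error covariance
y = H @ np.array([1.4, 0.7])                # synthetic obs from "true" factors

def cost_and_grad(x):
    d_b, d_o = x - x_b, H @ x - y
    J = 0.5 * d_b @ B_inv @ d_b + 0.5 * d_o @ R_inv @ d_o
    g = B_inv @ d_b + H.T @ R_inv @ d_o     # H.T plays the role of the adjoint
    return J, g

res = minimize(cost_and_grad, x_b, jac=True, method="L-BFGS-B")
print(res.x)   # posterior factors pulled jointly toward the truth for both species
```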
Exploring constrained quantum control landscapes
NASA Astrophysics Data System (ADS)
Moore, Katharine W.; Rabitz, Herschel
2012-10-01
The broad success of optimally controlling quantum systems with external fields has been attributed to the favorable topology of the underlying control landscape, where the landscape is the physical observable as a function of the controls. The control landscape can be shown to contain no suboptimal trapping extrema upon satisfaction of reasonable physical assumptions, but this topological analysis does not hold when significant constraints are placed on the control resources. This work employs simulations to explore the topology and features of the control landscape for pure-state population transfer with a constrained class of control fields. The fields are parameterized in terms of a set of uniformly spaced spectral frequencies, with the associated phases acting as the controls. This restricted family of fields provides a simple illustration for assessing the impact of constraints upon seeking optimal control. Optimization results reveal that the minimum number of phase controls necessary to assure a high yield in the target state has a special dependence on the number of accessible energy levels in the quantum system, revealed from an analysis of the first- and second-order variation of the yield with respect to the controls. When an insufficient number of controls and/or a weak control fluence are employed, trapping extrema and saddle points are observed on the landscape. When the control resources are sufficiently flexible, solutions producing the globally maximal yield are found to form connected "level sets" of continuously variable control fields that preserve the yield. These optimal yield level sets are found to shrink to isolated points on the top of the landscape as the control field fluence is decreased, and further reduction of the fluence turns these points into suboptimal trapping extrema on the landscape. Although constrained control fields can come in many forms beyond the cases explored here, the behavior found in this paper is illustrative of the impacts that constraints can introduce.
NASA Astrophysics Data System (ADS)
Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III
2015-12-01
Uncertainty in the predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging, for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, the computational demand of model parameter estimation is insurmountable due to the complexity of global land models. In this project, we will use OCO-2 retrievals of the dry air mole fraction XCO2 and solar-induced fluorescence (SIF) to independently constrain estimation of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the Community Land Model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters (e.g., maximum carboxylation rate, turnover times and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration) for the carbon cycle. The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how the sampling frequency and length could affect model constraint and prediction.
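A minimal random-walk Metropolis sketch of the MCMC calibration step, with a two-parameter toy emulator standing in for the matrix-based CLM4.5 emulator; all names and numbers are placeholders.

```python
import numpy as np

def metropolis(loglike, x0, n_steps=5000, step=0.1, seed=0):
    """Random-walk Metropolis over model parameters."""
    rng = np.random.default_rng(seed)
    chain = [np.asarray(x0, dtype=float)]
    ll = loglike(chain[-1])
    for _ in range(n_steps):
        prop = chain[-1] + step * rng.standard_normal(len(x0))
        ll_prop = loglike(prop)
        if np.log(rng.random()) < ll_prop - ll:   # accept with prob min(1, ratio)
            chain.append(prop); ll = ll_prop
        else:
            chain.append(chain[-1].copy())
    return np.array(chain)

# toy emulator: observations = emulator(true params) + small offset
emulator = lambda p: np.array([p[0] + p[1], p[0] * p[1]])
obs = emulator([1.0, 2.0]) + 0.05
loglike = lambda p: -0.5 * np.sum((emulator(p) - obs)**2) / 0.05**2
chain = metropolis(loglike, x0=[0.5, 0.5])
print(chain[-1000:].mean(axis=0))   # posterior mean of the calibrated parameters
```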
NASA Astrophysics Data System (ADS)
Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin
2017-10-01
Constraints of the optimization objective often cannot be met when predictive control is applied to industrial production processes, in which case the online predictive controller fails to find a feasible or globally optimal solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, nonlinear programming is used to analyze the feasibility of constrained predictive control, a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case where the optimization objective is infeasible. On this basis, for interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatic adjustment of the infeasible interval range, expanding the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
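A minimal sketch of the slack-variable step: when interval targets on the outputs are infeasible under actuator limits, solve a small LP for the least slack that restores feasibility, then soften the interval by that slack. The model and bounds below are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# predicted outputs y = G u must satisfy lo - s <= y <= hi + s with minimal slack s >= 0
G = np.array([[1.0, 0.5], [0.2, 1.0]])
lo, hi = np.array([2.0, 2.0]), np.array([2.5, 2.5])  # possibly infeasible interval targets
u_bounds = [(-1.0, 1.0)] * 2                          # actuator limits

# decision vector z = [u (2), s (2)]; minimize total slack
c = np.concatenate([np.zeros(2), np.ones(2)])
A_ub = np.block([[ G, -np.eye(2)],     #  G u - s <= hi
                 [-G, -np.eye(2)]])    # -G u - s <= -lo
b_ub = np.concatenate([hi, -lo])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=u_bounds + [(0, None)] * 2)
print(res.x[:2], "slack:", res.x[2:])  # zero slack means the objective was feasible
```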
Constrained Multiobjective Biogeography Optimization Algorithm
Mo, Hongwei; Xu, Zhidan; Xu, Lifang; Wu, Zhou; Ma, Haiping
2014-01-01
Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems; experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA. PMID:25006591
On advanced configuration enhance adaptive system optimization
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Wang, Helong; Guo, Chunjie; Chen, Hongliang; Zhou, Liwei
2017-10-01
The aim of this work is to find an effective method to structure and enhance adaptive systems with complex functions, and to establish a universally applicable solution for prototyping and optimization. As the most attractive component in an adaptive system, the wavefront corrector is constrained by conventional techniques and components, suffering in particular from polarization dependence and a narrow working waveband. An advanced configuration based on a polarized beam splitter and an optimized energy-splitting method is used to overcome these problems effectively. With the global algorithm, the bandwidth has been amplified by more than five times compared with that of traditional designs. Simulation results show that the system can meet the application requirements in MTF and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are therefore the prototype selection and the system configuration; the results show their effectiveness.
Quadruped Robot Locomotion using a Global Optimization Stochastic Algorithm
NASA Astrophysics Data System (ADS)
Oliveira, Miguel; Santos, Cristina; Costa, Lino; Ferreira, Manuel
2011-09-01
The problem of tuning nonlinear dynamical system parameters such that the attained results are considered good ones is a relevant one. This article describes the development of a gait optimization system that allows a fast but stable robot quadruped crawl gait. We combine bio-inspired Central Pattern Generators (CPGs) and Genetic Algorithms (GA). CPGs are modelled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. The GA finds parameterizations of the CPG parameters which attain good gaits in terms of speed, vibration and stability. Moreover, two constraint-handling techniques based on tournament selection and a repairing mechanism are embedded in the GA to solve the proposed constrained optimization problem and make the search more efficient. The experimental results, performed on a simulated Aibo robot, demonstrate that our approach allows low vibration with a high velocity and wide stability margin for a quadruped slow crawl gait.
Evaluation of Diagnostic CO2 Flux and Transport Modeling in NU-WRF and GEOS-5
NASA Astrophysics Data System (ADS)
Kawa, S. R.; Collatz, G. J.; Tao, Z.; Wang, J. S.; Ott, L. E.; Liu, Y.; Andrews, A. E.; Sweeney, C.
2015-12-01
We report on recent diagnostic (constrained by observations) model simulations of atmospheric CO2 flux and transport using a newly developed facility in the NASA Unified-Weather Research and Forecast (NU-WRF) model. The results are compared to CO2 data (ground-based, airborne, and GOSAT) and to corresponding simulations from a global model that uses meteorology from the NASA GEOS-5 Modern Era Retrospective analysis for Research and Applications (MERRA). The objective of these intercomparisons is to assess the relative strengths and weaknesses of the respective models in pursuit of an overall carbon process improvement at both regional and global scales. Our guiding hypothesis is that the finer resolution and improved land surface representation in NU-WRF will lead to better comparisons with CO2 data than those using global MERRA, which will, in turn, inform process model development in global prognostic models. Initial intercomparison results, however, have generally been mixed: NU-WRF is better at some sites and times but not uniformly. We are examining the model transport processes in detail to diagnose differences in the CO2 behavior. These comparisons are done in the context of a long history of simulations from the Parameterized Chemistry and Transport Model, based on GEOS-5 meteorology and Carnegie Ames-Stanford Approach-Global Fire Emissions Database (CASA-GFED) fluxes, that capture much of the CO2 variation from synoptic to seasonal to global scales. We have run the NU-WRF model using unconstrained, internally generated meteorology within the North American domain, and with meteorological 'nudging' from the Global Forecast System and the North American Regional Reanalysis (NARR) in an effort to optimize the CO2 simulations. Output results constrained by NARR show the best comparisons to data. Discrepancies, of course, may arise either from flux or transport errors, and compensating errors are possible. Resolving their interplay is also important to using the data in inverse models. Recent analysis is focused on planetary boundary layer depth, which can be significantly different between MERRA and NU-WRF, along with subgrid transport differences. Characterization of transport differences between the models will allow us to better constrain the CO2 fluxes, which is the major objective of this work.
Pareto-optimal estimates that constrain mean California precipitation change
NASA Astrophysics Data System (ADS)
Langenbrunner, B.; Neelin, J. D.
2017-12-01
Global climate model (GCM) projections of greenhouse gas-induced precipitation change can exhibit notable uncertainty at the regional scale, particularly in regions where the mean change is small compared to internal variability. This is especially true for California, which is located in a transition zone between robust precipitation increases to the north and decreases to the south, and where GCMs from the Climate Model Intercomparison Project phase 5 (CMIP5) archive show no consensus on mean change (in either magnitude or sign) across the central and southern parts of the state. With the goal of constraining this uncertainty, we apply a multiobjective approach to a large set of subensembles (subsets of models from the full CMIP5 ensemble). These constraints are based on subensemble performance in three fields important to California precipitation: tropical Pacific sea surface temperatures, upper-level zonal winds in the midlatitude Pacific, and precipitation over the state. An evolutionary algorithm is used to sort through and identify the set of Pareto-optimal subensembles across these three measures in the historical climatology, and we use this information to constrain end-of-century California wet season precipitation change. This technique narrows the range of projections throughout the state and increases confidence in estimates of positive mean change. Furthermore, these methods complement and generalize emergent constraint approaches that aim to restrict uncertainty in end-of-century projections, and they have applications to even broader aspects of uncertainty quantification, including parameter sensitivity and model calibration.
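A minimal Pareto-optimality filter over subensemble scores, assuming each row holds three error measures (lower is better), such as the SST, zonal wind, and precipitation metrics described above; the evolutionary search that generates candidate subensembles is not reproduced.

```python
import numpy as np

def pareto_front(scores):
    """Return indices of non-dominated rows (minimization in every column)."""
    keep = []
    for i in range(scores.shape[0]):
        # row i is dominated if some row is <= everywhere and < somewhere
        dominated = np.any(np.all(scores <= scores[i], axis=1) &
                           np.any(scores <  scores[i], axis=1))
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
errors = rng.random((200, 3))   # hypothetical error measures per subensemble
print(pareto_front(errors))     # the Pareto-optimal subensembles
```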
NASA Astrophysics Data System (ADS)
Wu, Xiaolin; Rong, Yue
2015-12-01
The quality-of-service (QoS) criteria (measured in terms of the minimum capacity requirement in this paper) are very important to practical indoor power line communication (PLC) applications, as they greatly affect the user experience. With a two-way multicarrier relay configuration, in this paper we investigate the joint terminal and relay power optimization for the indoor broadband PLC environment, where the relay node works in the amplify-and-forward (AF) mode. As the QoS-constrained power allocation problem is highly non-convex, the globally optimal solution is computationally intractable to obtain. To overcome this challenge, we propose an alternating optimization (AO) method to decompose this problem into three convex/quasi-convex sub-problems. Simulation results demonstrate the fast convergence of the proposed algorithm under practical PLC channel conditions. Compared with the conventional bidirectional direct transmission (BDT) system, the relay-assisted two-way information exchange (R2WX) scheme can meet the same QoS requirement with less total power consumption.
NASA Astrophysics Data System (ADS)
Guo, Weian; Li, Wuzhao; Zhang, Qun; Wang, Lei; Wu, Qidi; Ren, Hongliang
2014-11-01
In evolutionary algorithms, elites are crucial to maintaining good features in solutions. However, too many elites can make the evolutionary process stagnate without enhancing performance. This article employs particle swarm optimization (PSO) and biogeography-based optimization (BBO) to propose a hybrid algorithm termed biogeography-based particle swarm optimization (BPSO) which could make a large number of elites effective in searching for optima. In this algorithm, the whole population is split into several subgroups; BBO is employed to search within each subgroup and PSO for the global search. Since not all of the population is used in PSO, this structure overcomes the premature convergence of the original PSO. Time complexity analysis shows that the novel algorithm does not increase the time consumption. Fourteen numerical benchmarks and four engineering problems with constraints are used to test the BPSO. To better deal with constraints, a fuzzy strategy for the number of elites is investigated. The simulation results validate the feasibility and effectiveness of the proposed algorithm.
Constrained growth flips the direction of optimal phenological responses among annual plants.
Lindh, Magnus; Johansson, Jacob; Bolmgren, Kjell; Lundström, Niklas L P; Brännström, Åke; Jonzén, Niclas
2016-03-01
Phenological changes among plants due to climate change are well documented, but often hard to interpret. In order to assess the adaptive value of observed changes, we study how annual plants with and without growth constraints should optimize their flowering time when productivity and season length change. We consider growth constraints that depend on the plant's vegetative mass: self-shading, costs for nonphotosynthetic structural tissue and sibling competition. We derive the optimal flowering time from a dynamic energy allocation model using optimal control theory. We prove that an immediate switch (bang-bang control) from vegetative to reproductive growth is optimal with constrained growth and constant mortality. Increasing mean productivity, while keeping season length constant and growth unconstrained, delayed the optimal flowering time. When growth was constrained and productivity was relatively high, the optimal flowering time advanced instead. When the growth season was extended equally at both ends, the optimal flowering time was advanced under constrained growth and delayed under unconstrained growth. Our results suggest that growth constraints are key factors to consider when interpreting phenological flowering responses. They can help to explain phenological patterns along productivity gradients, and they link empirical observations made on calendar scales with life-history theory. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
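For orientation, the canonical annual-plant allocation model behind such bang-bang results, written in our notation (the paper's model introduces the mass-dependent growth constraints through the growth function g):

$$\dot V = (1 - u(t))\, g(V), \qquad \dot R = u(t)\, g(V), \qquad 0 \le u(t) \le 1,$$

maximizing the final reproductive mass $R(T)$ over the season $[0, T]$. Pontryagin's maximum principle then gives the bang-bang optimum proved above: $u = 0$ (pure vegetative growth) until a single switching time $t^*$, the optimal flowering time, after which $u = 1$.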
Chance-Constrained Guidance With Non-Convex Constraints
NASA Technical Reports Server (NTRS)
Ono, Masahiro
2011-01-01
Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10(exp -7). This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.
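A hedged sketch of the decomposition idea for linear-Gaussian chance constraints: allocate the total risk bound over the individual constraints (by Boole's inequality, the sum is a sufficient condition) and back each one off by a Gaussian quantile. The numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def tighten(h, b, Sigma, eps_i):
    """Deterministic surrogate for P(h.x > b) <= eps_i when x ~ N(mean, Sigma):
    enforce h.mean <= b - Phi^{-1}(1 - eps_i) * sqrt(h' Sigma h)."""
    backoff = norm.ppf(1.0 - eps_i) * np.sqrt(h @ Sigma @ h)
    return b - backoff

Sigma = 0.01 * np.eye(2)                  # state covariance
eps_total = 1e-3                          # user-specified risk bound
constraints = [(np.array([1.0, 0.0]), 5.0), (np.array([0.0, 1.0]), 3.0)]
eps_i = eps_total / len(constraints)      # uniform risk allocation: sufficient, not necessary
tightened = [(h, tighten(h, b, Sigma, eps_i)) for h, b in constraints]
print([b for _, b in tightened])          # tightened bounds for the deterministic problem
```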
COPS: Large-scale nonlinearly constrained optimization problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bondarenko, A.S.; Bortz, D.M.; More, J.J.
2000-02-10
The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.
Constraint Optimization Problem For The Cutting Of A Cobalt Chrome Refractory Material
NASA Astrophysics Data System (ADS)
Lebaal, Nadhir; Schlegel, Daniel; Folea, Milena
2011-05-01
This paper shows a complete approach to solving a given problem, from experimentation to the optimization of the different cutting parameters. In response to an industrial problem of slotting FSX 414, a cobalt-based refractory material, we implemented a design of experiments to determine the most influential parameters on tool life, surface roughness and cutting forces. After these trials, an optimization approach was implemented to find the lowest manufacturing cost while respecting the surface roughness and cutting force constraints. The optimization approach is based on the Response Surface Method (RSM) using the Sequential Quadratic Programming (SQP) algorithm for a constrained problem. To avoid a local optimum and to obtain an accurate solution at low cost, an efficient strategy, which improves the RSM accuracy in the vicinity of the global optimum, is presented. With these models and these trials, we could apply and compare our optimization methods in order to obtain the lowest cost for the best quality, i.e. a satisfactory surface roughness and limited cutting forces.
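A minimal RSM sketch: fit a quadratic response surface to trial data, then minimize it under a roughness-style inequality with SLSQP. The design points, costs, and constraint are placeholders, not the FSX 414 data.

```python
import numpy as np
from scipy.optimize import minimize

# trial design: cutting speed and feed (normalized), with measured cost
X = np.array([[0.2, 0.2], [0.8, 0.2], [0.2, 0.8],
              [0.8, 0.8], [0.5, 0.5], [0.5, 0.2]])
cost = np.array([3.1, 2.2, 2.6, 2.0, 2.3, 2.4])

def basis(v):
    x, y = v
    return np.array([1.0, x, y, x * y, x * x, y * y])   # full quadratic basis

coef, *_ = np.linalg.lstsq(np.array([basis(v) for v in X]), cost, rcond=None)
surrogate = lambda v: coef @ basis(v)                   # fitted response surface

# placeholder roughness constraint: must remain >= 0
roughness = lambda v: 1.5 - (0.5 * v[0] + v[1])
res = minimize(surrogate, x0=[0.5, 0.5], method="SLSQP",
               bounds=[(0.0, 1.0)] * 2,
               constraints=[{"type": "ineq", "fun": roughness}])
print(res.x, res.fun)
```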
NASA Astrophysics Data System (ADS)
Wells, Kelley C.; Millet, Dylan B.; Bousserez, Nicolas; Henze, Daven K.; Griffis, Timothy J.; Chaliyakunnel, Sreelekha; Dlugokencky, Edward J.; Saikawa, Eri; Xiang, Gao; Prinn, Ronald G.; O'Doherty, Simon; Young, Dickon; Weiss, Ray F.; Dutton, Geoff S.; Elkins, James W.; Krummel, Paul B.; Langenfelds, Ray; Steele, L. Paul
2018-01-01
We present top-down constraints on global monthly N2O emissions for 2011 from a multi-inversion approach and an ensemble of surface observations. The inversions employ the GEOS-Chem adjoint and an array of aggregation strategies to test how well current observations can constrain the spatial distribution of global N2O emissions. The strategies include (1) a standard 4D-Var inversion at native model resolution (4° × 5°), (2) an inversion for six continental and three ocean regions, and (3) a fast 4D-Var inversion based on a novel dimension reduction technique employing randomized singular value decomposition (SVD). The optimized global flux ranges from 15.9 Tg N yr-1 (SVD-based inversion) to 17.5-17.7 Tg N yr-1 (continental-scale, standard 4D-Var inversions), with the former better capturing the extratropical N2O background measured during the HIAPER Pole-to-Pole Observations (HIPPO) airborne campaigns. We find that the tropics provide a greater contribution to the global N2O flux than is predicted by the prior bottom-up inventories, likely due to underestimated agricultural and oceanic emissions. We infer an overestimate of natural soil emissions in the extratropics and find that predicted emissions are seasonally biased in northern midlatitudes. Here, optimized fluxes exhibit a springtime peak consistent with the timing of spring fertilizer and manure application, soil thawing, and elevated soil moisture. Finally, the inversions reveal a major emission underestimate in the US Corn Belt in the bottom-up inventory used here. We extensively test the impact of initial conditions on the analysis and recommend formally optimizing the initial N2O distribution to avoid biasing the inferred fluxes. We find that the SVD-based approach provides a powerful framework for deriving emission information from N2O observations: by defining the optimal resolution of the solution based on the information content of the inversion, it provides spatial information that is lost when aggregating to political or geographic regions, while also providing more temporal information than a standard 4D-Var inversion.
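A minimal randomized-SVD range finder (numpy only) illustrating the dimension-reduction technique; the inversion operators it would compress are, of course, not reproduced.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate the top-k singular triplets of A via a random range finder."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)          # orthonormal basis for the range of A
    B = Q.T @ A                             # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]   # truncate to the requested rank

A = np.random.default_rng(1).standard_normal((500, 300))
U, s, Vt = randomized_svd(A, k=20)
print(s[:5])   # leading singular values define the optimal solution resolution
```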
An Optimization-Based Method for Feature Ranking in Nonlinear Regression Problems.
Bravi, Luca; Piccialli, Veronica; Sciandrone, Marco
2017-04-01
In this paper, we consider the feature ranking problem, where, given a set of training instances, the task is to associate a score with the features in order to assess their relevance. Feature ranking is a very important tool for decision support systems, and may be used as an auxiliary step of feature selection to reduce the high dimensionality of real-world data. We focus on regression problems by assuming that the process underlying the generated data can be approximated by a continuous function (for instance, a feedforward neural network). We formally state the notion of relevance of a feature by introducing a minimum zero-norm inversion problem of a neural network, which is a nonsmooth, constrained optimization problem. We employ a concave approximation of the zero-norm function, and we define a smooth, global optimization problem to be solved in order to assess the relevance of the features. We present the new feature ranking method based on the solution of instances of the global optimization problem depending on the available training data. Computational experiments on both artificial and real data sets are performed, and point out that the proposed feature ranking method is a valid alternative to existing methods in terms of effectiveness. The obtained results also show that the method is costly in terms of CPU time, and this may be a limitation in the solution of large-dimensional problems.
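A toy version of the concave zero-norm surrogate, applied here to a plain linear model rather than to the neural-network inversion problem of the paper; the exponential surrogate, penalty weight, and data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = X[:, 0] - 2.0 * X[:, 3] + 0.01 * rng.standard_normal(200)  # only features 0 and 3 matter

alpha, lam = 5.0, 0.1

def objective(w):
    # data fit + concave surrogate of the zero-norm: sum(1 - exp(-alpha * |w|))
    fit = np.mean((X @ w - y)**2)
    return fit + lam * np.sum(1.0 - np.exp(-alpha * np.abs(w)))

res = minimize(objective, x0=np.zeros(10), method="Powell")  # derivative-free search
ranking = np.argsort(-np.abs(res.x))     # rank features by surviving weight magnitude
print(ranking[:3], np.round(res.x, 2))
```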
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1990-01-01
Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems. Furthermore, unconstrained solution algorithms can be used as part of the constrained solution algorithms. Structural optimization is an iterative process: one starts with an initial design, then a finite element structural analysis is performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can be used to find the new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous, linear equations plays a key role since it is needed for either static, or eigenvalue, or dynamic analysis. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both parallel and vector capabilities offered by modern, high performance computers such as the Convex, Cray-2 and Cray-YMP computers. The objective of this research project is, therefore, to incorporate the latest development in the parallel-vector equation solver, PVSOLVE, into the widely popular finite-element production code SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested under a parallel computer environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a more popular constrained optimization code, such as ADS.
Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems
NASA Astrophysics Data System (ADS)
Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao
The nonlinear programming problem is an important branch of operations research, and has been successfully applied to various real-life problems. In this paper, a new approach called the social emotional optimization algorithm (SEOA), a swarm intelligence technique that simulates human behavior guided by emotion, is used to solve this problem. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.
Constrained Multi-Level Algorithm for Trajectory Optimization
NASA Astrophysics Data System (ADS)
Adimurthy, V.; Tandon, S. R.; Jessy, Antony; Kumar, C. Ravi
The emphasis on low-cost access to space has inspired many recent developments in the methodology of trajectory optimization. Ref. 1 uses a spectral patching method for optimization, where global orthogonal polynomials are used to describe the dynamical constraints. A two-tier approach of optimization is used in Ref. 2 for a missile mid-course trajectory optimization. A hybrid analytical/numerical approach is described in Ref. 3, where an initial analytical vacuum solution is taken and atmospheric effects are introduced gradually. Ref. 4 emphasizes the fact that the nonlinear constraints which occur in the initial and middle portions of the trajectory behave very nonlinearly with respect to the variables, making the optimization very difficult to solve in direct and indirect shooting methods. The problem is made more complex when different phases of the trajectory have different optimization objectives and different path constraints. Such problems can be effectively addressed by multi-level optimization. In the multi-level methods reported so far, optimization is first done in identified sub-level problems, where some coordination variables are kept fixed for the global iteration. After all the sub-optimizations are completed, a higher-level optimization iteration with all the coordination and main variables is done. This is followed by further subsystem optimizations with new coordination variables. This process is continued until convergence. In this paper we use a multi-level constrained optimization algorithm which avoids the repeated local subsystem optimizations and which also removes the problem of nonlinear sensitivity inherent in single-step approaches. Fall-zone constraints, structural load constraints and thermal constraints are considered. In this algorithm, there is only a single multi-level sequence of state and multiplier updates in the framework of an augmented Lagrangian. Han-Tapia multiplier updates are used in view of their special role in diagonalised methods, being the only single update with quadratic convergence. For a single level, the diagonalised multiplier method (DMM) is described in Ref. 5. The main advantage of the two-level analogue of the DMM approach is that it avoids the inner-loop optimizations required in the other methods. The scheme also introduces a gradient change measure to reduce the computational time needed to calculate the gradients. It is demonstrated that the new multi-level scheme leads to a robust procedure to handle the sensitivity of the constraints and the multiple objectives of different trajectory phases. Ref. 1. Fahroo, F. and Ross, M., "A Spectral Patching Method for Direct Trajectory Optimization", The Journal of the Astronautical Sciences, Vol. 48, 2000, pp. 269-286. Ref. 2. Phillips, C.A. and Drake, J.C., "Trajectory Optimization for a Missile using a Multitier Approach", Journal of Spacecraft and Rockets, Vol. 37, 2000, pp. 663-669. Ref. 3. Gath, P.F. and Calise, A.J., "Optimization of Launch Vehicle Ascent Trajectories with Path Constraints and Coast Arcs", Journal of Guidance, Control, and Dynamics, Vol. 24, 2001, pp. 296-304. Ref. 4. Betts, J.T., "Survey of Numerical Methods for Trajectory Optimization", Journal of Guidance, Control, and Dynamics, Vol. 21, 1998, pp. 193-207. Ref. 5. Adimurthy, V., "Launch Vehicle Trajectory Optimization", Acta Astronautica, Vol. 15, 1987, pp. 845-850.
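Schematically, and in our notation rather than the authors', a diagonalized augmented-Lagrangian scheme of this kind alternates a single state update with a single multiplier update instead of a full inner minimization:

$$x_{k+1} = x_k - \alpha_k \nabla_x L_c(x_k, \lambda_k), \qquad \lambda_{k+1} = \lambda_k + c\, g(x_{k+1}),$$

with $L_c(x,\lambda) = f(x) + \lambda^\top g(x) + \tfrac{c}{2}\|g(x)\|^2$ the augmented Lagrangian for constraints $g(x) = 0$; the Han-Tapia update referred to above is a multiplier update of this single-step type.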
NASA Astrophysics Data System (ADS)
Shen, Chengcheng; Shi, Honghua; Liu, Yongzhi; Li, Fen; Ding, Dewen
2016-07-01
Marine ecosystem dynamic models (MEDMs) are important tools for the simulation and prediction of marine ecosystems. This article summarizes the methods and strategies used for the improvement and assessment of MEDM skill, and it attempts to establish a technical framework to inspire further ideas concerning MEDM skill improvement. The skill of MEDMs can be improved by parameter optimization (PO), which is an important step in model calibration. An efficient approach to solve the problem of PO constrained by MEDMs is the global treatment of both sensitivity analysis and PO. Model validation is an essential step following PO, which validates the efficiency of model calibration by analyzing and estimating the goodness-of-fit of the optimized model. Additionally, by focusing on the degree of impact of various factors on model skill, model uncertainty analysis can supply model users with a quantitative assessment of model confidence. Research on MEDMs is ongoing; however, improvement in model skill still lacks global treatments and its assessment is not integrated. Thus, the predictive performance of MEDMs is not strong and model uncertainties lack quantitative descriptions, limiting their application. Therefore, a large number of case studies concerning model skill should be performed to promote the development of a scientific and normative technical framework for the improvement of MEDM skill.
NASA Astrophysics Data System (ADS)
Yang, B.; Qian, Y.; Lin, G.; Leung, R.; Zhang, Y.
2011-12-01
The current tuning process of parameters in global climate models is often performed subjectively or treated as an optimization procedure to minimize model biases based on observations. While the latter approach may provide more plausible values for a set of tunable parameters to approximate the observed climate, the system could be forced to an unrealistic physical state or improper balance of budgets through compensating errors over different regions of the globe. In this study, the Weather Research and Forecasting (WRF) model was used to provide a more flexible framework to investigate a number of issues related to uncertainty quantification (UQ) and parameter tuning. The WRF model was constrained by reanalysis data over the Southern Great Plains (SGP), where abundant observational data from various sources were available for calibration of the input parameters and validation of the model results. Focusing on five key input parameters in the new Kain-Fritsch (KF) convective parameterization scheme used in WRF as an example, the purpose of this study was to explore the utility of high-resolution observations for improving simulations of regional patterns and to evaluate the transferability of UQ and parameter tuning across physical processes, spatial scales, and climatic regimes, which have important implications for UQ and parameter tuning in global and regional models. A stochastic importance-sampling algorithm, Multiple Very Fast Simulated Annealing (MVFSA), was employed to efficiently sample the input parameters in the KF scheme based on a skill score, so that the algorithm progressively moved toward regions of the parameter space that minimize model errors. The results based on the WRF simulations with 25-km grid spacing over the SGP showed that the precipitation bias in the model could be significantly reduced when five optimal parameters identified by the MVFSA algorithm were used. The model performance was found to be sensitive to downdraft- and entrainment-related parameters and the consumption time of Convective Available Potential Energy (CAPE). Simulated convective precipitation decreased as the ratio of downdraft to updraft flux increased. Larger CAPE consumption time resulted in less convective but more stratiform precipitation. The simulation using optimal parameters obtained by constraining only precipitation generated a positive impact on the other output variables, such as temperature and wind. By using the optimal parameters obtained from the 25-km simulation, both the magnitude and spatial pattern of simulated precipitation were improved at 12-km spatial resolution. The optimal parameters identified from the SGP region also improved the simulation of precipitation when the model domain was moved to another region with a different climate regime (i.e., the North America monsoon region). These results suggest that benefits of optimal parameters determined through rigorous mathematical procedures such as the MVFSA process are transferable across processes, spatial scales, and climatic regimes to some extent. This motivates future studies to further assess the strategies for UQ and parameter optimization at both global and regional scales.
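A minimal annealed-sampling sketch in the spirit of MVFSA: Metropolis acceptance of random-walk proposals under a slowly decreasing temperature, concentrating samples in low-error parameter regions. The skill function and the five-parameter box are placeholders; the actual MVFSA proposal and cooling schedules differ.

```python
import numpy as np

def anneal(skill_error, bounds, n_steps=2000, T0=1.0, seed=0):
    """Annealed sampling of parameters minimizing a model-error score."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = lo + rng.random(len(lo)) * (hi - lo)
    e = skill_error(x)
    best, accepted = (x.copy(), e), [x.copy()]
    for k in range(1, n_steps + 1):
        T = T0 / np.log(k + 1)                    # slowly decreasing temperature
        prop = np.clip(x + 0.1 * (hi - lo) * rng.standard_normal(len(x)), lo, hi)
        e_prop = skill_error(prop)
        if e_prop < e or rng.random() < np.exp((e - e_prop) / T):
            x, e = prop, e_prop                   # Metropolis acceptance
            accepted.append(x.copy())
            if e < best[1]:
                best = (x.copy(), e)
    return best, np.array(accepted)

# placeholder skill score over five KF-like parameters (normalized to [0, 1])
err = lambda p: np.sum((p - np.array([0.4, 0.6, 0.3, 0.8, 0.5]))**2)
(best_p, best_e), samples = anneal(err, [(0.0, 1.0)] * 5)
print(best_p, best_e)
```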
Homotopy approach to optimal, linear quadratic, fixed architecture compensation
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1991-01-01
Optimal linear quadratic Gaussian compensators with constrained architecture are a sensible way to generate good multivariable feedback systems meeting strict implementation requirements. The optimality conditions obtained for the constrained linear quadratic Gaussian problem are a set of highly coupled matrix equations that cannot be solved algebraically except when the compensator is centralized and full order. An alternative to the use of general parameter optimization methods for solving the problem is to use homotopy. The benefit of the method is that it uses the solution to a simplified problem as a starting point, and the final solution is then obtained by solving a simple differential equation. This paper investigates the convergence properties and the limitations of such an approach and sheds some light on the nature and the number of solutions of the constrained linear quadratic Gaussian problem. It also demonstrates the usefulness of homotopy on an example of an optimal decentralized compensator.
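As a hedged illustration of the continuation idea (not the paper's compensator equations), the sketch below deforms an easy scalar problem G(x) = 0 into a hard one F(x) = 0 through H(x, λ) = (1 - λ)G(x) + λF(x) and tracks the root with one Newton correction per step; the functions and step count are assumptions chosen for illustration.

```python
import numpy as np

def F(x):            # "hard" problem: root at arccosh(3) ~ 1.7627
    return np.cosh(x) - 3.0

def G(x, x0=1.0):    # "easy" problem with known root x0
    return x - x0

def homotopy_root(F, G, x0=1.0, steps=100):
    x = x0
    for k in range(steps):
        lam = (k + 1) / steps                      # march lambda from 0 to 1
        H = lambda y: (1 - lam) * G(y) + lam * F(y)
        h = 1e-6                                   # finite-difference derivative
        dH = (H(x + h) - H(x - h)) / (2 * h)
        x -= H(x) / dH                             # one Newton correction per step
    return x

print(homotopy_root(F, G))                         # ~1.7627
```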
Trajectory optimization for the National Aerospace Plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1993-01-01
The objective of this second-phase research is to investigate the optimal ascent trajectory for the National Aerospace Plane (NASP) from runway take-off to orbital insertion and address the unique problems associated with hypersonic flight trajectory optimization. The trajectory optimization problem for an aerospace plane is a highly challenging problem because of the complexity involved. Previous work has been successful in obtaining sub-optimal trajectories by using energy-state approximation and time-scale decomposition techniques. But it is known that the energy-state approximation is not valid in certain portions of the trajectory. This research aims at employing the full dynamics of the aerospace plane and emphasizing direct trajectory optimization methods. The major accomplishments of this research include the first-time development of an inverse dynamics approach in trajectory optimization, which enables us to generate optimal trajectories for the aerospace plane efficiently and reliably, and general analytical solutions to constrained hypersonic trajectories that have wide application in trajectory optimization as well as in guidance and flight dynamics. Optimal trajectories in abort landing and ascent augmented with rocket propulsion and thrust vectoring control were also investigated. Motivated by this study, a new global trajectory optimization tool using continuous simulated annealing and a nonlinear predictive feedback guidance law have been under investigation, and some promising results have been obtained, which may well lead to more significant development and application in the near future.
NASA Astrophysics Data System (ADS)
Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier
2018-06-01
Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which enhances both local details and foreground-background contrast. First of all, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance in order to improve the contrasts of the foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments implemented on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantitative evaluations.
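The threshold that maximizes the inter-class variance is the classical Otsu criterion; a minimal sketch of that split step follows, with the paper's entropy weighting and PSO-tuned plateau thresholds deliberately omitted.

```python
import numpy as np

def otsu_threshold(hist):
    p = hist / hist.sum()                 # normalized histogram
    levels = np.arange(len(p))
    best_t, best_var = 0, -1.0
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0     # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2  # inter-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

hist = np.histogram(np.random.randint(0, 256, 10000), bins=256, range=(0, 256))[0]
print(otsu_threshold(hist))
```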
Das, Swagatam; Mukhopadhyay, Arpan; Roy, Anwit; Abraham, Ajith; Panigrahi, Bijaya K
2011-02-01
The theoretical analysis of evolutionary algorithms is believed to be very important for understanding their internal search mechanism and thus to develop more efficient algorithms. This paper presents a simple mathematical analysis of the explorative search behavior of a recently developed metaheuristic algorithm called harmony search (HS). HS is a derivative-free real parameter optimization algorithm, and it draws inspiration from the musical improvisation process of searching for a perfect state of harmony. This paper analyzes the evolution of the population-variance over successive generations in HS and thereby draws some important conclusions regarding the explorative power of HS. A simple but very useful modification to the classical HS has been proposed in light of the mathematical analysis undertaken here. A comparison with the most recently published variants of HS and four other state-of-the-art optimization algorithms over 15 unconstrained and five constrained benchmark functions reflects the efficiency of the modified HS in terms of final accuracy, convergence speed, and robustness.
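For readers unfamiliar with the HS loop analyzed here, a compact sketch of classical harmony search on a toy sphere objective follows; the memory size and the hmcr/par/bw settings are illustrative assumptions, not the values studied in the paper.

```python
import numpy as np

def harmony_search(f, dim=5, lo=-5.0, hi=5.0, hms=20, hmcr=0.9,
                   par=0.3, bw=0.05, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    hm = rng.uniform(lo, hi, (hms, dim))          # harmony memory
    fit = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:               # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:            # pitch adjustment
                    new[j] += bw * rng.uniform(-1, 1)
            else:                                 # random selection
                new[j] = rng.uniform(lo, hi)
        new = np.clip(new, lo, hi)
        worst = np.argmax(fit)
        if f(new) < fit[worst]:                   # replace worst harmony
            hm[worst], fit[worst] = new, f(new)
    return hm[np.argmin(fit)], fit.min()

x, fx = harmony_search(lambda x: np.sum(x**2))
print(fx)  # should approach 0
```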
Use of constrained optimization in the conceptual design of a medium-range subsonic transport
NASA Technical Reports Server (NTRS)
Sliwa, S. M.
1980-01-01
Constrained parameter optimization was used to perform the optimal conceptual design of a medium-range transport configuration. The impact of choosing a given performance index was studied, and the required income for a 15 percent return on investment was proposed as a figure of merit. A number of design constants and constraint functions were systematically varied to document the sensitivities of the optimal design to a variety of economic and technological assumptions. A comparison was made for each of the parameter variations between the baseline configuration and the optimally redesigned configuration.
Sambo, Francesco; de Oca, Marco A Montes; Di Camillo, Barbara; Toffolo, Gianna; Stützle, Thomas
2012-01-01
Reverse engineering is the problem of inferring the structure of a network of interactions between biological variables from a set of observations. In this paper, we propose an optimization algorithm, called MORE, for the reverse engineering of biological networks from time series data. The model inferred by MORE is a sparse system of nonlinear differential equations, complex enough to realistically describe the dynamics of a biological system. MORE tackles separately the discrete component of the problem, the determination of the biological network topology, and the continuous component of the problem, the strength of the interactions. This approach allows us both to enforce system sparsity, by globally constraining the number of edges, and to integrate a priori information about the structure of the underlying interaction network. Experimental results on simulated and real-world networks show that the mixed discrete/continuous optimization approach of MORE significantly outperforms standard continuous optimization and that MORE is competitive with the state of the art in terms of accuracy of the inferred networks.
Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L
2016-07-15
Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of the optimization reveal that a good reduction in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly.
NASA Astrophysics Data System (ADS)
Xu, Gang; Li, Ming; Mourrain, Bernard; Rabczuk, Timon; Xu, Jinlan; Bordas, Stéphane P. A.
2018-01-01
In this paper, we propose a general framework for constructing IGA-suitable planar B-spline parameterizations from given complex CAD boundaries consisting of a set of B-spline curves. Instead of forming the computational domain by a simple boundary, planar domains with high genus and more complex boundary curves are considered. Firstly, some pre-processing operations including Bézier extraction and subdivision are performed on each boundary curve in order to generate a high-quality planar parameterization; then a robust planar domain partition framework is proposed to construct high-quality patch-meshing results with few singularities from the discrete boundary formed by connecting the end points of the resulting boundary segments. After the topology information generation of quadrilateral decomposition, the optimal placement of interior Bézier curves corresponding to the interior edges of the quadrangulation is constructed by a global optimization method to achieve a patch-partition with high quality. Finally, after the imposition of C1/G1-continuity constraints on the interface of neighboring Bézier patches with respect to each quad in the quadrangulation, the high-quality Bézier patch parameterization is obtained by a C1-constrained local optimization method to achieve uniform and orthogonal iso-parametric structures while keeping the continuity conditions between patches. The efficiency and robustness of the proposed method are demonstrated by several examples which are compared to results obtained by the skeleton-based parameterization approach.
Case studies on optimization problems in MATLAB and COMSOL multiphysics by means of the livelink
NASA Astrophysics Data System (ADS)
Ozana, Stepan; Pies, Martin; Docekal, Tomas
2016-06-01
LiveLink for COMSOL is a tool that integrates COMSOL Multiphysics with MATLAB to extend one's modeling with scripting programming in the MATLAB environment. It allows the user to utilize the full power of MATLAB and its toolboxes in preprocessing, model manipulation, and post-processing. First, the head script launches COMSOL with MATLAB, defines the initial values of all parameters, refers to the objective function J, and creates and runs the defined optimization task. Once the task is launched, the COMSOL model is called in the iteration loop (from the MATLAB environment by use of the API interface), changing the defined optimization parameters so that the objective function is minimized, using the fmincon function to find a local or global minimum of a constrained linear or nonlinear multivariable function. Once the minimum is found, fmincon returns an exit flag, terminates the optimization, and returns the optimized values of the parameters. The cooperation with MATLAB via LiveLink couples a powerful computational environment with complex multiphysics simulations. The paper introduces the use of LiveLink for COMSOL in chosen case studies in the field of technical cybernetics and bioengineering.
PSQP: Puzzle Solving by Quadratic Programming.
Andalo, Fernanda A; Taubin, Gabriel; Goldenstein, Siome
2017-02-01
In this article we present the first effective method based on global optimization for the reconstruction of image puzzles comprising rectangular pieces: Puzzle Solving by Quadratic Programming (PSQP). The proposed novel mathematical formulation reduces the problem to the maximization of a constrained quadratic function, which is solved via a gradient ascent approach. The proposed method is deterministic and can deal with arbitrary identical rectangular pieces. We provide experimental results showing its effectiveness when compared to state-of-the-art approaches. Although the method was developed to solve image puzzles, we also show how to apply it to the reconstruction of simulated strip-shredded documents, broadening its applicability.
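A hedged toy version of the core step, gradient ascent on a constrained quadratic, is sketched below on the probability simplex rather than the paper's assignment constraints; the sort-based simplex projection is the standard algorithm.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection onto {x >= 0, sum(x) = 1} (standard sort-based method)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

rng = np.random.default_rng(1)
A = rng.random((6, 6)); A = 0.5 * (A + A.T)       # symmetric quadratic-form matrix
x = np.full(6, 1 / 6)                             # feasible start
for _ in range(500):
    x = project_simplex(x + 0.05 * (2 * A @ x))   # ascent step + projection
print(x, x @ A @ x)
```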
NASA Astrophysics Data System (ADS)
Kang, Fei; Li, Junjie; Ma, Zhenyue
2013-02-01
Determination of the critical slip surface with the minimum factor of safety of a slope is a difficult constrained global optimization problem. In this article, an artificial bee colony algorithm with a multi-slice adjustment method is proposed for locating the critical slip surfaces of soil slopes, and the Spencer method is employed to calculate the factor of safety. Six benchmark examples are presented to illustrate the reliability and efficiency of the proposed technique, and it is also compared with some well-known or recent algorithms for the problem. The results show that the new algorithm is promising in terms of accuracy and efficiency.
Model-data fusion across ecosystems: from multisite optimizations to global simulations
NASA Astrophysics Data System (ADS)
Kuppel, S.; Peylin, P.; Maignan, F.; Chevallier, F.; Kiely, G.; Montagnani, L.; Cescatti, A.
2014-11-01
This study uses a variational data assimilation framework to simultaneously constrain a global ecosystem model with eddy covariance measurements of daily net ecosystem exchange (NEE) and latent heat (LE) fluxes from a large number of sites grouped in seven plant functional types (PFTs). It is an attempt to bridge the gap between the numerous site-specific parameter optimization works found in the literature and the generic parameterization used by most land surface models within each PFT. The present multisite approach allows deriving PFT-generic sets of optimized parameters enhancing the agreement between measured and simulated fluxes at most of the sites considered, with performances often comparable to those of the corresponding site-specific optimizations. Besides reducing the PFT-averaged model-data root-mean-square difference (RMSD) and the associated daily output uncertainty, the optimization improves the simulated CO2 balance at tropical and temperate forest sites. The major site-level NEE adjustments at the seasonal scale are reduced amplitude in C3 grasslands and boreal forests, increased seasonality in temperate evergreen forests, and better model-data phasing in temperate deciduous broadleaf forests. Conversely, the poorer performance in tropical evergreen broadleaf forests points to deficiencies regarding the modelling of phenology and soil water stress for this PFT. An evaluation with data-oriented estimates of photosynthesis (GPP - gross primary productivity) and ecosystem respiration (Reco) rates indicates distinctively improved simulations of both gross fluxes. The multisite parameter sets are then tested against CO2 concentrations measured at 53 locations around the globe, showing significant adjustments of the modelled seasonality of atmospheric CO2 concentration, whose relevance seems PFT-dependent, along with an improved interannual variability. Lastly, a global-scale evaluation with remote sensing NDVI (normalized difference vegetation index) measurements indicates an improvement of the simulated seasonal variations of the foliar cover for all considered PFTs.
Stress-Constrained Structural Topology Optimization with Design-Dependent Loads
NASA Astrophysics Data System (ADS)
Lee, Edmund
Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure with predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.
CONORBIT: constrained optimization by radial basis function interpolation in trust regions
Regis, Rommel G.; Wild, Stefan M.
2016-09-26
Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Sun, Tao; Liu, Hongbo; Yu, Hong; Chen, C L Philip
2016-06-28
The central time series crystallizes the common patterns of the set it represents. In this paper, we propose a global constrained degree-pruning dynamic programming (g(dp)²) approach to obtain the central time series through minimizing the dynamic time warping (DTW) distance between two time series. The DTW matching path theory with global constraints is proved theoretically for our degree-pruning strategy, which helps reduce the time complexity and computational cost. Our approach can achieve the optimal solution between two time series. An approximate method for the central time series of multiple time series [called m_g(dp)²] is presented based on DTW barycenter averaging and our g(dp)² approach by considering a hierarchical merging strategy. As illustrated by the experimental results, our approaches provide better within-group sums of squares and robustness than other relevant algorithms.
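A minimal DTW recursion with a global window (a Sakoe-Chiba band) illustrates the kind of constrained matching path the g(dp)² approach builds on; the degree-pruning strategy itself is not reproduced here.

```python
import numpy as np

def dtw_band(a, b, band=10):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo, hi = max(1, i - band), min(m, i + band)   # global window constraint
        for j in range(lo, hi + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 100)
print(dtw_band(np.sin(t), np.sin(t + 0.3), band=10))
```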
The NEWS Water Cycle Climatology
NASA Astrophysics Data System (ADS)
Rodell, M.; Beaudoing, H. K.; L'Ecuyer, T.; Olson, W. S.
2012-12-01
NASA's Energy and Water Cycle Study (NEWS) program fosters collaborative research towards improved quantification and prediction of water and energy cycle consequences of climate change. In order to measure change, it is first necessary to describe current conditions. The goal of the first phase of the NEWS Water and Energy Cycle Climatology project was to develop "state of the global water cycle" and "state of the global energy cycle" assessments based on data from modern ground and space based observing systems and data integrating models. The project was a multi-institutional collaboration with more than 20 active contributors. This presentation will describe the results of the water cycle component of the first phase of the project, which include seasonal (monthly) climatologies of water fluxes over land, ocean, and atmosphere at continental and ocean basin scales. The requirement of closure of the water budget (i.e., mass conservation) at various scales was exploited to constrain the flux estimates via an optimization approach that will also be described. Further, error assessments were included with the input datasets, and we examine these in relation to inferred uncertainty in the optimized flux estimates in order to gauge our current ability to close the water budget within an expected uncertainty range.
NASA Astrophysics Data System (ADS)
Han, Jiang; Chen, Ye-Hwa; Zhao, Xiaomin; Dong, Fangfang
2018-04-01
A novel fuzzy dynamical system approach to the control design of flexible-joint manipulators with mismatched uncertainty is proposed. Uncertainties of the system are assumed to lie within prescribed fuzzy sets. The desired system performance includes a deterministic phase and a fuzzy phase. First, by creatively implanting a fictitious control, a robust control scheme is constructed to render the system uniformly bounded and uniformly ultimately bounded. Both the manipulator modelling and the control scheme are deterministic and not based on heuristic IF-THEN rules. Next, a fuzzy-based performance index is proposed. An optimal design problem for a control design parameter is formulated as a constrained optimisation problem. The global solution to this problem can be obtained by solving two quartic equations. The fuzzy dynamical system approach is systematic and is able to assure the deterministic performance as well as to minimise the fuzzy performance index.
Hybrid region merging method for segmentation of high-resolution remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo
2014-12-01
Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.
NASA Astrophysics Data System (ADS)
Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu
2017-09-01
A constrained optimization approach with faster convergence is proposed to recover the complex object field in near on-axis digital holography (DH). We subtract the DC from the hologram after recording the object beam and reference beam intensities separately. The DC-subtracted hologram is used to recover the complex object information using a constrained optimization approach with faster convergence. The recovered complex object field is back-propagated to the image plane using the Fresnel back-propagation method. This approach provides high-resolution images compared with the conventional Fourier filtering approach and is 25% faster than the previously reported constrained optimization approach, owing to the subtraction of two DC terms in the cost function. We report this approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object to retrieve the high-resolution image without DC and twin-image interference. We also demonstrate the high potential of this technique on a transparent microelectrode patterned on indium tin oxide-coated glass, by reconstructing a high-resolution quantitative phase microscope image. We also demonstrate this technique by imaging yeast cells.
A Bayesian ensemble data assimilation to constrain model parameters and land-use carbon emissions
NASA Astrophysics Data System (ADS)
Lienert, Sebastian; Joos, Fortunat
2018-05-01
A dynamic global vegetation model (DGVM) is applied in a probabilistic framework and benchmarking system to constrain uncertain model parameters by observations and to quantify carbon emissions from land-use and land-cover change (LULCC). Processes featured in DGVMs include parameters which are prone to substantial uncertainty. To cope with these uncertainties, Latin hypercube sampling (LHS) is used to create a 1000-member perturbed parameter ensemble, which is then evaluated with a diverse set of global and spatiotemporally resolved observational constraints. We discuss the performance of the constrained ensemble and use it to formulate a new best-guess version of the model (LPX-Bern v1.4). The observationally constrained ensemble is used to investigate historical emissions due to LULCC (ELUC) and their sensitivity to model parametrization. We find a global ELUC estimate of 158 (108, 211) PgC (median and 90% confidence interval) between 1800 and 2016. We compare ELUC to other estimates both globally and regionally. Spatial patterns are investigated and estimates of ELUC of the 10 countries with the largest contribution to the flux over the historical period are reported. We consider model versions with and without additional land-use processes (shifting cultivation and wood harvest) and find that the difference in global ELUC is on the same order of magnitude as parameter-induced uncertainty and in some cases could potentially even be offset with appropriate parameter choice.
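The ensemble-generation step can be sketched in a few lines with SciPy's quasi-Monte Carlo module; the parameter count and bounds below are hypothetical stand-ins for the LPX-Bern parameter list.

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=4, seed=42)        # 4 uncertain parameters (assumed)
unit = sampler.random(n=1000)                     # 1000-member ensemble in [0, 1]^4
lower = [0.1, 5.0, 0.0, 0.5]                      # hypothetical parameter bounds
upper = [0.9, 50.0, 1.0, 2.0]
ensemble = qmc.scale(unit, lower, upper)          # scale to physical ranges
print(ensemble.shape)                             # (1000, 4)
```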
Optimal synchronization in space
NASA Astrophysics Data System (ADS)
Brede, Markus
2010-02-01
In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.
Trajectory optimization and guidance law development for national aerospace plane applications
NASA Technical Reports Server (NTRS)
Calise, A. J.; Flandro, G. A.; Corban, J. E.
1988-01-01
The work completed to date comprises the following: a simple vehicle model representative of the aerospace plane concept in the hypersonic flight regime; fuel-optimal climb profiles for the unconstrained and dynamic-pressure-constrained cases, generated using a reduced-order dynamic model; an analytic switching condition for transition to rocket-powered flight as orbital velocity is approached; simple feedback guidance laws for both the unconstrained and dynamic-pressure-constrained cases, derived via singular perturbation theory and a nonlinear transformation technique; and numerical simulation results for ascent to orbit in the dynamic-pressure-constrained case.
On meeting capital requirements with a chance-constrained optimization model.
Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan
2016-01-01
This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We pursue the scope of analyzing our model under the worst-case scenario, i.e., loan default. The theoretical model is analyzed by applying numerical procedures, in order to provide valuable insights from a financial outlook. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.
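For intuition, the standard normal-approximation route from a chance constraint to a deterministic convex counterpart is sketched below; the paper's CreditMetrics-based construction is more involved, and the returns and covariance here are made-up numbers. With r ~ N(mu, Sigma), the requirement P(rᵀx >= b) >= 0.95 becomes muᵀx - z₀.₉₅ · sqrt(xᵀ Σ x) >= b.

```python
import numpy as np
from scipy.stats import norm

mu = np.array([0.06, 0.04, 0.02])      # hypothetical mean asset returns
Sigma = np.diag([0.04, 0.01, 0.001])   # hypothetical covariance matrix
x = np.array([0.3, 0.4, 0.3])          # asset weights
b = 0.0                                # required level

z = norm.ppf(0.95)                     # ~1.645, the 95% normal quantile
lhs = mu @ x - z * np.sqrt(x @ Sigma @ x)
print(lhs >= b)                        # deterministic check of the chance constraint
```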
NASA Technical Reports Server (NTRS)
Postma, Barry Dirk
2005-01-01
This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
Learning Multirobot Hose Transportation and Deployment by Distributed Round-Robin Q-Learning.
Fernandez-Gauna, Borja; Etxeberria-Agiriano, Ismael; Graña, Manuel
2015-01-01
Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out by the agents concurrently. In this paper we formalize and prove the convergence of a Distributed Round Robin Q-learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out a round-robin scheduling of the action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs which lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the global optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.
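The tabular Q-learning update each agent applies on its turn in the round-robin schedule is the usual one; a bare sketch follows, without the MSAV veto bookkeeping or message passing.

```python
import numpy as np

n_states, n_actions = 10, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95               # learning rate and discount (assumed)

def q_update(s, a, r, s_next):
    # standard temporal-difference update toward r + gamma * max_a' Q(s', a')
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

q_update(s=0, a=2, r=1.0, s_next=3)
print(Q[0, 2])
```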
Critical transition in the constrained traveling salesman problem.
Andrecut, M; Ali, M K
2001-04-01
We investigate the finite-size scaling of the mean optimal tour length as a function of the density of obstacles in a constrained variant of the traveling salesman problem (TSP). The computational experiments pointed to a critical transition (at ρc ≈ 85%) in the dependence of the excess of the mean optimal tour length over the Held-Karp lower bound on the density of obstacles.
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts: in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
CLFs-based optimization control for a class of constrained visual servoing systems.
Song, Xiulan; Miaomiao, Fu
2017-03-01
In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e., translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal value of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that there is no need to compute online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method.
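A sketch of a Fibonacci search over a single scalar gain shows the kind of derivative-free 1-D minimization employed online; the quadratic objective is a stand-in for the servoing cost, and the bracket and evaluation budget are assumptions.

```python
def fibonacci_search(f, lo, hi, n=30):
    # classic Fibonacci-ratio interval reduction for a unimodal f on [lo, hi]
    fib = [1, 1]
    while len(fib) < n + 2:
        fib.append(fib[-1] + fib[-2])
    x1 = lo + fib[n - 1] / fib[n + 1] * (hi - lo)
    x2 = lo + fib[n] / fib[n + 1] * (hi - lo)
    f1, f2 = f(x1), f(x2)
    for k in range(n - 1, 0, -1):
        if f1 < f2:                    # minimum lies in [lo, x2]
            hi, x2, f2 = x2, x1, f1
            x1 = lo + fib[k - 1] / fib[k + 1] * (hi - lo)
            f1 = f(x1)
        else:                          # minimum lies in [x1, hi]
            lo, x1, f1 = x1, x2, f2
            x2 = lo + fib[k] / fib[k + 1] * (hi - lo)
            f2 = f(x2)
    return 0.5 * (lo + hi)

print(fibonacci_search(lambda g: (g - 1.3) ** 2, 0.0, 2.0))  # ~1.3
```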
An historical survey of computational methods in optimal control.
NASA Technical Reports Server (NTRS)
Polak, E.
1973-01-01
Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. Much more recent additions to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later, algorithms specifically designed for constrained problems appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
NASA Astrophysics Data System (ADS)
Bloom, A. A.; Exbrayat, J. F.; van der Velde, I.; Peters, W.; Williams, M.
2014-12-01
Large uncertainties persist in terrestrial carbon flux estimates on a global scale. In particular, the strongly coupled dynamics between net ecosystem productivity and disturbance C losses are poorly constrained. To gain an improved understanding of ecosystem C dynamics from the regional to the global scale, we apply a Markov chain Monte Carlo based model-data fusion approach within the CArbon DAta-MOdel fraMework (CARDAMOM). We assimilate MODIS LAI and burned area, plant-trait data, and use the Harmonized World Soil Database (HWSD) and maps of above-ground biomass as prior knowledge for initial conditions. We optimize model parameters based on (a) globally spanning observations and (b) ecological and dynamic constraints that force single parameter values and parameter inter-dependencies to be representative of real-world processes. We determine the spatial and temporal dynamics of major terrestrial C fluxes and model parameter values on a global scale (GPP = 123 ± 8 Pg C yr-1 and NEE = -1.8 ± 2.7 Pg C yr-1). We further show that the incorporation of disturbance fluxes, and accounting for their instantaneous or delayed effect, is of critical importance in constraining global C cycle dynamics, particularly in the tropics. In a higher-resolution case study centred on the Amazon Basin we show how fires not only trigger large instantaneous emissions of burned matter, but also how they are responsible for a sustained reduction of up to 50% in plant uptake following the depletion of biomass stocks. The combination of these two fire-induced effects leads to a 1 g C m-2 d-1 reduction in the strength of the net terrestrial carbon sink. Through our simulations at regional and global scale, we advocate the need to assimilate disturbance metrics in global terrestrial carbon cycle models to bridge the gap between globally spanning terrestrial carbon cycle data and the full dynamics of the ecosystem C cycle. Disturbances are especially important because their quick occurrence may have long-term effects on ecosystems. Our synthetic simulations show that while tropical ecosystem uptake may reach pre-disturbance levels after a decade, biomass stocks would most likely need more than a century to recover from a single extreme disturbance event.
Energy efficient LED layout optimization for near-uniform illumination
NASA Astrophysics Data System (ADS)
Ali, Ramy E.; Elgala, Hany
2016-09-01
In this paper, we consider the problem of designing an energy-efficient light-emitting diode (LED) layout while satisfying illumination constraints. Towards this objective, we present a simple approach to the illumination design problem based on the concept of the virtual LED. We formulate a constrained optimization problem for minimizing the power consumption while maintaining near-uniform illumination throughout the room. By solving the resulting constrained linear program, we obtain the number of required LEDs and the optimal output luminous intensities that achieve the desired illumination constraints.
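A toy version of the resulting constrained linear program: choose nonnegative LED drive levels minimizing total power while every grid point receives illuminance inside an assumed band; the gain matrix here is random for illustration, standing in for a Lambertian propagation model.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_leds, n_points = 8, 25
A = rng.uniform(0.2, 1.0, (n_points, n_leds))   # lux per unit drive (assumed)
E_min, E_max = 300.0, 600.0                     # illuminance band (assumed)

c = np.ones(n_leds)                             # total drive power to minimize
A_ub = np.vstack([-A, A])                       # encodes E_min <= A @ p <= E_max
b_ub = np.concatenate([-E_min * np.ones(n_points), E_max * np.ones(n_points)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.status, res.fun)                      # status 0 means an optimum was found
```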
Constrained optimization of sequentially generated entangled multiqubit states
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique
2009-08-01
We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.
NASA Technical Reports Server (NTRS)
Hargrove, A.
1982-01-01
Optimal digital control of nonlinear multivariable constrained systems was studied. The optimal controller in the form of an algorithm was improved and refined by reducing running time and storage requirements. A particularly difficult system of nine nonlinear state variable equations was chosen as a test problem for analyzing and improving the controller. Lengthy analysis, modeling, computing and optimization were accomplished. A remote interactive teletype terminal was installed. Analysis requiring computer usage of short duration was accomplished using Tuskegee's VAX 11/750 system.
Global Health Initiatives of the International Oncology Community.
Al-Sukhun, Sana; de Lima Lopes, Gilberto; Gospodarowicz, Mary; Ginsburg, Ophira; Yu, Peter Paul
2017-01-01
Cancer has become one of the leading causes of morbidity and mortality in low- and middle-income countries (LMICs), where 60% of the world's total new cases are diagnosed. The challenge for effective control of cancer is multifaceted. It mandates integration of effective cancer prevention, encouragement of early detection, and utilization of resource-adapted therapeutic and supportive interventions. In the resource-constrained setting, it becomes challenging to deliver each service optimally, and efficient allocation of resources is the best way to improve the outcome. This concept was translated into action through the development of resource-stratified guidelines, pioneered by the Breast Health Global Initiative (BHGI) and later adopted by most oncology societies in an attempt to help physicians deliver the best possible care in a limited-resource setting. Improving outcomes entails collaboration between key stakeholders, including the pharmaceutical industry, local and national health authorities, the World Health Organization (WHO), and other nonprofit, patient-oriented organizations. Therefore, we have started to observe global health initiatives, led by ASCO, the Union for International Cancer Control (UICC), and the WHO, to address these challenges at the international level. This article discusses some of these initiatives.
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte-Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration. The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
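The conventional SA loop described above, with a shrinking neighborhood and Metropolis acceptance, can be written in miniature as follows; the recursive branching of RBSA is not shown, and the objective, bounds, and cooling schedule are toy assumptions.

```python
import math, random

def anneal(f, x0, lo, hi, iters=20000, T0=1.0, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for k in range(iters):
        frac = 1 - k / iters
        T = T0 * frac                          # linear cooling schedule
        step = (hi - lo) * 0.5 * frac          # shrinking search region
        y = min(hi, max(lo, x + rng.uniform(-step, step)))
        fy = f(y)
        # accept better moves always, worse moves with probability e^{-dE/T}
        if fy < fx or rng.random() < math.exp(-(fy - fx) / max(T, 1e-12)):
            x, fx = y, fy
    return x, fx

print(anneal(lambda x: x * x + math.sin(5 * x), x0=2.0, lo=-4.0, hi=4.0))
```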
Chen, Zhi; Yuan, Yuan; Zhang, Shu-Shen; Chen, Yu; Yang, Feng-Lin
2013-01-01
Critical environmental and human health concerns are associated with the rapidly growing fields of nanotechnology and manufactured nanomaterials (MNMs). The main risk arises from occupational exposure via chronic inhalation of nanoparticles. This research presents a chance-constrained nonlinear programming (CCNLP) optimization approach, which is developed to maximize nanomaterial production and minimize the risks of workplace exposure to MNMs. The CCNLP method integrates nonlinear programming (NLP) and chance-constrained programming (CCP), and handles uncertainties associated with both nanomaterial production and workplace exposure control. The CCNLP method was examined through a single-walled carbon nanotube (SWNT) manufacturing process. The study results provide optimal production strategies and alternatives. They reveal that a high control measure guarantees that environmental health and safety (EHS) regulations are met, while a lower control level leads to an increased risk of violating EHS regulations. The CCNLP optimization approach is a decision support tool for the optimization of growing MNM manufacturing with workplace safety constraints under uncertainties.
A New Continuous-Time Equality-Constrained Optimization to Avoid Singularity.
Quan, Quan; Cai, Kai-Yuan
2016-02-01
In equality-constrained optimization, a standard regularity assumption is often associated with feasible point methods, namely, that the gradients of the constraints are linearly independent. In practice, the regularity assumption may be violated. In order to avoid such a singularity, a new projection matrix is proposed, based on which a feasible point method for continuous-time equality-constrained optimization is developed. First, the equality constraint is transformed into a continuous-time dynamical system with solutions that always satisfy the equality constraint. Second, a new projection matrix without singularity is proposed to realize the transformation. An update (or, say, a controller) is subsequently designed to decrease the objective function along the solutions of the transformed continuous-time dynamical system. The invariance principle is then applied to analyze the behavior of the solution. Furthermore, the proposed method is modified to address cases in which solutions do not satisfy the equality constraint. Finally, the proposed optimization approach is applied to three examples to demonstrate its effectiveness.
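For context, the textbook feasible-direction construction the paper starts from is the gradient flow projected by P = I - Jᵀ(JJᵀ)⁻¹J onto the tangent space of h(x) = 0; this standard P is exactly what degenerates when the constraint gradients lose rank. A minimal sketch on a circle constraint, with a constraint-specific retraction added to cancel Euler drift:

```python
import numpy as np

def h(x):                      # equality constraint: the unit circle
    return np.array([x @ x - 1.0])

def jac(x):                    # Jacobian of h
    return 2 * x.reshape(1, -1)

def grad_f(x):                 # objective f(x) = x1 + 2 x2
    return np.array([1.0, 2.0])

x = np.array([1.0, 0.0])       # feasible start
dt = 0.01
for _ in range(2000):          # Euler integration of the projected flow
    J = jac(x)
    P = np.eye(2) - J.T @ np.linalg.solve(J @ J.T, J)
    x = x - dt * P @ grad_f(x)
    x = x / np.linalg.norm(x)  # crude retraction, specific to this constraint
print(x)                       # tends to -(1, 2)/sqrt(5), the constrained minimizer
```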
NASA Technical Reports Server (NTRS)
Hanks, Brantley R.; Skelton, Robert E.
1991-01-01
Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping, if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
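For reference, the unconstrained LQR baseline that the paper then restricts to passive design spaces can be computed directly from the Algebraic Riccati Equation. The sketch below does this for an illustrative mass-spring-damper model using SciPy; all numerical values are placeholders.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Mass-spring-damper: m x'' + c x' + k x = u  (illustrative values)
m, c, k = 1.0, 0.1, 1.0
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
Q = np.diag([1.0, 1.0])   # state weighting
R = np.array([[1.0]])     # control weighting

X = solve_continuous_are(A, B, Q, R)      # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ X)           # optimal full-state gain, u = -K x

print("LQR gain:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```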
A bat algorithm with mutation for UCAV path planning.
Wang, Gaige; Guo, Lihong; Duan, Hong; Liu, Luo; Wang, Heqi
2012-01-01
Path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimensional optimization problem, which mainly centers on optimizing the flight route considering the different kinds of constraints under complicated battlefield environments. The original bat algorithm (BA) is used to solve the UCAV path planning problem. Furthermore, a new bat algorithm with mutation (BAM) is proposed to solve the UCAV path planning problem, and a modification is applied to mutate between bats during the process of updating new solutions. The UCAV can then find the safe path by connecting the chosen nodes of the coordinates while avoiding the threat areas and minimizing fuel cost. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic BA. The realization procedure for the original BA and this improved metaheuristic approach, BAM, is also presented. To prove the performance of the proposed metaheuristic method, BAM is compared with BA and other population-based optimization methods, such as ACO, BBO, DE, ES, GA, PBIL, PSO, and SGA. The experiments show that the proposed approach is more effective and feasible in UCAV path planning than the other models. PMID:23365518
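A minimal sketch of this family of methods is given below: a generic, textbook-style bat algorithm with a DE-style mutation step bolted on, minimizing a simple stand-in cost function. It is not the exact BAM of the paper, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
cost = lambda x: np.sum(x**2)            # stand-in for a path-cost function

n, dim, iters = 20, 5, 300
fmin_, fmax_ = 0.0, 2.0                  # pulse-frequency range
A_loud, r_pulse, F = 0.9, 0.5, 0.5       # loudness, pulse rate, mutation factor

X = rng.uniform(-5, 5, (n, dim))
V = np.zeros((n, dim))
fit = np.array([cost(x) for x in X])
best = X[fit.argmin()].copy()

for _ in range(iters):
    for i in range(n):
        freq = fmin_ + (fmax_ - fmin_) * rng.random()
        V[i] += (X[i] - best) * freq              # pull toward the best bat
        cand = X[i] + V[i]
        if rng.random() > r_pulse:                # local walk around the best
            cand = best + 0.01 * rng.standard_normal(dim)
        if rng.random() < 0.1:                    # DE-style mutation (the "M" in BAM)
            a, b, c2 = rng.choice(n, 3, replace=False)
            cand = X[a] + F * (X[b] - X[c2])
        cand = np.clip(cand, -5, 5)
        fc = cost(cand)
        if fc < fit[i] and rng.random() < A_loud: # loudness-gated acceptance
            X[i], fit[i] = cand, fc
            if fc < cost(best):
                best = cand.copy()

print("best cost:", cost(best))
```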
Constrained multi-objective optimization of storage ring lattices
NASA Astrophysics Data System (ADS)
Husain, Riyasat; Ghodke, A. D.
2018-03-01
The storage ring lattice optimization is a class of constrained multi-objective optimization problem in which, in addition to low beam emittance, a large dynamic aperture for good injection efficiency and improved beam lifetime are also desirable. Convergence and computation times are of great concern for the optimization algorithms, as various objectives are to be optimized and a number of accelerator parameters are to be varied over a large span with several constraints. In this paper, a study of storage ring lattice optimization using differential evolution is presented. The optimization results are compared with the two most widely used optimization techniques in accelerators: the genetic algorithm and particle swarm optimization. It is found that differential evolution produces a better Pareto optimal front, in reasonable computation time, between two conflicting objectives: the beam emittance and the dispersion function in the straight section. Differential evolution was used extensively for the optimization of the linear and nonlinear lattices of Indus-2 for exploring various operational modes within the magnet power supply capabilities.
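As a minimal illustration of tracing a Pareto front with differential evolution, the sketch below scalarizes two conflicting toy objectives (stand-ins for emittance and dispersion) with a sweep of weights and solves each subproblem with SciPy's DE implementation.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Two conflicting toy objectives; sweeping the weight w traces a Pareto front.
f1 = lambda x: np.sum(x**2)
f2 = lambda x: np.sum((x - 2.0) ** 2)

front = []
for w in np.linspace(0.0, 1.0, 11):
    res = differential_evolution(lambda x: w * f1(x) + (1 - w) * f2(x),
                                 bounds=[(-5, 5)] * 3, seed=1, tol=1e-8)
    front.append((f1(res.x), f2(res.x)))

for p in front:
    print("f1=%.4f  f2=%.4f" % p)
```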
Thermally-Constrained Fuel-Optimal ISS Maneuvers
NASA Technical Reports Server (NTRS)
Bhatt, Sagar; Svecz, Andrew; Alaniz, Abran; Jang, Jiann-Woei; Nguyen, Louis; Spanos, Pol
2015-01-01
Optimal Propellant Maneuvers (OPMs) are now being used to rotate the International Space Station (ISS) and have saved hundreds of kilograms of propellant over the last two years. The savings are achieved by commanding the ISS to follow a pre-planned attitude trajectory optimized to take advantage of environmental torques. The trajectory is obtained by solving an optimal control problem. Prior to use on orbit, OPM trajectories are screened to ensure a static sun vector (SSV) does not occur during the maneuver. The SSV is an indicator that the ISS hardware temperatures may exceed thermal limits, causing damage to the components. In this paper, thermally-constrained fuel-optimal trajectories are presented that avoid an SSV and can be used throughout the year while still reducing propellant consumption significantly.
A Hybrid Metaheuristic DE/CS Algorithm for UCAV Three-Dimension Path Planning
Wang, Gaige; Guo, Lihong; Duan, Hong; Wang, Heqi; Liu, Luo; Shao, Mingzhen
2012-01-01
Three-dimensional path planning for an uninhabited combat air vehicle (UCAV) is a complicated high-dimensional optimization problem, which primarily centers on optimizing the flight route considering the different kinds of constraints under complicated battlefield environments. A new hybrid metaheuristic differential evolution (DE) and cuckoo search (CS) algorithm is proposed to solve the UCAV three-dimensional path planning problem. DE is applied to optimize the process of selecting cuckoos of the improved CS model during the process of cuckoo updating in the nest. The cuckoos can act as agents in searching for the optimal UCAV path. The UCAV can then find the safe path by connecting the chosen nodes of the coordinates while avoiding the threat areas and minimizing fuel cost. This new approach can accelerate the global convergence speed while preserving the strong robustness of the basic CS. The realization procedure for this hybrid metaheuristic approach, DE/CS, is also presented. In order to make the optimized UCAV path more feasible, a B-spline curve is adopted for smoothing the path. To prove the performance of the proposed hybrid metaheuristic method, it is compared with the basic CS algorithm. The experiments show that the proposed approach is more effective and feasible in UCAV three-dimensional path planning than the basic CS model. PMID:23193383
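The B-spline smoothing step mentioned above is easy to reproduce; a sketch using SciPy's parametric spline fitting follows, with hypothetical waypoint coordinates standing in for the planner's chosen nodes.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Waypoints chosen by the path planner (hypothetical node coordinates)
x = np.array([0.0, 1.0, 2.5, 4.0, 5.5, 7.0])
y = np.array([0.0, 2.0, 2.5, 1.0, 3.0, 3.5])

# Fit a cubic B-spline through the waypoints; s > 0 trades fidelity
# for smoothness (s = 0 interpolates the waypoints exactly)
tck, u = splprep([x, y], s=0.5, k=3)
xs, ys = splev(np.linspace(0.0, 1.0, 200), tck)

print("smoothed path has", len(xs), "points")
```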
Using game theory for perceptual tuned rate control algorithm in video coding
NASA Astrophysics Data System (ADS)
Luo, Jiancong; Ahmad, Ishfaq
2005-03-01
This paper proposes a game-theoretic rate control technique for video compression. Using a cooperative gaming approach, which has been utilized in several branches of the natural and social sciences because of its enormous potential for solving constrained optimization problems, we propose a dual-level scheme to optimize perceptual quality while guaranteeing "fairness" in bit allocation among macroblocks. At the frame level, the algorithm allocates target bits to frames based on their coding complexity. At the macroblock level, the algorithm distributes bits to macroblocks by defining a bargaining game. Macroblocks play cooperatively to compete for shares of resources (bits) to optimize their quantization scales while considering the Human Visual System's perceptual properties. Since the whole frame is an entity perceived by viewers, macroblocks compete cooperatively under a global objective of achieving the best quality with the given bit constraint. The major advantage of the proposed approach is that the cooperative game leads to an optimal and fair bit allocation strategy based on the Nash Bargaining Solution. Another advantage is that it allows multi-objective optimization with multiple decision makers (macroblocks). The simulation results demonstrate the algorithm's ability to achieve an accurate bit rate with good perceptual quality and to maintain a stable buffer level.
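To make the bargaining idea concrete, the sketch below computes an asymmetric Nash bargaining allocation of a bit budget: maximizing the weighted product of utilities $\prod_i u_i^{w_i}$, equivalently $\sum_i w_i \log u_i$, subject to the budget constraint. The quality model $u_i = \log(1 + b_i)$ and the complexity weights are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize

budget = 1000.0                        # total bits for the frame
w = np.array([1.0, 2.0, 4.0, 8.0])     # hypothetical macroblock complexities

# Asymmetric Nash bargaining: maximize prod_i u_i**w_i  <=>  sum_i w_i*log(u_i),
# with utility u_i = log(1 + b_i) as an illustrative quality model
neg_nash = lambda b: -np.sum(w * np.log(np.log1p(b)))

res = minimize(neg_nash, x0=np.full(4, budget / 4),
               method="SLSQP",
               bounds=[(1e-6, budget)] * 4,
               constraints=[{"type": "eq", "fun": lambda b: b.sum() - budget}])

print("bit allocation:", np.round(res.x, 1))   # more complex blocks get more bits
```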
Stability-Constrained Aerodynamic Shape Optimization with Applications to Flying Wings
NASA Astrophysics Data System (ADS)
Mader, Charles Alexander
A set of techniques is developed that allows the incorporation of flight dynamics metrics as an additional discipline in a high-fidelity aerodynamic optimization. Specifically, techniques for including static stability constraints and handling qualities constraints in a high-fidelity aerodynamic optimization are demonstrated. These constraints are developed from stability derivative information calculated using high-fidelity computational fluid dynamics (CFD). Two techniques are explored for computing the stability derivatives from CFD. One technique uses an automatic differentiation adjoint technique (ADjoint) to efficiently and accurately compute a full set of static and dynamic stability derivatives from a single steady solution. The other technique uses a linear regression method to compute the stability derivatives from a quasi-unsteady time-spectral CFD solution, allowing for the computation of static, dynamic and transient stability derivatives. Based on the characteristics of the two methods, the time-spectral technique is selected for further development, incorporated into an optimization framework, and used to conduct stability-constrained aerodynamic optimization. This stability-constrained optimization framework is then used to conduct an optimization study of a flying wing configuration. This study shows that stability constraints have a significant impact on the optimal design of flying wings and that, while static stability constraints can often be satisfied by modifying the airfoil profiles of the wing, dynamic stability constraints can require a significant change in the planform of the aircraft in order for the constraints to be satisfied.
Global velocity constrained cloud motion prediction for short-term solar forecasting
NASA Astrophysics Data System (ADS)
Chen, Yanjun; Li, Wei; Zhang, Chongyang; Hu, Chuanping
2016-09-01
Cloud motion is the primary cause of short-term fluctuation in solar power output. In this work, a new cloud motion estimation algorithm using a global velocity constraint is proposed. Compared to the widely used Particle Image Velocimetry (PIV) algorithm, which assumes homogeneity of the motion vectors, the proposed method can capture an accurate motion vector for each cloud block, including both the motion tendency and morphological changes. Specifically, the global velocity derived from PIV is first calculated, and then fine-grained cloud motion estimation is achieved by global-velocity-based cloud block search and multi-scale cloud block matching. Experimental results show that the proposed global velocity constrained cloud motion prediction achieves comparable performance to the existing PIV and filtered PIV algorithms, especially over a short prediction horizon.
Resolving the global transpiration flux is critical to constraining global carbon cycle models because carbon uptake by photosynthesis in terrestrial plants (Gross Primary Productivity, GPP) is directly related to water lost through transpiration. Quantifying GPP globally is cha...
Liu, Qingshan; Guo, Zhishan; Wang, Jun
2012-02-01
In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.
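A minimal sketch in this spirit, for bound constraints only, is the classic projection neural network $\dot{x} = -x + P_\Omega(x - \alpha\nabla f(x))$ integrated with Euler steps; the paper's model is more general (pseudoconvex objectives with equality constraints), so this is only the simplest member of the family, with an illustrative objective.

```python
import numpy as np

# Box constraints l <= x <= u and a convex objective f = (x1-3)^2 + (x2+1)^2
l, u = np.array([0.0, 0.0]), np.array([1.0, 1.0])
grad = lambda x: np.array([2 * (x[0] - 3.0), 2 * (x[1] + 1.0)])

proj = lambda z: np.minimum(np.maximum(z, l), u)   # projection onto the box

x, alpha, dt = np.array([0.5, 0.5]), 0.5, 0.05
for _ in range(2000):
    # Euler-discretized network dynamics; fixed points satisfy x = proj(x - alpha*grad)
    x = x + dt * (-x + proj(x - alpha * grad(x)))

print(x)   # -> [1, 0]: the box-constrained minimizer
```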
NASA Astrophysics Data System (ADS)
Klein, E.; Masson, F.; Duputel, Z.; Yavasoglu, H.; Agram, P. S.
2016-12-01
Over the last two decades, the densification of GPS networks and the development of new radar satellites have offered an unprecedented opportunity to study crustal deformation due to faulting. Yet submarine strike-slip fault segments remain a major issue, especially when the landscape appears unfavorable to the use of SAR measurements. This is the case for the North Anatolian fault segments located in the Main Marmara Sea, which have remained unbroken since the Mw 7.4 Izmit earthquake of 1999, which ended an eastward-migrating seismic sequence of Mw > 7 earthquakes. As these segments are located directly offshore Istanbul, evaluation of the seismic hazard is critical. But a strong controversy remains over whether these segments are accumulating strain and are likely to experience a major earthquake, or are creeping; this results both from the simplicity of current geodetic models and from the scarcity of geodetic data. We show that 2D infinite fault models cannot account for the complexity of the Marmara fault segments. But current geodetic data in the region west of Istanbul are also insufficient to invert for the coupling using a 3D geometry of the fault. Therefore, we implement a global optimization procedure aimed at identifying the most favorable distribution of GPS stations to explore the strain accumulation. We present here the results of this procedure, which determines both the optimal number and the locations of the new stations. We show that a denser terrestrial survey network can indeed locally improve the resolution of the shallower part of the fault, even more efficiently with permanent stations. But data closer to the fault, only obtainable by submarine measurements, remain necessary to properly constrain the fault behavior and its potential along-strike coupling variations.
NASA Technical Reports Server (NTRS)
Tapia, R. A.; Vanrooy, D. L.
1976-01-01
A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
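The squared-variable trick is easy to demonstrate: substituting $x_i = y_i^2$ removes the nonnegativity constraints, leaving a single equality constraint $\sum_i y_i^2 = 1$. The sketch below solves the transformed problem with SciPy's SLSQP rather than the paper's quasi-Newton theory; the objective is a toy example.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize f(x) on the simplex {x >= 0, sum x = 1} via the substitution x = y**2:
# nonnegativity disappears, leaving one equality constraint sum(y**2) = 1.
f = lambda x: np.sum((x - np.array([0.7, 0.2, 0.1])) ** 2)  # toy objective

obj = lambda y: f(y ** 2)
res = minimize(obj, x0=np.full(3, np.sqrt(1 / 3)), method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda y: np.sum(y ** 2) - 1.0}])

x = res.x ** 2                       # map back to the original variables
print(np.round(x, 4), x.sum())       # recovers approx (0.7, 0.2, 0.1), sum = 1
```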
Necessary conditions for the optimality of variable rate residual vector quantizers
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
Residual vector quantization (RVQ), or multistage VQ, as it is also called, has recently been shown to be a competitive technique for data compression. The competitive performance reported for RVQ results from the joint optimization of variable-rate encoding and RVQ direct-sum code books. In this paper, necessary conditions for the optimality of variable-rate RVQs are derived, and an iterative descent algorithm based on a Lagrangian formulation is introduced for designing RVQs having minimum average distortion subject to an entropy constraint. Simulation results for these entropy-constrained RVQs (EC-RVQs) are presented for memoryless Gaussian, Laplacian, and uniform sources. A Gauss-Markov source is also considered. The performance is superior to that of entropy-constrained scalar quantizers (EC-SQs) and practical entropy-constrained vector quantizers (EC-VQs), and is competitive with that of some of the best source coding techniques that have appeared in the literature.
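The Lagrangian design idea carries over to the simplest single-stage case. The sketch below designs an entropy-constrained scalar quantizer (the scalar analogue of the paper's EC-RVQ) by iterating Lagrangian-cost encoding, centroid updates, and probability updates; the source, codebook size, and multiplier are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.laplace(size=20000)              # training samples (Laplacian source)
c = np.linspace(-3.0, 3.0, 8)            # initial codebook
p = np.full(8, 1.0 / 8.0)                # codeword probabilities
lam = 0.1                                # Lagrange multiplier: weight on rate

def encode(x, c, p, lam):
    # Lagrangian cost: squared error plus lam * codeword length (-log2 p)
    costs = (x[:, None] - c[None, :]) ** 2 + lam * (-np.log2(p))[None, :]
    return costs.argmin(axis=1)

for _ in range(30):
    idx = encode(x, c, p, lam)
    for j in range(len(c)):              # centroid update
        sel = idx == j
        if sel.any():
            c[j] = x[sel].mean()
    # empirical probabilities (floored so -log2 stays finite)
    p = np.maximum(np.bincount(idx, minlength=len(c)) / len(x), 1e-12)

idx = encode(x, c, p, lam)
D = np.mean((x - c[idx]) ** 2)           # average distortion
H = -np.sum(p * np.log2(p))              # entropy (rate estimate)
print("distortion %.4f, entropy %.3f bits/sample" % (D, H))
```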
Energetic Materials Optimization via Constrained Search
2015-06-01
steps. 3. Optimization Methodology. Our optimization problem is formulated as a constrained maximization:

$$\max_{x \in \mathrm{CCS}} P(x) \quad \text{s.t.} \quad \mathrm{TED}(x) - 9.75 \ge 0, \quad \mathrm{SV}(x) - 9 \ge 0, \quad 5 - \mathrm{SA}(x) \ge 0, \tag{1}$$

where TED(x) is the total energy of detonation (TED) of compound x from the chosen chemical subspace (CCS) of chemical compounds. This constrained problem is recast as the max-min problem

$$\max_{x \in \mathrm{CCS}} \; \min_{\lambda \in \mathbb{R}^3_+} \; P(x) - \lambda^\top C(x), \tag{2}$$

where C(x) is the vector of constraint violations, i.e., $(\eta(9.75 - \mathrm{TED}(x)),\ \eta(9 - \mathrm{SV}(x)),\ \eta(\mathrm{SA}(x) - 5))$.
Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D
2017-01-25
Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data across conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems, and constrained problems, without altering the base algorithm. JuPOETs is open source, available under an MIT license, and can be installed using the Julia package manager from the JuPOETs GitHub repository.
Empirical Model of Precipitating Ion Oval
NASA Astrophysics Data System (ADS)
Goldstein, Jerry
2017-10-01
In this brief technical report published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region but none of its subglobal structure, and not immediately following a substorm.
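A sketch of the model's functional form follows: a Gaussian in latitude whose centroid, width, and amplitude vary with magnetic local time through low-order Fourier series. The coefficient values below are placeholders for illustration, not the published fit, and the Kp dependence is omitted.

```python
import numpy as np

def fourier(mlt, coeffs):
    """Low-order Fourier series in magnetic local time (period 24 h)."""
    a0, a1, b1, a2, b2 = coeffs
    w = 2 * np.pi * mlt / 24.0
    return (a0 + a1 * np.cos(w) + b1 * np.sin(w)
              + a2 * np.cos(2 * w) + b2 * np.sin(2 * w))

def ion_flux(lat, mlt):
    """Gaussian-in-latitude oval; all coefficient values are placeholders."""
    center = fourier(mlt, (67.0, 1.5, -0.8, 0.4, 0.2))   # centroid latitude (deg)
    width  = fourier(mlt, ( 3.0, 0.5,  0.3, 0.1, 0.0))   # Gaussian width (deg)
    amp    = fourier(mlt, ( 1.0, 0.3, -0.2, 0.1, 0.1))   # peak integral flux
    return amp * np.exp(-((lat - center) / width) ** 2 / 2)

print(ion_flux(lat=66.0, mlt=22.0))
```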
Biyikli, Emre; To, Albert C.
2015-01-01
A new topology optimization method called Proportional Topology Optimization (PTO) is presented. As a non-sensitivity method, PTO is simple to understand and easy to implement, while also being efficient and accurate. It is implemented in two MATLAB programs to solve the stress-constrained and minimum compliance problems. Descriptions of the algorithm and computer programs are provided in detail. The method is applied to solve three numerical examples for both types of problems, and shows efficiency and accuracy comparable to an existing optimality criteria method which computes sensitivities. Also, the PTO stress-constrained algorithm and minimum compliance algorithm are compared by feeding the output of one algorithm to the other in an alternating manner, where the former yields lower maximum stress and volume fraction but higher compliance compared to the latter. Advantages and disadvantages of the proposed method and future work are discussed. The computer programs are self-contained and publicly shared on the website www.ptomethod.org. PMID:26678849
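The proportional idea, stripped of the finite-element analysis, amounts to handing out a fixed material budget in proportion to a stress-like field raised to a power. The sketch below shows one such distribution step with a random placeholder field; actual PTO recomputes the stress by FEA and iterates until convergence.

```python
import numpy as np

rng = np.random.default_rng(1)
stress = rng.random(100)      # placeholder field; PTO recomputes this by FEA each iteration
volfrac, q = 0.4, 2.0         # target volume fraction, proportion exponent

# One PTO-style distribution step: hand out the material budget in
# proportion to stress**q, then clip to physical density bounds.
budget = volfrac * stress.size
rho = np.clip(stress ** q / np.sum(stress ** q) * budget, 0.01, 1.0)
rho *= budget / rho.sum()     # restore the volume budget after clipping
rho = np.clip(rho, 0.01, 1.0)

print("volume fraction: %.3f" % rho.mean())
```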
Bacanin, Nebojsa; Tuba, Milan
2014-01-01
Portfolio optimization (selection) is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with an entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved results. PMID:24991645
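For readers unfamiliar with the baseline method, a minimal unconstrained firefly algorithm is sketched below on a stand-in objective; the paper's modifications for the CCMV constraints are not reproduced, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
cost = lambda x: np.sum(x ** 2)          # stand-in for the portfolio objective

n, dim, iters = 25, 4, 200
beta0, gamma, alpha = 1.0, 1.0, 0.2      # attractiveness, absorption, noise
X = rng.uniform(-1, 1, (n, dim))

for t in range(iters):
    f = np.array([cost(x) for x in X])
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:              # move firefly i toward brighter firefly j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
    alpha *= 0.98                        # cool the random walk over time

best = min(X, key=cost)
print("best cost:", cost(best))
```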
Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems
NASA Astrophysics Data System (ADS)
Watkins, Edward Francis
1995-01-01
A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets, and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descent optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained, and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified on a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear-constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found to be feasible and substantially broadens the range of optimization problems that can be handled efficiently.
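NPSOL itself is proprietary, but the same problem shape (bounds, linear constraints, and smooth nonlinear constraints) can be sketched with SciPy's trust-constr solver. The dose and cost models below are hypothetical stand-ins for a "minimize dose at fixed cost" problem.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint, Bounds

# x = shield layer thicknesses; hypothetical dose model (attenuation-like)
dose = lambda x: 10.0 * np.exp(-0.8 * x[0]) + 6.0 * np.exp(-0.5 * x[1])

res = minimize(dose, x0=np.array([1.0, 1.0]), method="trust-constr",
               bounds=Bounds([0.0, 0.0], [5.0, 5.0]),       # simple bounds
               constraints=[
                   LinearConstraint([[2.0, 1.0]], -np.inf, 6.0),  # cost budget
                   NonlinearConstraint(lambda x: x[0] * x[1],     # smooth nonlinear
                                       0.5, np.inf),
               ])

print("thicknesses:", np.round(res.x, 3), "dose: %.4f" % res.fun)
```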
Research Trends in Wireless Visual Sensor Networks When Exploiting Prioritization
Costa, Daniel G.; Guedes, Luiz Affonso; Vasques, Francisco; Portugal, Paulo
2015-01-01
The development of wireless sensor networks for control and monitoring functions has created a vibrant investigation scenario, where many critical topics, such as communication efficiency and energy consumption, have been investigated in the past few years. However, when sensors are endowed with low-power cameras for visual monitoring, a new scope of challenges is raised, demanding new research efforts. In this context, the resource-constrained nature of sensor nodes has demanded the use of prioritization approaches as a practical mechanism to lower the transmission burden of visual data over wireless sensor networks. Many works in recent years have considered local-level prioritization parameters to enhance the overall performance of those networks, but global-level policies can potentially achieve better results in terms of visual monitoring efficiency. In this paper, we make a broad review of some recent works on priority-based optimizations in wireless visual sensor networks. Moreover, we envisage some research trends when exploiting prioritization, potentially fostering the development of promising optimizations for wireless sensor networks composed of visual sensors. PMID:25599425
On the ability of a global atmospheric inversion to constrain variations of CO2 fluxes over Amazonia
NASA Astrophysics Data System (ADS)
Molina, L.; Broquet, G.; Imbach, P.; Chevallier, F.; Poulter, B.; Bonal, D.; Burban, B.; Ramonet, M.; Gatti, L. V.; Wofsy, S. C.; Munger, J. W.; Dlugokencky, E.; Ciais, P.
2015-01-01
The exchanges of carbon, water, and energy between the atmosphere and the Amazon Basin have global implications for current and future climate. Here, the global atmospheric inversion system of the Monitoring of Atmospheric Composition and Climate service (MACC) was used to further study the seasonal and interannual variations of biogenic CO2 fluxes in Amazonia. The system assimilated surface measurements of atmospheric CO2 mole fractions made over more than 100 sites over the globe into an atmospheric transport model. This study added four surface stations located in tropical South America, a region poorly covered by CO2 observations. The estimates of net ecosystem exchange (NEE) optimized by the inversion were compared to independent estimates of NEE upscaled from eddy-covariance flux measurements in Amazonia, and against reports on the seasonal and interannual variations of the land sink in South America from the scientific literature. We focused on the impact of the interannual variation of the strong droughts in 2005 and 2010 (due to severe and longer-than-usual dry seasons), and of the extreme rainfall conditions registered in 2009. The spatial variations of the seasonal and interannual variability of optimized NEE were also investigated. While the inversion supported the assumption of strong spatial heterogeneity of these variations, the results revealed critical limitations that prevent global inversion frameworks from capturing the data-driven seasonal patterns of fluxes across Amazonia. In particular, it highlighted issues due to the configuration of the observation network in South America and the lack of continuity of the measurements. However, some robust patterns from the inversion seemed consistent with the abnormal moisture conditions in 2009.
Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang
2018-06-01
Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.
Toward Overcoming the Local Minimum Trap in MFBD
2015-07-14
Publications during the first two years of this grant include: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Variable Projection Method for Blind Deconvolution"; and A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, "Constrained Numerical Optimization Methods for Blind Deconvolution," Numerical Algorithms, volume 65, issue 1.
A greedy algorithm for species selection in dimension reduction of combustion chemistry
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Ren, Zhuyin; Pope, Stephen B.
2010-09-01
Computational calculations of combustion problems involving large numbers of species and reactions with a detailed description of the chemistry can be very expensive. Numerous dimension reduction techniques have been developed in the past to reduce the computational cost. In this paper, we consider the rate controlled constrained-equilibrium (RCCE) dimension reduction method, in which a set of constrained species is specified. For a given number of constrained species, the 'optimal' set of constrained species is that which minimizes the dimension reduction error. The direct determination of the optimal set is computationally infeasible, and instead we present a greedy algorithm which aims at determining a 'good' set of constrained species; that is, one leading to near-minimal dimension reduction error. The partially-stirred reactor (PaSR) involving methane premixed combustion with chemistry described by the GRI-Mech 1.2 mechanism containing 31 species is used to test the algorithm. Results on dimension reduction errors for different sets of constrained species are presented to assess the effectiveness of the greedy algorithm. It is shown that the first four constrained species selected using the proposed greedy algorithm produce lower dimension reduction error than constraints on the major species: CH4, O2, CO2 and H2O. It is also shown that the first ten constrained species selected using the proposed greedy algorithm produce a non-increasing dimension reduction error with every additional constrained species; and produce the lowest dimension reduction error in many cases tested over a wide range of equivalence ratios, pressures and initial temperatures.
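The greedy selection itself is generic and can be sketched independently of the chemistry: grow the constrained set one element at a time, always adding the element that minimizes the error of the enlarged set. The error function below is a toy stand-in for the dimension reduction error.

```python
import numpy as np

def greedy_select(candidates, error_fn, k):
    """Grow a set one element at a time, always adding the element that
    gives the lowest error for the enlarged set (near-optimal, not optimal)."""
    chosen = []
    for _ in range(k):
        remaining = [c for c in candidates if c not in chosen]
        best = min(remaining, key=lambda c: error_fn(chosen + [c]))
        chosen.append(best)
    return chosen

# Toy stand-in for the dimension-reduction error of a constrained-species set
rng = np.random.default_rng(3)
value = {s: v for s, v in zip("ABCDEFGH", rng.random(8))}
error_fn = lambda subset: 1.0 / (1e-3 + sum(value[s] for s in subset))

print(greedy_select(list("ABCDEFGH"), error_fn, k=4))
```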
Yang, C; Jiang, W; Chen, D-H; Adiga, U; Ng, E G; Chiu, W
2009-03-01
The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-08-28
Here, PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
Comments on "The multisynapse neural network and its application to fuzzy clustering".
Yu, Jian; Hao, Pengwei
2005-05-01
In the above-mentioned paper, Wei and Fahn proposed a neural architecture, the multisynapse neural network, to solve constrained optimization problems including high-order, logarithmic, and sinusoidal forms, etc. As one of its main applications, a fuzzy bidirectional associative clustering network (FBACN) was proposed for fuzzy-partition clustering according to the objective-functional method. The connection between the objective-functional-based fuzzy c-partition algorithms and FBACN is the Lagrange multiplier approach. Unfortunately, the Lagrange multiplier approach was incorrectly applied, so that FBACN does not equivalently minimize its corresponding constrained objective function. Additionally, Wei and Fahn adopted the traditional definition of fuzzy c-partition, which is not satisfied by FBACN. Therefore, FBACN cannot solve constrained optimization problems either.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
Vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. It can therefore solve ratio optimization problems associated with fuzzy parameters and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by allowing different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level, and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor for system efficiency, compared with the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.
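Setting the fuzzy credibility machinery aside, the LFP core can be illustrated with the classic Charnes-Cooper transformation, which turns a linear fractional program into an ordinary LP. The sketch below uses made-up coefficients.

```python
import numpy as np
from scipy.optimize import linprog

# maximize (c.x + c0) / (d.x + d0)  s.t.  A x <= b, x >= 0, with d.x + d0 > 0
c, c0 = np.array([3.0, 1.0]), 0.0     # numerator (e.g., net benefit)
d, d0 = np.array([1.0, 2.0]), 1.0     # denominator (e.g., water use)
A, b = np.array([[1.0, 1.0]]), np.array([4.0])

# Charnes-Cooper: y = t*x with t = 1/(d.x + d0) turns the ratio into an LP:
# maximize c.y + c0*t  s.t.  A y - b t <= 0,  d.y + d0*t = 1,  y, t >= 0
obj = -np.concatenate([c, [c0]])                       # linprog minimizes
A_ub = np.hstack([A, -b[:, None]])
A_eq = np.concatenate([d, [d0]])[None, :]
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(1), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * 3)

t = res.x[-1]
x = res.x[:2] / t                      # recover the original variables
print("x =", np.round(x, 4), "ratio = %.4f" % ((c @ x + c0) / (d @ x + d0)))
```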
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2018-05-01
The application of a multi-arm space robot will be more effective than a single arm, especially when the target is tumbling. This paper investigates the application of a particle swarm optimization (PSO) strategy to coordinated trajectory planning of a dual-arm space robot in free-floating mode. In order to overcome the dynamics singularity issue, the direct kinematics equations in conjunction with constrained PSO are employed for coordinated trajectory planning of the dual-arm space robot. The joint trajectories are parametrized with Bézier curves to simplify the calculation. A constrained PSO scheme with adaptive inertia weight is implemented to find the optimal solution of joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue due to the application of forward kinematic equations. Simulation results are presented for coordinated trajectory planning of two kinematically redundant manipulators mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.
Deterministic Reconfigurable Control Design for the X-33 Vehicle
NASA Technical Reports Server (NTRS)
Wagner, Elaine A.; Burken, John J.; Hanson, Curtis E.; Wohletz, Jerry M.
1998-01-01
In the event of a control surface failure, the purpose of a reconfigurable control system is to redistribute the control effort among the remaining working surfaces such that satisfactory stability and performance are retained. Four reconfigurable control design methods were investigated for the X-33 vehicle: Redistributed Pseudo-Inverse, General Constrained Optimization, Automated Failure Dependent Gain Schedule, and Off-line Nonlinear General Constrained Optimization. The Off-line Nonlinear General Constrained Optimization approach was chosen for implementation on the X-33. Two example failures are shown: a right outboard elevon jam at 25 degrees at a Mach 3 entry condition, and a left rudder jam at 30 degrees. Note, however, that reconfigurable control laws have been designed for the entire flight envelope. Comparisons between responses with the nominal controller and the reconfigurable controllers show the benefits of reconfiguration. Single-jam aerosurface failures were considered, and failure detection and identification are assumed to be accomplished in the actuator controller. The X-33 flight control system will incorporate reconfigurable flight control in the baseline system.
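The Redistributed Pseudo-Inverse idea is compact enough to sketch: allocate the commanded moments with the pseudo-inverse of the control effectiveness matrix, and after a jam, subtract the jammed surface's fixed contribution and re-solve over the remaining columns. The effectiveness matrix and commands below are made up.

```python
import numpy as np

# Control allocation: find surface deflections u with B u = d_cmd
B = np.array([[2.0, 1.0, 1.0, 0.5],      # rows: roll, pitch; cols: 4 surfaces
              [0.5, 2.0, 1.5, 1.0]])
d_cmd = np.array([1.0, 0.5])             # commanded moments

u_nom = np.linalg.pinv(B) @ d_cmd        # healthy-vehicle pseudo-inverse solution

# Surface 0 jams at a fixed deflection: subtract its contribution and
# redistribute the residual command over the working surfaces.
u_jam = 0.4
d_res = d_cmd - B[:, 0] * u_jam
u_fail = np.linalg.pinv(B[:, 1:]) @ d_res

print("nominal:", np.round(u_nom, 3))
print("reconfigured (surfaces 1-3):", np.round(u_fail, 3))
```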
Vehicle routing problem with time windows using natural inspired algorithms
NASA Astrophysics Data System (ADS)
Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.
2018-03-01
The process of distributing goods needs a strategy that minimizes the total cost of operational activities, but several constraints have to be satisfied, namely the capacity of the vehicles and the service times of the customers. The Vehicle Routing Problem with Time Windows (VRPTW) is therefore a problem with complex constraints. This paper proposes nature-inspired algorithms for dealing with the constraints of the VRPTW, involving the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. The algorithm based on the behavior of cats, Cat Swarm Optimization, is improved using the Crow Search Algorithm to achieve simpler and faster convergence. The computational results show that these algorithms perform well in finding the minimum total distance, and that a larger population leads to better computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of the Bat Algorithm and Simulated Annealing when dealing with big data.
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
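A toy version of the prediction-correction loop is sketched below for tracking the minimizer of a time-varying quadratic: the prediction step extrapolates the previous trajectory and the correction step takes a gradient step at the new time. This is a simplified first-order stand-in, not the paper's Hessian-based predictor.

```python
import numpy as np

# Track the minimizer of f(x; t) = (x - sin t)^2 as t advances
grad = lambda x, t: 2.0 * (x - np.sin(t))

dt, alpha = 0.1, 0.4
x, x_prev = 0.0, 0.0
errs = []
for k in range(200):
    t = k * dt
    x_pred = x + (x - x_prev)                   # prediction: extrapolate the trajectory
    x_prev = x
    x = x_pred - alpha * grad(x_pred, t + dt)   # correction: gradient step at t + dt
    errs.append(abs(x - np.sin(t + dt)))        # distance to the true minimizer

print("mean tracking error: %.4f" % np.mean(errs[20:]))
```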
Constrained optimization of image restoration filters
NASA Technical Reports Server (NTRS)
Riemer, T. E.; Mcgillem, C. D.
1973-01-01
A linear shift-invariant preprocessing technique is described which requires no specific knowledge of the image parameters and which is sufficiently general to allow the effective radius of the composite imaging system to be minimized while constraining other system parameters to remain within specified limits.
Hamzehpour, Hossein; Rasaei, M Reza; Sahimi, Muhammad
2007-05-01
We describe a method for the development of the optimal spatial distributions of the porosity phi and permeability k of a large-scale porous medium. The optimal distributions are constrained by static and dynamic data. The static data that we utilize are limited data for phi and k, which the method honors in the optimal model and utilizes their correlation functions in the optimization process. The dynamic data include the first-arrival (FA) times, at a number of receivers, of seismic waves that have propagated in the porous medium, and the time-dependent production rates of a fluid that flows in the medium. The method combines the simulated-annealing method with a simulator that solves numerically the three-dimensional (3D) acoustic wave equation and computes the FA times, and a second simulator that solves the 3D governing equation for the fluid's pressure as a function of time. To our knowledge, this is the first time that an optimization method has been developed to determine simultaneously the global minima of two distinct total energy functions. As a stringent test of the method's accuracy, we solve for flow of two immiscible fluids in the same porous medium, without using any data for the two-phase flow problem in the optimization process. We show that the optimal model, in addition to honoring the data, also yields accurate spatial distributions of phi and k, as well as providing accurate quantitative predictions for the single- and two-phase flow problems. The efficiency of the computations is discussed in detail.
Global biogeography of human infectious diseases.
Murray, Kris A; Preston, Nicholas; Allen, Toph; Zambrana-Torrelio, Carlos; Hosseini, Parviez R; Daszak, Peter
2015-10-13
The distributions of most infectious agents causing disease in humans are poorly resolved or unknown. However, poorly known and unknown agents contribute to the global burden of disease and will underlie many future disease risks. Existing patterns of infectious disease co-occurrence could thus play a critical role in resolving or anticipating current and future disease threats. We analyzed the global occurrence patterns of 187 human infectious diseases across 225 countries and seven epidemiological classes (human-specific, zoonotic, vector-borne, non-vector-borne, bacterial, viral, and parasitic) to show that human infectious diseases exhibit distinct spatial grouping patterns at a global scale. We demonstrate, using outbreaks of Ebola virus as a test case, that this spatial structuring provides an untapped source of prior information that could be used to tighten the focus of a range of health-related research and management activities at early stages or in data-poor settings, including disease surveillance, outbreak responses, or optimizing pathogen discovery. In examining the correlates of these spatial patterns, among a range of geographic, epidemiological, environmental, and social factors, mammalian biodiversity was the strongest predictor of infectious disease co-occurrence overall and for six of the seven disease classes examined, giving rise to a striking congruence between global pathogeographic and "Wallacean" zoogeographic patterns. This clear biogeographic signal suggests that infectious disease assemblages remain fundamentally constrained in their distributions by ecological barriers to dispersal or establishment, despite the homogenizing forces of globalization. Pathogeography thus provides an overarching context in which other factors promoting infectious disease emergence and spread are set.
Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouri, Drew Philip; Surowiec, Thomas M.
2018-06-05
Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. In this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.
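The two ingredient risk measures that the paper's conditional entropic risk fuses are simple to estimate from samples, as sketched below; the cost distribution is a placeholder, and the paper's new fused measure itself is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(4)
costs = rng.lognormal(mean=0.0, sigma=0.5, size=100000)  # sampled random objective

def cvar(samples, alpha=0.95):
    """Conditional value-at-risk: average of the worst (1 - alpha) tail."""
    tail = np.sort(samples)[int(alpha * len(samples)):]
    return tail.mean()

def entropic_risk(samples, theta=1.0):
    """(1/theta) * log E[exp(theta * X)]; increasingly risk-averse in theta."""
    return np.log(np.mean(np.exp(theta * samples))) / theta

print("mean        : %.4f" % costs.mean())
print("CVaR(0.95)  : %.4f" % cvar(costs))
print("entropic(1) : %.4f" % entropic_risk(costs))
```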
Liu, Xing; Hou, Kun Mean; de Vaulx, Christophe; Xu, Jun; Yang, Jianfeng; Zhou, Haiying; Shi, Hongling; Zhou, Peng
2015-01-01
Memory and energy optimization strategies are essential for resource-constrained wireless sensor network (WSN) nodes. In this article, a new memory-optimized and energy-optimized multithreaded WSN operating system (OS), LiveOS, is designed and implemented. The memory cost of LiveOS is optimized by using a stack-shifting hybrid scheduling approach. Unlike traditional multithreaded OSs, in which thread stacks are allocated statically by pre-reservation, thread stacks in LiveOS are allocated dynamically by using the stack-shifting technique. As a result, the memory waste caused by static pre-reservation can be avoided. In addition to the stack-shifting dynamic allocation approach, a hybrid scheduling mechanism which can decrease both the thread scheduling overhead and the number of thread stacks is also implemented in LiveOS. With these mechanisms, the stack memory cost of LiveOS can be reduced by more than 50% compared to that of a traditional multithreaded OS. Not only is the memory cost optimized, but the energy cost is also optimized in LiveOS, by using the multi-core "context aware" and multi-core "power-off/wakeup" energy conservation approaches. With these approaches, the energy cost of LiveOS can be reduced by more than 30% when compared to a single-core WSN system. The memory and energy optimization strategies in LiveOS not only prolong the lifetime of WSN nodes, but also make a multithreaded OS feasible to run on memory-constrained WSN nodes. PMID:25545264
Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne
2007-01-01
The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5s with an accuracy of 1.41mm (r.m.s.) and 3.84mm (max). The 3D to slices method yielded a success rate of 88.9% in 2.3s with an accuracy of 1.37mm (r.m.s.) and 4.3mm (max).
Mixed-Strategy Chance Constrained Optimal Control
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J.
2013-01-01
This paper presents a novel chance constrained optimal control (CCOC) algorithm that chooses a control action probabilistically. A CCOC problem is to find a control input that minimizes the expected cost while guaranteeing that the probability of violating a set of constraints is below a user-specified threshold. We show that a probabilistic control approach, which we refer to as a mixed control strategy, enables us to obtain a cost that is better than what deterministic control strategies can achieve when the CCOC problem is nonconvex. The resulting mixed-strategy CCOC problem turns out to be a convexification of the original nonconvex CCOC problem. Furthermore, we also show that a mixed control strategy only needs to "mix" up to two deterministic control actions in order to achieve optimality. Building upon an iterative dual optimization, the proposed algorithm quickly converges to the optimal mixed control strategy with a user-specified tolerance.
NASA Astrophysics Data System (ADS)
Louie, J. N.; Basler-Reeder, K.; Kent, G. M.; Pullammanappallil, S. K.
2015-12-01
Simultaneous joint seismic-gravity optimization improves P-wave velocity models in areas with sharp lateral velocity contrasts. Optimization is achieved using simulated annealing, a metaheuristic global optimization algorithm that does not require an accurate initial model. Balancing the seismic-gravity objective function is accomplished by a novel approach based on analysis of Pareto charts. Gravity modeling uses a newly developed convolution algorithm, while seismic modeling utilizes the highly efficient Vidale eikonal equation traveltime generation technique. Synthetic tests show that joint optimization improves velocity model accuracy and provides velocity control below the deepest headwave raypath. Detailed first arrival picking followed by trial velocity modeling remediates inconsistent data. We use a set of highly refined first arrival picks to compare results of a convergent joint seismic-gravity optimization to the Plotrefa™ and SeisOpt® Pro™ velocity modeling packages. Plotrefa™ uses a nonlinear least squares approach that is initial model dependent and produces shallow velocity artifacts. SeisOpt® Pro™ utilizes the simulated annealing algorithm and is limited to depths above the deepest raypath. Joint optimization increases the depth of constrained velocities, improving reflector coherency at depth. Kirchhoff prestack depth migrations reveal that joint optimization ameliorates shallow velocity artifacts caused by limitations in refraction ray coverage. Seismic and gravity data from the San Emidio Geothermal field of the northwest Basin and Range province demonstrate that joint optimization changes interpretation outcomes. The prior shallow-valley interpretation gives way to a deep valley model, while shallow antiformal reflectors that could have been interpreted as antiformal folds are flattened. Furthermore, joint optimization provides a clearer image of the rangefront fault. This technique can readily be applied to existing datasets and could replace the existing strategy of forward modeling to match gravity data.
H2, fixed architecture, control design for large scale systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1990-01-01
The H2, fixed architecture, control problem is a classic linear quadratic Gaussian (LQG) problem whose solution is constrained to be a linear time invariant compensator with a decentralized processing structure. The compensator can be made of p independent subcontrollers, each of which has a fixed order and connects selected sensors to selected actuators. The H2, fixed architecture, control problem allows the design of simplified feedback systems needed to control large scale systems. Its solution becomes more complicated, however, as more constraints are introduced. This work derives the necessary conditions for optimality for the problem and studies their properties. It is found that the filter and control problems couple when the architecture constraints are introduced, and that the different subcontrollers must be coordinated in order to achieve global system performance. The problem requires the simultaneous solution of highly coupled matrix equations. The use of homotopy is investigated as a numerical tool, and its convergence properties studied. It is found that the general constrained problem may have multiple stabilizing solutions, and that these solutions may be local minima or saddle points for the quadratic cost. The nature of the solution is not invariant when the parameters of the system are changed. Bifurcations occur, and a solution may continuously transform into a nonstabilizing compensator. Using a modified homotopy procedure, fixed architecture compensators are derived for models of large flexible structures to help understand the properties of the constrained solutions and compare them to the corresponding unconstrained ones.
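The homotopy idea used in the thesis can be sketched generically: embed the target system of equations in a family H(x, t) that starts from an easy problem at t = 0 and deforms into the hard one at t = 1, warm-starting each solve from the previous root. The systems below are toy stand-ins, not the fixed-architecture optimality conditions.

```python
import numpy as np
from scipy.optimize import fsolve

# Toy stand-ins: F is the "hard" system (the real form of z**3 = 1, whose
# roots are awkward to reach blindly), G an "easy" one with a known root.
def F(x):
    return np.array([x[0] ** 3 - 3 * x[0] * x[1] ** 2 - 1.0,
                     3 * x[0] ** 2 * x[1] - x[1] ** 3])

def G(x):
    return x - np.array([1.0, 0.5])

x = np.array([1.0, 0.5])                  # exact root of G
for t in np.linspace(0.0, 1.0, 21):       # march the homotopy parameter
    H = lambda z, t=t: (1 - t) * G(z) + t * F(z)
    x = fsolve(H, x)                      # warm-start from the previous root
print(x, F(x))                            # lands on a root of F
```

The thesis' observation that solutions can bifurcate or deform into nonstabilizing compensators corresponds, in this picture, to the tracked root changing character as t advances.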
A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.
Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa
2018-02-01
Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
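As a hedged sketch of how such a formulation behaves in practice (the mixing matrix, bounds, and safety band below are invented, and this is a generic MILP rather than the paper's exact model), SciPy's milp solver (SciPy >= 1.9) both optimizes integer-valued currents and reports infeasibility conclusively when the goals cannot be met:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical toy instance: 4 electrodes with integer current levels.
# Row 0 of A maps currents to the targeted region's activation; rows 1-2
# map to non-target regions that must stay inside a safety band.
A = np.array([[0.9, 0.7, 0.2, 0.1],
              [0.4, 0.1, 0.8, 0.3],
              [0.2, 0.5, 0.1, 0.9]])

c = -A[0]                                   # maximize target activation
constraints = LinearConstraint(A[1:], lb=-1.0, ub=1.0)   # protect others
res = milp(c, constraints=constraints,
           integrality=np.ones(4),          # all currents integer-valued
           bounds=Bounds(-5, 5))

if res.success:
    print("currents:", res.x, "target activation:", A[0] @ res.x)
else:
    print("stimulation goals infeasible under this setup:", res.message)
```

The else branch is the feasibility check the abstract highlights: an infeasible certificate tells the researcher up front that the chosen setup cannot reach the stated goals.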
Resource Effective Strategies to Prevent and Treat Cardiovascular Disease
Schwalm, Jon-David; McKee, Martin; Huffman, Mark D.; Yusuf, Salim
2016-01-01
Cardiovascular disease (CVD) is the leading cause of global deaths, with the majority occurring in low- and middle-income countries (LMIC). The primary and secondary prevention of CVD is suboptimal throughout the world, but the evidence-practice gaps are much more pronounced in LMIC. Barriers at the patient, health-care provider, and health-system level prevent the implementation of optimal primary and secondary prevention. Identification of the particular barriers that exist in resource-constrained settings is necessary to inform effective strategies to reduce the identified evidence-practice gaps. Furthermore, targeting the modifiable factors that contribute most significantly to the global burden of CVD, including tobacco use, hypertension, and secondary prevention for CVD, will lead to the biggest gains in mortality reduction. We review a select number of novel, resource-efficient strategies to reduce premature mortality from CVD, including: (1) effective measures for tobacco control; (2) implementation of simplified screening and management algorithms for those with or at risk of CVD; (3) increasing the availability and affordability of simplified and cost-effective treatment regimens, including combination CVD preventive drug therapy; and (4) simplified delivery of health care through task-sharing (non-physician health workers) and optimized self-management (treatment supporters). Developing and deploying systems of care that address the barriers above will lead to substantial reductions in CVD and related mortality. PMID:26903017
Keresztes, Janos C; John Koshel, R; D'huys, Karlien; De Ketelaere, Bart; Audenaert, Jan; Goos, Peter; Saeys, Wouter
2016-12-26
A novel meta-heuristic approach for minimizing nonlinear constrained problems is proposed, which offers tolerance information during the search for the global optimum. The method is based on the concept of design and analysis of computer experiments combined with a novel two-phase design augmentation (DACEDA), which models the entire merit space using a Gaussian process, with iteratively increased resolution around the optimum. The algorithm is introduced through a series of case studies of increasing complexity for optimizing the uniformity of a short-wave infrared (SWIR) hyperspectral imaging (HSI) illumination system (IS). The method is first demonstrated for a two-dimensional problem consisting of the positioning of analytical isotropic point sources. It is then applied to two-dimensional (2D) and five-dimensional (5D) SWIR HSI IS versions using close- and far-field measured source models within the non-sequential ray-tracing software FRED, including inherent stochastic noise. The proposed method is compared to other heuristic approaches such as simplex and simulated annealing (SA). It is shown that DACEDA converges towards a minimum with 1% improvement compared to simplex and SA and, more importantly, requires only half the number of simulations. Finally, a concurrent tolerance analysis is done within DACEDA for the five-dimensional case such that further simulations are not required.
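A rough analogue of this surrogate-driven search, assuming scikit-learn's Gaussian process regressor as the merit-space model: fit the GP to the evaluated samples, then shrink the candidate window around the incumbent to mimic iteratively increased resolution near the optimum. The merit function, kernel, shrink rate, and acquisition rule are all placeholder choices, not DACEDA itself; the GP's posterior standard deviation is what supplies tolerance-style information during the search.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
merit = lambda x: np.sin(3 * x) + 0.5 * x ** 2     # stand-in merit function

X = rng.uniform(-2, 2, 8).reshape(-1, 1)           # initial space-filling set
y = merit(X).ravel()

for it in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(0.5), normalize_y=True).fit(X, y)
    # Densify candidates around the incumbent: a crude analogue of
    # iteratively increased resolution near the optimum.
    x0 = X[np.argmin(y), 0]
    width = 2.0 * 0.7 ** it
    cand = np.linspace(max(-2, x0 - width), min(2, x0 + width), 200)[:, None]
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 1.0 * sd)]        # lower-confidence bound
    X = np.vstack([X, [x_next]])
    y = np.append(y, merit(x_next[0]))

print("best:", X[np.argmin(y)], y.min())
```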
Enhancing Polyhedral Relaxations for Global Optimization
ERIC Educational Resources Information Center
Bao, Xiaowei
2009-01-01
During the last decade, global optimization has attracted a lot of attention due to the increased practical need for obtaining global solutions and the success in solving many global optimization problems that were previously considered intractable. In general, the central question of global optimization is to find an optimal solution to a given…
Solution techniques for transient stability-constrained optimal power flow – Part II
Geng, Guangchao; Abhyankar, Shrirang; Wang, Xiaoyu; ...
2017-06-28
Transient stability-constrained optimal power flow is an important emerging problem, with power systems pushed to their limits for economic benefits, denser and larger interconnected systems, and reduced inertia due to the expected proliferation of renewable energy resources. In this study, two more approaches are presented: single machine equivalent and computational intelligence. Various application areas and future directions in this research area are also discussed. Finally, a comprehensive resource covering the available literature, publicly available test systems, and relevant numerical libraries is provided.
Necessary optimality conditions for infinite dimensional state constrained control problems
NASA Astrophysics Data System (ADS)
Frankowska, H.; Marchini, E. M.; Mazzola, M.
2018-06-01
This paper is concerned with first order necessary optimality conditions for state constrained control problems in separable Banach spaces. Assuming inward pointing conditions on the constraint, we give a simple proof of Pontryagin maximum principle, relying on infinite dimensional neighboring feasible trajectories theorems proved in [20]. Further, we provide sufficient conditions guaranteeing normality of the maximum principle. We work in the abstract semigroup setting, but nevertheless we apply our results to several concrete models involving controlled PDEs. Pointwise state constraints (as positivity of the solutions) are allowed.
NASA Astrophysics Data System (ADS)
Ding, Hao; Chao, Benjamin F.
2017-02-01
Mantle anelasticity plays an important role in the Earth's interior dynamics. Here we seek to determine the lower-mantle anelasticity through the solution of the complex Love numbers at the Chandler wobble period. The Love numbers h21, l21, δ21 and k21 are obtained in the frequency domain by dividing off the observed polar motion, or more specifically the pole tide potential, from the observed GPS 3-D displacement field and SG gravity variation. The latter signals are obtained through the array-processing method of OSE (optimal sequence estimation), which greatly enhances the signals extracted from global array data. The resultant Love number estimates h21 = 0.6248 (±5 × 10^-4) - 0.013 (±5 × 10^-3)i, l21 = 0.0904 (±8 × 10^-4) - 0.0008 (±2 × 10^-3)i, δ21 = 1.156 (±2 × 10^-3) - 0.003 (±1 × 10^-3)i and k21 = 0.3125 (±2 × 10^-3) - 0.0069 (±3 × 10^-3)i are thus well constrained in comparison to past estimates, which vary considerably. They further lead to estimates of the corresponding mantle anelastic parameters fr and fi, which in turn determine, under the single-absorption-band assumption, a dispersion exponent of α = 0.21 ± 0.02 with respect to the reference frequency of 5 mHz. We believe our estimate is robust and hence can better constrain the mantle anelasticity and attenuation models of the Earth's interior.
NASA Astrophysics Data System (ADS)
Liu, Suihan; Burgueño, Rigoberto
2016-12-01
Axially compressed bilaterally constrained columns, which can attain multiple snap-through buckling events in their elastic postbuckling response, can be used as energy concentrators and mechanical triggers that transform external quasi-static displacement input into local high-rate motions and excite vibration-based piezoelectric transducers for energy harvesting devices. However, the buckling location with the highest kinetic energy release along the element, where piezoelectric oscillators should optimally be placed, cannot be controlled or isolated due to the changing buckling configurations. This paper proposes the concept of stiffness variations along the column to gain control of the buckling location for optimal placement of piezoelectric transducers. Prototyped non-prismatic columns with piecewise varying thickness were fabricated through 3D printing for experimental characterization, and numerical simulations were conducted using the finite element method. A simple theoretical model was also developed, based on the stationary potential energy principle, for predicting the critical line-contact segment that triggers snap-through events and the buckling morphologies as compression proceeds. Results confirm that non-prismatic column designs allow control of the buckling location in the elastic postbuckling regime. Compared to prismatic columns, non-prismatic designs can attain a concentrated kinetic energy release spot and a higher number of snap-buckling mode transitions under the same global strain. The direct relation between the column's dynamic response and the output voltage from piezoelectric oscillator transducers allows the tailorable postbuckling response of non-prismatic columns to be used in multi-stable energy concentrators with enhanced performance in micro-energy harvesters.
NASA Technical Reports Server (NTRS)
Voigt, Kerstin
1992-01-01
We present MENDER, a knowledge based system that implements software design techniques specialized to automatically compile generate-and-patch problem solvers for global resource assignment problems. We provide empirical evidence of the superior performance of generate-and-patch over generate-and-test, even with constrained generation, for a global constraint in the domain of 2D floorplanning. For a second constraint in 2D floorplanning we show that even when it is possible to incorporate the constraint into a constrained generator, a generate-and-patch problem solver may satisfy the constraint more rapidly. We also briefly summarize how an extended version of our system applies to a constraint in the domain of multiprocessor scheduling.
NASA Astrophysics Data System (ADS)
Quinn Thomas, R.; Brooks, Evan B.; Jersild, Annika L.; Ward, Eric J.; Wynne, Randolph H.; Albaugh, Timothy J.; Dinon-Aldridge, Heather; Burkhart, Harold E.; Domec, Jean-Christophe; Fox, Thomas R.; Gonzalez-Benecke, Carlos A.; Martin, Timothy A.; Noormets, Asko; Sampson, David A.; Teskey, Robert O.
2017-07-01
Predicting how forest carbon cycling will change in response to climate change and management depends on the collective knowledge from measurements across environmental gradients, ecosystem manipulations of global change factors, and mathematical models. Formally integrating these sources of knowledge through data assimilation, or model-data fusion, allows the use of past observations to constrain model parameters and estimate prediction uncertainty. Data assimilation (DA) focused on the regional scale has the opportunity to integrate data from both environmental gradients and experimental studies to constrain model parameters. Here, we introduce a hierarchical Bayesian DA approach (Data Assimilation to Predict Productivity for Ecosystems and Regions, DAPPER) that uses observations of carbon stocks, carbon fluxes, water fluxes, and vegetation dynamics from loblolly pine plantation ecosystems across the southeastern US to constrain parameters in a modified version of the Physiological Principles Predicting Growth (3-PG) forest growth model. The observations included major experiments that manipulated atmospheric carbon dioxide (CO2) concentration, water, and nutrients, along with nonexperimental surveys that spanned environmental gradients across an 8.6 × 10^5 km^2 region. We optimized regionally representative posterior distributions for model parameters, which dependably predicted data from plots withheld from the data assimilation. While the mean bias in predictions of nutrient fertilization experiments, irrigation experiments, and CO2 enrichment experiments was low, future work needs to focus on modifications to model structures that decrease the bias in predictions of drought experiments. Predictions of how growth responded to elevated CO2 strongly depended on whether ecosystem experiments were assimilated and whether the assimilated field plots in the CO2 study were allowed to have different mortality parameters than the other field plots in the region. We present predictions of stem biomass productivity under elevated CO2, decreased precipitation, and increased nutrient availability that include estimates of uncertainty for the southeastern US. Overall, we (1) demonstrated how three decades of research in southeastern US planted pine forests can be used to develop DA techniques that use multiple locations, multiple data streams, and multiple ecosystem experiment types to optimize parameters and (2) developed a tool for future predictions of forest productivity for natural resource managers that leverages a rich dataset of integrated ecosystem observations across a region.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
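A stripped-down sketch of the residual-quantization stage on synthetic data: the paper's entropy-constrained design additionally folds the residual image's entropy into the codebook optimization, which this plain two-stage RVQ toy omits.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
blocks = rng.normal(0.0, 10.0, (5000, 4))   # stand-in 2x2 image blocks

# Stage 1: coarse VQ of the blocks; stage 2: VQ of the stage-1 residuals.
cb1, idx1 = kmeans2(blocks, 16, seed=1)
res1 = blocks - cb1[idx1]
cb2, idx2 = kmeans2(res1, 16, seed=1)
res2 = res1 - cb2[idx2]                     # final residual image: what a
                                            # first-order entropy coder
                                            # would compress losslessly
print((res1 ** 2).mean(), (res2 ** 2).mean())  # distortion shrinks per stage
```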
Ice-free Arctic projections under the Paris Agreement
NASA Astrophysics Data System (ADS)
Sigmond, Michael; Fyfe, John C.; Swart, Neil C.
2018-05-01
Under the Paris Agreement, emissions scenarios are pursued that would stabilize the global mean temperature at 1.5-2.0 °C above pre-industrial levels, but current emission reduction policies are expected to limit warming by 2100 to approximately 3.0 °C. Whether such emissions scenarios would prevent a summer sea-ice-free Arctic is unknown. Here we employ stabilized warming simulations with an Earth System Model to obtain sea-ice projections under stabilized global warming, and correct biases in mean sea-ice coverage by constraining with observations. Although there is some sensitivity to details in the constraining method, the observationally constrained projections suggest that the benefits of going from 2.0 °C to 1.5 °C stabilized warming are substantial; an eightfold decrease in the frequency of ice-free conditions is expected, from once in every five to once in every forty years. Under 3.0 °C global mean warming, however, permanent summer ice-free conditions are likely, which emphasizes the need for nations to increase their commitments to the Paris Agreement.
Constrained variational calculus for higher order classical field theories
NASA Astrophysics Data System (ADS)
Campos, Cédric M.; de León, Manuel; Martín de Diego, David
2010-11-01
We develop an intrinsic geometrical setting for higher order constrained field theories. As a main tool we use an appropriate generalization of the classical Skinner-Rusk formalism. Some examples of applications are studied, in particular to the geometrical description of optimal control theory for partial differential equations.
NASA Astrophysics Data System (ADS)
Park, K.; Emmons, L. K.; Mak, J. E.
2007-12-01
Carbon monoxide is not only an important component for determining the atmospheric oxidizing capacity but also a key trace gas in the atmospheric chemistry of the Earth's background environment. The global CO cycle and its change are closely related to both the change of the CO mixing ratio and the change of source strength. Previously, most top-down estimates of the global CO budget have used CO concentrations alone. Since CO from certain sources has a unique isotopic signature, its isotopes provide additional information to constrain those sources; coupling the concentration and isotope-ratio information therefore tightly constrains the CO flux by source and allows better estimation of the global CO budget. MOZART4 (Model for Ozone And Related chemical Tracers), a 3-D global chemical transport model developed at NCAR, MPI for Meteorology and NOAA/GFDL, is used to simulate the global CO concentration and its isotopic signature. A tracer version of MOZART4, with C16O and C18O tagged by region and by source, was also developed to quantify their contributions to the atmosphere efficiently. Based on the nine-year simulation results, we analyze the influence of each CO source on the isotopic signature and the concentration. In particular, the evaluations focus on the oxygen isotope of CO (δ18O), which has not been extensively studied yet. To validate the model performance, CO concentrations and isotopic signatures measured at MPI, NIWA and our lab are compared to the modeled results. MOZART4 reproduced the observational data fairly well, especially in the mid- to high-latitude northern hemisphere. Bayesian inversion techniques have been used to estimate the global CO budget by combining observed and modeled CO concentrations. However, previous studies show significant differences in their estimates of CO source strengths. Because isotopic signatures are independent tracers that carry source information in addition to the CO mixing ratio, jointly applying the isotope and concentration information is expected to provide more precise optimization of the CO budget. Our accumulated long-term CO isotope measurement data also lend more confidence to the inversions. Beyond the benefit of adding isotope data to the inverse modeling, each CO isotope (oxygen and carbon) offers a further advantage in the top-down estimation of the CO budget: δ18O and δ13C have distinctive signatures for specific sources; combustion sources such as fossil fuel use show clearly different δ18O values from other natural sources, and the methane source can easily be separated using δ13C information. Therefore, inversions of the two major CO sources respond with different sensitivity to the different isotopes. To maximize the strengths of using isotope data in the inverse modeling analysis, various coupling schemes combining [CO], δ18O and δ13C have been investigated to enhance the credibility of the CO budget optimization.
NASA Astrophysics Data System (ADS)
Stavrakou, T.; Müller, J.-F.; Bauwens, M.; De Smedt, I.; Van Roozendael, M.; De Mazière, M.; Vigouroux, C.; Hendrick, F.; George, M.; Clerbaux, C.; Coheur, P.-F.; Guenther, A.
2015-04-01
The vertical columns of formaldehyde (HCHO) retrieved from two satellite instruments, the Global Ozone Monitoring Experiment-2 (GOME-2) on Metop-A and the Ozone Monitoring Instrument (OMI) on Aura, are used to constrain global emissions of HCHO precursors from open fires, vegetation and human activities in the year 2010. To this end, the emissions are varied and optimized using the adjoint model technique in the IMAGESv2 global CTM (chemistry-transport model) on a monthly basis and at the model resolution. Given the different local overpass times of GOME-2 (09:30 LT) and OMI (13:30 LT), the simulated diurnal cycle of HCHO columns is investigated and evaluated against ground-based optical measurements at seven sites in Europe, China and Africa. The modelled diurnal cycle exhibits large variability, reflecting competition between photochemistry and emission variations, with noon or early afternoon maxima at remote locations (oceans) and in regions dominated by anthropogenic emissions, late afternoon or evening maxima over fire scenes, and midday minima in isoprene-rich regions. The agreement between simulated and ground-based columns is found to be generally better in summer (with a clear afternoon maximum at mid-latitude sites) than in winter, and the annually averaged ratio of afternoon to morning columns is slightly higher in the model (1.126) than in the ground-based measurements (1.043). The anthropogenic VOC (volatile organic compound) sources are found to be weakly constrained by the inversions on the global scale, mainly owing to their generally minor contribution to the HCHO columns, except over strongly polluted regions, like China. The OMI-based inversion yields total flux estimates over China close to the bottom-up inventory (24.6 vs. 25.5 TgVOC yr-1 in the a priori) with, however, pronounced increases in Northeast China and reductions in the south. Lower fluxes are estimated based on GOME-2 HCHO columns (20.6 TgVOC yr-1), in particular over the Northeast, likely reflecting mismatches between the observed and the modelled diurnal cycle in this region. The resulting biogenic and pyrogenic flux estimates from both optimizations generally show a good degree of consistency. A reduction of the global annual biogenic emissions of isoprene is derived, by 9 and by 13% according to GOME-2 and OMI, respectively, compared to the a priori estimate of 363 Tg in 2010. The reduction is largest (up to 25-40%) in the Southeastern US, in accordance with earlier studies. The GOME-2 and OMI satellite columns suggest a global pyrogenic flux decrease by 36 and 33%, respectively, compared to the GFEDv3 inventory. This decrease is especially pronounced over tropical forests, such as Amazonia and Thailand/Myanmar, and is supported by comparisons with IASI CO observations. In contrast to these flux reductions, the emissions due to harvest waste burning are strongly enhanced in the Northeastern China plain in June (by ca. 70% according to OMI) as well as over Indochina in March. Sensitivity inversions showed robustness of the inferred estimates, which were found to lie within 7% of the standard inversion results on the global scale.
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial-flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure is presented, combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound-circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through the optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
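The role of the weight factors can be shown with a minimal weighted-sum sketch: two invented stand-in objectives play the parts of hydraulic loss and cavitation coefficient, and shifting the weights moves the compromise design, mirroring how the comprehensive objective function controls the designed runner's performance. Nothing below is the paper's actual turbine model.

```python
import numpy as np
from scipy.optimize import minimize

# Stand-in objectives over two design variables (not real turbine physics).
loss       = lambda v: (v[0] - 1.0) ** 2 + 0.3 * (v[1] - 2.0) ** 2
cavitation = lambda v: (v[0] - 1.6) ** 2 + (v[1] - 1.4) ** 2

def design(w_loss, w_cav):
    # Comprehensive objective: weighted sum of the two criteria.
    obj = lambda v: w_loss * loss(v) + w_cav * cavitation(v)
    cons = [{"type": "ineq", "fun": lambda v: 2.5 - v[0] - 0.5 * v[1]}]
    return minimize(obj, x0=[1.0, 1.0], constraints=cons,
                    bounds=[(0.5, 2.5), (0.5, 2.5)]).x

# Shifting the weights steers the compromise between the two objectives.
print(design(0.8, 0.2), design(0.2, 0.8))
```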
Dynamic optimization and its relation to classical and quantum constrained systems
NASA Astrophysics Data System (ADS)
Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo
2017-08-01
We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it correctly. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained dynamics given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x, t) = e^{iS(x, t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the function S. For the right-hand-side quantization, this is the Hamilton-Jacobi-Bellman equation when S(x, t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation of Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
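Schematically, in one dimension with ℏ = m = 1 (our simplification, not necessarily the paper's exact conventions), the substitution reads:

```latex
i\,\partial_t \Psi = -\tfrac{1}{2}\,\partial_{xx}\Psi + V(x)\,\Psi,
\qquad \Psi = e^{iS}
\;\Longrightarrow\;
-\,\partial_t S = \tfrac{1}{2}\left(\partial_x S\right)^2 + V(x)
- \tfrac{i}{2}\,\partial_{xx} S .
```

Dropping the imaginary second-derivative term leaves the classical Hamilton-Jacobi equation ∂tS + ½(∂xS)² + V = 0, whose controlled counterpart, with an extremization over the control in place of the fixed Hamiltonian, is the Hamilton-Jacobi-Bellman equation referenced above.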
State transformations and Hamiltonian structures for optimal control in discrete systems
NASA Astrophysics Data System (ADS)
Sieniutycz, S.
2006-04-01
Preserving the usual definition of the Hamiltonian H as the scalar product of rates and generalized momenta, we investigate two basic classes of discrete optimal control processes governed by difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of the optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of the optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is whether and when Hamilton's canonical structures emerge in optimal discrete systems. For constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools for evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion, which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ, two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at discrete canonical equations of Hamilton, maximum principles, and (in the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed, with particular attention paid to models nonlinear in the time interval θ.
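A compact numerical rendering of the Bellman backward recursion that underlies the DP machinery above, for a toy discrete-stage process linear in the interval θ (grid sizes, dynamics, and stage costs are invented for the sketch):

```python
import numpy as np

# Discrete-stage control x_{k+1} = x_k + theta * u_k with stage cost
# theta * (x^2 + u^2), solved by backward dynamic programming on grids.
theta, N = 0.1, 30
xs = np.linspace(-2, 2, 81)                 # state grid
us = np.linspace(-3, 3, 61)                 # control grid

V = np.zeros_like(xs)                       # terminal value function
for k in range(N):
    x_next = xs[:, None] + theta * us[None, :]
    stage = theta * (xs[:, None] ** 2 + us[None, :] ** 2)
    # Interpolate V at successor states (constant beyond the grid edges).
    Vn = np.interp(x_next, xs, V)
    V = (stage + Vn).min(axis=1)            # Bellman backward recursion

print(V[np.searchsorted(xs, 1.0)])          # optimal cost-to-go from x = 1
```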
Adaptive Multi-Agent Systems for Constrained Optimization
NASA Technical Reports Server (NTRS)
Macready, William; Bieniawski, Stefan; Wolpert, David H.
2004-01-01
Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded-rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the multi-agent system (MAS) more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a MAS. We present computer experiments involving both the Queens problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.
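A toy rendition of a PD-style update, under our own simplifying choices (two agents, a random shared cost table, and a single constraint penalized through a Lagrange multiplier): each agent Boltzmann-responds to its expected cost under the other's mixed strategy, the multiplier is raised when the constraint is violated in expectation, and lowering the temperature anneals the product distribution toward a pure strategy.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy team game: two agents, three moves each, shared cost table G, plus a
# Lagrange-penalized constraint "the agents must pick different moves".
G = rng.uniform(0.0, 1.0, (3, 3))
P = [np.full(3, 1 / 3), np.full(3, 1 / 3)]    # product (independent) dists
lam, T = 0.0, 1.0

def boltzmann(E, T):
    w = np.exp(-(E - E.min()) / T)            # shift for numerical safety
    return w / w.sum()

for _ in range(200):
    cost = G + lam * np.eye(3)                # penalize coinciding moves
    P[0] = boltzmann(cost @ P[1], T)          # respond to expected cost...
    P[1] = boltzmann(cost.T @ P[0], T)        # ...under the other's mix
    violation = P[0] @ np.eye(3) @ P[1]       # P(both pick the same move)
    lam += 0.5 * violation                    # Lagrange parameter update
    T *= 0.98                                 # anneal toward a pure strategy

print(np.argmax(P[0]), np.argmax(P[1]), violation)
```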
Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen
2013-01-01
One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance that includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years. This is known as Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance application in WVSNs. How to meet the stringent delay QoS in resource constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers the “system perspective” and “user perspective” is proposed to determine the near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. In this paper, for the first time, the algorithm shows how to meet the delay QoS and at the same time how to achieve higher system throughput in stringently resource constrained WVSNs.
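A minimal sketch of the LR idea on a four-node toy network (the topology, weights, and step size are invented, and the real NODQC metric blends the system and user perspectives, which this sketch ignores): the delay constraint is priced into the link metric, Dijkstra runs on cost + λ·delay, and λ is updated by a subgradient step while the best feasible route found is retained.

```python
import heapq

# Hypothetical toy network: graph[node] = [(next_node, cost, delay), ...].
graph = {0: [(1, 1, 5), (2, 4, 1)], 1: [(3, 1, 5)], 2: [(3, 4, 1)], 3: []}
D_MAX = 4                                    # end-to-end delay budget 0 -> 3

def relaxed_shortest_path(lam):
    """Dijkstra on the Lagrangian edge metric cost + lam * delay."""
    best = {0: (0.0, 0, 0)}                  # node -> (relaxed, cost, delay)
    pq = [(0.0, 0, 0, 0)]
    while pq:
        d, node, c, dl = heapq.heappop(pq)
        if d > best.get(node, (float("inf"),))[0]:
            continue
        for nxt, ec, ed in graph[node]:
            nd = d + ec + lam * ed
            if nd < best.get(nxt, (float("inf"),))[0]:
                best[nxt] = (nd, c + ec, dl + ed)
                heapq.heappush(pq, (nd, nxt, c + ec, dl + ed))
    return best[3]

lam, incumbent = 0.0, None
for _ in range(50):
    _, cost, delay = relaxed_shortest_path(lam)
    if delay <= D_MAX and (incumbent is None or cost < incumbent[0]):
        incumbent = (cost, delay)            # keep best feasible route found
    lam = max(0.0, lam + 0.1 * (delay - D_MAX))  # subgradient step on lam

print("best feasible (cost, delay):", incumbent)
```

The multiplier trades cost against delay exactly as in the abstract's "system versus user" tension: a larger λ steers the search toward low-delay routes at higher transmission cost.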
Collective Violence in a Discontinuous World: Regional Realities and Global Fallacies.
ERIC Educational Resources Information Center
Vayrynen, Raimo
1986-01-01
Notes the conflict between increasing economic and political interdependence and the increasing fragmentation of the international power structure. Explains the regional conditions which constrain the global economic and military policies of the superpowers. (JDH)
NASA Astrophysics Data System (ADS)
Han, Xiaobao; Li, Huacong; Jia, Qiusheng
2017-12-01
For dynamic decoupling of polynomial linear parameter varying (PLPV) systems, a robust dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMI) using the concept of a parameterized Lyapunov function (PLF). To solve the PLMI-constrained optimization problem, the pre-compensator design problem is reduced to a normal convex optimization problem with normal linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduling pre-compensator is achieved, which satisfies both robust performance and decoupling requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation on a turbofan engine PLPV model.
Trade-offs and efficiencies in optimal budget-constrained multispecies corridor networks
Bistra Dilkina; Rachel Houtman; Carla P. Gomes; Claire A. Montgomery; Kevin S. McKelvey; Katherine Kendall; Tabitha A. Graves; Richard Bernstein; Michael K. Schwartz
2016-01-01
Conservation biologists recognize that a system of isolated protected areas will be necessary but insufficient to meet biodiversity objectives. Current approaches to connecting core conservation areas through corridors consider optimal corridor placement based on a single optimization goal: commonly, maximizing the movement for a target species across a...
NASA Technical Reports Server (NTRS)
Downie, John D.
1995-01-01
Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved for updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.
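A bare-bones numeric illustration of the preprocessing argument, with invented scene and noise (this is not the paper's optimal design rule): a logarithmic point transform renders multiplicative, speckle-like noise approximately additive, after which a binary phase-only filter, a typical constrained-modulation filter, localizes the target.

```python
import numpy as np

rng = np.random.default_rng(2)
ref = np.zeros((64, 64))
ref[24:40, 28:36] = 1.0                       # invented reference target
scene = np.roll(ref, (5, -3), axis=(0, 1))    # target shifted by (5, -3)
scene *= 0.5 + rng.exponential(1.0, scene.shape)   # multiplicative,
                                                   # speckle-like noise
# Nonlinear preprocessing: the log point transform makes the multiplicative
# noise approximately additive, so linear filter design rules apply again.
pre = np.log1p(scene)

F = np.fft.fft2(np.log1p(ref))
H = np.sign(np.real(np.conj(F)))              # binary phase-only filter
corr = np.real(np.fft.ifft2(np.fft.fft2(pre) * H))
print(np.unravel_index(np.argmax(corr), corr.shape))   # near the (5, -3)
                                                       # circular shift
```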
NASA Astrophysics Data System (ADS)
Shao, H.; Huang, Y.; Kolditz, O.
2015-12-01
Multiphase flow problems are numerically difficult to solve, as they often involve nonlinear phase-transition phenomena. A conventional technique is to introduce complementarity constraints under which fluid properties such as liquid saturations are confined within a physically reasonable range. Based on such constraints, the mathematical model can be reformulated into a system of nonlinear partial differential equations coupled with variational inequalities, which can then be handled numerically by optimization algorithms. In this work, two different approaches utilizing the complementarity constraints, based on the persistent primary variables formulation [4], are implemented and investigated. The first approach, proposed by Marchand et al. [1], uses "local complementarity constraints", i.e. it couples the constraints with the local constitutive equations. The second approach [2], [3], namely the "global complementarity constraints", applies the constraints globally together with the mass conservation equation. We discuss how these two approaches are applied to solve the non-isothermal compositional multiphase flow problem with phase change phenomena. Several benchmarks are presented to investigate the overall numerical performance of the different approaches, and their advantages and disadvantages are summarized. References: [1] E. Marchand, T. Mueller and P. Knabner. Fully coupled generalized hybrid-mixed finite element approximation of two-phase two-component flow in porous media. Part I: formulation and properties of the mathematical model. Computational Geosciences 17(2): 431-442 (2013). [2] A. Lauser, C. Hager, R. Helmig and B. Wohlmuth. A new approach for phase transitions in miscible multi-phase flow in porous media. Adv. Water Resour. 34 (2011), 957-966. [3] J. Jaffré and A. Sboui. Henry's Law and Gas Phase Disappearance. Transp. Porous Media 82 (2010), 521-526. [4] A. Bourgeat, M. Jurak and F. Smaï. Two-phase partially miscible flow and transport modeling in porous media: application to gas migration in a nuclear waste repository. Comput. Geosciences 13(1) (2009), 29-42.
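The complementarity reformulation can be miniaturized to a scalar sketch with an invented mass-balance closure: the Fischer-Burmeister function encodes s ≥ 0, c ≥ 0, s·c = 0 as a single semismooth equation, so a standard nonlinear solver handles phase appearance and disappearance without branching. This is our illustrative analogue, not either paper's formulation.

```python
import numpy as np
from scipy.optimize import fsolve

# Scalar toy: a gas phase with saturation s >= 0 may exist only at the
# saturation pressure, i.e. c = p_sat - p >= 0 with s * c = 0.
def fischer_burmeister(a, b):
    return a + b - np.sqrt(a * a + b * b)    # zero iff a, b >= 0, a * b = 0

def residual(z, p_sat, feed):
    s, p = z
    mass = s + 0.8 * p - feed                # invented mass-balance closure
    comp = fischer_burmeister(s, p_sat - p)  # phase (dis)appearance switch
    return [mass, comp]

# Rich feed: gas phase present (s > 0, p pinned at p_sat).
print(fsolve(residual, [0.5, 1.5], args=(2.0, 2.0)))
# Lean feed: gas phase disappears (s = 0, p free below p_sat).
print(fsolve(residual, [0.5, 1.5], args=(2.0, 1.5)))
```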
Learning Incoherent Sparse and Low-Rank Patterns from Multiple Tasks
Chen, Jianhui; Liu, Ji; Ye, Jieping
2013-01-01
We consider the problem of learning incoherent sparse and low-rank patterns from multiple tasks. Our approach is based on a linear multi-task learning formulation, in which the sparse and low-rank patterns are induced by a cardinality regularization term and a low-rank constraint, respectively. This formulation is non-convex; we convert it into its convex surrogate, which can be routinely solved via semidefinite programming for small-size problems. We propose to employ the general projected gradient scheme to efficiently solve such a convex surrogate; however, in the optimization formulation, the objective function is non-differentiable and the feasible domain is non-trivial. We present the procedures for computing the projected gradient and ensuring the global convergence of the projected gradient scheme. The computation of projected gradient involves a constrained optimization problem; we show that the optimal solution to such a problem can be obtained via solving an unconstrained optimization subproblem and an Euclidean projection subproblem. We also present two projected gradient algorithms and analyze their rates of convergence in details. In addition, we illustrate the use of the presented projected gradient algorithms for the proposed multi-task learning formulation using the least squares loss. Experimental results on a collection of real-world data sets demonstrate the effectiveness of the proposed multi-task learning formulation and the efficiency of the proposed projected gradient algorithms. PMID:24077658
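For intuition on the projected gradient scheme itself, here is a self-contained sketch on a different but analogous constraint (an l1-ball, whose Euclidean projection has a closed form via sorting) applied to a least squares loss; the paper's actual projection handles the low-rank constraint through the unconstrained and Euclidean-projection subproblems described above. All data below are synthetic.

```python
import numpy as np

def project_l1(v, radius=1.0):
    # Euclidean projection onto the l1-ball via sorting (a standard
    # closed-form routine, standing in for the paper's projection step).
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 20))
x_true = np.zeros(20)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=40)

x = np.zeros(20)
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the least squares loss
for _ in range(500):
    x = project_l1(x - step * A.T @ (A @ x - b), radius=4.5)
print(np.round(x[:5], 2))                   # sparse pattern is recovered
```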
Face verification with balanced thresholds.
Yan, Shuicheng; Xu, Dong; Tang, Xiaoou
2007-01-01
The process of face verification is guided by a pre-learned global threshold, which, however, is often inconsistent with class-specific optimal thresholds. It is, hence, beneficial to pursue a balance of the class-specific thresholds in the model-learning stage. In this paper, we present a new dimensionality reduction algorithm tailored to the verification task that ensures threshold balance. This is achieved by the following aspects. First, feasibility is guaranteed by employing an affine transformation matrix, instead of the conventional projection matrix, for dimensionality reduction, and, hence, we call the proposed algorithm threshold balanced transformation (TBT). Then, the affine transformation matrix, constrained as the product of an orthogonal matrix and a diagonal matrix, is optimized to improve the threshold balance and classification capability in an iterative manner. Unlike most algorithms for face verification which are directly transplanted from face identification literature, TBT is specifically designed for face verification and clarifies the intrinsic distinction between these two tasks. Experiments on three benchmark face databases demonstrate that TBT significantly outperforms the state-of-the-art subspace techniques for face verification.
Optimal and fast E/B separation with a dual messenger field
NASA Astrophysics Data System (ADS)
Kodi Ramanah, Doogesh; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-05-01
We adapt our recently proposed dual messenger algorithm for spin field reconstruction and showcase its efficiency and effectiveness in Wiener filtering polarized cosmic microwave background (CMB) maps. Unlike conventional preconditioned conjugate gradient (PCG) solvers, our preconditioner-free technique can deal with high-resolution joint temperature and polarization maps with inhomogeneous noise distributions and arbitrary mask geometries with relative ease. Various convergence diagnostics illustrate the high quality of the dual messenger reconstruction. In contrast, the PCG implementation fails to converge to a reasonable solution for the specific problem considered. The implementation of the dual messenger method is straightforward and guarantees numerical stability and convergence. We show how the algorithm can be modified to generate fluctuation maps, which, combined with the Wiener filter solution, yield unbiased constrained signal realizations, consistent with observed data. This algorithm presents a pathway to exact global analyses of high-resolution and high-sensitivity CMB data for a statistically optimal separation of E and B modes. It is therefore relevant for current and next-generation CMB experiments, in the quest for the elusive primordial B-mode signal.
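For orientation, the basic single-messenger iteration that the dual messenger method generalizes can be written in a few lines for a 1D toy map, under our own conventions (assumed diagonal signal spectrum S(k) in Fourier space, inhomogeneous pixel noise N, messenger covariance τ = min N); the iteration converges to the Wiener filter solution without any preconditioner. This is a heavily simplified sketch, not the paper's spin-field algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
k = np.fft.fftfreq(n) * n
S = 1.0 / (1.0 + (np.abs(k) / 8.0) ** 2)          # assumed signal spectrum

white = np.fft.fft(rng.normal(size=n))
sig = np.real(np.fft.ifft(np.sqrt(S) * white))    # toy signal (up to norm.)
N = np.where(np.arange(n) < n // 2, 0.1, 1.0)     # inhomogeneous pixel noise
d = sig + rng.normal(size=n) * np.sqrt(N)

tau = N.min()                                     # messenger covariance
Nbar = N - tau
s = np.zeros(n)
for _ in range(200):
    # Pixel space: blend data and current signal through the messenger t.
    with np.errstate(divide="ignore"):
        t = np.where(Nbar > 0,
                     (d / Nbar + s / tau) / (1 / Nbar + 1 / tau), d)
    # Fourier space: Wiener-like shrinkage of the messenger field.
    s = np.real(np.fft.ifft(S / (S + tau) * np.fft.fft(t)))

print("residual power:", ((d - s) ** 2).mean())
```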
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and security inspection. Material decomposition is an important step for a spectral CT system to discriminate materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose a material decomposition method based on optimization to improve the quality of the decomposed coefficient images. On top of the general optimization formulation, total variation minimization is imposed on the coefficient images in the overall objective function, with adjustable weights. We solve this constrained optimization problem within the ADMM framework. Validation is performed on both a numerical dental phantom in simulation and a real pig-leg phantom on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visibly better reconstructions than a direct inverse method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
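A much-reduced 1D sketch of the idea (plain gradient descent on a smoothed TV penalty standing in for the paper's ADMM solver; the two-channel mixing matrix, phantoms, and weights are invented): the TV term suppresses the noise that the direct inverse amplifies through the correlated basis.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
c_true = np.vstack([np.where(np.arange(n) < 100, 1.0, 0.2),     # material 1
                    np.where(np.arange(n) % 80 < 40, 0.3, 0.9)])  # material 2
M = np.array([[1.0, 0.8],          # correlated two-channel spectral mixing
              [0.7, 1.0]])
y = M @ c_true + 0.05 * rng.normal(size=(2, n))

def tv_grad(u, eps=1e-3):          # gradient of sum sqrt(diff^2 + eps)
    d = np.diff(u)
    w = d / np.sqrt(d * d + eps)
    g = np.zeros_like(u)
    g[:-1] -= w
    g[1:] += w
    return g

c_direct = np.linalg.solve(M, y)   # direct inverse amplifies channel noise
c = c_direct.copy()
lam, step = 0.08, 0.02
for _ in range(3000):
    grad = 2 * M.T @ (M @ c - y)   # data-fidelity gradient
    grad += lam * np.vstack([tv_grad(c[0]), tv_grad(c[1])])
    c -= step * grad

print("direct error:", np.abs(c_direct - c_true).mean())
print("TV error:   ", np.abs(c - c_true).mean())
```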
PAPR-Constrained Pareto-Optimal Waveform Design for OFDM-STAP Radar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata
We propose a peak-to-average power ratio (PAPR) constrained Pareto-optimal waveform design approach for an orthogonal frequency division multiplexing (OFDM) radar signal to detect a target using the space-time adaptive processing (STAP) technique. The use of an OFDM signal not only increases the frequency diversity of our system, but also enables us to adaptively design the OFDM coefficients in order to further improve the system performance. First, we develop a parametric OFDM-STAP measurement model by considering the effects of signal-dependent clutter and colored noise. Then, we observe that the resulting STAP performance can be improved by maximizing the output signal-to-interference-plus-noise ratio (SINR) with respect to the signal parameters. However, in practical scenarios, the computation of output SINR depends on the estimated values of the spatial and temporal frequencies and target scattering responses. Therefore, we formulate a PAPR-constrained multi-objective optimization (MOO) problem to design the OFDM spectral parameters by simultaneously optimizing four objective functions: maximizing the output SINR, minimizing two separate Cramer-Rao bounds (CRBs) on the normalized spatial and temporal frequencies, and minimizing the trace of the CRB matrix on the target scattering coefficient estimates. We present several numerical examples to demonstrate the achieved performance improvement due to the adaptive waveform design.
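To make the PAPR constraint concrete, the following Python sketch computes the PAPR of a randomly modulated OFDM waveform and checks it against an illustrative cap; the subcarrier count and threshold are assumptions, and the MOO solver itself is beyond this snippet.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 64                                       # number of OFDM subcarriers (toy choice)
    qpsk = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N) / np.sqrt(2)
    s = np.fft.ifft(qpsk) * np.sqrt(N)           # unit-average-power time-domain waveform

    papr_db = 10 * np.log10(np.max(np.abs(s)**2) / np.mean(np.abs(s)**2))
    feasible = papr_db <= 5.0                    # PAPR cap in dB (illustrative threshold)
    print(f"PAPR = {papr_db:.2f} dB, within cap: {feasible}")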
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
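For a finite state-action MDP (the paper treats the much more general Borel-space case), the linear programming formulation over discounted occupation measures can be sketched in a few lines of Python; all model numbers below are toy assumptions.

    import numpy as np
    from scipy.optimize import linprog

    S, A, gamma = 2, 2, 0.9
    P = np.array([[[0.8, 0.2], [0.2, 0.8]],     # P[s, a, s']
                  [[0.5, 0.5], [0.9, 0.1]]])
    c = np.array([[1.0, 0.0], [2.0, 1.0]])      # cost to minimize
    d = np.array([[0.0, 1.0], [1.0, 0.0]])      # constrained cost with budget kappa
    kappa, mu = 0.3, np.array([0.5, 0.5])       # budget and initial distribution

    cols = [(s, a) for s in range(S) for a in range(A)]
    A_eq = np.zeros((S, S * A))                 # flow balance on occupation measures
    for j, (s, a) in enumerate(cols):
        for sp in range(S):
            A_eq[sp, j] = (s == sp) - gamma * P[s, a, sp]
    res = linprog(c=[c[s, a] for s, a in cols],
                  A_ub=[[d[s, a] for s, a in cols]], b_ub=[kappa],
                  A_eq=A_eq, b_eq=(1 - gamma) * mu, bounds=(0, None))
    rho = res.x.reshape(S, A)
    policy = rho / rho.sum(axis=1, keepdims=True)  # optimal stationary policy

The optimal stationary policy is read off by normalizing the optimal occupation measure row by row, mirroring the existence result the abstract establishes.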
An indirect method for numerical optimization using the Kreisselmeier-Steinhauser function
NASA Technical Reports Server (NTRS)
Wrenn, Gregory A.
1989-01-01
A technique is described for converting a constrained optimization problem into an unconstrained problem. The technique transforms one or more objective functions into reduced objective functions, which are analogous to goal constraints used in the goal programming method. These reduced objective functions are appended to the set of constraints and an envelope of the entire function set is computed using the Kreisselmeier-Steinhauser function. This envelope function is then searched for an unconstrained minimum. The technique may be categorized as a SUMT algorithm. Advantages of this approach are the use of unconstrained optimization methods to find a constrained minimum without the draw-down factor typical of penalty function methods, and that the technique may be started from the feasible or infeasible design space. In multiobjective applications, the approach has the advantage of locating a compromise minimum design without the need to optimize for each individual objective function separately.
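A minimal Python sketch of the approach, assuming a toy objective, one constraint, a goal value of 0 for the reduced objective, and an aggregation parameter rho of our own choosing; logsumexp keeps the envelope numerically stable.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    def ks(values, rho=50.0):
        # Kreisselmeier-Steinhauser envelope: a smooth, conservative max of the set
        return logsumexp(rho * np.asarray(values)) / rho

    goal = 0.0                                   # target objective level (assumption)
    def envelope(x):
        f = (x[0] - 1.0)**2 + (x[1] - 2.0)**2    # reduced objective: f - goal
        g = x[0] + x[1] - 2.0                    # constraint g <= 0 appended to the set
        return ks([f - goal, g])

    res = minimize(envelope, x0=np.zeros(2), method='BFGS')  # unconstrained search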
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.; Nguyen, Duc T.
2008-01-01
A technique for the optimization of stability-constrained geometrically nonlinear shallow trusses with snap-through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap-through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in the literature are presented to verify the method.
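The recursive resizing idea can be sketched with a generic optimality-criteria update; the damping exponent and the uniform-SED target below are illustrative assumptions, and in the paper the strain energy densities come from the nonlinear arc-length solve at the limit load.

    import numpy as np

    def resize_members(areas, sed, damping=0.5):
        # Recursive update: members with above-average nonlinear strain energy
        # density grow, others shrink, driving the truss toward uniform SED.
        return areas * (sed / sed.mean())**damping

    # e.g. SED values returned by an arc-length solve at the limit load (toy numbers)
    areas = np.array([2.0, 2.0, 2.0])
    sed = np.array([0.8, 1.0, 1.2])
    areas = resize_members(areas, sed)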
Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler
This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
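The distribution-robust tightening of a voltage chance constraint can be sketched as follows; the limits, standard deviation, and violation level eps are illustrative, and the Cantelli-type multiplier is one common way to obtain bounds valid for arbitrary forecast-error distributions.

    import numpy as np
    from scipy.stats import norm

    def tightened_voltage_limit(v_max, sigma, eps=0.05, gaussian=False):
        # Enforce P(v > v_max) <= eps by shrinking the deterministic limit.
        if gaussian:
            lam = norm.ppf(1.0 - eps)            # valid if forecast errors are Gaussian
        else:
            lam = np.sqrt((1.0 - eps) / eps)     # distribution-free (Cantelli) bound
        return v_max - lam * sigma

    print(tightened_voltage_limit(1.05, sigma=0.01))                 # ~1.006 p.u.
    print(tightened_voltage_limit(1.05, sigma=0.01, gaussian=True))  # ~1.034 p.u.

Note how much more conservative the distribution-free multiplier is than the Gaussian quantile; this is the price of robustness to arbitrary error distributions.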
Optimal lifting ascent trajectories for the space shuttle
NASA Technical Reports Server (NTRS)
Rau, T. R.; Elliott, J. R.
1972-01-01
The performance gains which are possible through the use of optimal trajectories for a particular space shuttle configuration are discussed. The spacecraft configurations and aerodynamic characteristics are described. Shuttle mission payload capability is examined with respect to the optimal orbit inclination for unconstrained, constrained, and nonlifting conditions. The effects of velocity loss and heating rate on the optimal ascent trajectory are investigated.
Artifact reduction in short-scan CBCT by use of optimization-based reconstruction
Zhang, Zheng; Han, Xiao; Pearson, Erik; Pelizzari, Charles; Sidky, Emil Y; Pan, Xiaochuan
2017-01-01
Increasing interest in optimization-based reconstruction in research on, and applications of, cone-beam computed tomography (CBCT) exists because it has been shown to have the potential to reduce artifacts observed in reconstructions obtained with the Feldkamp–Davis–Kress (FDK) algorithm (or its variants), which is used extensively for image reconstruction in current CBCT applications. In this work, we carried out a study on optimization-based reconstruction for possible reduction of artifacts in FDK reconstruction specifically from short-scan CBCT data. The investigation includes a set of optimization programs such as the image-total-variation (TV)-constrained data-divergence minimization, data-weighting matrices such as the Parker weighting matrix, and objects of practical interest for demonstrating and assessing the degree of artifact reduction. Results of this investigation reveal that appropriately designed optimization-based reconstruction, including the image-TV-constrained reconstruction, can reduce significant artifacts observed in FDK reconstruction in CBCT with a short-scan configuration. PMID:27046218
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Lee, Charles H.
2012-01-01
We developed a framework and the mathematical formulation for optimizing communication networks using mixed integer programming. The design yields a system with a much smaller search space than the earlier approach. Our constrained network optimization takes into account the dynamics of link performance within the network along with mission and operation requirements. A unique penalty function is introduced to transform the mixed integer program into the more manageable problem of searching in a continuous space. We proposed to solve the constrained optimization problem in two stages: first using the heuristic Particle Swarm Optimization algorithm to get a good initial starting point, and then feeding the result into the Sequential Quadratic Programming algorithm to achieve the final optimal schedule. We demonstrate the above planning and scheduling methodology with a scenario of 20 spacecraft and 3 ground stations of a Deep Space Network site. Our approach and framework are simple and flexible, so problems with larger numbers of constraints and larger networks can be easily adapted and solved.
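The two-stage idea, a stochastic global search seeding a gradient-based refinement, can be sketched with SciPy; differential evolution stands in for Particle Swarm Optimization here, the objective and penalty are toy surrogates, and SLSQP plays the role of the SQP stage.

    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def objective(x):
        # surrogate link-performance cost plus a penalty for a scheduling conflict
        cost = np.sum((x - 0.3)**2)
        conflict = max(x[0] + x[1] - 1.0, 0.0)   # x0 + x1 <= 1 stands in for a constraint
        return cost + 100.0 * conflict**2

    bounds = [(0.0, 1.0)] * 4
    stage1 = differential_evolution(objective, bounds, seed=1)   # global: good start point
    stage2 = minimize(objective, stage1.x, method='SLSQP', bounds=bounds)  # local refine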
The scope of the LeChatelier Principle
NASA Astrophysics Data System (ADS)
Lady, George M.; Quirk, James P.
2007-07-01
LeChatelier [Comptes Rendus 99 (1884) 786; Ann. Mines 13 (2) (1888) 157] showed that a physical system's “adjustment” to a disturbance to its equilibrium tended to be smaller as constraints were added to the adjustment process. Samuelson [Foundations of Economic Analysis, Harvard University Press, Cambridge, 1947] applied this result to economics in the context of the comparative statics of the actions of individual agents characterized as the solutions to optimization problems; and later (1960), extended the application of the Principle to a stable, multi-market equilibrium and the case of all commodities gross substitutes [e.g., L. Metzler, Stability of multiple markets: the hicks conditions. Econometrica 13 (1945) 277-292]. Refinements and alternative routes of derivation have appeared in the literature since then, e.g., Silberberg [The LeChatelier Principle as a corollary to a generalized envelope theorem, J. Econ. Theory 3 (1971) 146-155; A revision of comparative statics methodology in economics, or, how to do comparative statics on the back of an envelope, J. Econ. Theory 7 (1974) 159-172], Milgrom and Roberts [The LeChatelier Principle, Am. Econ. Rev. 86 (1996) 173-179], W. Suen, E. Silberberg, P. Tseng [The LeChatelier Principle: the long and the short of it, Econ. Theory 16 (2000) 471-476], and Chavas [A global analysis of constrained behavior: the LeChatelier Principle ‘in the large’, South. Econ. J. 72 (3) (2006) 627-644]. In this paper, we expand the scope of the Principle in various ways keyed to Samuelson's proposed means of testing comparative statics results (optimization, stability, and qualitative analysis). In the optimization framework, we show that the converse LeChatelier Principle also can be found in constrained optimization problems and for not initially “conjugate” sensitivities. We then show how the Principle and its converse can be found through the qualitative analysis of any linear system. In these terms, the Principle and its converse also may be found in the same system at the same time with respect to the imposition of the same constraint. Based upon this, we expand the cases for which the Principle can be found based upon the stability hypothesis.
Program Aids Analysis And Optimization Of Design
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1994-01-01
NETS/PROSSS (NETS Coupled With Programming System for Structural Synthesis) computer program developed to provide system for combining NETS (MSC-21588), neural-network application program, and CONMIN (Constrained Function Minimization, ARC-10836), optimization program. Enables user to reach nearly optimal design. Design then used as starting point in normal optimization process, possibly enabling user to converge to optimal solution in significantly fewer iterations. NETS/PROSSS written in C language and FORTRAN 77.
NASA Astrophysics Data System (ADS)
Stavrakou, T.; Müller, J.-F.; Bauwens, M.; De Smedt, I.; Van Roozendael, M.; De Mazière, M.; Vigouroux, C.; Hendrick, F.; George, M.; Clerbaux, C.; Coheur, P.-F.; Guenther, A.
2015-10-01
The vertical columns of formaldehyde (HCHO) retrieved from two satellite instruments, the Global Ozone Monitoring Instrument-2 (GOME-2) on Metop-A and the Ozone Monitoring Instrument (OMI) on Aura, are used to constrain global emissions of HCHO precursors from open fires, vegetation and human activities in the year 2010. To this end, the emissions are varied and optimized using the adjoint model technique in the IMAGESv2 global CTM (chemical transport model) on a monthly basis and at the model resolution. Given the different local overpass times of GOME-2 (09:30 LT) and OMI (13:30 LT), the simulated diurnal cycle of HCHO columns is investigated and evaluated against ground-based optical measurements at seven sites in Europe, China and Africa. The modeled diurnal cycle exhibits large variability, reflecting competition between photochemistry and emission variations, with noon or early afternoon maxima at remote locations (oceans) and in regions dominated by anthropogenic emissions, late afternoon or evening maxima over fire scenes, and midday minima in isoprene-rich regions. The agreement between simulated and ground-based columns is generally better in summer (with a clear afternoon maximum at mid-latitude sites) than in winter, and the annually averaged ratio of afternoon to morning columns is slightly higher in the model (1.126) than in the ground-based measurements (1.043). The anthropogenic VOC (volatile organic compound) sources are found to be weakly constrained by the inversions on the global scale, mainly owing to their generally minor contribution to the HCHO columns, except over strongly polluted regions, like China. The OMI-based inversion yields total flux estimates over China close to the bottom-up inventory (24.6 vs. 25.5 TgVOC yr-1 in the a priori) with, however, pronounced increases in the northeast of China and reductions in the south. Lower fluxes are estimated based on GOME-2 HCHO columns (20.6 TgVOC yr-1), in particular over the northeast, likely reflecting mismatches between the observed and the modeled diurnal cycle in this region. The resulting biogenic and pyrogenic flux estimates from both optimizations generally show a good degree of consistency. A reduction of the global annual biogenic emissions of isoprene is derived, of 9 and 13 % according to GOME-2 and OMI, respectively, compared to the a priori estimate of 363 Tg in 2010. The reduction is largest (up to 25-40 %) in the Southeastern US, in accordance with earlier studies. The GOME-2 and OMI satellite columns suggest a global pyrogenic flux decrease by 36 and 33 %, respectively, compared to the GFEDv3 (Global Fire Emissions Database) inventory. This decrease is especially pronounced over tropical forests, such as in Amazonia, Thailand and Myanmar, and is supported by comparisons with CO observations from IASI (Infrared Atmospheric Sounding Interferometer). In contrast to these flux reductions, the emissions due to harvest waste burning are strongly enhanced over the northeastern China plain in June (by ca. 70 % in June according to OMI) as well as over Indochina in March. Sensitivity inversions showed robustness of the inferred estimates, which were found to lie within 7 % of the standard inversion results at the global scale.
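A toy variational inversion conveys the mechanics: emission scaling factors are adjusted to minimize a cost balancing departure from the a priori against misfit to observed columns, with the gradient supplied analytically (the role the adjoint model plays in a full CTM). All matrices and numbers below are illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    H = np.array([[0.6, 0.3, 0.1],              # toy Jacobian: column response per source
                  [0.2, 0.5, 0.3]])
    y = np.array([1.2, 0.9])                    # "observed" columns
    xb = np.ones(3)                             # a priori emission scaling factors
    Binv = np.eye(3) / 0.5**2                   # prior-error precision
    Rinv = np.eye(2) / 0.1**2                   # observation-error precision

    def J(x):
        dx, dy = x - xb, H @ x - y
        return 0.5 * dx @ Binv @ dx + 0.5 * dy @ Rinv @ dy

    def gradJ(x):                               # the adjoint supplies this in a real CTM
        return Binv @ (x - xb) + H.T @ Rinv @ (H @ x - y)

    x_opt = minimize(J, xb, jac=gradJ, method='L-BFGS-B').x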
Optimal vibration control of a rotating plate with self-sensing active constrained layer damping
NASA Astrophysics Data System (ADS)
Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng
2012-04-01
This paper proposes a finite element model for an optimally controlled constrained layer damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties, valid in both the time and frequency domains. Constrained layer damping with viscoelastic material can effectively reduce the vibration in rotating structures. However, most existing research models use the complex modulus approach to model viscoelastic material, and an additional iterative procedure, available only in the frequency domain, has to be used to include the material's frequency dependency. It is therefore meaningful to model the viscoelastic damping layer in the rotating part using anelastic displacement fields (ADF) so as to capture the frequency dependency in both the time and frequency domains. Also, unlike previous models, this finite element model treats all three layers as carrying both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator under a linear quadratic regulator (LQR) controller. After being compared with verified data, this newly proposed finite element model is validated and can be used for future research.
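The LQR piece can be sketched for a single vibration mode; the modal stiffness, damping, and weighting matrices below are toy assumptions, whereas in the paper the state-space matrices come from the ADF finite element model.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # single vibration mode of the plate as a 2-state system x = [q, qdot] (toy numbers)
    A = np.array([[0.0, 1.0], [-100.0, -0.2]])
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 1.0])                     # state weighting
    R = np.array([[0.01]])                       # control-effort weighting

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)              # u = -K x minimizes the quadratic cost
    assert np.all(np.linalg.eigvals(A - B @ K).real < 0)  # closed loop is stable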
On the ability of a global atmospheric inversion to constrain variations of CO2 fluxes over Amazonia
NASA Astrophysics Data System (ADS)
Molina, L.; Broquet, G.; Imbach, P.; Chevallier, F.; Poulter, B.; Bonal, D.; Burban, B.; Ramonet, M.; Gatti, L. V.; Wofsy, S. C.; Munger, J. W.; Dlugokencky, E.; Ciais, P.
2015-07-01
The exchanges of carbon, water and energy between the atmosphere and the Amazon basin have global implications for the current and future climate. Here, the global atmospheric inversion system of the Monitoring of Atmospheric Composition and Climate (MACC) service is used to study the seasonal and interannual variations of biogenic CO2 fluxes in Amazonia during the period 2002-2010. The system assimilated surface measurements of atmospheric CO2 mole fractions made at more than 100 sites over the globe into an atmospheric transport model. The present study adds measurements from four surface stations located in tropical South America, a region poorly covered by CO2 observations. The estimates of net ecosystem exchange (NEE) optimized by the inversion are compared to an independent estimate of NEE upscaled from eddy-covariance flux measurements in Amazonia. They are also qualitatively evaluated against reports on the seasonal and interannual variations of the land sink in South America from the scientific literature. We attempt to assess the impact on NEE of the strong droughts in 2005 and 2010 (due to severe and longer-than-usual dry seasons) and the extreme rainfall conditions registered in 2009. The spatial variations of the seasonal and interannual variability of optimized NEE are also investigated. While the inversion supports the assumption of strong spatial heterogeneity of these variations, the results reveal critical limitations of the coarse-resolution transport model, the surface observation network in South America in recent years and the present knowledge of modelling uncertainties in South America that prevent our inversion from capturing the seasonal patterns of fluxes across Amazonia. However, some patterns from the inversion seem consistent with the anomaly of moisture conditions in 2009.
Hacker, David E; Hoinka, Jan; Iqbal, Emil S; Przytycka, Teresa M; Hartman, Matthew C T
2017-03-17
Highly constrained peptides such as the knotted peptide natural products are promising medicinal agents because of their impressive biostability and potent activity. Yet, libraries of highly constrained peptides are challenging to prepare. Here, we present a method which utilizes two robust, orthogonal chemical steps to create highly constrained bicyclic peptide libraries. This technology was optimized to be compatible with in vitro selections by mRNA display. We performed side-by-side monocyclic and bicyclic selections against a model protein (streptavidin). Both selections resulted in peptides with mid-nanomolar affinity, and the bicyclic selection yielded a peptide with remarkable protease resistance.
NASA Astrophysics Data System (ADS)
Lee, Dae Young
The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraints, designers adopt deployable configurations on the spacecraft, which result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, there is a lack of integration of design optimization systems into operational optimization and the maximization of spacecraft utility in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique, called the "Attitude Sphere", is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphic acceleration facilitates the accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the design result shows that photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from Single Discipline Optimization to Multidisciplinary Optimization, which includes the design and also the operation of the EPS, the Attitude Determination and Control System (ADCS), and the communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue, for ADCS operations, controllers based on Model Predictive Control, which are effective for constraint handling, were developed and implemented. All the suggested design and operation methodologies are applied to the mission "CADRE", a space weather mission scheduled for operation in 2016. This application demonstrates the usefulness and capability of the methodology to enhance CADRE's capabilities, and its ability to be applied to a variety of missions.
Using SpF to Achieve Petascale for Legacy Pseudospectral Applications
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Jiang, Weiyuan
2014-01-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF as well as present preliminary performance results provided by the improved scalability.
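An example of the kind of in-processor numerical kernel SpF partitions work into is a pseudospectral derivative on a periodic grid; the following NumPy sketch is generic, not SpF code.

    import numpy as np

    N = 64
    x = 2.0 * np.pi * np.arange(N) / N            # periodic grid on [0, 2*pi)
    u = np.sin(3.0 * x)
    k = np.fft.fftfreq(N, d=1.0 / N)              # integer wavenumbers for this grid
    du = np.fft.ifft(1j * k * np.fft.fft(u)).real # differentiate in spectral space
    assert np.allclose(du, 3.0 * np.cos(3.0 * x)) # spectrally exact for resolved modes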
Mechanisms Controlling Global Mean Sea Surface Temperature Determined From a State Estimate
NASA Astrophysics Data System (ADS)
Ponte, R. M.; Piecuch, C. G.
2018-04-01
Global mean sea surface temperature (T¯) is a variable of primary interest in studies of climate variability and change. The temporal evolution of T¯ can be influenced by surface heat fluxes (F¯) and by diffusion (D¯) and advection (A¯) processes internal to the ocean, but quantifying the contribution of these different factors from data alone is prone to substantial uncertainties. Here we derive a closed T¯ budget for the period 1993-2015 based on a global ocean state estimate, which is an exact solution of a general circulation model constrained to most extant ocean observations through advanced optimization methods. The estimated average temperature of the top (10-m thick) level in the model, taken to represent T¯, shows relatively small variability at most time scales compared to F¯, D¯, or A¯, reflecting the tendency for largely balancing effects from all the latter terms. The seasonal cycle in T¯ is mostly determined by small imbalances between F¯ and D¯, with negligible contributions from A¯. While D¯ seems to simply damp F¯ at the annual period, a different dynamical role for D¯ at semiannual period is suggested by it being larger than F¯. At periods longer than annual, A¯ contributes importantly to T¯ variability, pointing to the direct influence of the variable ocean circulation on T¯ and mean surface climate.
Constraining CO emission estimates using atmospheric observations
NASA Astrophysics Data System (ADS)
Hooghiemstra, P. B.
2012-06-01
We apply a four-dimensional variational (4D-Var) data assimilation system to optimize carbon monoxide (CO) emissions and to reduce the uncertainty of emission estimates from individual sources using the chemistry transport model TM5. In the first study, only a limited set of surface network observations from the National Oceanic and Atmospheric Administration Earth System Research Laboratory (NOAA/ESRL) Global Monitoring Division (GMD) is used to test the 4D-Var system. Uncertainty reductions of up to 60% in yearly emissions are observed over well-constrained regions, and the inferred emissions compare well with recent studies for 2004. However, since the observations only constrain total CO emissions, the 4D-Var system has difficulties separating anthropogenic and biogenic sources in particular. The inferred emissions are validated with NOAA aircraft data over North America, and the agreement is significantly improved from the prior to the posterior simulation. Validation with the Measurements Of Pollution In The Troposphere (MOPITT) instrument shows slightly improved agreement over the well-constrained Northern Hemisphere and in the tropics (except for the African continent). However, the model simulation with posterior emissions underestimates MOPITT CO total columns in the remote Southern Hemisphere (SH) by about 10%. This is caused by a reduction in SH CO sources mainly due to surface stations at high southern latitudes. In the second study, we compare two global inversions to estimate carbon monoxide (CO) emissions for 2004. Either surface flask observations from NOAA or CO total columns from the MOPITT instrument are assimilated in a 4D-Var framework. In the Southern Hemisphere (SH) three important findings are reported. First, due to their different vertical sensitivity, the stations-only inversion increases SH biomass burning emissions by 108 Tg CO/yr more than the MOPITT-only inversion. Conversely, the MOPITT-only inversion results in SH natural emissions (mainly CO from oxidation of NMVOCs) that are 185 Tg CO/yr higher compared to the stations-only inversion. Second, MOPITT-only derived biomass burning emissions are reduced with respect to the prior, which is in contrast to previous (inverse) modeling studies. Finally, MOPITT-derived total emissions are significantly higher for South America and Africa compared to the stations-only inversion. This is likely due to a positive bias in the MOPITT V4 product. This bias is also apparent from validation with surface stations and ground-truth FTIR columns. In the final study we present the first inverse modeling study to estimate CO emissions constrained by both surface (NOAA) and satellite (MOPITT) observations using a bias correction scheme. This approach leads to the identification of a positive bias of at most 5 ppb in MOPITT column-averaged CO mixing ratios in the remote Southern Hemisphere (SH). The 4D-Var system is used to estimate CO emissions over South America for the period 2006-2010 and to analyze the interannual variability (IAV) of these emissions. We infer robust, high spatial resolution CO emission estimates that show slightly smaller IAV due to fires compared to the Global Fire Emissions Database (GFED3) prior emissions. Moreover, CO emissions probably associated with pre-harvest burning of sugar cane plantations are underestimated in current inventories by 50-100%.
Optimal Tikhonov Regularization in Finite-Frequency Tomography
NASA Astrophysics Data System (ADS)
Fang, Y.; Yao, Z.; Zhou, Y.
2017-12-01
The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory, which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed, and regularizations such as damping and smoothing are often applied to analyze the trade-off between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD), which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional trade-off analysis using surface wave dispersion measurements from global as well as regional studies.
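With the SVD in hand, both the Tikhonov solution and the complete model resolution matrix are cheap to form for any candidate regularization parameter, which is what makes a risk-minimization search over lambda practical. The sketch below uses a toy sensitivity matrix and an assumed lambda.

    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.normal(size=(40, 20))                # toy sensitivity matrix
    m_true = np.zeros(20); m_true[5:9] = 1.0
    d = G @ m_true + 0.05 * rng.normal(size=40)

    U, s, Vt = np.linalg.svd(G, full_matrices=False)

    def tikhonov(lam):
        f = s / (s**2 + lam**2)                  # filtered inverse singular values
        m = Vt.T @ (f * (U.T @ d))               # regularized model estimate
        R = Vt.T @ ((s * f)[:, None] * Vt)       # complete model resolution matrix
        return m, R

    m, R = tikhonov(lam=1.0)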
Constraining the braneworld with gravitational wave observations.
McWilliams, Sean T
2010-04-09
Some braneworld models may have observable consequences that, if detected, would validate a requisite element of string theory. In the infinite Randall-Sundrum model (RS2), the AdS radius of curvature, l, of the extra dimension supports a single bound state of the massless graviton on the brane, thereby reproducing Newtonian gravity in the weak-field limit. However, using the AdS/CFT correspondence, it has been suggested that one possible consequence of RS2 is an enormous increase in Hawking radiation emitted by black holes. We utilize this possibility to derive two novel methods for constraining l via gravitational wave measurements. We show that the EMRI event rate detected by LISA can constrain l at the approximately 1 μm level for optimal cases, while the observation of a single galactic black hole binary with LISA results in an optimal constraint of l ≤ 5 μm.
A hierarchical transition state search algorithm
NASA Astrophysics Data System (ADS)
del Campo, Jorge M.; Köster, Andreas M.
2008-07-01
A hierarchical transition state search algorithm is developed and its implementation in the density functional theory program deMon2k is described. This search algorithm combines the double-ended saddle interpolation method with local uphill trust region optimization. A new formalism for the incorporation of the distance constraint in the saddle interpolation method is derived. The similarities between the constrained optimizations in the local trust region method and the saddle interpolation are highlighted. The saddle interpolation and local uphill trust region optimizations are validated on a test set of 28 representative reactions. The hierarchical transition state search algorithm is applied to an intramolecular Diels-Alder reaction with several internal rotors, which makes automatic transition state search rather challenging. The obtained reaction mechanism is discussed in the context of the experimentally observed product distribution.
A Climate Data Record (CDR) for the global terrestrial water budget: 1984-2010
NASA Astrophysics Data System (ADS)
Zhang, Yu; Pan, Ming; Sheffield, Justin; Siemann, Amanda L.; Fisher, Colby K.; Liang, Miaoling; Beck, Hylke E.; Wanders, Niko; MacCracken, Rosalyn F.; Houser, Paul R.; Zhou, Tian; Lettenmaier, Dennis P.; Pinker, Rachel T.; Bytheway, Janice; Kummerow, Christian D.; Wood, Eric F.
2018-01-01
Closing the terrestrial water budget is necessary to provide consistent estimates of budget components for understanding water resources and changes over time. Given the lack of in situ observations of budget components at anything but local scale, merging information from multiple data sources (e.g., in situ observation, satellite remote sensing, land surface model, and reanalysis) through data assimilation techniques that optimize the estimation of fluxes is a promising approach. Conditioned on the current limited data availability, a systematic method is developed to optimally combine multiple available data sources for precipitation (P), evapotranspiration (ET), runoff (R), and the total water storage change (TWSC) at 0.5° spatial resolution globally and to obtain water budget closure (i.e., to enforce P - ET - R - TWSC = 0) through a constrained Kalman filter (CKF) data assimilation technique under the assumption that the deviation from the ensemble mean of all data sources for the same budget variable is used as a proxy of the uncertainty in individual water budget variables. The resulting long-term (1984-2010), monthly 0.5° resolution global terrestrial water cycle Climate Data Record (CDR) data set is developed under the auspices of the National Aeronautics and Space Administration (NASA) Earth System Data Records (ESDRs) program. This data set serves to bridge the gap between sparsely gauged regions and the regions with sufficient in situ observations in investigating the temporal and spatial variability in the terrestrial hydrology at multiple scales. The CDR created in this study is validated against in situ measurements like river discharge from the Global Runoff Data Centre (GRDC) and the United States Geological Survey (USGS), and ET from FLUXNET. The data set is shown to be reliable and can serve the scientific community in understanding historical climate variability in water cycle fluxes and stores, benchmarking the current climate, and validating models.
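The closure step can be sketched as an equality-constrained Kalman-type update: the budget residual is removed from the merged estimates in proportion to their error variances. The numbers below are illustrative assumptions.

    import numpy as np

    x = np.array([80.0, 45.0, 25.0, 5.0])        # merged P, ET, R, TWSC (toy, mm/month)
    cov = np.diag([9.0, 16.0, 4.0, 4.0])         # error variances from ensemble spread
    C = np.array([[1.0, -1.0, -1.0, -1.0]])      # closure: P - ET - R - TWSC = 0

    K = cov @ C.T @ np.linalg.inv(C @ cov @ C.T) # Kalman-type gain for the constraint
    x_closed = x - (K @ (C @ x)).ravel()         # residual split in proportion to variance
    assert abs((C @ x_closed).item()) < 1e-9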
Fast alternating projection methods for constrained tomographic reconstruction
Liu, Li; Han, Yongxin
2017-01-01
The alternating projection algorithms are easy to implement and effective for large-scale complex optimization problems, such as constrained reconstruction in X-ray computed tomography (CT). A typical method is to use projection onto convex sets (POCS) for data fidelity and nonnegativity constraints combined with total variation (TV) minimization (so-called TV-POCS) for sparse-view CT reconstruction. However, this type of method relies on empirically selected parameters for satisfactory reconstruction and is generally slow and lacks convergence analysis. In this work, we use a convex feasibility set approach to address the problems associated with TV-POCS and propose a framework using full sequential alternating projections, or POCS (FS-POCS), to find the solution in the intersection of convex constraints of bounded TV function, bounded data fidelity error and nonnegativity. The rationale behind FS-POCS is that the mathematically optimal solution of the constrained objective function may not be the physically optimal solution. The breakdown of constrained reconstruction into an intersection of several feasible sets can lead to faster convergence and better quantification of reconstruction parameters in a physically meaningful way, rather than empirically by trial and error. In addition, for large-scale optimization problems, first-order methods are usually used. Not only is the condition for convergence of gradient-based methods derived, but a primal-dual hybrid gradient (PDHG) method is also used for fast convergence of the bounded-TV projection. The newly proposed FS-POCS is evaluated and compared with TV-POCS and another convex feasibility projection method (CPTV) using both digital phantom and pseudo-real CT data to show its superior performance in reconstruction speed, image quality and quantification. PMID:28253298
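A minimal POCS sketch with two of the simpler convex sets (data hyperplanes and nonnegativity); projection onto the bounded-TV set used by FS-POCS requires an inner solver such as PDHG and is omitted here.

    import numpy as np

    def pocs(A, b, iters=50):
        # alternate projections: each data hyperplane a_i.x = b_i, then x >= 0
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            for i in range(A.shape[0]):
                ai = A[i]
                x += (b[i] - ai @ x) / (ai @ ai) * ai   # project onto hyperplane i
            x = np.maximum(x, 0.0)                      # project onto the nonnegative set
        return x

    # toy usage with consistent, noise-free data
    rng = np.random.default_rng(0)
    A = rng.random((30, 10)); x_true = rng.random(10)
    x = pocs(A, A @ x_true)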
nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties in the CTEQ framework
Kovarik, K.; Kusina, A.; Jezo, T.; ...
2016-04-28
We present the new nCTEQ15 set of nuclear parton distribution functions with uncertainties. This fit extends the CTEQ proton PDFs to include the nuclear dependence using data on nuclei all the way up to 208Pb. The uncertainties are determined using the Hessian method with an optimal rescaling of the eigenvectors to accurately represent the uncertainties for the chosen tolerance criteria. In addition to the Deep Inelastic Scattering (DIS) and Drell-Yan (DY) processes, we also include inclusive pion production data from RHIC to help constrain the nuclear gluon PDF. Here, we investigate the correlation of the data sets with specific nPDF flavor components, and assess the impact of individual experiments. We also provide comparisons of the nCTEQ15 set with recent fits from other groups.
Pareto joint inversion of 2D magnetotelluric and gravity data
NASA Astrophysics Data System (ADS)
Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek
2015-04-01
In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were used to set constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description based on a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on modified Particle Swarm Optimization (PSO). This algorithm was adapted to handle two or more target functions at once. An additional algorithm was used to eliminate unrealistic solution proposals. Because PSO is a method of stochastic global optimization, it requires many proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To speed up this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. The proposed solution of the joint inversion problem has several advantages. First of all, the Pareto scheme eliminates cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, a whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where the interesting density distributions are relatively shallow and resistivity changes are related to deeper parts. Such conditions are well suited for joint inversion of MT and gravity data. In the next stage of development, further code optimization and extensive tests on real data will be carried out. The presented work was supported by the Polish National Centre for Research and Development under contract number POIG.01.04.00-12-279/13.
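The core of composing a Pareto front from evaluated proposals is a nondominance filter, sketched below in Python for two misfit objectives; the sample values are illustrative.

    import numpy as np

    def pareto_front(F):
        # keep proposals not dominated by any other (all objectives minimized)
        F = np.asarray(F)
        keep = []
        for i, fi in enumerate(F):
            dominated = any(np.all(fj <= fi) and np.any(fj < fi)
                            for j, fj in enumerate(F) if j != i)
            if not dominated:
                keep.append(i)
        return keep

    # e.g. (MT misfit, gravity misfit) for five model proposals
    print(pareto_front([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [2.5, 2.5], [1.0, 3.5]]))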
Cihan, Abdullah; Birkholzer, Jens; Bianchi, Marco
2014-12-31
Large-scale pressure increases resulting from carbon dioxide (CO2) injection in the subsurface can potentially impact caprock integrity, induce reactivation of critically stressed faults, and drive CO2 or brine through conductive features into shallow groundwater. Pressure management involving the extraction of native fluids from storage formations can be used to minimize pressure increases while maximizing CO2 storage. However, brine extraction requires pumping, transportation, possibly treatment, and disposal of substantial volumes of extracted brackish or saline water, all of which can be technically challenging and expensive. This paper describes a constrained differential evolution (CDE) algorithm for optimal well placement and injection/extraction control with the goal of minimizing brine extraction while achieving predefined pressure constraints. The CDE methodology was tested on a simple optimization problem whose solution can be partially obtained with a gradient-based optimization methodology. The CDE successfully estimated the true global optimum for both the extraction well location and the extraction rate needed for the test problem. A more complex example application of the developed strategy is also presented for a hypothetical CO2 storage scenario in a heterogeneous reservoir with a critically stressed fault near an injection zone. Through the CDE optimization algorithm coupled to a numerical vertically-averaged reservoir model, we successfully estimated optimal rates and locations for CO2 injection and brine extraction wells while simultaneously satisfying multiple pressure buildup constraints to avoid fault activation and caprock fracturing. The study shows that the CDE methodology is a very promising tool for other optimization problems related to GCS as well, such as reducing the 'Area of Review', designing monitoring, reducing the risk of leakage, and increasing storage capacity and trapping.
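Constrained differential evolution variants commonly replace the plain greedy selection with a feasibility rule; a Deb-style version is sketched below as one plausible reading (the paper's exact constraint handling may differ), with v denoting the summed pressure-constraint violation of a candidate.

    def survives(f_trial, v_trial, f_target, v_target):
        # Deb-style feasibility rule for the DE selection step: a feasible candidate
        # beats an infeasible one; two feasibles compare objectives; two infeasibles
        # compare total constraint violation.
        if v_trial == 0.0 and v_target == 0.0:
            return f_trial <= f_target
        if (v_trial == 0.0) != (v_target == 0.0):
            return v_trial == 0.0
        return v_trial <= v_target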
Akwabi-Ameyaw, Adwoa; Caravella, Justin A; Chen, Lihong; Creech, Katrina L; Deaton, David N; Madauss, Kevin P; Marr, Harry B; Miller, Aaron B; Navas, Frank; Parks, Derek J; Spearing, Paul K; Todd, Dan; Williams, Shawn P; Wisely, G Bruce
2011-10-15
To further explore the optimum placement of the acid moiety in conformationally constrained analogs of GW 4064 1a, a series of stilbene replacements were prepared. The benzothiophene 1f and the indole 1g display the optimal orientation of the carboxylate for enhanced FXR agonist potency. Copyright © 2011 Elsevier Ltd. All rights reserved.
On optimal strategies in event-constrained differential games
NASA Technical Reports Server (NTRS)
Heymann, M.; Rajan, N.; Ardema, M.
1985-01-01
Combat games are formulated as zero-sum differential games with unilateral event constraints. An interior penalty function approach is employed to approximate optimal strategies for the players. The method is very attractive computationally and possesses suitable approximation and convergence properties.
NASA Astrophysics Data System (ADS)
Gaddy, Melissa R.; Yıldız, Sercan; Unkelbach, Jan; Papp, Dávid
2018-01-01
Spatiotemporal fractionation schemes, that is, treatments delivering different dose distributions in different fractions, can potentially lower treatment side effects without compromising tumor control. This can be achieved by hypofractionating parts of the tumor while delivering approximately uniformly fractionated doses to the surrounding tissue. Plan optimization for such treatments is based on biologically effective dose (BED); however, this leads to computationally challenging nonconvex optimization problems. Optimization methods that are in current use yield only locally optimal solutions, and it has hitherto been unclear whether these plans are close to the global optimum. We present an optimization framework to compute rigorous bounds on the maximum achievable normal tissue BED reduction for spatiotemporal plans. The approach is demonstrated on liver tumors, where the primary goal is to reduce mean liver BED without compromising any other treatment objective. The BED-based treatment plan optimization problems are formulated as quadratically constrained quadratic programming (QCQP) problems. First, a conventional, uniformly fractionated reference plan is computed using convex optimization. Then, a second, nonconvex, QCQP model is solved to local optimality to compute a spatiotemporally fractionated plan that minimizes mean liver BED, subject to the constraints that the plan is no worse than the reference plan with respect to all other planning goals. Finally, we derive a convex relaxation of the second model in the form of a semidefinite programming problem, which provides a rigorous lower bound on the lowest achievable mean liver BED. The method is presented on five cases with distinct geometries. The computed spatiotemporal plans achieve 12-35% mean liver BED reduction over the optimal uniformly fractionated plans. This reduction corresponds to 79-97% of the gap between the mean liver BED of the uniform reference plans and our lower bounds on the lowest achievable mean liver BED. The results indicate that spatiotemporal treatments can achieve substantial reductions in normal tissue dose and BED, and that local optimization techniques provide high-quality plans that are close to realizing the maximum potential normal tissue dose reduction.
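The reason BED-based planning leads to QCQPs is visible from the standard linear-quadratic BED model (our notation, not the paper's): for voxel i receiving dose d_{i,t} in fraction t,

    \mathrm{BED}_i = \sum_{t=1}^{n} d_{i,t} + \frac{1}{\alpha/\beta} \sum_{t=1}^{n} d_{i,t}^{2}, \qquad d_{i,t} = \sum_{j} D_{ij}\, w_{j,t},

where w_{j,t} are the nonnegative beamlet weights of fraction t and D is the dose-influence matrix. Since each d_{i,t} is linear in w, every BED term is quadratic in the optimization variables, so mean-BED objectives and BED constraints yield quadratically constrained quadratic programs.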
NASA Astrophysics Data System (ADS)
He, W.; Ju, W.; Chen, H.; Peters, W.; van der Velde, I.; Baker, I. T.; Andrews, A. E.; Zhang, Y.; Launois, T.; Campbell, J. E.; Suntharalingam, P.; Montzka, S. A.
2016-12-01
Carbonyl sulfide (OCS) is a promising novel atmospheric tracer for studying carbon cycle processes. OCS shares a similar uptake pathway with CO2 during photosynthesis but is not released through a respiration-like process, and thus can be used to partition Gross Primary Production (GPP) from Net Ecosystem-atmosphere CO2 Exchange (NEE). This study uses joint atmospheric observations of OCS and CO2 to constrain GPP and ecosystem respiration (Re). Flask data from tower and aircraft sites over North America are collected. We employ our recently developed CarbonTracker (CT)-Lagrange carbon assimilation system, which is based on the CT framework and the Weather Research and Forecasting - Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) model, and the Simple Biosphere model with simulated OCS (SiB3-OCS), which provides prior GPP, Re and plant OCS uptake fluxes. Plant OCS fluxes derived from both a process-based model and a GPP-scaled model are tested in our inversion. To investigate the ability of OCS to constrain GPP and to understand the uncertainty propagated from OCS modeling errors to the constrained fluxes in a dual-tracer system including OCS and CO2, two inversion schemes are implemented and compared: (1) a two-step scheme, which first optimizes GPP using OCS observations, and then simultaneously optimizes GPP and Re using CO2 observations with the OCS-constrained GPP from the first step as the prior; (2) a joint scheme, which simultaneously optimizes GPP and Re using OCS and CO2 observations. We evaluate the results using GPP estimated from space-borne solar-induced fluorescence observations and a data-driven GPP upscaled from FLUXNET data with a statistical model (Jung et al., 2011). Preliminary results for the year 2010 show that the joint inversion makes simulated mole fractions more consistent with observations for both OCS and CO2. However, the uncertainty of the OCS simulation is larger than that of CO2. The two-step and joint schemes perform similarly in improving the consistency with observations for OCS, indicating that OCS could provide an independent constraint in the joint inversion. Optimization yields lower total GPP and Re but higher NEE when tested with prior CO2 fluxes from two biosphere models. This study gives an in-depth insight into the role of joint atmospheric OCS and CO2 observations in constraining CO2 fluxes.
Solving constrained inverse problems for waveform tomography with Salvus
NASA Astrophysics Data System (ADS)
Boehm, C.; Afanasiev, M.; van Driel, M.; Krischer, L.; May, D.; Rietmann, M.; Fichtner, A.
2016-12-01
Finding a good balance between flexibility and performance is often difficult within domain-specific software projects. To achieve this balance, we introduce Salvus: an open-source high-order finite element package built upon PETSc and Eigen that focuses on large-scale full-waveform modeling and inversion. One of the key features of Salvus is its modular design, based on C++ mixins, that separates the physical equations from the numerical discretization and the mathematical optimization. In this presentation we focus on solving inverse problems with Salvus and discuss (i) dealing with inexact derivatives resulting, e.g., from lossy wavefield compression, (ii) imposing additional constraints on the model parameters, e.g., from effective medium theory, and (iii) integration with a workflow management tool. We present a feasible-point trust-region method for PDE-constrained inverse problems that can handle inexactly computed derivatives. The level of accuracy in the approximate derivatives is controlled by localized error estimates to ensure global convergence of the method. Additional constraints on the model parameters are typically cheap to compute without the need for further simulations. Hence, including them in the trust-region subproblem introduces only a small computational overhead but ensures feasibility of the model in every iteration. We show examples with homogenization constraints derived from effective medium theory (i.e., all fine-scale updates must upscale to a physically meaningful long-wavelength model). Salvus has a built-in workflow management framework to automate the inversion, with interfaces to user-defined misfit functionals and data structures. This significantly reduces the amount of manual user interaction and enhances reproducibility, which we demonstrate for several applications from the laboratory to the global scale.
Reddington, C. L.; Carslaw, K. S.; Stier, P.; ...
2017-09-01
The largest uncertainty in the historical radiative forcing of climate is caused by changes in aerosol particles due to anthropogenic activity. Sophisticated aerosol microphysics processes have been included in many climate models in an effort to reduce the uncertainty. However, the models are very challenging to evaluate and constrain because they require extensive in situ measurements of the particle size distribution, number concentration, and chemical composition that are not available from global satellite observations. The Global Aerosol Synthesis and Science Project (GASSP) aims to improve the robustness of global aerosol models by combining new methodologies for quantifying model uncertainty, an extensive global dataset of aerosol in situ microphysical and chemical measurements, and new ways to assess the uncertainty associated with comparing sparse point measurements with low-resolution models. GASSP has assembled over 45,000 hours of measurements from ships and aircraft as well as data from over 350 ground stations. The measurements have been harmonized into a standardized format that is easily used by modelers and nonspecialist users. Available measurements are extensive, but they are biased to polluted regions of the Northern Hemisphere, leaving large pristine regions and many continental areas poorly sampled. The aerosol radiative forcing uncertainty can be reduced using a rigorous model–data synthesis approach. Nevertheless, our research highlights significant remaining challenges because of the difficulty of constraining many interwoven model uncertainties simultaneously. Although the physical realism of global aerosol models still needs to be improved, the uncertainty in aerosol radiative forcing will be reduced most effectively by systematically and rigorously constraining the models using extensive syntheses of measurements.
ERIC Educational Resources Information Center
Porto, Melina
2016-01-01
This article describes a cooperative writing response initiative designed to develop writing skills in foreign/second-language contexts (hereafter L2). The strategy originated from my desire to cater for my learners' need to become better writers in English within a constrained educational environment in Argentina. In this article I describe this…
NASA Astrophysics Data System (ADS)
Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.
2016-09-01
PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and extensive aeroelastic simulations are then used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. Traditional tuning approaches do not normally consider the properties of the different open-loop and closed-loop transfer functions of the system. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. A constrained optimization setup is then suggested to automatically tune the wind turbine controller subject to robustness constraints. The properties of the system, such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed from a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2 and then reduced by model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
Selenium Characterization In The Global Rice Supply Chain
For up to 1 billion people worldwide, insufficient dietary intake of selenium (Se) is a serious health constraint. Cereals are the dominant Se source for those on low protein diets, as typified by the global malnourished population. With crop Se content constrained largely by u...
Environmentalism, Globalization and National Economies, 1980-2000
ERIC Educational Resources Information Center
Schofer, Evan; Granados, Francisco J.
2006-01-01
It is commonly assumed that environmentalism harms national economies because environmental regulations constrain economic activity and create incentives for firms to move production and investment to other countries. We point out that global environmentalism involves large-scale institutional changes that: (1) encourage new kinds of economic…
Large historical growth in global terrestrial gross primary production
Campbell, J. E.; Berry, J. A.; Seibt, U.; ...
2017-04-05
Growth in terrestrial gross primary production (GPP) may provide a negative feedback for climate change. It remains uncertain, however, to what extent biogeochemical processes can suppress global GPP growth. In consequence, model estimates of terrestrial carbon storage and carbon cycle-climate feedbacks remain poorly constrained. Here we present a global, measurement-based estimate of GPP growth during the twentieth century based on long-term atmospheric carbonyl sulphide (COS) records derived from ice core, firn, and ambient air samples. We interpret these records using a model that simulates changes in COS concentration due to changes in its sources and sinks, including a large sink that is related to GPP. We find that the COS record is most consistent with climate-carbon cycle model simulations that assume large GPP growth during the twentieth century (31% ± 5%; mean ± 95% confidence interval). Finally, while this COS analysis does not directly constrain estimates of future GPP growth, it provides a global-scale benchmark for historical carbon cycle simulations.
Robust Constrained Blackbox Optimization with Surrogates
2015-05-21
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of these uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer hierarchy, where each layer targets one source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights at different hierarchy levels and assessing the relative importance of models at each level. To account for uncertainty, we employ chance-constrained (CC) programming for stochastic remediation design. Chance-constrained programming has traditionally been used to account for parameter uncertainty; recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance-constrained programming along with HBMA can therefore provide a rigorous tool for groundwater remediation design under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer, where a scavenger well approach was developed to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation, and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated the prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to significant error in evaluating prediction variances, for two reasons. First, under the single best model, variances that stem from uncertainty in the model structure are ignored. Second, a best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desired reliability. However, under the single best model, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance-constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. Using models at different levels of the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate changed. We thus concluded that the optimal pumping rate was sensitive to the prediction variance; the prediction variance in turn changed with the extraction rate. A very high extraction rate drives the prediction variances of chloride concentration at the production wells toward zero regardless of which HBMA models are used.
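The variance argument, that the single best model ignores between-model spread, follows from the standard BMA decomposition: total predictive variance equals the weighted within-model variance plus the between-model variance of the means. A minimal numeric sketch, with made-up weights and moments rather than values from the study:

```python
import numpy as np

# BMA variance decomposition at one hierarchy level: total predictive
# variance = weighted within-model variance + between-model variance of
# the means. Weights and moments are made-up stand-ins, not study values.
w = np.array([0.5, 0.3, 0.2])           # posterior model weights
mean = np.array([1.2, 0.9, 1.6])        # per-model predicted concentration
var = np.array([0.10, 0.20, 0.15])      # per-model prediction variances

bma_mean = w @ mean
total_var = w @ var + w @ (mean - bma_mean) ** 2
print(bma_mean, total_var)   # the single best model would report only 0.10
```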
Optimal Control of Evolution Mixed Variational Inclusions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alduncin, Gonzalo, E-mail: alduncin@geofisica.unam.mx
2013-12-15
Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms for the mixed optimality conditions are further presented. Applications to nonlinear diffusion-constrained problems, as well as quasistatic elastoviscoplastic bilateral contact problems, exemplify the theory.
Characterizing biospheric carbon balance using CO2 observations from the OCO-2 satellite
NASA Astrophysics Data System (ADS)
Miller, Scot M.; Michalak, Anna M.; Yadav, Vineet; Tadić, Jovan M.
2018-05-01
NASA's Orbiting Carbon Observatory 2 (OCO-2) satellite launched in the summer of 2014. Its observations could allow scientists to constrain CO2 fluxes across regions or continents that were previously difficult to monitor. This study explores an initial step toward that goal; we evaluate the extent to which current OCO-2 observations can detect patterns in biospheric CO2 fluxes and constrain monthly CO2 budgets. Our goal is to guide top-down, inverse modeling studies and identify areas for future improvement. We find that uncertainties and biases in the individual OCO-2 observations are comparable to the atmospheric signal from biospheric fluxes, particularly during Northern Hemisphere winter when biospheric fluxes are small. A series of top-down experiments indicates how these errors affect our ability to constrain monthly biospheric CO2 budgets. We are able to constrain budgets for between two and four global regions using OCO-2 observations, depending on the month, and we can constrain CO2 budgets at the regional level (i.e., smaller than seven global biomes) in only a handful of cases (16% of all regions and months). The potential of the OCO-2 observations, however, is greater than these results might imply. A set of synthetic data experiments suggests that retrieval errors have a salient effect. Advances in retrieval algorithms and, to a lesser extent, atmospheric transport modeling will improve the results. In the interim, top-down studies that use current satellite observations are best equipped to constrain the biospheric carbon balance across only continental or hemispheric regions.
PopED lite: An optimal design software for preclinical pharmacokinetic and pharmacodynamic studies.
Aoki, Yasunori; Sundqvist, Monika; Hooker, Andrew C; Gennemark, Peter
2016-04-01
Optimal experimental design approaches are seldom used in preclinical drug discovery. The objective of this work is to develop an optimal design software tool specifically designed for preclinical applications in order to increase the efficiency of in vivo drug discovery studies. Several realistic experimental design case studies were collected and many preclinical experimental teams were consulted to determine the design goals of the software tool. The tool obtains an optimized experimental design by solving a constrained optimization problem, where each experimental design is evaluated using some function of the Fisher information matrix. The software was implemented in C++ using the Qt framework to assure responsive user-software interaction through a rich graphical user interface while achieving the desired computational speed. In addition, a discrete global optimization algorithm was developed and implemented. The software design goals were simplicity, speed, and intuition. Based on these design goals, we have developed the publicly available software PopED lite (http://www.bluetree.me/PopED_lite). Optimization computation was, on average over 14 test problems, 30 times faster in PopED lite than in an existing optimal design software tool. PopED lite is now used in real drug discovery projects, and a few of these case studies are presented in this paper. PopED lite is designed to be simple, fast, and intuitive: simple, to give many users access to basic optimal design calculations; fast, to fit a short design-execution cycle and allow interactive experimental design (test one design, discuss the proposed design, test another design, etc.); and intuitive, so that the input to and output from the software tool can easily be understood by users without knowledge of optimal design theory. In this way, PopED lite is highly useful in practice and complements existing tools.
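The core evaluation step described here, scoring a candidate design by a function of the Fisher information matrix, reduces in the one-parameter case to summing squared model sensitivities. A minimal sketch under an assumed mono-exponential model (not PopED lite's actual model library or API):

```python
import numpy as np

# One-parameter Fisher-information sketch of design scoring: for the
# assumed model y(t) = exp(-k t) + noise, the information about k from
# sampling times t is the summed squared sensitivity.
def fim(times, k=0.2, sigma=0.1):
    sens = -times * np.exp(-k * times)     # dy/dk at each sampling time
    return float(sens @ sens) / sigma**2   # scalar FIM for one parameter

design_a = np.array([0.5, 1.0, 2.0])        # early sampling
design_b = np.array([4.0, 5.0, 6.0])        # sampling near t = 1/k
print(fim(design_a), fim(design_b))         # prefer the larger-FIM design
```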
Number-unconstrained quantum sensing
NASA Astrophysics Data System (ADS)
Mitchell, Morgan W.
2017-12-01
Quantum sensing is commonly described as a constrained optimization problem: maximize the information gained about an unknown quantity using a limited number of particles. Important sensors including gravitational wave interferometers and some atomic sensors do not appear to fit this description, because there is no external constraint on particle number. Here, we develop the theory of particle-number-unconstrained quantum sensing, and describe how optimal particle numbers emerge from the competition of particle-environment and particle-particle interactions. We apply the theory to optical probing of an atomic medium modeled as a resonant, saturable absorber, and observe the emergence of well-defined finite optima without external constraints. The results contradict some expectations from number-constrained quantum sensing and show that probing with squeezed beams can give a large sensitivity advantage over classical strategies when each is optimized for particle number.
Colbert, Alison M; Goshin, Lorie S; Durand, Vanessa; Zoucha, Rick; Sekula, L Kathleen
2016-12-01
Health priorities of women after incarceration remain poorly understood, constraining development of interventions targeted at their health during that time. We explored the experience of health and health care after incarceration in a focused ethnography of 28 women who had been released from prison or jail within the past year and were living in community corrections facilities. The women's outlook on health was rooted in a newfound core optimism, but this was constrained by their pressing health-related issues; stress and uncertainty; and the pressures of the criminal justice system. These external forces threatened to cause collapse of women's core optimism. Findings support interventions that capitalize on women's optimism and address barriers specific to criminal justice contexts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
An, Y; Liang, J; Liu, W
2015-06-15
Purpose: We propose to apply a probabilistic framework, namely chance-constrained optimization, to intensity-modulated proton therapy (IMPT) planning subject to range and patient setup uncertainties. The purpose is to hedge against the influence of uncertainties and improve the robustness of treatment plans. Methods: IMPT plans were generated for a typical prostate patient. Nine dose distributions were computed: the nominal one and one each for ±5 mm setup uncertainties along the three cardinal axes and for ±3.5% range uncertainty. These nine dose distributions were supplied to the solver CPLEX as chance constraints to explicitly control plan robustness under these representative uncertainty scenarios with a certain probability, determined by the tolerance level. We make the chance-constrained model tractable by converting it to a mixed-integer optimization problem. The quality of plans derived from this method is evaluated using dose-volume histogram (DVH) indices such as tumor dose homogeneity (D5% - D95%) and coverage (D95%) and normal tissue sparing such as V70 of the rectum and V65 and V40 of the bladder. We also compare the results from this novel method with the conventional PTV-based method to further demonstrate its effectiveness. Results: Our model can yield clinically acceptable plans within 50 seconds. The chance-constrained optimization produces IMPT plans with comparable target coverage, better target dose homogeneity, and better normal tissue sparing compared to the PTV-based optimization [D95% CTV: 67.9 vs 68.7 (Gy), D5% - D95% CTV: 11.9 vs 18 (Gy), V70 rectum: 0.0% vs 0.33%, V65 bladder: 2.17% vs 9.33%, V40 bladder: 8.83% vs 21.83%]. It also makes the plan more robust [width of the DVH band at D50%: 2.0 vs 10.0 (Gy)]. The tolerance level may be varied to control the trade-off between plan robustness and quality. Conclusion: Chance-constrained optimization generates superior IMPT plans compared to PTV-based optimization, with explicit control of plan robustness. NIH/NCI K25CA168984, Eagles Cancer Research Career Development, The Lawrence W. and Marilyn W. Matteson Fund for Cancer Research, Mayo ASU Seed Grant, and The Kemper Marley Foundation.
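The scenario-based reformulation can be illustrated with a one-variable toy: each uncertainty scenario contributes a constraint, and the tolerance level fixes how many scenarios the chance constraint may leave unsatisfied. The sketch below brute-forces the binary choice that a mixed-integer solver would make; all numbers are illustrative, not from the IMPT plans:

```python
import numpy as np

# Toy scenario form of a chance constraint: maximize beamlet weight x
# subject to a_i * x <= 70 holding in at least 8 of the 9 scenarios,
# i.e. the tolerance level lets one scenario be "excused". A MIP solver
# would pick the excused scenario via a binary variable; here we simply
# enumerate. All numbers are illustrative.
rng = np.random.default_rng(0)
a = rng.uniform(0.8, 1.2, 9)               # per-scenario dose response

def best_x(excused):
    active = np.delete(a, excused)         # scenarios that must hold
    return (70.0 / active).min()           # largest feasible x for them

x_star = max(best_x(i) for i in range(9))  # excusing the worst scenario wins
print(x_star, 70.0 / a.max())              # vs. the fully robust solution
```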
Maximum entropy production: Can it be used to constrain conceptual hydrological models?
M.C. Westhoff; E. Zehe
2013-01-01
In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...
NASA Astrophysics Data System (ADS)
Malanotte-Rizzoli, Paola; Young, Roberta E.
1995-12-01
The primary objective of this paper is to assess the relative effectiveness of data sets with different space coverage and time resolution when they are assimilated into an ocean circulation model. We focus on obtaining realistic numerical simulations of the Gulf Stream system, typically of the order of 3 months in duration, by constructing a "synthetic" ocean simultaneously consistent with the model dynamics and the observations. The model used is the Semispectral Primitive Equation Model. The data sets are the "global" Optimal Thermal Interpolation Scheme (OTIS) 3 of the Fleet Numerical Oceanography Center, providing temperature and salinity fields with global coverage and bi-weekly frequency, and the localized measurements, mostly of current velocities, from the central and eastern array moorings of the Synoptic Ocean Prediction (SYNOP) program, with daily frequency but very small spatial coverage. We use a suboptimal assimilation technique ("nudging"). Even though this technique has already been used in idealized data assimilation studies, to our knowledge this is the first study in which the effectiveness of nudging is tested by assimilating real observations of the interior temperature and salinity fields. This is also the first work in which a systematic assimilation is carried out of the localized, high-quality SYNOP data sets in numerical experiments longer than 1-2 weeks, that is, not aimed at forecasting. We assimilate (1) the global OTIS 3 alone, (2) the local SYNOP observations alone, and (3) both OTIS 3 and SYNOP observations. We assess the success of the assimilations with quantitative measures of performance, on both the global and local scales. The results can be summarized as follows. The intermittent assimilation of the global OTIS 3 is necessary to keep the model "on track" over 3-month simulations on the global scale. As OTIS 3 is assimilated at every model grid point, a "gentle" weight must be prescribed to it so as not to overconstrain the model. However, in these assimilations the predicted velocity fields over the SYNOP arrays are greatly in error. The continuous assimilation of the localized SYNOP data sets with a strong weight is necessary to obtain locally realistic evolutions. The assimilation of velocity measurements alone then recovers the density structure over the array area. However, the spatial coverage of the SYNOP measurements is too small to constrain the model on the global scale. Thus the blending of both types of data sets is necessary in the assimilation, as they constrain different time and space scales. Our choice of a "gentle" nudging weight for the global OTIS 3 and a "strong" weight for the local SYNOP data provides realistic simulations of the Gulf Stream system, both globally and locally, on the 3- to 4-month timescale, the one governed by the Gulf Stream jet internal dynamics.
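The nudging technique amounts to adding a relaxation term g(u_obs - u) to the model tendency, with a "gentle" or "strong" weight g. A toy scalar sketch of the idea (the dynamics and values are invented stand-ins, not the Semispectral Primitive Equation Model):

```python
import numpy as np

# Toy scalar "nudging": add a relaxation term g * (u_obs - u) to the
# model tendency. A gentle weight only steers the run; a strong weight
# pins it to the data.
def tendency(u):
    return -0.5 * u + np.sin(u)            # stand-in model dynamics

def step(u, u_obs, g, dt=0.01):
    # du/dt = F(u) + g * (u_obs - u); g = 0 recovers the free model run
    return u + dt * (tendency(u) + g * (u_obs - u))

u_free = u_nudged = 2.0
u_obs = 0.3
for _ in range(2000):
    u_free = step(u_free, u_obs, g=0.0)
    u_nudged = step(u_nudged, u_obs, g=1.0)
print(u_free, u_nudged)                    # the nudged run ends near u_obs
```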
Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.
Heydari, Ali; Balakrishnan, Sivasubramanya N
2013-01-01
To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs are provided for 1) the reinforcement learning-based training method to the optimal solution, 2) the training error, and 3) the network weights. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples, including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with a single set of weights and provides comprehensive feedback solutions online, though it is trained offline.
NASA Astrophysics Data System (ADS)
Khan, M. M. A.; Romoli, L.; Fiaschi, M.; Dini, G.; Sarri, F.
2011-02-01
This paper presents an experimental design approach to process parameter optimization for the laser welding of martensitic AISI 416 and AISI 440FSe stainless steels in a constrained overlap configuration in which the outer shell was 0.55 mm thick. To determine the optimal laser-welding parameters, a set of mathematical models was developed relating the welding parameters to each of the weld characteristics. These were validated both statistically and experimentally. The quality criteria set for the weld to determine the optimal parameters were the minimization of weld width and the maximization of weld penetration depth, resistance length, and shearing force. Laser power and welding speed in the ranges 855-930 W and 4.50-4.65 m/min, respectively, with a fiber diameter of 300 μm, were identified as the optimal set of process parameters. However, the laser power can be reduced to 800-840 W and the welding speed increased to 4.75-5.37 m/min to obtain stronger and better welds.
Constraints on global oceanic emissions of N2O from observations and models
NASA Astrophysics Data System (ADS)
Buitenhuis, Erik T.; Suntharalingam, Parvadha; Le Quéré, Corinne
2018-04-01
We estimate the global ocean N2O flux to the atmosphere and its confidence interval using a statistical method based on model perturbation simulations and their fit to a database of ΔpN2O (n = 6136). We evaluate two submodels of N2O production. The first submodel splits N2O production into oxic and hypoxic pathways following previous publications. The second submodel explicitly represents the redox transformations of N that lead to N2O production (nitrification and hypoxic denitrification) and N2O consumption (suboxic denitrification), and is presented here for the first time. We perturb both submodels by modifying the key parameters of the N2O cycling pathways (nitrification rates; NH4+ uptake; N2O yields under oxic, hypoxic and suboxic conditions) and determine a set of optimal model parameters by minimisation of a cost function against four databases of N cycle observations. Our estimate of the global oceanic N2O flux resulting from this cost function minimisation derived from observed and model ΔpN2O concentrations is 2.4 ± 0.8 and 2.5 ± 0.8 Tg N yr-1 for the two N2O submodels. These estimates suggest that the currently available observational data of surface ΔpN2O constrain the global N2O flux to a narrower range relative to the large range of results presented in the latest IPCC report.
Travel time tomography with local image regularization by sparsity constrained dictionary learning
NASA Astrophysics Data System (ADS)
Bianco, M.; Gerstoft, P.
2017-12-01
We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or 'global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches, and a dictionary corresponds to a collection of functions or 'atoms' describing the slowness in each patch. These functions could, for example, be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high-resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch-level solutions that fit the global estimate as sparse linear combinations of dictionary atoms. 3) Update the reference as the weighted average of the patch-level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics, where dictionaries of wavelet functions regularized inversion. We further exploit the redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely but irregularly sampled synthetic seismic images.
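Step 2 of the loop, coding each patch as a sparse combination of learned atoms, can be sketched with off-the-shelf dictionary learning, here via scikit-learn on a random stand-in image rather than an actual travel-time inversion:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

# Sketch of step 2: code 8x8 patches of a (random stand-in) slowness
# image as sparse combinations of learned dictionary atoms via OMP.
rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))
patches = extract_patches_2d(image, (8, 8)).reshape(-1, 64)

dico = MiniBatchDictionaryLearning(n_components=32, random_state=0,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=3)
codes = dico.fit(patches).transform(patches)       # sparse patch codes
patch_fits = (codes @ dico.components_).reshape(-1, 8, 8)

# Step 3 analogue: average overlapping patch-level fits into a reference.
reference = reconstruct_from_patches_2d(patch_fits, (32, 32))
```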
The global economic long-term potential of modern biomass in a climate-constrained world
NASA Astrophysics Data System (ADS)
Klein, David; Humpenöder, Florian; Bauer, Nico; Dietrich, Jan Philipp; Popp, Alexander; Bodirsky, Benjamin Leon; Bonsch, Markus; Lotze-Campen, Hermann
2014-07-01
Low-stabilization scenarios consistent with the 2 °C target project large-scale deployment of purpose-grown lignocellulosic biomass. In case a GHG price regime integrates emissions from energy conversion and from land-use/land-use change, the strong demand for bioenergy and the pricing of terrestrial emissions are likely to coincide. We explore the global potential of purpose-grown lignocellulosic biomass and ask how the supply prices of biomass depend on prices for greenhouse gas (GHG) emissions from the land-use sector. Using the spatially explicit global land-use optimization model MAgPIE, we construct bioenergy supply curves for ten world regions and a global aggregate in two scenarios, with and without a GHG tax. We find that the implementation of GHG taxes is crucial for the slope of the supply function and the GHG emissions from the land-use sector. Global supply prices start at $5 GJ-1 and increase almost linearly, doubling at 150 EJ (in 2055 and 2095). The GHG tax increases bioenergy prices by $5 GJ-1 in 2055 and by $10 GJ-1 in 2095, since it effectively stops deforestation and thus excludes large amounts of high-productivity land. Prices additionally increase due to costs for N2O emissions from fertilizer use. The GHG tax decreases global land-use change emissions by one-third. However, the carbon emissions due to bioenergy production increase by more than 50% from conversion of land that is not under emission control. Average yields required to produce 240 EJ in 2095 are roughly 600 GJ ha-1 yr-1 with and without the tax.
Stochastic Averaging for Constrained Optimization With Application to Online Resource Allocation
NASA Astrophysics Data System (ADS)
Chen, Tianyi; Mokhtari, Aryan; Wang, Xin; Ribeiro, Alejandro; Giannakis, Georgios B.
2017-06-01
Existing approaches to resource allocation for today's stochastic networks are challenged to meet fast convergence and tolerable delay requirements. The present paper leverages advances in online learning to facilitate stochastic resource allocation tasks. By recognizing the central role of Lagrange multipliers, the underlying constrained optimization problem is formulated as a machine learning task involving both training and operational modes, with the goal of learning the sought multipliers in a fast and efficient manner. To this end, an order-optimal offline learning approach is developed first for batch training, and it is then generalized to the online setting with a procedure termed learn-and-adapt. The novel resource allocation protocol combines the benefits of stochastic approximation and statistical learning to obtain low-complexity online updates with learning errors close to the statistical accuracy limits, while still preserving adaptation performance, which in the stochastic network optimization context guarantees queue stability. Analysis and simulated tests demonstrate that the proposed data-driven approach improves the delay and convergence performance of existing resource allocation schemes.
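The multiplier-learning idea can be miniaturized to a single resource: stochastic projected-subgradient ascent on the Lagrange multiplier, whose running value plays the role of a queue length. Everything below (utility, constraint, step size) is an illustrative assumption, not the paper's protocol:

```python
import numpy as np

# Stylized dual update for a single resource: maximize E[log(1 + x)]
# subject to E[x - demand] <= 0. The multiplier lam is learned by
# stochastic projected-subgradient ascent and acts like a queue length.
rng = np.random.default_rng(1)
lam, mu = 1.0, 0.05                     # initial multiplier and step size

def primal(lam):
    # argmax_x log(1 + x) - lam * x  (the demand term is constant in x)
    return max(1.0 / max(lam, 1e-6) - 1.0, 0.0)

for t in range(20000):
    demand = rng.uniform(0.5, 1.5)      # random network state
    x = primal(lam)
    lam = max(lam + mu * (x - demand), 0.0)   # multiplier tracks violation

print("learned multiplier:", lam)   # settles near 0.5, where x = E[demand]
```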
An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
Optimization of constrained density functional theory
NASA Astrophysics Data System (ADS)
O'Regan, David D.; Teobaldi, Gilberto
2016-07-01
Constrained density functional theory (cDFT) is a versatile electronic structure method that enables ground-state calculations to be performed subject to physical constraints. It thereby broadens their applicability and utility. Automated Lagrange multiplier optimization is necessary for multiple constraints to be applied efficiently in cDFT, for it to be used in tandem with geometry optimization, or with molecular dynamics. In order to facilitate this, we comprehensively develop the connection between cDFT energy derivatives and response functions, providing a rigorous assessment of the uniqueness and character of cDFT stationary points while accounting for electronic interactions and screening. In particular, we provide a nonperturbative proof that stable stationary points of linear density constraints occur only at energy maxima with respect to their Lagrange multipliers. We show that multiple solutions, hysteresis, and energy discontinuities may occur in cDFT. Expressions are derived, in terms of convenient by-products of cDFT optimization, for quantities such as the dielectric function and a condition number quantifying ill definition in multiple constraint cDFT.
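The result that linear density constraints are satisfied only at energy maxima with respect to their Lagrange multipliers suggests the usual cDFT double loop: an inner ground-state minimization at fixed multiplier V, and an outer maximization over V. A toy scalar sketch, with a quadratic stand-in for the true energy functional:

```python
from scipy.optimize import minimize_scalar

# Toy cDFT double loop: inner minimization of E[n] + V * (n - N_target)
# at fixed multiplier V, outer *maximization* of the resulting W(V),
# per the stationary-point result above. E is a quadratic stand-in.
E = lambda n: (n - 1.0) ** 2
N_target = 1.4

def W(V):
    inner = minimize_scalar(lambda n: E(n) + V * (n - N_target))
    return inner.fun                       # constrained energy at this V

outer = minimize_scalar(lambda V: -W(V), bounds=(-5.0, 5.0),
                        method="bounded")  # maximize W over V
print("V* =", outer.x)                     # analytic answer: V* = -0.8
```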
Climate change in fish: effects of respiratory constraints on optimal life history and behaviour.
Holt, Rebecca E; Jørgensen, Christian
2015-02-01
The difference between maximum metabolic rate and standard metabolic rate is referred to as aerobic scope, and because it constrains performance, it has been suggested to constitute a key limiting process prescribing how fish may cope with or adapt to climate warming. We use an evolutionary bioenergetics model for Atlantic cod (Gadus morhua) to predict optimal life histories and behaviours at different temperatures. The model assumes common trade-offs and predicts that optimal temperatures for growth and fitness lie below that for aerobic scope; aerobic scope is thus a poor predictor of fitness at high temperatures. Initially, warming expands aerobic scope, allowing for faster growth and increased reproduction. Beyond the optimal temperature for fitness, increased metabolic requirements intensify foraging and reduce survival; oxygen budgeting conflicts thus constrain successful completion of the life cycle. The model illustrates how physiological adaptations are part of a suite of traits that have coevolved.
A formulation and analysis of combat games
NASA Technical Reports Server (NTRS)
Heymann, M.; Ardema, M. D.; Rajan, N.
1984-01-01
Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. A target set is associated with each opponent in the event space, in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously, or in neither, a joint capture or a draw, respectively, occurs. Resolution of the encounter is formulated as a combat game: a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from a resulting constrained zero-sum differential game. Otherwise the optimal strategies are computed from a resulting nonzero-sum game. Since optimal combat strategies may frequently not exist, approximate or delta combat games are also formulated, leading to approximate or delta-optimal strategies. The turret game is used to illustrate combat games. This game is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.
Inverse-model estimates of the ocean's coupled phosphorus, silicon, and iron cycles
NASA Astrophysics Data System (ADS)
Pasquier, Benoît; Holzer, Mark
2017-09-01
The ocean's nutrient cycles are important for the carbon balance of the climate system and for shaping the ocean's distribution of dissolved elements. Dissolved iron (dFe) is a key limiting micronutrient, but iron scavenging is observationally poorly constrained, leading to large uncertainties in the external sources of iron and hence in the state of the marine iron cycle. Here we build a steady-state model of the ocean's coupled phosphorus, silicon, and iron cycles embedded in a data-assimilated steady-state global ocean circulation. The model includes the redissolution of scavenged iron, parameterization of subgrid topography, and small, large, and diatom phytoplankton functional classes. Phytoplankton concentrations are implicitly represented in the parameterization of biological nutrient utilization through an equilibrium logistic model. Our formulation thus has only three coupled nutrient tracers, the three-dimensional distributions of which are found using a Newton solver. The very efficient numerics allow us to use the model in inverse mode to objectively constrain many biogeochemical parameters by minimizing the mismatch between modeled and observed nutrient and phytoplankton concentrations. Iron source and sink parameters cannot jointly be optimized because of local compensation between regeneration, recycling, and scavenging. We therefore consider a family of possible state estimates corresponding to a wide range of external iron source strengths. All state estimates have a similar mismatch with the observed nutrient concentrations and very similar large-scale dFe distributions. However, the relative contributions of aeolian, sedimentary, and hydrothermal iron to the total dFe concentration differ widely depending on the sources. Both the magnitude and pattern of the phosphorus and opal exports are well constrained, with global values of 8.1 ± 0.3 Tmol P yr-1 (or, in carbon units, 10.3 ± 0.4 Pg C yr-1) and 171 ± 3 Tmol Si yr-1. We diagnose the phosphorus and opal exports supported by aeolian, sedimentary, and hydrothermal iron. The geographic patterns of the export supported by each iron type are well constrained across the family of state estimates. Sedimentary-iron-supported export is important in shelf and large-scale upwelling regions, while hydrothermal iron contributes to export mostly in the Southern Ocean. The fraction of the global export supported by a given iron type varies systematically with its fractional contribution to the total iron source. Aeolian iron is most efficient in supporting export in the sense that its fractional contribution to export exceeds its fractional contribution to the total source. Per source-injected molecule, aeolian iron supports 3.1 ± 0.8 times more phosphorus export and 2.0 ± 0.5 times more opal export than the other iron types. Conversely, per injected molecule, sedimentary and hydrothermal iron support 2.3 ± 0.6 and 4 ± 2 times less phosphorus export, and 1.9 ± 0.5 and 2 ± 1 times less opal export than the other iron types.
NASA Astrophysics Data System (ADS)
Peltier, W. R.; Argus, D.; Drummond, R.; Moore, A. W.
2012-12-01
We compare, on a global basis, estimates of site velocity against predictions of the newly constructed postglacial rebound model ICE-6G (VM5a). This model is fit to observations of North American postglacial rebound, thereby demonstrating that the ice sheet at last glacial maximum must have been, relative to ICE-5G, thinner in southern Manitoba, thinner near Yellowknife (Northwest Territories), thicker in eastern and southern Quebec, and thicker along the British Columbia-Alberta border. The GPS-based estimates of site velocity that we employ are more accurate than were previously available because they are based on GPS estimates of position as a function of time determined by incorporating satellite phase center variations [Desai et al. 2011]. These GPS estimates are constraining postglacial rebound in North America and Europe more tightly than ever before. In particular, given the high density of GPS sites in North America, and the fact that the velocity of the mass center (CM) of Earth is also more tightly constrained, the new model much more strongly constrains both the lateral extent of the proglacial forebulge and the rate at which this peripheral bulge (emplaced peripheral to the late Pleistocene Laurentia ice sheet) is presently collapsing. This fact proves to be important to the more accurate inference of the current rate of ice loss from both Greenland and Alaska based upon the time-dependent gravity observations provided by the GRACE satellite system. In West Antarctica we have also been able to significantly revise the previously prevalent ICE-5G deglaciation history so as to make its predictions optimally consistent with GPS site velocities determined by connecting campaign WAGN measurements to those provided by observations from the permanent ANET sites. Ellsworth Land (south of the Antarctic Peninsula) is observed to be rising at 6 ± 3 mm/yr according to our latest analyses; the Ellsworth Mountains themselves are observed to be rising at 5 ± 4 mm/yr; Palmer Land is observed to be rising at 3 ± 3 mm/yr. The ICE-5G (VM2) model and the postglacial rebound component of the model of Simons, Ivins, and James [2010] had predicted uplift to be significantly faster than observed in this region, as previously documented in Argus et al. [2011]. From a global perspective the new ICE-6G (VM5a) model is also a significant improvement on the previous ICE-5G (VM2) model in that the degree-two and order-one components of its predicted time dependence of geoid height are tightly constrained by the recent inferences of Roy and Peltier [2011] of the post-GRACE-launch values of the speed and direction of true polar wander and the non-tidal acceleration of the length of day (lod).
NASA Technical Reports Server (NTRS)
Giesy, D. P.
1978-01-01
A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage to both limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
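A minimal sketch of such a threshold scheme for two objectives, tightening an acceptability bound on f2 while minimizing f1 so that each solve returns one Pareto point (the objectives and thresholds are invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Threshold-of-acceptability sweep: minimize f1 while requiring f2 <= t
# for a sequence of tightening thresholds t; each constrained solve
# returns one Pareto-optimal trade-off point.
f1 = lambda x: (x[0] - 1.0) ** 2 + x[1] ** 2
f2 = lambda x: x[0] ** 2 + (x[1] - 1.0) ** 2

pareto, x0 = [], np.zeros(2)
for t in np.linspace(2.0, 0.2, 10):
    cons = ({"type": "ineq", "fun": lambda x, t=t: t - f2(x)},)
    res = minimize(f1, x0, constraints=cons)   # SLSQP is used by default
    pareto.append((f1(res.x), f2(res.x)))
    x0 = res.x                                 # warm-start the next stage
print(pareto)
```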
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models; it is an essential step in simulation analysis and stochastic optimization. We adopt a constrained maximum...
NASA Astrophysics Data System (ADS)
Bruynooghe, Michel M.
1998-04-01
In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three-stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible-neighborhoods clustering algorithm, which has a very interesting theoretical maximal complexity. Candidate objects are then extracted and initially delineated by an optimized region-merging algorithm based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of the previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Nonlinear constrained optimization has been used to maximize the external energy. This avoids the difficult and non-reproducible choice of regularization parameters required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moiré image analysis, and to the analysis of microrugosities of thin metallic films. A later implementation of the proposed method on a digital signal processor associated with a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and industrial computer vision.
Distributed Constrained Optimization with Semicoordinate Transformations
NASA Technical Reports Server (NTRS)
Macready, William; Wolpert, David
2006-01-01
Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded-rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
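A miniature version of the annealing dynamics described above: two agents each maintain an independent distribution over binary moves and repeatedly take Boltzmann responses to the expected objective at a decreasing temperature, concentrating the product distribution on the optimizing joint move. The objective and schedule below are invented for illustration, not the paper's experiments:

```python
import numpy as np

# Two agents, each holding an independent distribution over binary
# moves, take Boltzmann responses to the expected objective under the
# other agent's mixture while the temperature anneals down. The shared
# objective G is minimized at the joint move (1, 1).
G = lambda a, b: (a - b) ** 2 + (1 - a)
p = [np.ones(2) / 2, np.ones(2) / 2]          # agents' move distributions

for T in np.geomspace(1.0, 0.01, 60):         # annealing schedule
    for i in range(2):
        other = p[1 - i]
        # Expected cost of each of agent i's moves under the other agent
        cost = np.array([sum(other[m] * (G(x, m) if i == 0 else G(m, x))
                             for m in (0, 1)) for x in (0, 1)])
        boltz = np.exp(-cost / T)
        p[i] = boltz / boltz.sum()            # Boltzmann response update

print([q.round(3) for q in p])                # both concentrate on move 1
```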
Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki
2013-01-01
A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision-making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach is called risk allocation, which decomposes a joint chance constraint into a set of individual chance constraints and distributes risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
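The risk-allocation idea, conservatively splitting a joint chance constraint across stages via Boole's inequality, can be shown in a few lines; the per-stage weighting rule here is an illustrative assumption, not the paper's allocation policy:

```python
import numpy as np

# Risk allocation via Boole's inequality: a joint chance constraint
# P(failure at any stage) <= Delta is conservatively decomposed into
# per-stage bounds delta_i with sum(delta_i) <= Delta.
Delta = 0.05
stage_weight = np.array([1.0, 2.0, 4.0])     # assumed stage riskiness

delta = Delta * stage_weight / stage_weight.sum()
assert delta.sum() <= Delta + 1e-12          # joint bound is respected
print("per-stage risk bounds:", delta)
```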
SU-E-I-23: A General KV Constrained Optimization of CNR for CT Abdominal Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weir, V; Zhang, J
Purpose: While tube current modulation has been well accepted for CT dose reduction, kV adjustment in clinical settings is still at an early stage, mainly due to the limited kV options of most current CT scanners. kV adjustment can potentially reduce radiation dose and optimize image quality. This study aims to optimize CT abdomen imaging acquisition under the assumption of a continuously adjustable kV, with the goal of providing the best contrast-to-noise ratio (CNR). Methods: For a given dose (CTDIvol) level, the CNRs at different kV values and pitches were measured with an ACR GAMMEX phantom. The phantom was scanned in a Siemens Sensation 64 scanner and a GE VCT 64 scanner. A constrained mathematical optimization was used to find the kV which led to the highest CNR for the anatomy and pitch setting. Parametric equations were obtained from polynomial fitting of plots of kV vs CNR. A suitable constraint region for the optimization was chosen. Subsequent optimization yielded a peak CNR at a particular kV for each collimation and pitch setting. Results: The constrained mathematical optimization approach yields optimal kV values of 114.83 and 113.46, with CNRs of 1.27 and 1.11 at pitches of 1.2 and 1.4, respectively, for the Siemens Sensation 64 scanner with a collimation of 32 x 0.625 mm. An optimized kV of 134.25 and a CNR of 1.51 are obtained for a GE VCT 64-slice scanner with a collimation of 32 x 0.625 mm and a pitch of 0.969. At a pitch of 0.516 and a collimation of 32 x 0.625 mm, an optimized kV of 133.75 and a CNR of 1.14 were found for the GE VCT 64-slice scanner. Conclusion: CNR in CT image acquisition can be further optimized with a continuous kV option instead of the current discrete or fixed kV settings. A continuous kV option is key for individualized CT protocols.
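The described procedure, fitting CNR as a polynomial in kV and then maximizing it over a constrained interval, is easy to sketch with numpy; the measured points and the interval below are invented, not the reported phantom data:

```python
import numpy as np

# Fit CNR(kV) with a low-order polynomial and maximize it over a
# constrained kV interval (the continuous-kV assumption).
kv = np.array([80.0, 100.0, 120.0, 140.0])
cnr = np.array([0.82, 1.05, 1.24, 1.18])

coeffs = np.polyfit(kv, cnr, deg=2)          # parametric CNR-vs-kV curve
grid = np.linspace(90.0, 140.0, 1001)        # constraint region
cnr_fit = np.polyval(coeffs, grid)
best = grid[np.argmax(cnr_fit)]
print(f"optimal kV ~ {best:.1f}, predicted CNR ~ {cnr_fit.max():.2f}")
```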
Distributed Channel Allocation and Time Slot Optimization for Green Internet of Things.
Ding, Kaiqi; Zhao, Haitao; Hu, Xiping; Wei, Jibo
2017-10-28
In sustainable smart cities, power saving is a severe challenge in the energy-constrained Internet of Things (IoT). Efficient utilization of a limited number of non-overlapping channels and of time resources is a promising way to reduce network interference and save energy. In this paper, we propose a joint channel allocation and time slot optimization solution for IoT. First, we propose a channel ranking algorithm which enables each node to rank its available channels based on the channel properties. Then, we propose a distributed channel allocation algorithm so that each node can choose a proper channel based on the channel ranking and its own residual energy. Finally, the sleeping duration and spectrum sensing duration are jointly optimized to maximize the normalized throughput while satisfying energy consumption constraints. Unlike previous approaches, our solution requires no central coordination or global information: each node operates on its own local information in a fully distributed manner. Theoretical analysis and extensive simulations validate that, when applying our solution in IoT networks: (i) each node is allocated a proper channel based on its residual energy, balancing network lifetime; (ii) the network rapidly converges to collision-free transmission through each node's learning ability during distributed channel allocation; and (iii) the network throughput is further improved via dynamic time slot optimization.
Crown, William; Buyukkaramikli, Nasuh; Thokala, Praveen; Morton, Alec; Sir, Mustafa Y; Marshall, Deborah A; Tosh, Jon; Padula, William V; Ijzerman, Maarten J; Wong, Peter K; Pasupathy, Kalyan S
2017-03-01
Providing health services with the greatest possible value to patients and society given the constraints imposed by patient characteristics, health care system characteristics, budgets, and so forth relies heavily on the design of structures and processes. Such problems are complex and require a rigorous and systematic approach to identify the best solution. Constrained optimization is a set of methods designed to identify efficiently and systematically the best solution (the optimal solution) to a problem characterized by a number of potential solutions in the presence of identified constraints. This report identifies 1) key concepts and the main steps in building an optimization model; 2) the types of problems for which optimal solutions can be determined in real-world health applications; and 3) the appropriate optimization methods for these problems. We first present a simple graphical model based on the treatment of "regular" and "severe" patients, which maximizes the overall health benefit subject to time and budget constraints. We then relate it back to how optimization is relevant in health services research for addressing present day challenges. We also explain how these mathematical optimization methods relate to simulation methods, to standard health economic analysis techniques, and to the emergent fields of analytics and machine learning. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
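As a minimal sketch of the kind of graphical example the report describes, a two-variable linear program maximizing health benefit under time and budget constraints can be written as follows; all coefficients are invented for illustration:

    from scipy.optimize import linprog

    # x = [regular patients treated, severe patients treated]
    benefit = [1.0, 3.0]            # health benefit per patient (invented)
    hours = [1.0, 4.0]              # clinician time per patient
    cost = [100.0, 500.0]           # cost per patient

    # linprog minimizes, so negate the benefit to maximize it.
    res = linprog(c=[-b for b in benefit],
                  A_ub=[hours, cost],
                  b_ub=[40.0, 10000.0],   # available hours and budget
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)          # optimal patient mix and total benefit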
Optimality conditions for the numerical solution of optimization problems with PDE constraints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro; Ridzal, Denis
2014-03-01
A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.
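For reference, a minimal statement of the first-order optimality conditions such frameworks derive, written for an abstract equality-constrained problem (the notation here is assumed, not taken from the report): with state u, control z, objective J and PDE constraint c(u, z) = 0, form the Lagrangian and set its partial derivatives to zero:

    \mathcal{L}(u,z,\lambda) = J(u,z) + \langle \lambda,\, c(u,z) \rangle

    c(u,z) = 0                                \quad \text{(state equation)}
    c_u(u,z)^{*}\lambda = -J_u(u,z)           \quad \text{(adjoint equation)}
    J_z(u,z) + c_z(u,z)^{*}\lambda = 0        \quad \text{(control/gradient equation)}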
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik
2015-01-16
We derive analytic expressions of the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model as well as the steady-state hydrological sensitivities to greenhouse warming and solar geoengineering are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically-constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.
CSOLNP: Numerical Optimization Engine for Solving Non-linearly Constrained Problems.
Zahery, Mahsa; Maes, Hermine H; Neale, Michael C
2017-08-01
We introduce the optimizer CSOLNP, which is a C++ implementation of the R package RSOLNP (Ghalanos & Theussl, 2012, Rsolnp: General non-linear optimization using augmented Lagrange multiplier method, R package version 1) alongside some improvements. CSOLNP solves non-linearly constrained optimization problems using a Sequential Quadratic Programming (SQP) algorithm. CSOLNP, NPSOL (a popular FORTRAN implementation of the SQP method; Gill et al., 1986, User's guide for NPSOL (version 4.0): A Fortran package for nonlinear programming (No. SOL-86-2), Stanford, CA: Stanford University Systems Optimization Laboratory), and SLSQP (another SQP implementation, available as part of the NLopt collection; Johnson, 2014, The NLopt nonlinear-optimization package, retrieved from http://ab-initio.mit.edu/nlopt) are the three optimizers available in the OpenMx package. These optimizers are compared in terms of runtimes, final objective values, and memory consumption. A Monte Carlo analysis of the performance of the optimizers was performed on ordinal and continuous models with five variables and one or two factors. While the relative difference between the objective values is less than 0.5%, CSOLNP is in general faster than NPSOL and SLSQP for ordinal analysis. As for continuous data, none of the optimizers performs consistently faster than the others. In terms of memory usage, we used Valgrind's heap profiler tool Massif on one-factor threshold models. CSOLNP and NPSOL consume the same amount of memory, while SLSQP uses 71 MB more memory than the other two optimizers.
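A toy nonlinearly constrained problem of the kind these SQP codes solve, shown here through SciPy's SLSQP wrapper rather than OpenMx, NPSOL or NLopt (problem data invented):

    import numpy as np
    from scipy.optimize import minimize

    obj = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.5)**2
    cons = ({"type": "eq", "fun": lambda x: x[0]**2 + x[1]**2 - 4.0},
            {"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0})

    # SLSQP builds and solves a quadratic subproblem at each iterate,
    # the defining step of sequential quadratic programming.
    res = minimize(obj, x0=np.array([1.0, 1.0]), method="SLSQP",
                   constraints=cons)
    print(res.x, res.fun)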
ERIC Educational Resources Information Center
Boronico, Jess; Murdy, Jim; Kong, Xinlu
2014-01-01
This manuscript proposes a mathematical model to address faculty sufficiency requirements toward assuring overall high-quality management education at a global university. Constraining elements include full-time faculty coverage by discipline, location, and program, across multiple campus locations subject to stated service quality standards of…
REGIONAL ASSESSMENT OF METHANE EMISSION RATES FROM RESERVOIRS IN THE MIDWESTERN UNITED STATES
Reservoirs are a globally significant source of methane (CH4) to the atmosphere, but regional and global emission estimates are poorly constrained due to high variability in emission rates among reservoirs and a lack of measurements in some geographic areas. Methane emissi...
Global Concerns and Local Realities: The "Making Education Inclusive" Conference in Johannesburg
ERIC Educational Resources Information Center
Walton, Elizabeth
2015-01-01
Inclusive education is a global phenomenon expressed differently in various countries, and different contextual realities support or constrain the process of making education more inclusive. This column reports on an international conference on inclusive education in Johannesburg, South Africa, which provided the opportunity for delegates to share…
Optimal configuration of microstructure in ferroelectric materials by stochastic optimization
NASA Astrophysics Data System (ADS)
Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.
2010-07-01
An optimization procedure determining the ideal configuration at the microstructural level of ferroelectric (FE) materials is applied to maximize piezoelectricity. Piezoelectricity in ceramic FEs differs significantly from that of single crystals because of the presence of crystallites (grains) whose crystallographic axes are aligned imperfectly. The piezoelectric properties of a polycrystalline (ceramic) FE are inextricably related to the grain orientation distribution (texture). The set of combinations of variables, known as the solution space, which dictates the texture of a ceramic is unlimited, and hence the choice of the optimal solution that maximizes piezoelectricity is complicated. Thus, a stochastic global optimization combined with homogenization is employed for the identification of the optimal granular configuration of the FE ceramic microstructure with optimum piezoelectric properties. The macroscopic equilibrium piezoelectric properties of the polycrystalline FE are calculated using mathematical homogenization at each iteration step. The configuration of grains, characterized by their orientations, is generated at each iteration using a randomly selected set of orientation distribution parameters. The optimization procedure applied to the single crystalline phase compares well with the experimental data. Apparent enhancement of the piezoelectric coefficient d33 is observed in an optimally oriented BaTiO3 single crystal. Based on the good agreement of results with the published data on single crystals, we proceed to apply the methodology to polycrystals. A configuration of crystallites, simultaneously constraining the orientation distribution of the c-axis (polar axis) while incorporating ab-plane randomness, which would multiply the overall piezoelectricity in ceramic BaTiO3, is also identified. The orientation distribution of the c-axes is found to be a narrow Gaussian distribution centered around 45°. The piezoelectric coefficient in such a ceramic is found to be nearly three times that of the single crystal. Our optimization model provides designs for materials with enhanced piezoelectric performance, which should stimulate further studies involving materials possessing higher spontaneous polarization.
NASA Astrophysics Data System (ADS)
Yazdanpanah Moghadam, Peyman; Quaegebeur, Nicolas; Masson, Patrice
2015-01-01
Piezoelectric transducers are commonly used in structural health monitoring systems to generate and measure ultrasonic guided waves (GWs) by applying interfacial shear and normal stresses to the host structure. In most cases, in order to perform damage detection, advanced signal processing techniques are required, since a minimum of two dispersive modes propagate in the host structure. In this paper, a systematic approach for mode selection is proposed by optimizing the interfacial shear stress profile applied to the host structure, representing the first step of a global optimization of selective mode actuator design. This approach has the potential of reducing the complexity of signal processing tools, as the number of propagating modes could be reduced. Using the superposition principle, an analytical method is first developed for GW excitation by a finite number of uniform segments, each contributing a given elementary shear stress profile. Based on this, cost functions are defined in order to minimize the undesired modes and amplify the selected mode, and the optimization problem is solved with a parallel genetic algorithm optimization framework. Advantages of this method over more conventional transducer tuning approaches are that (1) the shear stress can be explicitly optimized to both excite one mode and suppress other undesired modes, (2) the size of the excitation area is not constrained and mode-selective excitation is still possible even if the excitation width is smaller than all excited wavelengths, and (3) the selectivity is increased and the bandwidth extended. The complexity of the optimal shear stress profile obtained is shown considering two cost functions with various optimal excitation widths and numbers of segments. Results illustrate that the desired mode (A0 or S0) can be excited dominantly over other modes up to a wave power ratio of 10^10 using an optimal shear stress profile.
Benchmarking optimization software with COPS 3.0.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolan, E. D.; More, J. J.; Munson, T. S.
2004-05-24
The authors describe version 3.0 of the COPS set of nonlinearly constrained optimization problems. They have added new problems, as well as streamlined and improved most of the problems. They also provide a comparison of the FILTER, KNITRO, LOQO, MINOS, and SNOPT solvers on these problems.
Evaluating data worth for ground-water management under uncertainty
Wagner, B.J.
1999-01-01
A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models - a chance-constrained ground-water management model and an integer-programming sampling network design model - to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) the optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information - i.e., the projected reduction in management costs - with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with the greatest net economic benefit for ground-water management.
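For readers unfamiliar with the chance-constrained model in step (1), the standard deterministic equivalent under Gaussian uncertainty is (a textbook form, not quoted from the paper):

    \Pr\{\, a^{\mathsf{T}} x \le b \,\} \ge 1 - \alpha
    \quad\Longleftrightarrow\quad
    \bar{a}^{\mathsf{T}} x + z_{1-\alpha}\,\sqrt{x^{\mathsf{T}} \Sigma\, x} \le b

where a ~ N(ā, Σ) collects the uncertain model coefficients (e.g., hydraulic responses) and z_{1-α} is the standard normal quantile, so the constraint holds with probability at least 1-α.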
Chang, Y K; Lim, H C
1989-08-20
A multivariable on-line adaptive optimization algorithm using a bilevel forgetting factor method was developed and applied to a continuous baker's yeast culture in simulation and experimental studies to maximize the cellular productivity by manipulating the dilution rate and the temperature. The algorithm showed good optimization speed, adaptability, and reoptimization capability, and was able to stably maintain the process around the optimum point for an extended period of time. Two cases were investigated: an unconstrained and a constrained optimization. In the constrained optimization, the ethanol concentration was used as an index of the baking quality of the yeast cells. An equality constraint with a quadratic penalty was imposed on the ethanol concentration to keep its level close to a hypothetical "optimum" value. The developed algorithm was experimentally applied to a baker's yeast culture to demonstrate its validity. Only the unconstrained optimization was carried out experimentally. A set of tuning parameter values was suggested after evaluating the results from several experimental runs. With those tuning parameter values the optimization took 50-90 h. At the attained steady state, the dilution rate was 0.310 h⁻¹, the temperature 32.8 °C, and the cellular productivity 1.50 g/L/h.
A subgradient approach for constrained binary optimization via quantum adiabatic evolution
NASA Astrophysics Data System (ADS)
Karimi, Sahar; Ronagh, Pooya
2017-08-01
An outer approximation method has been proposed in the literature for solving the Lagrangian dual of a constrained binary quadratic programming problem via quantum adiabatic evolution. This would be an efficient prescription for solving the Lagrangian dual problem given an ideally noise-free quantum adiabatic system. However, current implementations of quantum annealing systems demand methods that are efficient at handling possible sources of noise. In this paper, we consider a subgradient method for finding an optimal primal-dual pair for the Lagrangian dual of a constrained binary polynomial programming problem. We then study the quadratic stable set (QSS) problem as a case study. We see that this method applied to the QSS problem can be viewed as an instance-dependent penalty-term approach that avoids large penalty coefficients. Finally, we report our experimental results of using the D-Wave 2X quantum annealer and conclude that our approach helps this quantum processor succeed more often in solving these problems compared to the usual penalty-term approaches.
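The dual iteration at the heart of such a subgradient method can be sketched as follows; solve_inner is a hypothetical oracle for the inner minimization, which in the paper's setting would be delegated to the annealer:

    import numpy as np

    # Projected subgradient ascent on the Lagrangian dual
    # d(lam) = min_x f(x) + lam . g(x), maximized over lam >= 0.
    def dual_subgradient(solve_inner, g, lam0, steps=100, a0=1.0):
        lam = np.asarray(lam0, dtype=float)
        for k in range(1, steps + 1):
            x = solve_inner(lam)        # argmin_x f(x) + lam . g(x)
            subgrad = g(x)              # g(x) is a subgradient of d at lam
            lam = np.maximum(0.0, lam + (a0 / k) * subgrad)  # project onto lam >= 0
        return lam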
Rapid Slewing of Flexible Space Structures
2015-09-01
…axis gimbal with elastic joints. The performance of the system can be enhanced by designing antenna maneuvers in which the flexible effects are properly constrained, thus … the effects of the nonlinearities so the vibrational motion can be constrained for a time-optimal slew. It is shown that by constructing an …
Bechtle, Philip; Camargo-Molina, José Eliel; Desch, Klaus; ...
2016-02-24
We investigate the constrained Minimal Supersymmetric Standard Model (cMSSM) in the light of constraining experimental and observational data from precision measurements, astrophysics, direct supersymmetry searches at the LHC and measurements of the properties of the Higgs boson, by means of a global fit using the program Fittino. As in previous studies, we find rather poor agreement of the best fit point with the global data. We also investigate the stability of the electroweak vacuum in the preferred region of parameter space around the best fit point. We find that the vacuum is metastable, with a lifetime significantly longer than the age of the Universe. For the first time in a global fit of supersymmetry, we employ a consistent methodology to evaluate the goodness-of-fit of the cMSSM in a frequentist approach by deriving p values from large sets of toy experiments. We analyse analytically and quantitatively the impact of the choice of the observable set on the p value, and in particular its dilution when confronting the model with a large number of barely constraining measurements. Lastly, for the preferred sets of observables, we obtain p values for the cMSSM below 10%, i.e., we exclude the cMSSM as a model at the 90% confidence level.
Backes, Bradley J; Longenecker, Kenton; Hamilton, Gregory L; Stewart, Kent; Lai, Chunqiu; Kopecka, Hana; von Geldern, Thomas W; Madar, David J; Pei, Zhonghua; Lubben, Thomas H; Zinker, Bradley A; Tian, Zhenping; Ballaron, Stephen J; Stashko, Michael A; Mika, Amanda K; Beno, David W A; Kempf-Grote, Anita J; Black-Schaefer, Candace; Sham, Hing L; Trevillyan, James M
2007-04-01
A novel series of pyrrolidine-constrained phenethylamines were developed as dipeptidyl peptidase IV (DPP4) inhibitors for the treatment of type 2 diabetes. The cyclohexene ring of lead-like screening hit 5 was replaced with a pyrrolidine to enable parallel chemistry, and protein co-crystal structural data guided the optimization of N-substituents. Employing this strategy, a >400x improvement in potency over the initial hit was realized in rapid fashion. Optimized compounds are potent and selective inhibitors with excellent pharmacokinetic profiles. Compound 30 was efficacious in vivo, lowering blood glucose in ZDF rats that were allowed to feed freely on a mixed meal.
Statistical mechanics of budget-constrained auctions
NASA Astrophysics Data System (ADS)
Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.
2009-07-01
Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.
NASA Astrophysics Data System (ADS)
Liu, Yuan; Wang, Mingqiang; Ning, Xingyao
2018-02-01
Spinning reserve (SR) should be scheduled considering the balance between economy and reliability. To address the computational intractability caused by the computation of the loss of load probability (LOLP), many probabilistic methods use simplified formulations of the LOLP to improve computational efficiency. Two tradeoffs embedded in the SR optimization model are not explicitly analyzed in these methods. In this paper, the primary and secondary tradeoffs between economy and reliability in the maximum-LOLP-constrained unit commitment (UC) model are explored and analyzed in a small system and in the IEEE-RTS system. The analysis of the two tradeoffs can help in establishing new efficient simplified LOLP formulations and new SR optimization models.
NASA Astrophysics Data System (ADS)
Bloom, A. Anthony; Bowman, Kevin W.; Lee, Meemong; Turner, Alexander J.; Schroeder, Ronny; Worden, John R.; Weidner, Richard; McDonald, Kyle C.; Jacob, Daniel J.
2017-06-01
Wetland emissions remain one of the principal sources of uncertainty in the global atmospheric methane (CH4) budget, largely due to poorly constrained process controls on CH4 production in waterlogged soils. Process-based estimates of global wetland CH4 emissions and their associated uncertainties can provide crucial prior information for model-based top-down CH4 emission estimates. Here we construct a global wetland CH4 emission model ensemble for use in atmospheric chemical transport models (WetCHARTs version 1.0). Our 0.5° × 0.5° resolution model ensemble is based on satellite-derived surface water extent and precipitation reanalyses, nine heterotrophic respiration simulations (eight carbon cycle models and a data-constrained terrestrial carbon cycle analysis) and three temperature dependence parameterizations for the period 2009-2010; an extended ensemble subset based solely on precipitation and the data-constrained terrestrial carbon cycle analysis is derived for the period 2001-2015. We incorporate the mean of the full and extended model ensembles into GEOS-Chem and compare the model against surface measurements of atmospheric CH4; the model performance (site-level and zonal mean anomaly residuals) compares favourably against published wetland CH4 emissions scenarios. We find that uncertainties in carbon decomposition rates and the wetland extent together account for more than 80 % of the dominant uncertainty in the timing, magnitude and seasonal variability in wetland CH4 emissions, although uncertainty in the temperature CH4 : C dependence is a significant contributor to seasonal variations in mid-latitude wetland CH4 emissions. The combination of satellite, carbon cycle models and temperature dependence parameterizations provides a physically informed structural a priori uncertainty that is critical for top-down estimates of wetland CH4 fluxes. Specifically, our ensemble can provide enhanced information on the prior CH4 emission uncertainty and the error covariance structure, as well as a means for using posterior flux estimates and their uncertainties to quantitatively constrain the biogeochemical process controls of global wetland CH4 emissions.
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
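A minimal CMA-ES loop of the kind used to locate low-misfit regions, sketched with the third-party pycma package; the quadratic misfit below is a placeholder for the regularized inversion objective:

    import cma  # third-party pycma package (pip install cma)

    def misfit(m):
        return sum((mi - 1.0)**2 for mi in m)       # placeholder objective

    es = cma.CMAEvolutionStrategy(10 * [0.0], 0.5)  # initial mean, step size
    while not es.stop():
        models = es.ask()                           # sample candidate models
        es.tell(models, [misfit(m) for m in models])
    print(es.result.xbest)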
Reinforcement learning solution for HJB equation arising in constrained optimal control problem.
Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong
2015-11-01
The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
An open-loop guidance architecture for navigationally robust on-orbit docking
NASA Technical Reports Server (NTRS)
Chern, Hung-Sheng
1995-01-01
The development of an open-loop guidance architecture is outlined for autonomous rendezvous and docking (AR&D) missions to determine whether the Global Positioning System (GPS) can be used in place of optical sensors for relative initial position determination of the chase vehicle. Feasible command trajectories for one-, two-, and three-impulse AR&D maneuvers are determined using constrained trajectory optimization. Early AR&D command trajectory results suggest that docking accuracies are most sensitive to vertical position errors at the initial condition of the chase vehicle. Thus, a feasible command trajectory is based on maximizing the size of the locus of initial vertical positions for which a fixed sequence of impulses will translate the chase vehicle into the target while satisfying docking accuracy requirements. Documented accuracies are used to determine whether relative GPS can achieve the vertical position error requirements of the impulsive command trajectories. Preliminary development of a thruster management system for the Cargo Transfer Vehicle (CTV) based on optimal throttle settings is presented to complete the guidance architecture. Results show that a guidance architecture based on two-impulse maneuvers generated the best performance in terms of initial position error and total velocity change for the chase vehicle.
Zhang, Yong-Feng; Chiang, Hsiao-Dong
2017-09-01
A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
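For orientation, a bare-bones PSO loop of the kind the methodology builds on (the consensus and Trust-Tech stages are not reproduced here; all parameter values are generic defaults, not the paper's):

    import numpy as np

    def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
        rng = np.random.default_rng(0)
        x = rng.uniform(-bound, bound, (n, dim))   # particle positions
        v = np.zeros((n, dim))                     # particle velocities
        pbest = x.copy()
        pval = np.apply_along_axis(f, 1, x)
        gbest = pbest[pval.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            val = np.apply_along_axis(f, 1, x)
            better = val < pval                    # update personal bests
            pbest[better], pval[better] = x[better], val[better]
            gbest = pbest[pval.argmin()].copy()    # update global best
        return gbest, pval.min()

    best, fbest = pso(lambda z: np.sum(z**2), dim=10)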
Hatton, Leslie; Warr, Gregory
2015-01-01
That the physicochemical properties of amino acids constrain the structure, function and evolution of proteins is not in doubt. However, principles derived from information theory may also set bounds on the structure (and thus also the evolution) of proteins. Here we analyze the global properties of the full set of proteins in release 13-11 of the SwissProt database, showing by experimental test of predictions from information theory that their collective structure exhibits properties that are consistent with their being guided by a conservation principle. This principle (Conservation of Information) defines the global properties of systems composed of discrete components each of which is in turn assembled from discrete smaller pieces. In the system of proteins, each protein is a component, and each protein is assembled from amino acids. Central to this principle is the inter-relationship of the unique amino acid count and total length of a protein and its implications for both average protein length and occurrence of proteins with specific unique amino acid counts. The unique amino acid count is simply the number of distinct amino acids (including those that are post-translationally modified) that occur in a protein, and is independent of the number of times that the particular amino acid occurs in the sequence. Conservation of Information does not operate at the local level (it is independent of the physicochemical properties of the amino acids) where the influences of natural selection are manifest in the variety of protein structure and function that is well understood. Rather, this analysis implies that Conservation of Information would define the global bounds within which the whole system of proteins is constrained; thus it appears to be acting to constrain evolution at a level different from natural selection, a conclusion that appears counter-intuitive but is supported by the studies described herein.
On the Role of Hyper-arid Regions within the Virtual Water Trade Network
NASA Astrophysics Data System (ADS)
Aggrey, James; Alshamsi, Aamena; Molini, Annalisa
2016-04-01
Climate change, economic development, and population growth are bound to increasingly impact global water resources, posing a significant threat to the sustainable development of arid regions, where water consumption highly exceeds the natural carrying capacity, the population growth rate is high, and climate variability is going to impact both water consumption and availability. Virtual Water Trade (VWT) - i.e. the international trade network of water-intensive products - has been proposed as a possible solution to optimize the allocation of water resources on the global scale. By increasing food availability and lowering food prices, it may in fact help the rapid development of water-scarce regions. The structure of the VWT network has been analyzed by a number of authors, both in connection with trade policies, socioeconomic constraints and agricultural efficiency. However, a systematic analysis of the structure and dynamics of the VWT network conditional on aridity, climatic forcing and energy availability is still missing. Our goal is hence to analyze the role of arid and hyper-arid regions within the VWT network under diverse climatic, demographic, and energy constraints, with the aim of contributing to the ongoing Energy-Water-Food nexus discussion. In particular, we focus on the hyper-arid lands of the Arabian Peninsula, the role they play in the global network and the assessment of their specific criticalities, as reflected in the resilience of the VWT network.
Application’s Method of Quadratic Programming for Optimization of Portfolio Selection
NASA Astrophysics Data System (ADS)
Kawamoto, Shigeru; Takamoto, Masanori; Kobayashi, Yasuhiro
Investors and fund managers face the optimization of portfolio selection, that is, determining the kind and quantity of investments among several stocks. We have developed a method that obtains the optimal stock portfolio two to three times more rapidly than the conventional method, using efficient universal optimization. The method is characterized by the quadratic matrix of the utility function and the constraint matrices being divided into several sub-matrices, exploiting the structure of these matrices.
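The underlying quadratic program is the classic mean-variance portfolio problem; a small sketch with invented data follows (the paper's sub-matrix decomposition is not reproduced):

    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.08, 0.10, 0.12])       # expected returns (invented)
    S = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.15]])      # return covariance matrix
    risk_aversion = 2.0

    # Quadratic utility: risk penalty minus expected return.
    obj = lambda x: risk_aversion * x @ S @ x - mu @ x
    cons = ({"type": "eq", "fun": lambda x: np.sum(x) - 1.0},)  # fully invested

    res = minimize(obj, x0=np.ones(3) / 3, method="SLSQP",
                   bounds=[(0.0, 1.0)] * 3, constraints=cons)   # no short sales
    print(res.x)    # optimal portfolio weights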
Robust fuel- and time-optimal control of uncertain flexible space structures
NASA Technical Reports Server (NTRS)
Wie, Bong; Sinha, Ravi; Sunkel, John; Cox, Ken
1993-01-01
The problem of computing open-loop, fuel- and time-optimal control inputs for flexible space structures in the face of modeling uncertainty is investigated. Robustified, fuel- and time-optimal pulse sequences are obtained by solving a constrained optimization problem subject to robustness constraints. It is shown that 'bang-off-bang' pulse sequences with a finite number of switchings provide a practical tradeoff among the maneuvering time, fuel consumption, and performance robustness of uncertain flexible space structures.
Fitting Prony Series To Data On Viscoelastic Materials
NASA Technical Reports Server (NTRS)
Hill, S. A.
1995-01-01
Improved method of fitting Prony series to data on viscoelastic materials involves use of least-squares optimization techniques. The method yields closer correlation with data than the traditional method. It involves no assumptions regarding the γ_i's and higher-order terms, and provides for as many Prony terms as needed to represent higher-order subtleties in the data. The curve-fitting problem is treated as a design-optimization problem and solved by use of partially constrained optimization techniques.
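A least-squares Prony fit can be sketched as follows, assuming the usual relaxation-modulus form E(t) = E_inf + sum_i E_i exp(-t/tau_i) and using positivity bounds as the partial constraints; the data here are synthetic:

    import numpy as np
    from scipy.optimize import least_squares

    def prony(p, t):
        e_inf, e1, tau1, e2, tau2 = p
        return e_inf + e1 * np.exp(-t / tau1) + e2 * np.exp(-t / tau2)

    t = np.logspace(-2, 3, 60)
    data = prony([1.0, 2.0, 0.5, 1.5, 50.0], t)      # synthetic "measurements"
    data *= 1.0 + 0.01 * np.random.default_rng(0).standard_normal(t.size)

    # Constrain moduli and relaxation times to be positive.
    fit = least_squares(lambda p: prony(p, t) - data,
                        x0=[1.0, 1.0, 1.0, 1.0, 10.0],
                        bounds=(0.0, np.inf))
    print(fit.x)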
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, J; Chao, M
2016-06-15
Purpose: To develop a novel strategy to extract the respiratory motion of the thoracic diaphragm from kilovoltage cone beam computed tomography (CBCT) projections by a constrained linear regression optimization technique. Methods: A parabolic function was identified as the geometric model and was employed to fit the shape of the diaphragm on the CBCT projections. The search was initialized by five manually placed seeds on a pre-selected projection image. Temporal redundancies, the enabling phenomenology in video compression and encoding techniques, inherent in the dynamic properties of diaphragm motion were integrated with the geometric shape of the diaphragm boundary and an associated algebraic constraint that significantly reduced the search space of viable parabolic parameters; the resulting problem can be effectively optimized by a constrained linear regression approach on the subsequent projections. The innovative algebraic constraint stipulating the kinetic range of the motion, together with the spatial constraint preventing any unphysical deviations, made it possible to obtain the optimal contour of the diaphragm with minimal initialization. The algorithm was assessed with a fluoroscopic movie acquired in a fixed anterior-posterior direction and with kilovoltage CBCT projection image sets from four lung and two liver patients. The automatic tracing by the proposed algorithm and manual tracking by a human operator were compared in both the space and frequency domains. Results: The error between the estimated and manual detections for the fluoroscopic movie was 0.54 mm with a standard deviation (SD) of 0.45 mm, while the average error for the CBCT projections was 0.79 mm with an SD of 0.64 mm for all enrolled patients. The submillimeter accuracy demonstrates the promise of the proposed constrained linear regression approach for tracking diaphragm motion on rotational projection images. Conclusion: The new algorithm provides a potential solution for rendering diaphragm motion and ultimately improving tumor motion management in radiation therapy of cancer patients.
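Because the parabolic model y = a*x^2 + b*x + c is linear in its parameters, the constrained fit can be sketched as a bounded linear least-squares problem; the bounds below are invented stand-ins for the paper's algebraic constraints:

    import numpy as np
    from scipy.optimize import lsq_linear

    x = np.linspace(-60.0, 60.0, 121)            # detector column coordinates
    y = -0.01 * x**2 + 0.2 * x + 150.0           # candidate edge points
    y = y + np.random.default_rng(0).standard_normal(x.size)

    # Design matrix for y = a*x^2 + b*x + c.
    A = np.column_stack([x**2, x, np.ones_like(x)])

    # Bounds keep (a, b, c) within a physically plausible range.
    res = lsq_linear(A, y, bounds=([-0.05, -1.0, 100.0],
                                   [0.0, 1.0, 200.0]))
    a, b, c = res.x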
SU-E-T-551: Monitor Unit Optimization in Stereotactic Body Radiation Therapy for Stage I Lung Cancer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, B-T; Lu, J-Y
2015-06-15
Purpose: The study aims to reduce the monitor units (MUs) in stereotactic body radiation therapy (SBRT) treatment for lung cancer by adjusting the optimization parameters. Methods: Fourteen patients with stage I non-small cell lung cancer (NSCLC) were enrolled. Three groups of parameters were adjusted to investigate their effects on MU numbers and organ-at-risk (OAR) sparing: (1) the upper objective of the planning target volume (UOPTV); (2) the strength setting in the MU constraining objective; (3) the max MU setting in the MU constraining objective. Results: We found that the parameters in the optimizer influenced the MU numbers in a priority-, strength- and max-MU-dependent manner. MU numbers showed a decreasing trend with increasing UOPTV. MU numbers with low, medium and high priority for the UOPTV were 428±54, 312±48 and 258±31 MU/Gy, respectively. High priority for the UOPTV also spared the heart, cord and lung while maintaining PTV coverage comparable to the low and medium priority groups. MU numbers tended to decrease with increasing strength and decreasing max MU setting. With maximum strength, the MU numbers reached their minimum while maintaining comparable or improved dose to the normal tissues. The MU numbers continued to decline at 85% and 75% max MU settings but no longer decreased at 50% and 25%. Combining high priority for the UOPTV with the MU constraining objectives, the MU numbers can be decreased to as low as 223±26 MU/Gy. Conclusion: The priority of the UOPTV and the MU constraining objective in the optimizer affect the MU numbers in SBRT treatment for lung cancer. Giving high priority to the UOPTV, setting the strength to the maximum value and the max MU to 50% in the MU objective achieves the lowest MU numbers while maintaining comparable or improved OAR sparing.
HRD Challenges Faced in the Post-Global Financial Crisis Period--Insights from the UK
ERIC Educational Resources Information Center
Keeble-Ramsay, Diane Rose; Armitage, Andrew
2015-01-01
Purpose: The paper aims to report initial empirical research that examines UK employees' perceptions of the changing nature of work since the Global Financial Crisis (GFC) to consider how the financial context may have constrained HRD practice and more sustainable approaches. Design/methodology/approach: Focus group research was facilitated…
ERIC Educational Resources Information Center
Seline, Richard
2006-01-01
Five trends are emerging that will not only change the role of human capital in the United States but will also challenge the legacy system of workforce development, skills and competency-focused institutions, and assuredly, community colleges. Workforce investment boards, for example, are currently geographically constrained in environments that…
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed that the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) fast detection of infeasibility (i.e., control authority is not sufficient for soft landing) for subsequent fault response; 2) the use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency; 3) an enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed; 4) an additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization; and 5) explicit incorporation of the Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage typically lasts only a few minutes, ignoring the rotation rate of Mars can introduce tens of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
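The SOCP structure that makes this solvable onboard can be illustrated with a toy planar minimum-fuel descent in CVXPY (no lower thrust bound, glide-slope, or rotation terms; all numbers are invented):

    import cvxpy as cp
    import numpy as np

    N, dt = 40, 1.0
    g = np.array([0.0, -3.7])                 # Mars gravity, m/s^2
    r = cp.Variable((N + 1, 2))               # position
    v = cp.Variable((N + 1, 2))               # velocity
    u = cp.Variable((N, 2))                   # thrust acceleration

    cons = [r[0] == [300.0, 1000.0], v[0] == [-20.0, -30.0],
            r[N] == 0.0, v[N] == 0.0]         # land at the target, at rest
    for k in range(N):
        cons += [v[k + 1] == v[k] + dt * (u[k] + g),
                 r[k + 1] == r[k] + dt * v[k],
                 cp.norm(u[k]) <= 12.0]       # second-order cone constraint

    fuel = cp.sum(cp.norm(u, axis=1))         # proxy for propellant use
    cp.Problem(cp.Minimize(fuel), cons).solve()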
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable to the estimation of constraint-based models with many constraints and large numbers of parameters (which use EM), like the HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and EM for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. The fusion strategies for the CEL-EM use a staged-fusion approach, where EM is plugged into the EA periodically after the EA has executed for a specific period of time, to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using variable segmentation to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).
NASA Astrophysics Data System (ADS)
Caldararu, S.; Smith, M. J.; Purves, D.; Emmott, S.
2013-12-01
Global agriculture will, in the future, be faced with two main challenges: climate change and an increase in global food demand driven by an increase in population and changes in consumption habits. To be able to predict both the impacts of changes in climate on crop yields and the changes in agricultural practices necessary to respond to such impacts we currently need to improve our understanding of crop responses to climate and the predictive capability of our models. Ideally, what we would have at our disposal is a modelling tool which, given certain climatic conditions and agricultural practices, can predict the growth pattern and final yield of any of the major crops across the globe. We present a simple, process-based crop growth model based on the assumption that plants allocate above- and below-ground biomass to maintain overall carbon optimality and that, to maintain this optimality, the reproductive stage begins at peak nitrogen uptake. The model includes responses to available light, water, temperature and carbon dioxide concentration as well as nitrogen fertilisation and irrigation. The model is data constrained at two sites, the Yaqui Valley, Mexico for wheat and the Southern Great Plains flux site for maize and soybean, using a robust combination of space-based vegetation data (including data from the MODIS and Landsat TM and ETM+ instruments), as well as ground-based biomass and yield measurements. We show a number of climate response scenarios, including increases in temperature and carbon dioxide concentrations as well as responses to irrigation and fertiliser application.
NASA Astrophysics Data System (ADS)
Park, Jong-Yeon; Stock, Charles A.; Yang, Xiaosong; Dunne, John P.; Rosati, Anthony; John, Jasmin; Zhang, Shaoqing
2018-03-01
Reliable estimates of historical and current biogeochemistry are essential for understanding past ecosystem variability and predicting future changes. Efforts to translate improved physical ocean state estimates into improved biogeochemical estimates, however, are hindered by high biogeochemical sensitivity to transient momentum imbalances that arise during physical data assimilation. Most notably, the breakdown of geostrophic constraints on data assimilation in equatorial regions can lead to spurious upwelling, resulting in excessive equatorial productivity and biogeochemical fluxes. This hampers efforts to understand and predict the biogeochemical consequences of El Niño and La Niña. We develop a strategy to robustly integrate an ocean biogeochemical model with an ensemble coupled-climate data assimilation system used for seasonal to decadal global climate prediction. Addressing spurious vertical velocities requires two steps. First, we find that tightening constraints on atmospheric data assimilation maintains a better equatorial wind stress and pressure gradient balance. This reduces spurious vertical velocities, but those remaining still produce substantial biogeochemical biases. The remainder is addressed by imposing stricter fidelity to model dynamics over data constraints near the equator. We determine an optimal choice of model-data weights that removed spurious biogeochemical signals while benefitting from off-equatorial constraints that still substantially improve equatorial physical ocean simulations. Compared to the unconstrained control run, the optimally constrained model reduces equatorial biogeochemical biases and markedly improves the equatorial subsurface nitrate concentrations and hypoxic area. The pragmatic approach described herein offers a means of advancing earth system prediction in parallel with continued data assimilation advances aimed at fully considering equatorial data constraints.
Solving LP Relaxations of Large-Scale Precedence Constrained Problems
NASA Astrophysics Data System (ADS)
Bienstock, Daniel; Zuckerberg, Mark
We describe new algorithms for solving linear programming relaxations of very large precedence constrained production scheduling problems. We present theory that motivates a new set of algorithmic ideas that can be employed on a wide range of problems; on data sets arising in the mining industry our algorithms prove effective on problems with many millions of variables and constraints, obtaining provably optimal solutions in a few minutes of computation.
NASA Technical Reports Server (NTRS)
Morgenthaler, George W.; Glover, Fred W.; Woodcock, Gordon R.; Laguna, Manuel
2005-01-01
The 1/14/04 USA Space Exploration/Utilization Initiative invites all Space-faring Nations, all Space User Groups in Science, Space Entrepreneuring, Advocates of Robotic and Human Space Exploration, Space Tourism and Colonization Promoters, etc., to join an International Space Partnership. With more Space-faring Nations and Space User Groups each year, such a Partnership would require Multi-year (35 yr.-45 yr.) Space Mission Planning. With each Nation and Space User Group demanding priority for its missions, one needs a methodology for objectively selecting the best mission sequences to be added annually to this 45 yr. Moving Space Mission Plan. How can this be done? Planners have suggested building a Reusable, Sustainable, Space Transportation Infrastructure (RSSTI) to increase Mission synergism, reduce cost, and increase scientific and societal returns from this Space Initiative. Morgenthaler and Woodcock presented a Paper at the 55th IAC, Vancouver B.C., Canada, entitled Constrained Optimization Models For Optimizing Multi-Year Space Programs. This Paper showed that a Binary Integer Programming (BIP) Constrained Optimization Model combined with the NASA ATLAS Cost and Space System Operational Parameter Estimating Model has the theoretical capability to solve such problems. IAA Commission III, Space Technology and Space System Development, in its ACADEMY DAY meeting at Vancouver, requested that the Authors and NASA experts find several Space Exploration Architectures (SEAs), apply the combined BIP/ATLAS Models, and report the results at the 56th Fukuoka IAC. While the mathematical Model is in Ref. [2], this Paper presents the Application saga of that effort.
Ostrander, Chadlin M.; Owens, Jeremy D.; Nielsen, Sune G.
2017-01-01
The rates of marine deoxygenation leading to Cretaceous Oceanic Anoxic Events are poorly recognized and constrained. If increases in primary productivity are the primary driver of these episodes, progressive oxygen loss from global waters should predate enhanced carbon burial in underlying sediments—the diagnostic Oceanic Anoxic Event relic. Thallium isotope analysis of organic-rich black shales from Demerara Rise across Oceanic Anoxic Event 2 reveals evidence of expanded sediment-water interface deoxygenation ~43 ± 11 thousand years before the globally recognized carbon cycle perturbation. This evidence for rapid oxygen loss leading to an extreme ancient climatic event has timely implications for the modern ocean, which is already experiencing large-scale deoxygenation. PMID:28808684
Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering
NASA Technical Reports Server (NTRS)
Tilton, James C.
2002-01-01
This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is the special code that is required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and multiple-processor computer systems are described. Results with Landsat TM data are included comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun
2016-01-01
In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in resource-constrained wireless networks, group keys should be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool, which can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential-arithmetic-based SGKD (E-SGKD) schemes reduce the storage overhead to a constant and thus are suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. The simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show the application of our proposed scheme. PMID:27136550
NASA Astrophysics Data System (ADS)
Bacour, C.; Maignan, F.; Porcar-Castell, A.; MacBean, N.; Goulas, Y.; Flexas, J.; Guanter, L.; Joiner, J.; Peylin, P.
2016-12-01
A new era for improving our knowledge of the terrestrial carbon cycle at the global scale has begun with recent studies on the relationships between remotely sensed Sun-Induced Fluorescence (SIF) and plant photosynthetic activity (GPP), and with the availability of such satellite-derived products now "routinely" produced from GOSAT, GOME-2, or OCO-2 observations. Assimilating SIF data into terrestrial ecosystem models (TEMs) represents a novel opportunity to reduce the uncertainty of their predictions with respect to carbon-climate feedbacks, in particular the uncertainties resulting from inaccurate parameter values. A prerequisite is a correct representation in TEMs of the several drivers of plant fluorescence from the leaf to the canopy scale, and in particular of the competing processes of photochemistry and non-photochemical quenching (NPQ). In this study, we present the first results of a global-scale assimilation of GOME-2 SIF products within a new version of the ORCHIDEE land surface model that includes a physical module of plant fluorescence. At the leaf level, the regulation of fluorescence yield is simulated both by the photosynthesis module of ORCHIDEE, to calculate the photochemical yield, and by a parametric model to estimate NPQ. The latter has been calibrated on leaf fluorescence measurements performed for boreal coniferous and Mediterranean vegetation species. A parametric representation of the SCOPE radiative transfer model is used to model the plant fluorescence fluxes for PSI and PSII and the scaling up to the canopy level. The ORCHIDEE-FluOR model is first evaluated with respect to in situ measurements of plant fluorescence flux and photochemical yield for scots pine and wheat. The potential of SIF data to constrain the modelled GPP is evaluated by assimilating one year of GOME-2 SIF products within ORCHIDEE-FluOR. We investigate in particular the changes in the spatial patterns of GPP following the optimization of the photosynthesis and phenology parameters. We analyze the differences obtained with a simpler fluorescence model in ORCHIDEE that hypothesizes a linear relationship between SIF and GPP, and with an independent simultaneous assimilation of three data streams (in situ flux measurements, satellite-derived NDVI and atmospheric CO2 concentrations).
Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations
Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad
2013-01-01
Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formulated as a constrained optimization problem, where the objective function is the ratio of network throughput to network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper Transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
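As a rough sketch of the parametric route described above, the toy Python snippet below runs a Dinkelbach-style iteration for a sum-rate-over-power ratio, where the inner problem has the water-filling structure the abstract mentions. The channel gains, power limits and circuit power are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dinkelbach_ee(h, p_max, p_circuit, tol=1e-8, max_iter=50):
    """Maximize sum(log(1+h*p)) / (sum(p)+p_circuit) over 0 <= p <= p_max.

    The inner problem max f(p) - lam*g(p) has a closed-form
    water-filling solution: p_i = clip(1/lam - 1/h_i, 0, p_max)."""
    lam = 0.0
    for _ in range(max_iter):
        p = np.clip(1.0 / max(lam, 1e-12) - 1.0 / h, 0.0, p_max)
        f = np.sum(np.log1p(h * p))          # network throughput (nats)
        g = np.sum(p) + p_circuit            # total power consumption
        if f - lam * g < tol:                # Dinkelbach stopping rule
            break
        lam = f / g                          # update the ratio parameter
    return p, lam

h = np.array([2.0, 1.0, 0.5])                # assumed channel gains
p, ee = dinkelbach_ee(h, p_max=1.0, p_circuit=0.5)
print("power allocation:", p, "energy efficiency:", ee)
```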
Fan, Quan-Yong; Yang, Guang-Hong
2017-01-01
State inequality constraints have rarely been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without requiring a probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birge, J. R.; Qi, L.; Wei, Z.
In this paper we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz-John point of the problem. We introduce a Fritz-John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC), and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that, for any FJ point y ∈ N(z) \ {z}, f_0(y) ≠ f_0(z), where f_0 is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The result (i) implies that the entire iteration point sequence generated by the method converges to an FJ point. We also show that if the parameters are chosen large enough, a unit step length can be accepted by the proposed algorithm.
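The core of a Topkis-Veinott-type step is the direction-finding subproblem. The sketch below solves one common quadratic form of that subproblem with a general-purpose solver; the paper's variant may use a different quadratic term, and the toy problem data are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def tv_direction(x, grad_f, g_vals, g_grads):
    """Direction-finding subproblem (a generic sketch):
    min_{d, sigma}  sigma + 0.5*||d||^2
    s.t.  grad_f.T d <= sigma,   g_i(x) + grad_g_i.T d <= sigma.
    A value sigma < 0 certifies a feasible descent direction d."""
    n = len(x)

    def obj(z):                       # z = [d, sigma]
        d, s = z[:n], z[n]
        return s + 0.5 * d @ d

    cons = [{"type": "ineq", "fun": lambda z: z[n] - grad_f @ z[:n]}]
    for gv, gg in zip(g_vals, g_grads):
        cons.append({"type": "ineq",
                     "fun": lambda z, gv=gv, gg=gg: z[n] - gv - gg @ z[:n]})
    res = minimize(obj, np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]

# toy problem: min x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0, at x = (1, 1)
x = np.array([1.0, 1.0])
d, sigma = tv_direction(x, grad_f=2 * x,
                        g_vals=[1 - x.sum()], g_grads=[np.array([-1.0, -1.0])])
print(d, sigma)   # sigma < 0 -> d is a feasible descent direction
```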
Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.; ...
2016-02-10
We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints as bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility- and optimality-based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already developed techniques that enumerate basic and thermally coupled distillation configurations, to provide, for the first time, a global optimization based rank-list of distillation configurations.
Improving Free-Piston Stirling Engine Specific Power
NASA Technical Reports Server (NTRS)
Briggs, Maxwell Henry
2014-01-01
This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling Engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.
Improving Free-Piston Stirling Engine Specific Power
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.
2015-01-01
This work uses analytical methods to demonstrate the potential benefits of optimizing piston and/or displacer motion in a Stirling engine. Isothermal analysis was used to show the potential benefits of ideal motion in ideal Stirling engines. Nodal analysis is used to show that ideal piston and displacer waveforms are not optimal in real Stirling engines. Constrained optimization was used to identify piston and displacer waveforms that increase Stirling engine specific power.
Morris, Melody K.; Saez-Rodriguez, Julio; Lauffenburger, Douglas A.; Alexopoulos, Leonidas G.
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical models based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to a lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms. PMID:23226239
NASA Astrophysics Data System (ADS)
Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao
2017-02-01
Inspired by the ASTFA method, the adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is established first. The parameters of the filter are determined by solving a nonlinear optimization problem. A regulated differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed to solve problems that exist in ASTFA. The Gauss-Newton type method applied to solve the optimization problem in ASTFA is irreplaceable there and very sensitive to initial values. In ASNBD, however, a more appropriate optimization method such as the genetic algorithm (GA) can be utilized to solve the optimization problem. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.
Mitsos, Alexander; Melas, Ioannis N; Morris, Melody K; Saez-Rodriguez, Julio; Lauffenburger, Douglas A; Alexopoulos, Leonidas G
2012-01-01
Modeling of signal transduction pathways plays a major role in understanding cells' function and predicting cellular response. Mathematical models based on a logic formalism are relatively simple but can describe how signals propagate from one protein to the next, and they have led to the construction of models that simulate the cell's response to environmental or other perturbations. Constrained fuzzy logic was recently introduced to train models against cell-specific data, resulting in quantitative pathway models of specific cellular behavior. There are two major issues in this pathway optimization: i) excessive CPU time requirements and ii) a loosely constrained optimization problem due to a lack of data with respect to large signaling pathways. Herein, we address both issues: the former by reformulating the pathway optimization as a regular nonlinear optimization problem; and the latter by enhanced algorithms to pre/post-process the signaling network to remove parts that cannot be identified given the experimental conditions. As a case study, we tackle the construction of cell-type-specific pathways in normal and transformed hepatocytes using medium- and large-scale functional phosphoproteomic datasets. The proposed Non Linear Programming (NLP) formulation allows for fast optimization of signaling topologies by combining the versatile nature of logic modeling with state-of-the-art optimization algorithms.
A general-purpose optimization program for engineering design
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.; Sugimoto, H.
1986-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis) is a FORTRAN program for nonlinear constrained (or unconstrained) function minimization. The optimization process is segmented into three levels: Strategy, Optimizer, and One-dimensional search. At each level, several options are available so that a total of nearly 100 possible combinations can be created. An example of available combinations is the Augmented Lagrange Multiplier method, using the BFGS variable metric unconstrained minimization together with polynomial interpolation for the one-dimensional search.
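As a minimal illustration of the kind of Strategy/Optimizer pairing listed above (an Augmented Lagrange Multiplier outer strategy around a BFGS inner minimization), the following sketch is not ADS itself; the penalty weight, iteration counts and toy problem are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, mu=10.0, n_outer=15):
    """Augmented Lagrange Multiplier outer strategy with an
    unconstrained BFGS inner optimizer, for equality constraints h(x) = 0."""
    x, lam = np.asarray(x0, float), np.zeros(len(h(x0)))
    for _ in range(n_outer):
        def L(x):
            hx = h(x)
            return f(x) + lam @ hx + 0.5 * mu * hx @ hx
        x = minimize(L, x, method="BFGS").x   # inner unconstrained solve
        lam = lam + mu * h(x)                 # multiplier update
    return x

# toy: minimize (x1-1)^2 + (x2-2)^2  subject to  x1 + x2 = 2
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
h = lambda x: np.array([x[0] + x[1] - 2.0])
print(augmented_lagrangian(f, h, [0.0, 0.0]))   # -> approx [0.5, 1.5]
```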
NASA Astrophysics Data System (ADS)
Schürmann, Gregor J.; Kaminski, Thomas; Köstler, Christoph; Carvalhais, Nuno; Voßbeck, Michael; Kattge, Jens; Giering, Ralf; Rödenbeck, Christian; Heimann, Martin; Zaehle, Sönke
2016-09-01
We describe the Max Planck Institute Carbon Cycle Data Assimilation System (MPI-CCDAS) built around the tangent-linear version of the JSBACH land-surface scheme, which is part of the MPI-Earth System Model v1. The simulated phenology and net land carbon balance were constrained by globally distributed observations of the fraction of absorbed photosynthetically active radiation (FAPAR, using the TIP-FAPAR product) and atmospheric CO2 at a global set of monitoring stations for the years 2005 to 2009. When constrained by FAPAR observations alone, the system successfully, and computationally efficiently, improved simulated growing-season average FAPAR, as well as its seasonality in the northern extra-tropics. When constrained by atmospheric CO2 observations alone, global net and gross carbon fluxes were improved, despite a tendency of the system to underestimate tropical productivity. Assimilating both data streams jointly allowed the MPI-CCDAS to match both observations (TIP-FAPAR and atmospheric CO2) equally well as the single data stream assimilation cases, thereby increasing the overall appropriateness of the simulated biosphere dynamics and underlying parameter values. Our study thus demonstrates the value of multiple-data-stream assimilation for the simulation of terrestrial biosphere dynamics. It further highlights the potential role of remote sensing data, here the TIP-FAPAR product, in stabilising the strongly underdetermined atmospheric inversion problem posed by atmospheric transport and CO2 observations alone. Notwithstanding these advances, the constraint of the observations on regional gross and net CO2 flux patterns on the MPI-CCDAS is limited through the coarse-scale parametrisation of the biosphere model. We expect improvement through a refined initialisation strategy and inclusion of further biosphere observations as constraints.
Determination of the Conservation Time of Periodicals for Optimal Shelf Maintenance of a Library.
ERIC Educational Resources Information Center
Miyamoto, Sadaaki; Nakayama, Kazuhiko
1981-01-01
Presents a method based on a constrained optimization technique that determines the time of removal of scientific periodicals from the shelf of a library. A geometrical interpretation of the theoretical result is given, and a numerical example illustrates how the technique is applicable to real bibliographic data. (FM)
A chance constraint estimation approach to optimizing resource management under uncertainty
Michael Bevers
2007-01-01
Chance-constrained optimization is an important method for managing risk arising from random variations in natural resource systems, but the probabilistic formulations often pose mathematical programming problems that cannot be solved with exact methods. A heuristic estimation method for these problems is presented that combines a formulation for order statistic...
Optimal Chebyshev polynomials on ellipses in the complex plane
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Freund, Roland
1989-01-01
The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.
Jiang, Yuyi; Shao, Zhiqing; Guo, Yi
2014-01-01
A complex computing problem can be solved efficiently on a system with multiple computing nodes by dividing its implementation code into several parallel processing modules or tasks that can be formulated as directed acyclic graph (DAG) problems. The DAG jobs may be mapped to and scheduled on the computing nodes to minimize the total execution time. Searching for an optimal DAG scheduling solution is considered to be NP-complete. This paper proposes a tuple molecular structure-based chemical reaction optimization (TMSCRO) method for DAG scheduling on heterogeneous computing systems, based on a very recently proposed metaheuristic method, chemical reaction optimization (CRO). Compared with other CRO-based algorithms for DAG scheduling, the design of the tuple reaction molecular structure and the four elementary reaction operators of TMSCRO is more reasonable. TMSCRO also applies the concepts of constrained critical paths (CCPs), the constrained-critical-path directed acyclic graph (CCPDAG) and the super molecule to accelerate convergence. In this paper, we have also conducted simulation experiments to verify the effectiveness and efficiency of TMSCRO on a large set of randomly generated graphs and on graphs for real-world problems. PMID:25143977
On the optimization of electromagnetic geophysical data: Application of the PSO algorithm
NASA Astrophysics Data System (ADS)
Godio, A.; Santilano, A.
2018-01-01
The Particle Swarm Optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to solve the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can easily be constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic data, real Audio-Magnetotelluric (AMT) data, and Long Period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
Microgrid Optimal Scheduling With Chance-Constrained Islanding Capability
Liu, Guodong; Starke, Michael R.; Xiao, B.; ...
2017-01-13
To facilitate the integration of variable renewable generation and improve the resilience of electricity supply in a microgrid, this paper proposes an optimal scheduling strategy for microgrid operation considering constraints of islanding capability. A new concept, the probability of successful islanding (PSI), indicating the probability that a microgrid maintains enough spinning reserve (both up and down) to meet local demand and accommodate local renewable generation after instantaneously islanding from the main grid, is developed. The PSI is formulated as a mixed-integer linear program using a multi-interval approximation, taking into account the probability distributions of the forecast errors of wind, PV and load. With the goal of minimizing the total operating cost while preserving a user-specified PSI, a chance-constrained optimization problem is formulated for the optimal scheduling of microgrids and solved by mixed integer linear programming (MILP). Numerical simulations on a microgrid consisting of a wind turbine, a PV panel, a fuel cell, a micro-turbine, a diesel generator and a battery demonstrate the effectiveness of the proposed scheduling strategy. Lastly, we verify the relationship between PSI and various factors.
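A back-of-the-envelope reading of the PSI concept, assuming the aggregate net forecast error (load minus wind and PV) is zero-mean Gaussian, could look like the snippet below; the paper instead builds a multi-interval mixed-integer approximation of this probability, and the reserve and error figures here are assumptions.

```python
from scipy.stats import norm

def prob_successful_islanding(up_reserve, down_reserve, sigma_net_error):
    """Probability that spinning reserve (both directions) covers the
    net forecast error after instantaneous islanding, under a zero-mean
    Gaussian error assumption (an assumption of this sketch only)."""
    return norm.cdf(up_reserve / sigma_net_error) - \
           norm.cdf(-down_reserve / sigma_net_error)

# aggregate error std of 0.8 MW, 1.5 MW up / 1.0 MW down reserve
print(prob_successful_islanding(1.5, 1.0, 0.8))  # ~0.864
```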
NASA Astrophysics Data System (ADS)
Sarghini, Fabrizio; De Vivo, Angela; Marra, Francesco
2017-10-01
Computational science and engineering methods have allowed a major change in the way products and processes are designed, as validated virtual models - capable of simulating the physical, chemical and biological changes occurring during production processes - can be realized and used in place of real prototypes and experiments, which are often time- and money-consuming. Among such techniques, Optimal Shape Design (OSD) (Mohammadi & Pironneau, 2004) represents an interesting approach. While most classical numerical simulations consider fixed geometrical configurations, in OSD a certain number of geometrical degrees of freedom are considered as part of the unknowns: this implies that the geometry is not completely defined, and part of it is allowed to move dynamically in order to minimize or maximize the objective function. The applications of optimal shape design (OSD) are countless. For systems governed by partial differential equations, they range from structural mechanics to electromagnetism and fluid mechanics, or to a combination of the three. This paper presents one possible application of OSD, particularly how the extrusion bell shape for pasta production can be designed by applying multivariate constrained shape optimization.
Trajectory optimization for the National aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1993-01-01
While continuing the application of the inverse dynamics approach in obtaining optimal numerical solutions, the research during the past six months has focused on the formulation and derivation of closed-form solutions for constrained hypersonic flight trajectories. Since it was found in the research of the first year that a dominant portion of the optimal ascent trajectory of the aerospace plane is constrained by dynamic pressure and heating constraints, the application of the analytical solutions significantly enhances the efficiency of trajectory optimization, provides better insight into the trajectory, and conceivably has great potential in guidance of the vehicle. Work of this period has been reported in four technical papers. Two of the papers were presented at the AIAA Guidance, Navigation, and Control Conference (Hilton Head, SC, August, 1992) and the Fourth International Aerospace Planes Conference (Orlando, FL, December, 1992). The other two papers have been accepted for publication by the Journal of Guidance, Control, and Dynamics, and will appear in 1993. This report briefly summarizes the work done in the past six months and work currently underway.
Digital robust control law synthesis using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivekananda
1989-01-01
Development of digital robust control laws for active control of high performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large order state space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to be implementable on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and a constrained optimization technique. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can be used for sensitivity studies and may be integrated into a simultaneous structure and control optimization scheme.
Joint Chance-Constrained Dynamic Programming
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Kuwata, Yoshiaki; Balaram, J. Bob
2012-01-01
This paper presents a novel dynamic programming algorithm with a joint chance constraint, which explicitly bounds the risk of failure in order to maintain the state within a specified feasible region. A joint chance constraint cannot be handled by existing constrained dynamic programming approaches since their application is limited to constraints in the same form as the cost function, that is, an expectation over a sum of one-stage costs. We overcome this challenge by reformulating the joint chance constraint into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the primal variables can be optimized by a standard dynamic programming, while the dual variable is optimized by a root-finding algorithm that converges exponentially. Error bounds on the primal and dual objective values are rigorously derived. We demonstrate the algorithm on a path planning problem, as well as an optimal control problem for Mars entry, descent and landing. The simulations are conducted using real terrain data of Mars, with four million discrete states at each time step.
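The dualization idea can be miniaturized as follows: the joint chance constraint is replaced by a bound on a sum of per-stage failure probabilities (the indicator-expectation surrogate), the primal problem is solved for a fixed dual variable, and the dual variable is located by root finding. In this sketch the stages decouple, which the paper's full state-space DP does not assume; all costs and risk numbers are illustrative.

```python
import numpy as np

# At each stage: safe action (cost 2, no risk) or risky shortcut
# (cost 1, stage-dependent failure probability). Bound joint risk by delta.
p_risky = np.array([0.01, 0.02, 0.05, 0.10, 0.20])   # per-stage risk
c_safe, c_risky = 2.0, 1.0
delta = 0.08                                          # joint risk bound

def primal_dp(lam):
    """Dualized problem: each stage's risky cost gains a lam * p term,
    so the stages can be solved independently in this toy setting."""
    take_risky = c_risky + lam * p_risky < c_safe
    risk = np.sum(p_risky[take_risky])               # union-bound risk
    cost = np.where(take_risky, c_risky, c_safe).sum()
    return take_risky, risk, cost

lo, hi = 0.0, 1e3                 # root finding (bisection) on lambda
for _ in range(60):
    lam = 0.5 * (lo + hi)
    _, risk, _ = primal_dp(lam)
    lo, hi = (lam, hi) if risk > delta else (lo, lam)

plan, risk, cost = primal_dp(hi)
print(plan, risk, cost)           # risky only where cumulative risk fits
```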
A formulation and analysis of combat games
NASA Technical Reports Server (NTRS)
Heymann, M.; Ardema, M. D.; Rajan, N.
1985-01-01
Combat is formulated as a dynamical encounter between two opponents, each of whom has offensive capabilities and objectives. With each opponent is associated a target in the event space in which he endeavors to terminate the combat, thereby winning. If the combat terminates in both target sets simultaneously or in neither, a joint capture or a draw, respectively, is said to occur. Resolution of the encounter is formulated as a combat game; namely, as a pair of competing event-constrained differential games. If exactly one of the players can win, the optimal strategies are determined from a resulting constrained zero-sum differential game. Otherwise the optimal strategies are computed from a resulting non-zero-sum game. Since optimal combat strategies frequencies may not exist, approximate of delta-combat games are also formulated leading to approximate or delta-optimal strategies. To illustrate combat games, an example, called the turret game, is considered. This game may be thought of as a highly simplified model of air combat, yet it is sufficiently complex to exhibit a rich variety of combat behavior, much of which is not found in pursuit-evasion games.
Role of slack variables in quasi-Newton methods for constrained optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tapia, R.A.
In constrained optimization the technique of converting an inequality constraint into an equality constraint by the addition of a squared slack variable is well known but rarely used. In choosing an active constraint philosophy over the slack variable approach, researchers quickly justify their choice with the standard criticisms: the slack variable approach increases the dimension of the problem, is numerically unstable, and gives rise to singular systems. It is shown that these criticisms of the slack variable approach need not apply and the two seemingly distinct approaches are actually very closely related. In fact, the squared slack variable formulation can be used to develop a superior and more comprehensive active constraint philosophy.
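For reference, the transformation under discussion is the standard squared-slack conversion:

```latex
% Squared-slack conversion of inequality constraints:
\min_{x} f(x) \ \text{s.t.} \ g_i(x) \le 0, \ i = 1,\dots,m
\quad\Longleftrightarrow\quad
\min_{x,\,s} f(x) \ \text{s.t.} \ g_i(x) + s_i^2 = 0, \ i = 1,\dots,m .
```

At any feasible point s_i^2 = -g_i(x), so s_i = 0 exactly where a constraint is active, which is how the equality-constrained machinery recovers the active set.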
NASA Astrophysics Data System (ADS)
Li, Guang
2017-01-01
This paper presents a fast constrained optimization approach tailored for the nonlinear model predictive control of wave energy converters (WEC). The advantage of this approach lies in its exploitation of the differential flatness of the WEC model. This can reduce the dimension of the resulting nonlinear programming problem (NLP) derived from the continuous constrained optimal control of the WEC using a pseudospectral method. The alleviation of the computational burden by this approach helps to promote an economical implementation of the nonlinear model predictive control strategy for WEC control problems. The method is applicable to nonlinear WEC models, nonconvex objective functions and nonlinear constraints, which are commonly encountered in WEC control problems. Numerical simulations demonstrate the efficacy of this approach.
Missile Guidance Law Based on Robust Model Predictive Control Using Neural-Network Optimization.
Li, Zhijun; Xia, Yuanqing; Su, Chun-Yi; Deng, Jun; Fu, Jun; He, Wei
2015-08-01
In this brief, the utilization of robust model-based predictive control is investigated for the problem of missile interception. Treating the target acceleration as a bounded disturbance, a novel guidance law using model predictive control is developed that incorporates the missile's internal constraints. The combined model predictive approach can be transformed into a constrained quadratic programming (QP) problem, which may be solved using a linear variational inequality-based primal-dual neural network over a finite receding horizon. Online solutions to multiple parametric QP problems are used so that constrained optimal control decisions can be made in real time. Simulation studies are conducted to illustrate the effectiveness and performance of the proposed guidance control law for missile interception.
Effect of leading-edge load constraints on the design and performance of supersonic wings
NASA Technical Reports Server (NTRS)
Darden, C. M.
1985-01-01
A theoretical and experimental investigation was conducted to assess the effect of leading-edge load constraints on supersonic wing design and performance. In the effort to delay flow separation and the formation of leading-edge vortices, two constrained, linear-theory optimization approaches were used to limit the loadings on the leading edge of a variable-sweep planform design. Experimental force and moment tests were made on two constrained camber wings, a flat uncambered wing, and an optimum design with no constraints. Results indicate that vortex strength and separation regions were mildest on the severely and moderately constrained wings.
LDRD Final Report: Global Optimization for Engineering Science Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
HART,WILLIAM E.
1999-12-01
For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.
Mdluli, Thembi; Buzzard, Gregery T; Rundell, Ann E
2015-09-01
This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements.
Mdluli, Thembi; Buzzard, Gregery T.; Rundell, Ann E.
2015-01-01
This model-based design of experiments (MBDOE) method determines the input magnitudes of the experimental stimuli to apply and the associated measurements that should be taken to optimally constrain the uncertain dynamics of a biological system under study. The ideal global solution for this experiment design problem is generally computationally intractable because of parametric uncertainties in the mathematical model of the biological system. Others have addressed this issue by limiting the solution to a local estimate of the model parameters. Here we present an approach that is independent of the local parameter constraint. This approach is made computationally efficient and tractable by the use of: (1) sparse grid interpolation that approximates the biological system dynamics, (2) representative parameters that uniformly represent the data-consistent dynamical space, and (3) probability weights of the represented experimentally distinguishable dynamics. Our approach identifies data-consistent representative parameters using sparse grid interpolants, constructs the optimal input sequence from a greedy search, and defines the associated optimal measurements using a scenario tree. We explore the optimality of this MBDOE algorithm using a 3-dimensional Hes1 model and a 19-dimensional T-cell receptor model. The 19-dimensional T-cell model also demonstrates the MBDOE algorithm's scalability to higher dimensions. In both cases, the dynamical uncertainty region that bounds the trajectories of the target system states was reduced by as much as 86% and 99%, respectively, after completing the designed experiments in silico. Our results suggest that for resolving dynamical uncertainty, the ability to design an input sequence paired with its associated measurements is particularly important when limited by the number of measurements. PMID:26379275
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niu, S; Zhang, Y; Ma, J
Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem, which balances the data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. The PICTGV method is based on structure correlations among images in the energy domain and uses high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods applied to images with a first-order derivative, the higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV on noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to that obtained by the TGV-based method without a prior image, the relative root mean square error in the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose an iterative reconstruction via prior image constrained total generalized variation for spectral CT, develop an alternating optimization algorithm, and numerically demonstrate the merits of the approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
NASA Astrophysics Data System (ADS)
Rahmalia, Dinita
2017-08-01
The Linear Transportation Problem (LTP) is a case of constrained optimization in which we want to minimize cost subject to a balance between supply and demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), for solving the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution obtained by PSO.
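A minimal sketch of the PSOGA idea on a small balanced transportation instance is given below, with the supply/demand constraints handled by a penalty term; the cost matrix, swarm parameters and mutation rate are assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
cost = np.array([[4.0, 6.0, 8.0],        # assumed unit shipping costs
                 [5.0, 3.0, 7.0]])
supply, demand = np.array([30.0, 40.0]), np.array([20.0, 25.0, 25.0])

def fitness(x):
    """Shipping cost plus a penalty for supply/demand imbalance."""
    pen = (np.abs(x.sum(axis=1) - supply).sum()
           + np.abs(x.sum(axis=0) - demand).sum())
    return (cost * x).sum() + 1e3 * pen

n, shape = 40, cost.shape                 # swarm size, decision shape
pos = rng.uniform(0, 25, (n, *shape))
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pval.argmin()].copy()

for it in range(300):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, None)   # shipments stay non-negative
    mutate = rng.random(n) < 0.1          # GA-style mutation operator
    pos[mutate] += rng.normal(0, 2.0, (mutate.sum(), *shape))
    pos = np.clip(pos, 0.0, None)
    val = np.array([fitness(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    gbest = pbest[pval.argmin()].copy()

print(np.round(gbest, 1), fitness(gbest))  # near the LP optimum of 340
```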
An Efficient Augmented Lagrangian Method with Applications to Total Variation Minimization
2012-08-17
Based on the classic augmented Lagrangian multiplier method, we propose, analyze and test an algorithm for solving a class of equality-constrained non-smooth optimization problems (chiefly but not necessarily convex programs), significantly outperforming several state-of-the-art solvers on most tested problems. The resulting MATLAB solver, called TVAL3, has been posted online [23].
NASA Astrophysics Data System (ADS)
Franklin, Oskar; Han, Wang; Dieckmann, Ulf; Cramer, Wolfgang; Brännström, Åke; Pietsch, Stephan; Rovenskaya, Elena; Prentice, Iain Colin
2017-04-01
Dynamic global vegetation models (DGVMs) are now indispensable for understanding the biosphere and for estimating the capacity of ecosystems to provide services. The models are continuously developed to include an increasing number of processes and to utilize the growing amounts of observed data becoming available. However, while the versatility of the models is increasing as new processes and variables are added, their accuracy suffers from the accumulation of uncertainty, especially in the absence of overarching principles controlling their concerted behaviour. We have initiated a collaborative working group to address this problem based on a 'missing law' - adaptation and optimization principles rooted in natural selection. Even though this 'missing law' constrains relationships between traits, and therefore can vastly reduce the number of uncertain parameters in ecosystem models, it has rarely been applied to DGVMs. Our recent research has shown that optimization- and trait-based models of gross primary production can be both much simpler and more accurate than current models based on fixed functional types, and that observed plant carbon allocations and distributions of plant functional traits are predictable with eco-evolutionary models. While there are many other examples of the usefulness of these and other theoretical principles, it is not always straightforward to make them operational in predictive models. In particular on longer time scales, the representation of functional diversity and the dynamical interactions among individuals and species presents a formidable challenge. Here we will present recent ideas on the use of adaptation and optimization principles in vegetation models, including examples of promising developments, but also limitations of the principles and some key challenges.
Investigation of the N2O emission strength in the U. S. Corn Belt
NASA Astrophysics Data System (ADS)
Fu, Congsheng; Lee, Xuhui; Griffis, Timothy J.; Dlugokencky, Edward J.; Andrews, Arlyn E.
2017-09-01
Nitrous oxide (N2O) has a high global warming potential and depletes stratospheric ozone. The U. S. Corn Belt plays an important role in the global anthropogenic N2O budget. To date, studies on local surface N2O emissions and the atmospheric N2O budget have commonly used Lagrangian models. In the present study, we used an Eulerian model, the Weather Research and Forecasting model with Chemistry (WRF-Chem), to investigate the relationships between N2O emissions in the Corn Belt and observed atmospheric N2O mixing ratios. We derived a simple equation relating the emission strengths to atmospheric N2O mixing ratios, and used the derived equation and hourly atmospheric N2O measurements at the KCMP tall tower in Minnesota to constrain agricultural N2O emissions. The modeled spatial patterns of atmospheric N2O were evaluated against discrete observations at multiple tall towers in the NOAA flask network. After optimization of the surface flux, the model reproduced reasonably well the hourly N2O mixing ratios monitored at the KCMP tower. Agricultural N2O emissions in the EDGAR42 database needed to be scaled up by a factor of 19.0 to 28.1 to represent the true emissions in the Corn Belt for June 1-20, 2010, a peak emission period. Optimized mean N2O emissions were 3.00-4.38, 1.52-2.08, 0.61-0.81 and 0.56-0.75 nmol m-2 s-1 for June 1-20, August 1-20, October 1-20 and December 1-20, 2010, respectively. The simulated spatial patterns of atmospheric N2O mixing ratios after optimization were in good agreement with the NOAA discrete observations during the strong emission peak in June. Such spatial patterns suggest that the underestimation of emissions by the Intergovernmental Panel on Climate Change (IPCC) inventory methodology is not dependent on tower measurement location.
SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models
NASA Astrophysics Data System (ADS)
Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.
2013-12-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.
Parametric study of a canard-configured transport using conceptual design optimization
NASA Technical Reports Server (NTRS)
Arbuckle, P. D.; Sliwa, S. M.
1985-01-01
Constrained-parameter optimization is used to perform optimal conceptual design of both canard and conventional configurations of a medium-range transport. A number of design constants and design constraints are systematically varied to compare the sensitivities of canard and conventional configurations to a variety of technology assumptions. Main-landing-gear location and canard surface high-lift performance are identified as critical design parameters for a statically stable, subsonic, canard-configured transport.
A robust approach to chance constrained optimal power flow with renewable generation
Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.
2016-09-01
Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to be within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics, including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
Kim, Nam-Hoon; Hwang, Jin Hwan; Cho, Jaegab; Kim, Jae Seong
2018-06-04
The characteristics of an estuary are determined by various factors, such as tide, waves, and river discharge, which also control the water quality of the estuary. Therefore, detecting changes in these characteristics is critical for managing environmental quality and pollution, and so monitoring locations should be selected carefully. The present study proposes a framework for deploying monitoring systems based on a graphical method of spatial and temporal optimization. With well-validated numerical simulation results, the monitoring locations are determined to capture the changes in water quality and pollutants depending on the variations of tide, current and freshwater discharge. The deployment strategy for finding appropriate monitoring locations is designed with a constrained optimization method, which finds solutions by constraining the objective function to feasible regions. The objective and constraint functions are constructed with an interpolation technique, objective analysis. Even with a smaller number of monitoring locations, the present method performs equivalently to an arbitrarily and evenly deployed monitoring system. Copyright © 2018 Elsevier Ltd. All rights reserved.
GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING
Liu, Hongcheng; Yao, Tao; Li, Runze
2015-01-01
This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems are equivalent to general quadratic programs. This equivalence enables us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate a significant outperformance of MIPGO over the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
Optimizing Experimental Designs Relative to Costs and Effect Sizes.
ERIC Educational Resources Information Center
Headrick, Todd C.; Zumbo, Bruno D.
A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
Constrained Optimization Problems in Cost and Managerial Accounting--Spreadsheet Tools
ERIC Educational Resources Information Center
Amlie, Thomas T.
2009-01-01
A common problem addressed in Managerial and Cost Accounting classes is that of selecting an optimal production mix given scarce resources. That is, if a firm produces a number of different products, and is faced with scarce resources (e.g., limitations on labor, materials, or machine time), what combination of products yields the greatest profit…
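For instance, the classic two-product version of this exercise can be checked outside the spreadsheet in a few lines; the profit and resource figures below are assumed for illustration.

```python
from scipy.optimize import linprog

# Two products with profits $30 and $20 per unit (assumed figures);
# labor: 2 and 1 hours/unit, 100 hours available;
# machine time: 1 and 3 hours/unit, 90 hours available.
res = linprog(c=[-30, -20],                  # negate to maximize profit
              A_ub=[[2, 1], [1, 3]], b_ub=[100, 90],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal mix (42, 16) and total profit 1580
```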
NASA Technical Reports Server (NTRS)
Padovan, J.; Lackney, J.
1986-01-01
The current paper develops a constrained hierarchical least square nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of either degree of freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least square methodology, benchmarking examples are presented which treat structures exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.
Joint Resource Optimization for Cognitive Sensor Networks with SWIPT-Enabled Relay.
Lu, Weidang; Lin, Yuanrong; Peng, Hong; Nan, Tian; Liu, Xin
2017-09-13
Energy-constrained wireless networks, such as wireless sensor networks (WSNs), are usually powered by fixed energy supplies (e.g., batteries), which limits the operation time of networks. Simultaneous wireless information and power transfer (SWIPT) is a promising technique to prolong the lifetime of energy-constrained wireless networks. This paper investigates the performance of an underlay cognitive sensor network (CSN) with a SWIPT-enabled relay node. In the CSN, the amplify-and-forward (AF) relay sensor node harvests energy from the ambient radio-frequency (RF) signals using the power splitting-based relaying (PSR) protocol. Then, it helps forward the signal of the source sensor node (SSN) to the destination sensor node (DSN) by using the harvested energy. We study the joint resource optimization, including the transmit power and power splitting ratio, to maximize the CSN's achievable rate under the constraint that the interference caused by the CSN to the primary users (PUs) is within the permissible threshold. Simulation results show that the performance of our proposed joint resource optimization is significantly improved.
Chance-Constrained AC Optimal Power Flow: Reformulations and Efficient Algorithms
Roald, Line Alnaes; Andersson, Goran
2017-08-29
Higher levels of renewable electricity generation increase uncertainty in power system operation. To ensure secure system operation, new tools that account for this uncertainty are required. In this paper, we adopt a chance-constrained AC optimal power flow formulation, which guarantees that generation, power flows and voltages remain within their bounds with a pre-defined probability. We then discuss different chance-constraint reformulations and solution approaches for the problem. First, we discuss an analytical reformulation based on partial linearization, which enables us to obtain a tractable representation of the optimization problem. We then provide an efficient algorithm based on an iterative solution scheme which alternates between solving a deterministic AC OPF problem and assessing the impact of uncertainty. This more flexible computational framework enables not only scalable implementations, but also alternative chance-constraint reformulations. In particular, we suggest two sample-based reformulations that do not require any approximation or relaxation of the AC power flow equations.
Mini-batch optimized full waveform inversion with geological constrained gradient filtering
NASA Astrophysics Data System (ADS)
Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai
2018-05-01
High computational cost and solutions lacking geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique is a way to dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and slow convergence for the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent optimization is adopted to reduce the computation time by choosing a subset of all shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
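A toy version of the proposed loop might look like the following, with random linear operators standing in for wave-equation forward modeling and an isotropic Gaussian filter standing in for the structure-oriented smoothing (which would instead follow geological dips); every number here is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
n_shots, n_model = 64, 200
m_true = 1.5 + 0.5 * np.sin(np.linspace(0, 4 * np.pi, n_model))
A = rng.normal(size=(n_shots, 8, n_model)) / np.sqrt(n_model)  # toy "shots"
data = A @ m_true                                              # per-shot data

m = np.full(n_model, 1.5)         # smooth starting model
step, batch = 0.5, 8
for it in range(400):
    shots = rng.choice(n_shots, batch, replace=False)  # mini-batch of shots
    grad = np.zeros(n_model)
    for s in shots:
        grad += A[s].T @ (A[s] @ m - data[s])          # misfit gradient
    grad /= batch
    # isotropic smoothing as a stand-in for structure-oriented smoothing
    grad = gaussian_filter1d(grad, sigma=2.0)
    m -= step * grad
print("relative model error:",
      np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
```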
Mixed Integer Programming and Heuristic Scheduling for Space Communication Networks
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2012-01-01
In this paper, we propose to solve the constrained optimization problem in two phases. The first phase uses heuristic methods such as the ant colony method, particle swarm optimization, and the genetic algorithm to seek a near-optimal solution among a list of feasible initial populations. The final optimal solution can be found by using the solution of the first phase as the initial condition for the SQP algorithm. We demonstrate the above problem formulation and optimization schemes with a large-scale network that includes the DSN ground stations and a number of spacecraft of deep space missions.
Evans, Jonathan; Swart, Marelize; Soko, Nyarai; Wonkam, Ambroise; Huzair, Farah
2015-01-01
The use of pharmacogenomics (PGx) knowledge in the treatment of individual patients is becoming a common phenomenon in the developed world. However, poorly resourced countries have thus far been constrained for three main reasons. First, the cost of whole genome sequencing is still considerably high in comparison to other (non-genomics) diagnostics in the developing world, where both science and social dynamics create a dynamic and fragile healthcare ecosystem. Second, studies correlating genomic differences with drug pharmacokinetics and pharmacodynamics have not been consistent, and more importantly, are often not indexed to impact on societal end-points beyond clinical practice. Third, ethics regulatory frames over PGx testing require improvements based on nested accountability systems and in ways that address the user community's needs. CYP2B6 is a crucial enzyme in the metabolism of the antiretroviral drugs efavirenz and nevirapine. More than 40 genetic variants have been reported, but only a few contribute to differences in plasma EFV and NVP concentrations. The most widely reported CYP2B6 variants affecting plasma drug levels include c.516G>T, c.983T>C, and to a lesser extent, g.15582C>T, which should be considered in future PGx tests. While the first two variants are easily characterized, the g.15582C>T detection has been performed primarily by sequencing, which is costly, labor intensive, and requires access to expertise barely available in the developing world. We report here on a simple, practical PCR-RFLP method with vast potential for use in resource-constrained world regions to detect the g.15582C>T variation among South African and Cameroonian persons. The effects of CYP2B6 g.15582C>T on plasma EFV concentration were further evaluated among HIV/AIDS patients. We report no differences in the frequency of the g.15582T variant between the South African (0.08) and Cameroonian (0.06) groups, which are significantly lower than those reported in Asians (0.39) and Caucasians (0.31). The g.15582C/T and T/T genotypes were associated with significantly reduced EFV levels (p=0.006). This article additionally presents the policy relevance of PGx global health diagnostics and therefore, collectively, makes an original interdisciplinary contribution to the field of integrative biology and personalized medicine in the developing world. Such studies are broadly important because resource-constrained regions exist not only in the developing world but also in major geographical parts of the G20 nations and the developed countries. PMID:26415139
NASA Astrophysics Data System (ADS)
Kangasaho, V. E.; Tsuruta, A.; Aalto, T.; Backman, L. B.; Houweling, S.; Krol, M. C.; Peters, W.; van der Laan-Luijkx, I. T.; Lienert, S.; Joos, F.; Dlugokencky, E. J.; Michael, S.; White, J. W. C.
2017-12-01
The atmospheric burden of CH4 has more than doubled since preindustrial times. Evaluating the contribution of anthropogenic and natural emissions to the global methane budget is of great importance for better understanding the significance of different sources at the global scale, and their contribution to changes in the growth rate of atmospheric CH4 before and after 2006. In addition, observations of δ13C-CH4 suggest an increase in natural sources after 2006, which matches the observed increase and variation of CH4 abundance. Methane emission sources can be identified using δ13C-CH4, because different sources produce methane with process-specific isotopic signatures. This study focuses on inversion-model-based estimates of global anthropogenic and natural methane emission rates, to evaluate existing methane emission estimates with a new δ13C-CH4 inversion system. In situ measurements of atmospheric methane and its δ13C-CH4 isotopic signature, provided by the NOAA Global Monitoring Division and the Institute of Arctic and Alpine Research, will be assimilated into CTDAS-13C-CH4. The system uses the TM5 atmospheric transport model as an observation operator, constrained by ECMWF ERA-Interim meteorological fields, and off-line TM5 chemistry fields to account for the atmospheric methane sink. The LPX-Bern DYPTOP ecosystem model is used for prior natural methane emissions from wetlands, peatlands and mineral soils, GFED v4 for prior fire emissions and the EDGAR v4.2 FT2010 inventory for prior anthropogenic emissions. The EDGAR anthropogenic emissions are re-divided into enteric fermentation and manure management, landfills and waste water, rice, coal, oil and gas, and residential emissions, and the trend of total emissions is scaled to match optimized anthropogenic emissions from CTE-CH4. In addition to these categories, emissions from termites and oceans are included. Process-specific δ13C-CH4 isotopic signatures are assigned to each emission source to estimate the 13CH4 fraction in CH4 emissions. Among the priors, anthropogenic and natural emissions are optimized and the others are imposed directly from the prior. Detailed estimates of anthropogenic and natural CH4 emissions will be constructed in order to provide a more comprehensive understanding of the division of methane emission sources.
NASA Astrophysics Data System (ADS)
Launois, T.; Peylin, P.; Belviso, S.; Poulter, B.
2015-08-01
Clear analogies between carbonyl sulfide (OCS) and carbon dioxide (CO2) diffusion pathways through leaves have been revealed by experimental studies, with plant uptake playing an important role in the atmospheric budget of both species. Here we use atmospheric OCS to evaluate the gross primary production (GPP) of three dynamic global vegetation models (Lund-Potsdam-Jena, LPJ; National Center for Atmospheric Research - Community Land Model 4, NCAR-CLM4; and Organising Carbon and Hydrology In Dynamic Ecosystems, ORCHIDEE). Vegetation uptake of OCS is modeled as a linear function of GPP and leaf relative uptake (LRU), the ratio of OCS to CO2 deposition velocities of plants. New parameterizations for the non-photosynthetic sinks (oxic soils, atmospheric oxidation) and biogenic sources (oceans and anoxic soils) of OCS are also provided. Despite new large oceanic emissions, the global OCS budgets created with each vegetation model show sinks exceeding sources by several hundred Gg S yr-1. An inversion of the surface fluxes (optimization of a global scalar which accounts for flux uncertainties) led to balanced OCS global budgets, as atmospheric measurements suggest, mainly through a drastic reduction (up to -50 %) in soil and vegetation uptake. The amplitude of variations in atmospheric OCS mixing ratios is mainly dictated by the vegetation sink over the Northern Hemisphere. This allows biases in the GPP representations of the three selected models to be identified. The main bias patterns are (i) the terrestrial GPP of ORCHIDEE at high northern latitudes is currently overestimated, (ii) the seasonal variations of the GPP are out of phase in the NCAR-CLM4 model, showing a maximum carbon uptake too early in spring in the northernmost ecosystems, (iii) the overall amplitude of the seasonal variations of GPP in NCAR-CLM4 is too small, and (iv) for the LPJ model, the GPP is slightly out of phase for the northernmost ecosystems and the respiration fluxes might be too large in summer in the Northern Hemisphere. These results rely on the robustness of the OCS modeling framework and, in particular, on the choice of the LRU values (assumed constant in time) and the parameterization of soil OCS uptake with small seasonal variations. Refined optimization with regional-scale and seasonally varying coefficients might help to test some of these hypotheses.
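The linear OCS uptake model lends itself to a one-line calculation; the sketch below assumes the standard form F_OCS = LRU × GPP × χ_OCS/χ_CO2, with illustrative magnitudes for the mole fractions.

```python
def ocs_plant_uptake(gpp, lru, chi_ocs, chi_co2):
    """Vegetation OCS uptake modeled as linear in GPP (illustrative units).
    gpp: gross primary production; lru: leaf relative uptake (ratio of OCS
    to CO2 deposition velocities); chi_*: ambient mole fractions."""
    return lru * gpp * chi_ocs / chi_co2

# Example: GPP in mol CO2 m-2 s-1, OCS ~ 500 ppt, CO2 ~ 400 ppm.
flux = ocs_plant_uptake(gpp=1.0e-5, lru=1.6, chi_ocs=500e-12, chi_co2=400e-6)
print(flux)  # OCS uptake in mol m-2 s-1 under these toy numbers
```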
Design Optimization Toolkit: Users' Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilo Valentin, Miguel Alejandro
The Design Optimization Toolkit (DOTk) is a stand-alone C++ software package intended to solve complex design optimization problems. The DOTk software package provides a range of solution methods suited for gradient/nongradient-based optimization, large-scale constrained optimization, and topology optimization. DOTk was designed to have a flexible user interface to allow easy access to DOTk solution methods from external engineering software packages. This inherent flexibility makes DOTk minimally intrusive to other engineering software packages. As part of this flexibility, the DOTk software package provides an easy-to-use MATLAB interface that enables users to call DOTk solution methods directly from the MATLAB command window.
Helicopter Control Energy Reduction Using Moving Horizontal Tail
Oktay, Tugrul; Sal, Firat
2015-01-01
The helicopter moving horizontal tail (MHT) strategy is applied in order to save helicopter flight control system (FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. The equations of the MHT are integrated into these models, and the combined models are linearized around the straight level flight condition. A specific variance-constrained control strategy, namely output variance constrained control (OVC), is utilized for the helicopter FCS. Control energy savings due to the MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and the dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely simultaneous perturbation stochastic approximation (SPSA). Closed-loop analyses are performed to observe the improvement in behavior over classical controls. PMID:26180841
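SPSA itself is compact enough to sketch: it estimates the full gradient from just two cost evaluations per iteration, regardless of the number of tuned parameters, which is what makes it attractive for simultaneous controller/geometry optimization. The quadratic cost below is a placeholder for the control-energy objective, not the helicopter model.

```python
import numpy as np

def spsa(loss, theta, iters=200, a=0.1, c=0.1, seed=0):
    """Simultaneous perturbation stochastic approximation: two loss
    evaluations per iteration yield a full gradient estimate."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    for k in range(1, iters + 1):
        ak, ck = a / k**0.602, c / k**0.101               # standard gain decay
        delta = rng.choice([-1.0, 1.0], size=theta.size)  # Rademacher perturbation
        ghat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
        theta -= ak * ghat
    return theta

# Example: tune two "controller gains" on a toy quadratic surrogate cost.
theta = spsa(lambda t: (t[0] - 2)**2 + 3 * (t[1] + 1)**2, [0.0, 0.0])
print(theta)  # approaches (2, -1)
```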
Type-Separated Bytecode - Its Construction and Evaluation
NASA Astrophysics Data System (ADS)
Adler, Philipp; Amme, Wolfram
Many constrained systems still use interpreters to run mobile applications written in Java. These interpreters demand only modest resources. On the other hand, it is difficult to apply optimizations during the runtime of the application. Annotations could be used to achieve a simpler and faster code analysis, which would allow optimizations even for interpreters on constrained devices. Unfortunately, there is no viable way of transporting annotations to, and verifying them at, the code consumer. In this paper we present type-separated bytecode as an intermediate representation that allows annotations to be safely transported as type extensions. We have implemented several versions of this system and show that it is possible to obtain performance comparable to Java bytecode, even though we use a type-separated system with annotations.
Optimal mistuning for enhanced aeroelastic stability of transonic fans
NASA Technical Reports Server (NTRS)
Hall, K. C.; Crawley, E. F.
1983-01-01
An inverse design procedure was developed for the design of a mistuned rotor. The design requirements are that the stability margin of the eigenvalues of the aeroelastic system be greater than or equal to some minimum stability margin, and that the mass added to each blade be positive. The objective was to achieve these requirements with a minimal amount of mistuning. Hence, the problem was posed as a constrained optimization problem. The constrained minimization problem was solved by the technique of mathematical programming via augmented Lagrangians. The unconstrained minimization phase of this technique was solved by the variable metric method. The bladed disk was modelled as a rigid disk mounted on a rigid shaft. Each blade was modelled with a single torsional degree of freedom.
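A minimal sketch of the method of multipliers (augmented Lagrangian) for a single equality constraint; BFGS serves as the variable metric inner solver. The toy objective and constraint are illustrative assumptions, not the aeroelastic model.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, h, x0, lam=0.0, rho=10.0, outer=10):
    """Method of multipliers for min f(x) subject to h(x) = 0:
    repeatedly minimize f + lam*h + (rho/2)*h^2, then update lam."""
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        La = lambda x: f(x) + lam * h(x) + 0.5 * rho * h(x)**2
        x = minimize(La, x, method="BFGS").x   # variable metric inner solve
        lam += rho * h(x)                      # multiplier update
    return x, lam

# Example: minimize x^2 + y^2 subject to x + y - 1 = 0 (optimum at (0.5, 0.5)).
x, lam = augmented_lagrangian(lambda v: v[0]**2 + v[1]**2,
                              lambda v: v[0] + v[1] - 1.0, [0.0, 0.0])
print(x, lam)
```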
A chance-constrained stochastic approach to intermodal container routing problems.
Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony
2018-01-01
We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of the stochastic variables and chance constraints on the optimal solution and total cost. PMID:29438389
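For intuition, a chance constraint admits a closed-form deterministic equivalent in the textbook Gaussian case (the paper treats more general stochastic times with a heuristic); the sketch below assumes normally distributed transit time.

```python
from scipy.stats import norm

def on_time_deadline(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint
    P(T <= deadline) >= alpha when T ~ N(mu, sigma^2):
    the route is feasible iff mu + z_alpha * sigma <= deadline."""
    return mu + norm.ppf(alpha) * sigma

# A route with mean transit 70 h and std 6 h needs a deadline of at
# least ~79.9 h to guarantee 95% on-time delivery.
print(on_time_deadline(70.0, 6.0, 0.95))
```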
Optimization by nonhierarchical asynchronous decomposition
NASA Technical Reports Server (NTRS)
Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.
1992-01-01
Large-scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two-dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.
An Optimization Framework for Dynamic, Distributed Real-Time Systems
NASA Technical Reports Server (NTRS)
Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara
2003-01-01
This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments, utility, and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model for producing feasible, optimal resource allocations.
Quadratic Optimization in the Problems of Active Control of Sound
NASA Technical Reports Server (NTRS)
Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L1. By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L2 norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L2 minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L2 differ drastically from those obtained in the sense of L1.
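The drastic difference between L1- and L2-optimal controls can be reproduced on a toy underdetermined system: the L2 optimum is the pseudoinverse solution, while the L1 optimum, computed via the standard linear-programming reformulation, is typically sparse. The random system below is illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
A, b = rng.normal(size=(3, 8)), rng.normal(size=3)  # underdetermined "control" system
n = A.shape[1]

# L2-minimal source strengths: the pseudoinverse solution.
x_l2 = np.linalg.pinv(A) @ b

# L1-minimal source strengths via LP: min sum(t) s.t. -t <= x <= t, Ax = b.
res = linprog(c=np.r_[np.zeros(n), np.ones(n)],
              A_ub=np.block([[np.eye(n), -np.eye(n)],
                             [-np.eye(n), -np.eye(n)]]),
              b_ub=np.zeros(2 * n),
              A_eq=np.hstack([A, np.zeros((3, n))]), b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_l1 = res.x[:n]
# x_l1 is typically sparse (few active "monopoles") while x_l2 spreads
# energy over all sources, mirroring the difference noted above.
print(x_l2, x_l1)
```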
Simple summation rule for optimal fixation selection in visual search.
Najemnik, Jiri; Geisler, Wilson S
2009-06-01
When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3), 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher and produces fixation statistics similar to those of humans.
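The ELM rule is simple enough to state in a few lines: convolve the posterior with the squared detectability map and fixate the maximum. The sketch below uses a toy posterior and a Gaussian "foveated" detectability map as assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def elm_next_fixation(posterior, detectability):
    """ELM rule: convolve the current posterior over target location
    with the squared retinotopic detectability map and fixate its maximum."""
    score = fftconvolve(posterior, detectability**2, mode="same")
    return np.unravel_index(np.argmax(score), score.shape)

# Toy example: nearly flat prior with a hotspot; Gaussian detectability.
post = np.full((64, 64), 1e-4)
post[40, 22] = 0.5
post /= post.sum()
yy, xx = np.mgrid[-32:32, -32:32]
dmap = np.exp(-(xx**2 + yy**2) / (2 * 8.0**2))
print(elm_next_fixation(post, dmap))  # lands near the hotspot (40, 22)
```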
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard
2012-01-01
The ability to estimate the fraction of ground flashes in a set of flashes observed by a satellite lightning imager, such as the future GOES-R Geostationary Lightning Mapper (GLM), would likely improve operational and scientific applications (e.g., severe weather warnings, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method, called the Ground Flash Fraction Retrieval Algorithm (GoFFRA), was recently developed for estimating the ground flash fraction. The method uses a constrained mixed exponential distribution model to describe a particular lightning optical measurement called the Maximum Group Area (MGA). To obtain the optimum model parameters (one of which is the desired ground flash fraction), a scalar function must be minimized. This minimization is difficult because of two problems: (1) Label Switching (LS), and (2) Parameter Identity Theft (PIT). The LS problem is well known in the literature on mixed exponential distributions, and the PIT problem was discovered in this study. Each problem occurs when one allows the numerical minimizer to freely roam through the parameter search space; this allows certain solution parameters to interchange roles which leads to fundamental ambiguities, and solution error. A major accomplishment of this study is that we have employed a state-of-the-art genetic-based global optimization algorithm called Differential Evolution (DE) that constrains the parameter search in such a way as to remove both the LS and PIT problems. To test the performance of the GoFFRA when DE is employed, we applied it to analyze simulated MGA datasets that we generated from known mixed exponential distributions. Moreover, we evaluated the GoFFRA/DE method by applying it to analyze actual MGAs derived from low-Earth orbiting lightning imaging sensor data; the actual MGA data were classified as either ground or cloud flash MGAs using National Lightning Detection Network[TM] (NLDN) data. Solution error plots are provided for both the simulations and actual data analyses.
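As an illustration of how an ordered search space removes label switching in a mixed exponential fit, the sketch below runs SciPy's differential evolution on a simulated two-component dataset and rejects parameter vectors with mu1 >= mu2. This mirrors the constrained-search idea described above but is not the GoFFRA code; the data and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(3)
# Simulated "MGAs" from a known two-component exponential mixture.
data = np.where(rng.random(4000) < 0.3,
                rng.exponential(5.0, 4000), rng.exponential(40.0, 4000))

def neg_log_like(p):
    alpha, mu1, mu2 = p
    if mu1 >= mu2:          # ordering constraint: removes the
        return 1e12         # label-switching ambiguity
    pdf = alpha * np.exp(-data / mu1) / mu1 + (1 - alpha) * np.exp(-data / mu2) / mu2
    return -np.sum(np.log(pdf + 1e-300))

res = differential_evolution(neg_log_like,
                             bounds=[(0, 1), (0.1, 20), (20, 200)], seed=1)
alpha_hat, mu1_hat, mu2_hat = res.x  # alpha_hat plays the role of the mixture fraction
print(res.x)
```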
NASA Astrophysics Data System (ADS)
Xu, Yunjun; Remeikas, Charles; Pham, Khanh
2014-03-01
Cooperative trajectory planning is crucial for networked vehicles to respond rapidly in cluttered environments and has a significant impact on many applications such as air traffic or border security monitoring and assessment. One of the challenges in cooperative planning is to find a computationally efficient algorithm that can accommodate both the complexity of the environment and real hardware and configuration constraints of vehicles in the formation. Inspired by a local pursuit strategy observed in foraging ants, feasible and optimal trajectory planning algorithms are proposed in this paper for a class of nonlinear constrained cooperative vehicles in environments with densely populated obstacles. In an iterative hierarchical approach, the local behaviours, such as the formation stability, obstacle avoidance, and individual vehicle's constraints, are considered in each vehicle's (i.e. follower's) decentralised optimisation. The cooperative-level behaviours, such as the inter-vehicle collision avoidance, are considered in the virtual leader's centralised optimisation. Early termination conditions are derived to reduce the computational cost by not wasting time in the local-level optimisation if the virtual leader trajectory does not satisfy those conditions. The expected advantages of the proposed algorithms are (1) the formation can be globally asymptotically maintained in a decentralised manner; (2) each vehicle decides its local trajectory using only the virtual leader and its own information; (3) the formation convergence speed is controlled by one single parameter, which makes it attractive for many practical applications; (4) nonlinear dynamics and many realistic constraints, such as the speed limitation and obstacle avoidance, can be easily considered; (5) inter-vehicle collision avoidance can be guaranteed in both the formation transient stage and the formation steady stage; and (6) the computational cost in finding both the feasible and optimal solutions is low. In particular, the feasible solution can be computed in a very quick fashion. The minimum energy trajectory planning for a group of robots in an obstacle-laden environment is simulated to showcase the advantages of the proposed algorithms.
Optimizing Orbit-Instrument Configuration for Global Precipitation Mission (GPM) Satellite Fleet
NASA Technical Reports Server (NTRS)
Smith, Eric A.; Adams, James; Baptista, Pedro; Haddad, Ziad; Iguchi, Toshio; Im, Eastwood; Kummerow, Christian; Einaudi, Franco (Technical Monitor)
2001-01-01
Following the scientific success of the Tropical Rainfall Measuring Mission (TRMM) spearheaded by a group of NASA and NASDA scientists, their external scientific collaborators, and additional investigators within the European Union's TRMM Research Program (EUROTRMM), there has been substantial progress towards the development of a new internationally organized, global scale, and satellite-based precipitation measuring mission. The highlights of this newly developing mission are a greatly expanded scope of measuring capability and a more diversified set of science objectives. The mission is called the Global Precipitation Mission (GPM). Notionally, GPM will be a constellation-type mission involving a fleet of nine satellites. In this fleet, one member is referred to as the "core" spacecraft flown in an approximately 70 degree inclined non-sun-synchronous orbit, somewhat similar to TRMM in that it carries both a multi-channel polarized passive microwave radiometer (PMW) and a radar system, but in this case it will be a dual frequency Ku-Ka band radar system enabling explicit measurements of microphysical DSD properties. The remainder of fleet members are eight orbit-synchronized, sun-synchronous "constellation" spacecraft each carrying some type of multi-channel PMW radiometer, enabling no worse than 3-hour diurnal sampling over the entire globe. In this configuration the "core" spacecraft serves as a high quality reference platform for training and calibrating the PMW rain retrieval algorithms used with the "constellation" radiometers. Within NASA, GPM has advanced to the pre-formulation phase which has enabled the initiation of a set of science and technology studies which will help lead to the final mission design some time in the 2003 period. This presentation first provides an overview of the notional GPM program and mission design, including its organizational and programmatic concepts, scientific agenda, expected instrument package, and basic flight architecture. Following this introduction, we focus specifically on the last topic, that being an analysis which leads to an optimal flight architecture dictated in part by science requirements but constrained by allowable orbital mechanics, instrument scan patterns, and antenna aperture properties. Because the optimal architecture involves an interplay between orbit mechanics and instrument specifications, it is important to recognize that in attempting to serve various scientific themes, the final optimal architecture will represent a compromise concerning dynamic range, spatial resolution, sampling interval, pointing, beam coincidence, and measurement uncertainty. Moreover, cost becomes a major factor in seeking the optimal architecture through the pathways of antenna and instrument scan designs, as well as propulsion requirements associated with the orbit heights of various "constellation" members. Although the results presented at the IGARSS-2001 meeting will likely not be the fully refined flight architecture specifications, they are expected to be nearly complete.
Cheng, Wen-Chang
2012-01-01
In this paper we propose a robust lane detection and tracking method that combines particle filters with particle swarm optimization. The method mainly uses the particle filters to detect and track the local optimum of the lane model in the input image and then seeks the globally optimal lane model by particle swarm optimization. The particle filter can effectively complete lane detection and tracking in complicated or variable lane environments. However, the result obtained is usually a local optimum rather than the global optimum. Thus, the particle swarm optimization method is used to further refine the system status toward the global optimum. Since particle swarm optimization is a global optimization algorithm based on iterative computing, it can find the globally optimal lane model by simulating the food-seeking behavior of fish schools or insect swarms through the mutual cooperation of all particles. In verification testing, the test environments included highways and ordinary roads as well as straight and curved lanes, uphill and downhill lanes, lane changes, etc. Our proposed method completes lane detection and tracking more accurately and effectively than existing options. PMID:23235453
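A minimal global-best PSO sketch; in the paper's pipeline the swarm would be seeded from the particle-filter lane hypotheses rather than uniformly at random, and the cost would measure lane-model fit to the image. The quadratic cost here is a placeholder.

```python
import numpy as np

def pso(f, bounds, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimization over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random((2, n, len(lo)))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]   # personal bests
        g = pbest[pval.argmin()]                               # global best
    return g

best = pso(lambda p: (p[0] - 0.3)**2 + (p[1] + 0.8)**2, [(-2, 2), (-2, 2)])
print(best)
```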
Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)
NASA Astrophysics Data System (ADS)
Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand
2018-03-01
Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.
Minimum cost to control bovine tuberculosis in cow-calf herds
Smith, Rebecca L.; Tauer, Loren W.; Sanderson, Michael W.; Grohn, Yrjo T.
2014-01-01
Bovine tuberculosis (bTB) outbreaks in US cattle herds, while rare, are expensive to control. A stochastic model for bTB control in US cattle herds was adapted to more accurately represent cow-calf herd dynamics and was validated by comparison to 2 reported outbreaks. Control cost calculations were added to the model, which was then optimized to minimize costs for either the farm or the government. The results of the optimization showed that test-and-removal costs were minimized for both farms and the government if only 2 negative whole-herd tests were required to declare a herd free of infection, with a 2–3 month testing interval. However, the optimal testing interval for governments was increased to 2–4 months if the model was constrained to reject control programs leading to an infected herd being declared free of infection. Although farms always preferred test-and-removal to depopulation from a cost standpoint, government costs were lower with depopulation more than half the time in 2 of 8 regions. Global sensitivity analysis showed that indemnity costs were significantly associated with a rise in the cost to the government, and that low replacement rates were responsible for the long time to detection predicted by the model, but that improving the sensitivity of slaughterhouse screening and the probability that a slaughtered animal’s herd of origin can be identified would result in faster detection times. PMID:24703601
Scope of Gradient and Genetic Algorithms in Multivariable Function Optimization
NASA Technical Reports Server (NTRS)
Shaykhian, Gholam Ali; Sen, S. K.
2007-01-01
Global optimization of a multivariable function - constrained by bounds specified on each variable and also unconstrained - is an important problem with several real-world applications. Deterministic methods such as gradient algorithms as well as randomized methods such as genetic algorithms may be employed to solve these problems. In fact, there are optimization problems where a genetic algorithm/an evolutionary approach is preferable, at least from the quality (accuracy) of the results point of view. From a cost (complexity) point of view, both gradient and genetic approaches are usually polynomial-time; there are no serious differences from the computational complexity point of view. However, for certain types of problems, such as those with unacceptably erroneous numerical partial derivatives and those with physically amplified analytical partial derivatives whose numerical evaluation involves undesirable errors and/or is messy, a genetic (stochastic) approach should be a better choice. We have presented here the pros and cons of both approaches so that the concerned reader/user can decide which approach is best suited for the problem at hand. Also, for a function that is known in tabular form instead of an analytical form, as is often the case in an experimental environment, we attempt to provide an insight into the approaches, focusing on accuracy. Such an insight will help one to decide which of several available methods should be employed to obtain the best (least-error) output.
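The trade-off is easy to demonstrate on a standard multimodal test function: a gradient method converges quickly to the nearest local minimum, while an evolutionary search typically finds the global one. The function and starting point below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, differential_evolution

def rastrigin(x):
    """Classic multimodal test function; global minimum 0 at the origin."""
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2
grad_res = minimize(rastrigin, x0=[3.0, -2.5], method="BFGS")  # gradient-based
ga_res = differential_evolution(rastrigin, bounds, seed=0)     # evolutionary
# The gradient method typically lands in the nearest local minimum;
# the evolutionary search typically locates the global minimum at the origin.
print(grad_res.x, grad_res.fun)
print(ga_res.x, ga_res.fun)
```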
NASA Astrophysics Data System (ADS)
Yoo, S.; Zeng, X. C.
2006-05-01
We performed a constrained search for the geometries of low-lying neutral germanium clusters GeN in the size range of 21⩽N⩽29. The basin-hopping global optimization method is employed for the search. The potential-energy surface is computed based on the plane-wave pseudopotential density functional theory. A new series of low-lying clusters is found on the basis of several generic structural motifs identified previously for silicon clusters [S. Yoo and X. C. Zeng, J. Chem. Phys. 124, 054304 (2006)] as well as for smaller-sized germanium clusters [S. Bulusu et al., J. Chem. Phys. 122, 164305 (2005)]. Among the generic motifs examined, we found that two motifs stand out in producing most low-lying clusters, namely, the six/nine motif, a puckered-hexagonal-ring Ge6 unit attached to a tricapped trigonal prism Ge9, and the six/ten motif, a puckered-hexagonal-ring Ge6 unit attached to a bicapped antiprism Ge10. The low-lying clusters obtained are all prolate in shape and their energies are appreciably lower than the near-spherical low-energy clusters. This result is consistent with the ion-mobility measurement in that medium-sized germanium clusters detected are all prolate in shape until the size N ∼ 65.
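A toy analogue of the basin-hopping search, using a Lennard-Jones cluster energy in place of the plane-wave DFT surface used in the paper (for 4 atoms the LJ global minimum is the tetrahedron with E = -6, in reduced units):

```python
import numpy as np
from scipy.optimize import basinhopping

def lj_energy(flat, n=4):
    """Lennard-Jones energy of an n-atom cluster (reduced units);
    a toy stand-in for the DFT potential-energy surface."""
    xyz = flat.reshape(n, 3)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r2 = np.sum((xyz[i] - xyz[j])**2)
            e += 4.0 * (r2**-6 - r2**-3)   # r^-12 and r^-6 terms
    return e

rng = np.random.default_rng(0)
res = basinhopping(lj_energy, rng.normal(scale=0.8, size=12),
                   niter=200, minimizer_kwargs={"method": "L-BFGS-B"}, seed=1)
print(res.fun)  # should approach -6.0 (the tetrahedral global minimum)
```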
Hyla, M
2017-12-01
Network-forming As2(S/Se)m nanoclusters are employed to recognize expected variations in the vicinity of some remarkable compositions in binary As-Se/S glassy systems, accepted as signatures of optimally constrained intermediate topological phases in earlier temperature-modulated differential scanning calorimetry experiments. The ab initio quantum chemical calculations performed using the cation-interlinking network cluster approach show a similar oscillating character in the tendency to local chemical decomposition but an obvious step-like behavior in the preference for global phase separation on boundary chemical compounds (pure chalcogen and stoichiometric arsenic chalcogenides). The onsets of stability are defined for chalcogen-rich glasses, these being connected with As2Se5 (Z = 2.29) and As2S6 (Z = 2.25) nanoclusters for As-Se and As-S glasses, respectively. The physical aging effects result preferentially from global phase separation in the As-S glass system, due to high localization of covalent bonding, and from local demixing on neighboring As2Se(m+1) and As2Se(m-1) nanoclusters in the As-Se system. These nanoclusters explain well the lower limits of reversibility windows in temperature-modulated differential scanning calorimetry, but they cannot be accepted as signatures of topological phase transitions with respect to rigidity theory.
River velocities from sequential multispectral remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Wei; Mied, Richard P.
2013-06-01
We address the problem of extracting surface velocities from a pair of multispectral remote sensing images over rivers using a new nonlinear multiple-tracer form of the global optimal solution (GOS). The derived velocity field is a valid solution across the image domain to the nonlinear system of equations obtained by minimizing a cost function inferred from the conservation constraint equations for multiple tracers. This is done by deriving an iteration equation for the velocity, based on the multiple-tracer displaced frame difference equations, and a local approximation to the velocity field. The number of velocity equations is greater than the number of velocity components, and thus over-constrains the solution. The iterative technique uses Gauss-Newton and Levenberg-Marquardt methods and our own algorithm of progressive relaxation of the over-constraint. We demonstrate the nonlinear multiple-tracer GOS technique with sequential multispectral Landsat and ASTER images over a portion of the Potomac River in MD/VA, and derive a dense field of accurate velocity vectors. We compare the GOS river velocities with those from over 12 years of data at four NOAA reference stations, and find good agreement. We discuss how to find the appropriate spatial and temporal resolutions to allow optimization of the technique for specific rivers.
Scale dependence of open $c\bar{c}$ and $b\bar{b}$ production in the low-x region
NASA Astrophysics Data System (ADS)
Oliveira, E. G. de; Martin, A. D.; Ryskin, M. G.
2017-03-01
The `optimal' factorization scale $\mu_0$ is calculated for open heavy quark production. We find that the optimal value is $\mu_F = \mu_0 \simeq 0.85\sqrt{p_T^2 + m_Q^2}$; a choice which allows us to resum the double-logarithmic $(\alpha_s \ln\mu_F^2 \ln(1/x))^n$ corrections (enhanced at LHC energies by large values of $\ln(1/x)$) and to move them into the incoming parton distributions, PDF$(x,\mu_0^2)$. Besides this result for the single inclusive cross section (corresponding to an observed heavy quark of transverse momentum $p_T$), we also determined the scale for processes where the acoplanarity can be measured; that is, events where the azimuthal angle between the quark and the antiquark may be determined experimentally. Moreover, we discuss the important role played by the $2\to 2$ subprocesses, $gg\to Q\bar{Q}$, at NLO and higher orders. In summary, we achieve a better stability of the QCD calculations, so that the data on $c\bar{c}$ and $b\bar{b}$ production can be used to further constrain the gluons in the small-$x$, relatively low scale, domain, where the uncertainties of the global analyses are large at present.
Tests of the Grobner Basis Solution for Lightning Ground Flash Fraction Retrieval
NASA Technical Reports Server (NTRS)
Koshak, William; Solakiewicz, Richard; Attele, Rohan
2011-01-01
Satellite lightning imagers such as the NASA Tropical Rainfall Measuring Mission Lightning Imaging Sensor (TRMM/LIS) and the future GOES-R Geostationary Lightning Mapper (GLM) are designed to detect total lightning (ground flashes + cloud flashes). However, there is a desire to discriminate ground flashes from cloud flashes from the vantage point of space since this would enhance the overall information content of the satellite lightning data and likely improve its operational and scientific applications (e.g., in severe weather warning, lightning nitrogen oxides studies, and global electric circuit analyses). A Bayesian inversion method was previously introduced for retrieving the fraction of ground flashes in a set of flashes observed from a satellite lightning imager. The method employed a constrained mixed exponential distribution model to describe the lightning optical measurements. To obtain the optimum model parameters (one of which is the ground flash fraction), a scalar function was minimized by a numerical method. In order to improve this optimization, a Grobner basis solution was introduced to obtain analytic representations of the model parameters that serve as a refined initialization scheme to the numerical optimization. In this study, we test the efficacy of the Grobner basis initialization using actual lightning imager measurements and ground flash truth derived from the national lightning network.
A variational approach to probing extreme events in turbulent dynamical systems
Farazmand, Mohammad; Sapsis, Themistoklis P.
2017-01-01
Extreme events are ubiquitous in a wide range of dynamical systems, including turbulent fluid flows, nonlinear waves, large-scale networks, and biological systems. We propose a variational framework for probing conditions that trigger intermittent extreme events in high-dimensional nonlinear dynamical systems. We seek the triggers as the probabilistically feasible solutions of an appropriately constrained optimization problem, where the function to be maximized is a system observable exhibiting intermittent extreme bursts. The constraints are imposed to ensure the physical admissibility of the optimal solutions, that is, significant probability for their occurrence under the natural flow of the dynamical system. We apply the method to a body-forced incompressible Navier-Stokes equation, known as the Kolmogorov flow. We find that the intermittent bursts of the energy dissipation are independent of the external forcing and are instead caused by the spontaneous transfer of energy from large scales to the mean flow via nonlinear triad interactions. The global maximizer of the corresponding variational problem identifies the responsible triad, hence providing a precursor for the occurrence of extreme dissipation events. Specifically, monitoring the energy transfers within this triad allows us to develop a data-driven short-term predictor for the intermittent bursts of energy dissipation. We assess the performance of this predictor through direct numerical simulations. PMID:28948226
NASA Astrophysics Data System (ADS)
Miyazaki, K.; Eskes, H.; Sudo, K.
2012-04-01
Carbon monoxide (CO) and nitrogen oxides (NOx) play an important role in tropospheric chemistry through their influences on the ozone and hydroxyl radical (OH). The simultaneous optimization of various chemical components is expected to improve the emission inversion through the better description of the chemical feedbacks in the NOx- and CO-chemistry. This study aims to reproduce chemical composition distributions in the troposphere by combining information obtained from multiple satellite data sets. The emissions of CO and NOx, together with the 3D concentration fields of all forecasted chemical species in the global CTM CHASER have been simultaneously optimized using the ensemble Kalman filter (EnKF) data assimilation technique, and NO2, O3, CO, and HNO3 data obtained from OMI, TES, MOPITT, and MLS satellite measurements. The performance is evaluated against independent data from ozone sondes, aircraft measurements, GOME-2, and SCIAMACHY satellite data. Observing System Experiments (OSEs) have been carried out. These OSEs quantify the relative importance of each data set on constraining the emissions and concentrations. We confirmed that the simultaneous data assimilation improved the agreement with these independent data sets. The combined analysis of multiple data sets by means of advanced data assimilation system can provide a useful framework for the air quality research.
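For reference, the core of an EnKF analysis step can be sketched in a few lines of linear algebra; the state, observation operator, and error covariance below are illustrative placeholders for the emission scaling factors and satellite retrievals.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step. Columns of X
    are ensemble state vectors (e.g., emission scaling factors); y holds
    observations; H maps state to observation space."""
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)     # ensemble anomalies
    P_HT = A @ (H @ A).T / (n - 1)            # P H^T estimated from the ensemble
    S = H @ P_HT + R                          # innovation covariance
    K = P_HT @ np.linalg.inv(S)               # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n).T
    return X + K @ (Y - H @ X)                # updated (analysis) ensemble

rng = np.random.default_rng(0)
X = rng.normal(1.0, 0.2, size=(5, 40))        # 5 state variables, 40 members
H = np.eye(2, 5)                              # observe the first two variables
R = 0.05 * np.eye(2)
Xa = enkf_update(X, np.array([1.1, 0.9]), H, R, rng)
print(Xa.mean(axis=1))
```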
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have a large overlapped region, which provides a lot of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of 3 components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different landcover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and the POS.
Guérin, Bastien; Setsompop, Kawin; Ye, Huihui; Poser, Benedikt A; Stenger, Andrew V; Wald, Lawrence L
2015-05-01
To design parallel transmit (pTx) simultaneous multislice (SMS) spokes pulses with explicit control for peak power and local and global specific absorption rate (SAR). We design SMS pTx least-squares and magnitude least squares spokes pulses while constraining local SAR using the virtual observation points (VOPs) compression of SAR matrices. We evaluate our approach in simulations of a head (7T) and a body (3T) coil with eight channels arranged in two z-rows. For many of our simulations, control of average power by Tikhonov regularization of the SMS pTx spokes pulse design yielded pulses that violated hardware and SAR safety limits. On the other hand, control of peak power alone yielded pulses that violated local SAR limits. Pulses optimized with control of both local SAR and peak power satisfied all constraints and therefore had the best excitation performance under limited power and SAR constraints. These results extend our previous results for single slice pTx excitations but are more pronounced because of the large power demands and SAR of SMS pulses. Explicit control of local SAR and peak power is required to generate optimal SMS pTx excitations satisfying both the system's hardware limits and regulatory safety limits. © 2014 Wiley Periodicals, Inc.
Constrained H1-regularization schemes for diffeomorphic image registration
Mang, Andreas; Biros, George
2017-01-01
We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on H1- and H2-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass source map. This allows us to explicitly control the compressibility of the deformation map and by that the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced space (Gauss–)Newton–Krylov scheme for numerical optimization. We exploit variable elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. Our current implementation is limited to the two-dimensional case. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear whilst controlling the determinant of the deformation gradient. PMID:29075361
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
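The sequential solution is literally "denoise, then threshold"; the sketch below illustrates the two-step structure using scikit-image's Chambolle TV solver as a stand-in (the paper's result is stated for anisotropic TV with uniform interval constraints, so this is conceptual, with assumed bounds [0, 0.05]).

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

# Noisy non-negative "attenuation" image (toy CT-like phantom).
rng = np.random.default_rng(0)
truth = np.zeros((64, 64))
truth[20:44, 20:44] = 0.02
noisy = truth + rng.normal(0, 0.01, truth.shape)

# Step 1: solve the unconstrained TV denoising problem.
u = denoise_tv_chambolle(noisy, weight=0.05)

# Step 2: threshold (clip) onto the uniform interval, e.g. physical
# bounds on linear attenuation coefficients.
u_constrained = np.clip(u, 0.0, 0.05)
```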
Conditional Entropy-Constrained Residual VQ with Application to Image Coding
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.
1996-01-01
This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
Parameter estimation of a pulp digester model with derivative-free optimization strategies
NASA Astrophysics Data System (ADS)
Seiça, João C.; Romanenko, Andrey; Fernandes, Florbela P.; Santos, Lino O.; Fernandes, Natércia C. P.
2017-07-01
This work concerns parameter estimation in the context of mechanistic modelling of a pulp digester. The problem is cast as a box-bounded nonlinear global optimization problem in order to minimize the mismatch between the model outputs and the experimental data observed at a real pulp and paper plant. The MCSFilter and Simulated Annealing global optimization methods were used to solve the optimization problem. While the former took longer to converge to the global minimum, the latter terminated faster at a significantly higher value of the objective function and thus failed to find the global solution.
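A minimal analogue of the simulated-annealing phase using SciPy's dual_annealing on a toy box-bounded, multimodal misfit (the digester model itself is not public, so the objective below is a placeholder):

```python
import numpy as np
from scipy.optimize import dual_annealing

def misfit(p):
    """Toy stand-in for the model/plant mismatch: a least-squares misfit
    over box-bounded kinetic parameters, with an added multimodal term."""
    t = np.linspace(0, 1, 50)
    model = p[0] * np.exp(-p[1] * t)
    data = 2.0 * np.exp(-3.0 * t)               # "plant" observations
    return np.sum((model - data)**2) + 0.1 * np.sin(20 * p[0])**2

res = dual_annealing(misfit, bounds=[(0.1, 10.0), (0.1, 10.0)], seed=1)
print(res.x, res.fun)   # recovers parameters near (2, 3)
```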
OPUS: Optimal Projection for Uncertain Systems. Volume 1
1991-09-01
unified control-design methodology that directly addresses these technology issues. In particular, optimal projection theory addresses the need for ... effects, and limited identification accuracy in a 1-g environment. The principal contribution of OPUS is a unified design methodology that ... characterizing solutions to constrained control-design problems. Transforming OPUS into a practical design methodology requires the development of ...
Ma, Jun; Chen, Si-Lu; Kamaldin, Nazir; Teo, Chek Sing; Tay, Arthur; Mamun, Abdullah Al; Tan, Kok Kiong
2017-11-01
The biaxial gantry is widely used in many industrial processes that require high-precision Cartesian motion. The conventional rigid-link version suffers from breakdown of the joints if any de-synchronization between the two carriages occurs. To prevent this potential risk, a flexure-linked biaxial gantry is designed to allow a small rotation angle of the cross-arm. Nevertheless, chattering of the control signals and inappropriate design of the flexure joint can induce resonant modes of the end-effector. Thus, in this work, the design requirements in terms of tracking accuracy, biaxial synchronization, and resonant mode suppression are achieved by integrated optimization of the flexure stiffness and the PID controller parameters, for a class of point-to-point reference trajectories with the same dynamics but different steps. From here, an H2 optimization problem with defined constraints is formulated, and an efficient iterative solver is proposed that hybridizes direct computation of the constrained projected gradient with a line search for the optimal step. Comparative experimental results obtained on the testbed are presented to verify the effectiveness of the proposed method.
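A sketch of the projected-gradient-plus-line-search idea on a box-constrained quadratic stand-in for the H2 cost; the matrices and bounds are illustrative assumptions, not the gantry model.

```python
import numpy as np

def projected_gradient(f, grad, project, x0, step0=1.0, iters=100):
    """Projected-gradient iteration with backtracking line search:
    step along the negative gradient, project back onto the feasible
    box, and shrink the step until the cost decreases."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(iters):
        g, step = grad(x), step0
        x_new = project(x - step * g)
        while f(x_new) > f(x) and step > 1e-12:   # backtrack until descent
            step *= 0.5
            x_new = project(x - step * g)
        x = x_new
    return x

# Example: minimize a quadratic H2-like cost over box-bounded design gains.
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b
project = lambda x: np.clip(x, 0.0, 1.0)          # bounds on stiffness/gains
print(projected_gradient(f, grad, project, [0.5, 0.5]))
```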
High-Fidelity Multidisciplinary Design Optimization of Aircraft Configurations
NASA Technical Reports Server (NTRS)
Martins, Joaquim R. R. A.; Kenway, Gaetan K. W.; Burdette, David; Jonsson, Eirikur; Kennedy, Graeme J.
2017-01-01
To evaluate new airframe technologies we need design tools based on high-fidelity models that consider multidisciplinary interactions early in the design process. The overarching goal of this NRA is to develop tools that enable high-fidelity multidisciplinary design optimization of aircraft configurations, and to apply these tools to the design of high-aspect-ratio flexible wings. We developed a geometry engine that is capable of quickly generating conventional and unconventional aircraft configurations including the internal structure. This geometry engine features adjoint derivative computation for efficient gradient-based optimization. We also added overset capability to a computational fluid dynamics solver, complete with an adjoint implementation and semiautomatic mesh generation. We also developed an approach to constraining buffet and started the development of an approach for constraining flutter. On the applications side, we developed a new common high-fidelity model for aeroelastic studies of high-aspect-ratio wings. We performed optimal design trade-offs between fuel burn and aircraft weight for metal, conventional composite, and carbon nanotube composite wings. We also assessed a continuous morphing trailing edge technology applied to high-aspect-ratio wings. This research resulted in the publication of 26 manuscripts so far, and the developed methodologies were used in two other NRAs.
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
2015-07-01
The design of stabilizing controllers for uncertain nonlinear systems with control constraints is a challenging problem. The input constraints, coupled with the inability to identify the uncertainties accurately, motivate the design of stabilizing controllers based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to a constrained optimal control problem by appropriately selecting value functions for the nominal system. Distinct from the typical actor-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike the initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee that the uncertain nonlinear system is stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher
2013-10-01
This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.
Namazi-Rad, Mohammad-Reza; Dunbar, Michelle; Ghaderi, Hadi; Mokhtarian, Payam
2015-01-01
To achieve greater transit-time reduction and improvement in reliability of transport services, there is an increasing need to assist transport planners in understanding the value of punctuality; i.e. the potential improvements, not only to service quality and the consumer but also to the actual profitability of the service. In order for this to be achieved, it is important to understand the network-specific aspects that affect both the ability to decrease transit-time, and the associated cost-benefit of doing so. In this paper, we outline a framework for evaluating the effectiveness of proposed changes to average transit-time, so as to determine the optimal choice of average arrival time subject to desired punctuality levels whilst simultaneously minimizing operational costs. We model the service transit-time variability using a truncated probability density function, and simultaneously compare the trade-off between potential gains and increased service costs, for several commonly employed cost-benefit functions of general form. We formulate this problem as a constrained optimization problem to determine the optimal choice of average transit time, so as to increase the level of service punctuality, whilst simultaneously ensuring a minimum level of cost-benefit to the service operator. PMID:25992902
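Working with a truncated distribution is straightforward with scipy.stats; the sketch below assumes a truncated normal transit time (the paper's general truncated density) and shows how shifting the mean transit time changes the on-time probability. The numbers are illustrative.

```python
from scipy.stats import truncnorm

def punctuality(mu, sigma, lo, hi, deadline):
    """Probability of on-time arrival when transit time follows a
    normal distribution truncated to [lo, hi] hours."""
    a, b = (lo - mu) / sigma, (hi - mu) / sigma
    return truncnorm.cdf(deadline, a, b, loc=mu, scale=sigma)

# Shifting the scheduled mean transit from 48 h to 46 h raises the
# chance of beating a 50 h deadline:
print(punctuality(48, 2.0, 40, 60, 50), punctuality(46, 2.0, 40, 60, 50))
```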
Pseudo-time methods for constrained optimization problems governed by PDE
NASA Technical Reports Server (NTRS)
Taasan, Shlomo
1995-01-01
In this paper we present a novel method for solving optimization problems governed by partial differential equations. Existing methods use gradient information in marching toward the minimum, where the constraining PDE is solved once (sometimes only approximately) per optimization step. Such methods can be viewed as marching techniques on the intersection of the state and costate hypersurfaces while improving the residuals of the design equations at each iteration. In contrast, the method presented here marches on the design hypersurface and at each iteration improves the residuals of the state and costate equations. The new method is usually much less expensive per iteration step since, in most problems of practical interest, the design equation involves far fewer unknowns than either the state or costate equations. Convergence is shown using energy estimates for the evolution equations governing the iterative process. Numerical tests show that the new method allows the solution of the optimization problem at a cost of solving the analysis problem just a few times, independent of the number of design parameters. The method can be applied using single grid iterations as well as with multigrid solvers.
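A toy illustration of the pseudo-time idea on a two-variable quadratic model problem (all operators and numbers invented): rather than fully solving the state and costate equations per design update, each is relaxed by one pseudo-time step, while the cheap, low-dimensional design equation is kept satisfied exactly at every iteration:

```python
# Pseudo-time marching on the design hypersurface for
#   min 0.5*||s - s_target||^2 + 0.5*sigma*||u||^2  s.t.  A s = B u.
# Stationarity gives: state A s = B u, costate A^T lam = -(s - s_target),
# design sigma*u = B^T lam (solved exactly each sweep below).
import numpy as np

A = np.diag([2.0, 3.0])          # stand-in for a discretized PDE operator
B = np.eye(2)                    # design-to-state coupling
s_target = np.array([1.0, -1.0]) # desired state
sigma, tau = 0.1, 0.1            # design regularization, pseudo-time step

s = np.zeros(2)                  # state
lam = np.zeros(2)                # costate
for _ in range(500):
    u = B.T @ lam / sigma                       # design equation solved exactly
    s += tau * (B @ u - A @ s)                  # relax state residual
    lam += -tau * (A.T @ lam + (s - s_target))  # relax costate residual

print(np.round(u, 3))            # converged design variables
```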
Automation of POST Cases via External Optimizer and "Artificial p2" Calculation
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.
2017-01-01
During early conceptual design of complex systems, speed and accuracy are often at odds with one another. While many characteristics of the design fluctuate rapidly during this phase, there is nonetheless a need to acquire accurate data from which to down-select designs, as these decisions will have a large impact upon program life-cycle cost. Enabling the conceptual designer to produce accurate data in a timely manner is therefore critical to program viability. For conceptual design of launch vehicles, trajectory analysis and optimization is a large hurdle. Tools such as the industry-standard Program to Optimize Simulated Trajectories (POST) have traditionally required an expert in the loop for setting up inputs, running the program, and analyzing the output. The solution space for trajectory analysis is in general non-linear and multi-modal, requiring an experienced analyst to weed out sub-optimal designs in pursuit of the global optimum. While an experienced analyst presented with a vehicle similar to one they have already worked on can likely produce optimal performance figures in a timely manner, as soon as the "experienced" or "similar" adjectives are invalid the process can become lengthy. In addition, an experienced analyst working on a similar vehicle may go into the analysis with preconceived ideas about what the vehicle's trajectory should look like, which can result in sub-optimal performance being recorded. Thus, in any case but the ideal, either time or accuracy can be sacrificed. In the authors' previous work a tool called multiPOST was created which captures the heuristics of a human analyst over the process of executing trajectory analysis with POST. However, without the instincts of a human in the loop, this method relied upon Monte Carlo simulation to find successful trajectories. Overall the method has mixed results, and in the context of optimizing multiple vehicles it is inefficient in comparison to the method presented here. POST's internal optimizer functions like any other gradient-based optimizer. It has a specified variable to optimize whose value is represented as optval, a set of dependent constraints to meet with associated forms and tolerances whose value is represented as p2, and a set of independent variables known as the u-vector to modify in pursuit of optimality. Each of these quantities is calculated or manipulated at a certain phase within the trajectory. The optimizer is further constrained by the requirement that the input u-vector must result in a trajectory which proceeds through each of the prescribed events in the input file. For example, if the input u-vector causes the vehicle to crash before it can achieve the orbital parameters required for a parking orbit, then the run will fail without engaging the optimizer, and a p2 value of exactly zero is returned. This poses a problem, as this "non-connecting" region of the u-vector space is far larger than the "connecting" region, which returns a non-zero value of p2 and can be worked on by the internal optimizer. Finding this connecting region, and more specifically the global optimum within this region, has traditionally required the use of an expert analyst.
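The abstract stops short of defining the "artificial p2" of the title; the sketch below shows one plausible construction, purely hypothetical, in which a failed (non-connecting) run is graded by how far the trajectory progressed, so that an external optimizer sees a slope toward the connecting region instead of a flat zero. The field names, scale factors, and shortfall term are illustrative assumptions, not POST's API:

```python
# Hypothetical "artificial p2" for non-connecting POST runs. POST returns
# p2 == 0.0 when a trajectory fails to reach all prescribed events, which
# gives an external optimizer no gradient; here failures are graded by the
# fraction of events completed and the shortfall at the failure point.

def artificial_p2(run):
    """Return a graded merit value (lower is better) for a trajectory run.

    run: dict with keys 'p2' (float), 'events_completed', 'events_total'
         (ints), and 'velocity_shortfall' (m/s remaining at failure).
    """
    if run["p2"] != 0.0:              # connecting run: use POST's own p2
        return run["p2"]
    progress = run["events_completed"] / run["events_total"]
    # Large penalty that decreases as the trajectory gets closer to
    # connecting; the 1e3 scale and the shortfall term are arbitrary
    # illustrative choices.
    return 1e3 * (1.0 - progress) + run["velocity_shortfall"]

failed = {"p2": 0.0, "events_completed": 4, "events_total": 6,
          "velocity_shortfall": 850.0}
print(artificial_p2(failed))   # 1183.33..., a usable slope for the optimizer
```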
NASA Astrophysics Data System (ADS)
Kurzweil, Yair; Head-Gordon, Martin
2009-07-01
We develop a method that can constrain any local exchange-correlation potential to preserve basic exact conditions. Using the method of Lagrange multipliers, we calculate for each set of given Kohn-Sham orbitals a constraint-preserving potential which is closest to the given exchange-correlation potential. The method is applicable to both the time-dependent (TD) and time-independent cases. The exact conditions that are enforced for the time-independent case are Galilean covariance, zero net force and torque, and the Levy-Perdew virial theorem. For the time-dependent case we enforce translational covariance, zero net force, the Levy-Perdew virial theorem, and energy balance. We test our method on the exchange-only Krieger-Li-Iafrate (xKLI) approximate optimized effective potential for both cases. For the time-independent case, we calculated the ground state properties of some hydrogen chains and small sodium clusters for some constrained xKLI potentials and Hartree-Fock (HF) exchange. The results (total energy, Kohn-Sham eigenvalues, polarizability, and hyperpolarizability) indicate that enforcing the exact conditions is not important for these cases. On the other hand, in the time-dependent case, constraining both energy balance and zero net force yields improved results relative to TDHF calculations. We explored the electron dynamics in small sodium clusters driven by cw laser pulses. For each laser pulse we compared calculations from TD constrained xKLI, TD partially constrained xKLI, and TDHF. We found that electron dynamics such as electron ionization and moment of inertia dynamics for the constrained xKLI are most similar to the TDHF results. Also, energy conservation is better by at least one order of magnitude with respect to the unconstrained xKLI. We also discuss the problems that arise in satisfying constraints in the TD case with a non-cw driving force.
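The construction described above is, at its core, a linearly constrained least-squares problem; the following formulation is our paraphrase, with a generic linear constraint functional standing in for the force, torque, and virial conditions:

```latex
% Closest constraint-preserving potential (paraphrased formulation):
\min_{\tilde v}\ \tfrac12 \int d^3r\,\bigl|\tilde v(\mathbf r)-v_{xc}(\mathbf r)\bigr|^2
\quad \text{s.t.} \quad
\int d^3r\, g_i(\mathbf r)\,\tilde v(\mathbf r) = c_i,\qquad i=1,\dots,m .
% Stationarity of the Lagrangian gives
\tilde v = v_{xc} - \sum_{i=1}^{m} \lambda_i\, g_i ,
\qquad
\sum_{j=1}^{m}\Bigl(\int d^3r\, g_i g_j\Bigr)\lambda_j
   = \int d^3r\, g_i\, v_{xc} - c_i .
```

In this reading the constrained potential is the orthogonal projection of the given exchange-correlation potential onto the constraint set, recomputed for each set of Kohn-Sham orbitals.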
NASA Technical Reports Server (NTRS)
Swei, Sean
2014-01-01
We propose to develop a robust guidance and control system for the ADEPT (Adaptable Deployable Entry and Placement Technology) entry vehicle. A control-centric model of ADEPT will be developed to quantify the performance of candidate guidance and control architectures for both aerocapture and precision landing missions. The evaluation will be based on recent breakthroughs in constrained controllability/reachability analysis of control systems and constraint-based energy-minimum trajectory optimization for guidance development operating in complex environments.
Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2014-10-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized, robust 'operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs that targeted optimizing the intended NPOESS architecture become a surrogate for global operational climate monitoring architecture(s). These rule-based systems tools provide valuable insight for global climate architectures, through the comparison and evaluation of alternatives considered and the exhaustive range of trade space explored. A representative optimization of global ECV (essential climate variable) climate monitoring architecture(s) is explored and described in some detail, with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and will hopefully elicit responses from the audience and climate science stakeholders.
Global Design Optimization for Fluid Machinery Applications
NASA Technical Reports Server (NTRS)
Shyy, Wei; Papila, Nilay; Tucker, Kevin; Vaidyanathan, Raj; Griffin, Lisa
2000-01-01
Recent experiences in utilizing the global optimization methodology, based on polynomial and neural network techniques, for fluid machinery design are summarized. Global optimization methods can utilize information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. Another advantage is that these methods do not need to calculate the sensitivity of each design variable locally. However, a successful application of the global optimization method needs to address issues related to data requirements, which grow with the number of design variables, and methods for predicting model performance. Examples selected from rocket propulsion components, including a supersonic turbine, an injector element, and a turbulent flow diffuser, are used to illustrate the usefulness of the global optimization method.
Worldwide data sets constrain the water vapor uptake coefficient in cloud formation
Raatikainen, Tomi; Nenes, Athanasios; Seinfeld, John H.; Morales, Ricardo; Moore, Richard H.; Lathem, Terry L.; Lance, Sara; Padró, Luz T.; Lin, Jack J.; Cerully, Kate M.; Bougiatioti, Aikaterini; Cozic, Julie; Ruehl, Christopher R.; Chuang, Patrick Y.; Anderson, Bruce E.; Flagan, Richard C.; Jonsson, Haflidi; Mihalopoulos, Nikos; Smith, James N.
2013-01-01
Cloud droplet formation depends on the condensation of water vapor on ambient aerosols, the rate of which is strongly affected by the kinetics of water uptake as expressed by the condensation (or mass accommodation) coefficient, αc. Estimates of αc for droplet growth from activation of ambient particles vary considerably and represent a critical source of uncertainty in estimates of global cloud droplet distributions and the aerosol indirect forcing of climate. We present an analysis of 10 globally relevant data sets of cloud condensation nuclei to constrain the value of αc for ambient aerosol. We find that rapid activation kinetics (αc > 0.1) is uniformly prevalent. This finding resolves a long-standing issue in cloud physics, as the uncertainty in water vapor accommodation on droplets is considerably less than previously thought. PMID:23431189
Evaluating an image-fusion algorithm with synthetic-image-generation tools
NASA Astrophysics Data System (ADS)
Gross, Harry N.; Schott, John R.
1996-06-01
An algorithm that combines spectral mixing and nonlinear optimization is used to fuse multiresolution images. Image fusion merges images of different spatial and spectral resolutions to create a high spatial resolution multispectral combination. High spectral resolution allows identification of materials in the scene, while high spatial resolution locates those materials. In this algorithm, conventional spectral mixing estimates the percentage of each material (called endmembers) within each low resolution pixel. Three spectral mixing models are compared; unconstrained, partially constrained, and fully constrained. In the partially constrained application, the endmember fractions are required to sum to one. In the fully constrained application, all fractions are additionally required to lie between zero and one. While negative fractions seem inappropriate, they can arise from random spectral realizations of the materials. In the second part of the algorithm, the low resolution fractions are used as inputs to a constrained nonlinear optimization that calculates the endmember fractions for the high resolution pixels. The constraints mirror the low resolution constraints and maintain consistency with the low resolution fraction results. The algorithm can use one or more higher resolution sharpening images to locate the endmembers to high spatial accuracy. The algorithm was evaluated with synthetic image generation (SIG) tools. A SIG developed image can be used to control the various error sources that are likely to impair the algorithm performance. These error sources include atmospheric effects, mismodeled spectral endmembers, and variability in topography and illumination. By controlling the introduction of these errors, the robustness of the algorithm can be studied and improved upon. The motivation for this research is to take advantage of the next generation of multi/hyperspectral sensors. Although the hyperspectral images will be of modest to low resolution, fusing them with high resolution sharpening images will produce a higher spatial resolution land cover or material map.
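The fully constrained mixing step described above is a small quadratic program per low-resolution pixel; a minimal sketch, with synthetic endmember spectra and SLSQP as a stand-in solver, is:

```python
# Fully constrained linear spectral unmixing for one pixel:
#   min ||E f - x||^2   s.t.   sum(f) = 1,  0 <= f <= 1,
# where columns of E are endmember spectra. All data are invented.
import numpy as np
from scipy.optimize import minimize

E = np.array([[0.10, 0.80, 0.30],    # rows: spectral bands
              [0.20, 0.70, 0.50],    # cols: endmembers (e.g. soil,
              [0.60, 0.20, 0.40],    # vegetation, water)
              [0.70, 0.10, 0.35]])
f_true = np.array([0.5, 0.3, 0.2])
x = E @ f_true                        # synthetic mixed-pixel spectrum

res = minimize(lambda f: np.sum((E @ f - x) ** 2),
               x0=np.full(3, 1 / 3),
               bounds=[(0.0, 1.0)] * 3,                        # fully constrained
               constraints={"type": "eq",
                            "fun": lambda f: f.sum() - 1.0},   # sum-to-one
               method="SLSQP")
print(np.round(res.x, 3))             # recovers [0.5, 0.3, 0.2]
```

Dropping the bounds gives the partially constrained variant, and dropping both constraints the unconstrained one, mirroring the three models compared in the abstract.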
Beyond equilibrium climate sensitivity
NASA Astrophysics Data System (ADS)
Knutti, Reto; Rugenstein, Maria A. A.; Hegerl, Gabriele C.
2017-10-01
Equilibrium climate sensitivity characterizes the Earth's long-term global temperature response to increased atmospheric CO2 concentration. It has reached almost iconic status as the single number that describes how severe climate change will be. The consensus on the 'likely' range for climate sensitivity of 1.5 °C to 4.5 °C today is the same as given by Jule Charney in 1979, but now it is based on quantitative evidence from across the climate system and throughout climate history. The quest to constrain climate sensitivity has revealed important insights into the timescales of the climate system response, natural variability and limitations in observations and climate models, but also concerns about the simple concepts underlying climate sensitivity and radiative forcing, which opens avenues to better understand and constrain the climate response to forcing. Estimates of the transient climate response are better constrained by observed warming and are more relevant for predicting warming over the next decades. Newer metrics relating global warming directly to the total emitted CO2 show that in order to keep warming to within 2 °C, future CO2 emissions have to remain strongly limited, irrespective of climate sensitivity being at the high or low end.
Minimum energy control and optimal-satisfactory control of Boolean control network
NASA Astrophysics Data System (ADS)
Li, Fangfei; Lu, Xiwen
2013-12-01
In the literature, the expenditure of energy required to transfer a Boolean control network from an initial state to a desired state has rarely been considered. Motivated by this, this Letter investigates the minimum energy control and optimal-satisfactory control of Boolean control networks. Based on the semi-tensor product of matrices and Floyd's algorithm, minimum energy, constrained minimum energy, and optimal-satisfactory control designs for Boolean control networks are given respectively. A numerical example is presented to illustrate the efficiency of the obtained results.
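Once the semi-tensor product machinery has produced the network's state-transition graph with per-step control energies, the minimum-energy computation reduces to an all-pairs shortest-path problem, which is what Floyd's algorithm delivers; a sketch on an invented four-state graph:

```python
# Shortest-path step of minimum-energy control: dist[i][j] becomes the
# minimum total control energy to steer the Boolean network from state i
# to state j, given one-step transition energies (made-up numbers).
INF = float("inf")
cost = [[0,   2,   INF, 5],    # cost[i][j] = energy of a one-step control
        [INF, 0,   1,   INF],  # driving state i to state j (INF = no
        [1,   INF, 0,   2],    # admissible one-step control)
        [INF, INF, 3,   0]]

n = len(cost)
dist = [row[:] for row in cost]
for k in range(n):             # Floyd-Warshall relaxation
    for i in range(n):
        for j in range(n):
            if dist[i][k] + dist[k][j] < dist[i][j]:
                dist[i][j] = dist[i][k] + dist[k][j]

print(dist[0][2])   # minimum control energy from state 0 to state 2: 3
```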
Optimal dynamic control of invasions: applying a systematic conservation approach.
Adams, Vanessa M; Setterfield, Samantha A
2015-06-01
The social, economic, and environmental impacts of invasive plants are well recognized. However, these variable impacts are rarely accounted for in the spatial prioritization of funding for weed management. We examine how current spatially explicit prioritization methods can be extended to identify optimal budget allocations to both eradication and control measures of invasive species to minimize the costs and likelihood of invasion. Our framework extends recent approaches to systematic prioritization of weed management to account for multiple values that are threatened by weed invasions with a multi-year dynamic prioritization approach. We apply our method to the northern portion of the Daly catchment in the Northern Territory, which has significant conservation values that are threatened by gamba grass (Andropogon gayanus), a highly invasive species recognized by the Australian government as a Weed of National Significance (WONS). We interface Marxan, a widely applied conservation planning tool, with a dynamic biophysical model of gamba grass to optimally allocate funds to eradication and control programs under two budget scenarios comparing maximizing gain (MaxGain) and minimizing loss (MinLoss) optimization approaches. The prioritizations support previous findings that a MinLoss approach is a better strategy when threats are more spatially variable than conservation values. Over a 10-year simulation period, we find that a MinLoss approach reduces future infestations by ~8% compared to MaxGain in the constrained budget scenarios and ~12% in the unlimited budget scenarios. We find that due to the extensive current invasion and rapid rate of spread, allocating the annual budget to control efforts is more efficient than funding eradication efforts when there is a constrained budget. Under a constrained budget, applying the most efficient optimization scenario (control, MinLoss) reduces spread by ~27% compared to no control. Conversely, if the budget is unlimited it is more efficient to fund eradication efforts, which reduces spread by ~65% compared to no control.
Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations
Fierce, Laura; McGraw, Robert L.
2017-07-26
Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
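A toy, univariate version of the moment-constrained linear program sketched above: fix candidate abscissas on a size grid, then seek nonnegative quadrature weights that match the first few moments of a target distribution while minimizing a linear, entropy-inspired cost. The grid, moments, and cost vector are invented for illustration:

```python
# Moment-constrained quadrature via linear programming: minimize c.w
# subject to A w = m and w >= 0; a vertex solution of the LP has at most
# as many nonzero weights as there are moment constraints, hence sparsity.
import numpy as np
from scipy.optimize import linprog

x = np.linspace(0.01, 1.0, 50)            # candidate abscissas (um)
target_w = np.exp(-0.5 * ((x - 0.3) / 0.08) ** 2)
target_w /= target_w.sum()                # a "true" size distribution
A = np.vstack([np.ones_like(x), x, x**2]) # first three moment functionals
moments = A @ target_w                    # moment constraints A w = m
c = -np.log(target_w + 1e-12)             # entropy-inspired linear cost

res = linprog(c, A_eq=A, b_eq=moments, bounds=(0, None), method="highs")
support = np.flatnonzero(res.x > 1e-9)
print(len(support), x[support])           # a sparse quadrature representation
```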
Rate-independent dissipation in phase-field modelling of displacive transformations
NASA Astrophysics Data System (ADS)
Tůma, K.; Stupkiewicz, S.; Petryk, H.
2018-05-01
In this paper, rate-independent dissipation is introduced into the phase-field framework for modelling of displacive transformations, such as martensitic phase transformation and twinning. The finite-strain phase-field model developed recently by the present authors is here extended beyond the limitations of purely viscous dissipation. The variational formulation, in which the evolution problem is formulated as a constrained minimization problem for a global rate-potential, is enhanced by including a mixed-type dissipation potential that combines viscous and rate-independent contributions. Effective computational treatment of the resulting incremental problem of non-smooth optimization is developed by employing the augmented Lagrangian method. It is demonstrated that a single Lagrange multiplier field suffices to handle the dissipation potential vertex and simultaneously to enforce physical constraints on the order parameter. In this way, the initially non-smooth problem of evolution is converted into a smooth stationarity problem. The model is implemented in a finite-element code and applied to solve two- and three-dimensional boundary value problems representative for shape memory alloys.
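The augmented Lagrangian device used above has a simple generic skeleton: the constrained minimization is replaced by a sequence of smooth unconstrained subproblems plus a multiplier update. A schematic stand-in (toy objective and constraint, not the phase-field functional) is:

```python
# Generic augmented Lagrangian iteration for  min f(x)  s.t.  g(x) = 0:
# repeatedly minimize f + lam*g + 0.5*rho*g^2, then update lam.
# Toy problem: min x0^2 + x1^2 s.t. x0 + x1 = 1, with solution (0.5, 0.5).
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: x[0] + x[1] - 1.0

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(10):
    L = lambda x: f(x) + lam * g(x) + 0.5 * rho * g(x) ** 2
    x = minimize(L, x).x          # smooth unconstrained subproblem
    lam += rho * g(x)             # multiplier update
print(np.round(x, 4), round(lam, 4))   # -> [0.5 0.5], lam -> -1.0
```

In the paper's setting the single multiplier field plays the role of `lam`, simultaneously handling the vertex of the dissipation potential and the physical bounds on the order parameter.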
Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.
Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun
2017-07-01
In recent years, taking photos and capturing videos with mobile devices has become increasingly popular. Emerging applications based on depth reconstruction techniques have been developed, such as Google Lens Blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to the unstable image quality and uncontrolled scene conditions in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. In particular, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching on a finer scale. The proposed framework has been evaluated with challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.
WaterNet:The NASA Water Cycle Solutions Network
NASA Astrophysics Data System (ADS)
Belvedere, D. R.; Houser, P. R.; Pozzi, W.; Imam, B.; Schiffer, R.; Schlosser, C. A.; Gupta, H.; Martinez, G.; Lopez, V.; Vorosmarty, C.; Fekete, B.; Matthews, D.; Lawford, R.; Welty, C.; Seck, A.
2008-12-01
Water is essential to life and directly impacts and constrains society's welfare, progress, and sustainable growth, and is continuously being transformed by climate change, erosion, pollution, and engineering. Projections of the effects of such factors will remain speculative until more effective global prediction systems and applications are implemented. NASA's unique role is to use its view from space to improve water and energy cycle monitoring and prediction, and has taken steps to collaborate and improve interoperability with existing networks and nodes of research organizations, operational agencies, science communities, and private industry. WaterNet is a Solutions Network, devoted to the identification and recommendation of candidate solutions that propose ways in which water-cycle related NASA research results can be skillfully applied by partner agencies, international organizations, state, and local governments. It is designed to improve and optimize the sustained ability of water cycle researchers, stakeholders, organizations and networks to interact, identify, harness, and extend NASA research results to augment Decision Support Tools that address national needs.
Generating High-Temporal and Spatial Resolution TIR Image Data
NASA Astrophysics Data System (ADS)
Herrero-Huerta, M.; Lagüela, S.; Alfieri, S. M.; Menenti, M.
2017-09-01
Remote sensing imagery to monitor global biophysical dynamics requires the availability of thermal infrared (TIR) data at high temporal and spatial resolution, because of the rapid development of crops during the growing season and the fragmentation of most agricultural landscapes. Conversely, no single sensor meets these combined requirements. Data fusion approaches offer an alternative that exploits observations from multiple sensors, providing data sets with better properties. A novel spatio-temporal data fusion model based on constrained algorithms, denoted the multisensor multiresolution technique (MMT), was developed and applied to generate TIR synthetic image data at both high temporal and high spatial resolution. First, an adaptive radiance model based on spectral unmixing analysis is applied: TIR radiance data at top of atmosphere (TOA) collected daily by MODIS at 1-km resolution and every 16 days by Landsat-TIRS sampled at 30-m resolution are used to generate synthetic daily radiance images at TOA at 30-m spatial resolution. The next step consists of unmixing the 30-m (now lower-resolution) images using information about their pixel land-cover composition from co-registered images at higher spatial resolution; in our case study, the TIR synthesized data were unmixed against the Sentinel-2 MSI at 10-m resolution. The constrained unmixing preserves all the available radiometric information of the 30-m images and involves the optimization of the number of land-cover classes and the size of the moving window for spatial unmixing. Results are still being evaluated, with particular attention to the quality of the data streams required to apply our approach.
Global Governance 2025: At a Critical Juncture
2010-09-01
them. Formal institutions remain largely unreformed and Western states probably must shoulder a disproportionate share of “global governance” as...might be in a better position but may still be fiscally constrained if its budgetary shortfalls and long-term debt problems remain unresolved...that undermine both the environment and investment. Reliance on domestic reserves of fossil fuels or long- term access to foreign fields makes
Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem
NASA Astrophysics Data System (ADS)
Chen, Wei
2015-07-01
In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.
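The MABC modifications are specific to the cardinality and quantity constraints above, but the skeleton they extend is the standard artificial bee colony loop; a bare-bones, bound-constrained version on a toy objective (all parameters invented) looks like this:

```python
# Bare-bones artificial bee colony for a bound-constrained toy objective
# (the sphere function). Employed and onlooker phases are folded into one
# simplified pass; exhausted food sources are abandoned by scouts.
import numpy as np

rng = np.random.default_rng(1)
dim, n_food, limit, iters = 5, 10, 20, 200
lo, hi = -5.0, 5.0
obj = lambda x: np.sum(x ** 2)

food = rng.uniform(lo, hi, (n_food, dim))   # food sources = candidate solutions
fit = np.array([obj(x) for x in food])
trials = np.zeros(n_food, dtype=int)

def neighbor(i):
    """Perturb one dimension of source i toward/away from a random peer."""
    k = rng.integers(n_food)
    j = rng.integers(dim)
    x = food[i].copy()
    x[j] += rng.uniform(-1, 1) * (food[i][j] - food[k][j])
    return np.clip(x, lo, hi)

for _ in range(iters):
    probs = fit.max() - fit + 1e-12          # onlookers favor good sources
    probs /= probs.sum()
    for i in list(range(n_food)) + list(rng.choice(n_food, n_food, p=probs)):
        x = neighbor(i)
        fx = obj(x)
        if fx < fit[i]:
            food[i], fit[i], trials[i] = x, fx, 0
        else:
            trials[i] += 1
    for i in np.flatnonzero(trials > limit): # scout phase: restart exhausted
        food[i] = rng.uniform(lo, hi, dim)
        fit[i], trials[i] = obj(food[i]), 0

print(round(fit.min(), 4))   # near 0, the global minimum of the sphere
```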
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled-diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.
Alpine hydropower in a low carbon economy: Assessing the local implication of global policies
NASA Astrophysics Data System (ADS)
Anghileri, Daniela; Castelletti, Andrea; Burlando, Paolo
2016-04-01
In the global transition towards a more efficient and low-carbon economy, renewable energy plays a major role in displacing fossil fuels, meeting global energy demand while reducing carbon dioxide emissions. In Europe, Variable Renewable Sources (VRS), such as wind and solar power, are becoming a relevant share of the generation portfolios in many countries. Besides the indisputable social and environmental advantages of VRS, in the short to medium term the lower energy prices and higher price volatility induced by VRS might challenge traditional power sources and, among them, hydropower production, because of smaller incomes and higher maintenance costs associated with a more flexible operation of power systems. In this study, we focus on the Swiss hydropower sector, analysing how different low-carbon targets and strategies established at the Swiss and European level might affect energy price formation and thus impact - through hydropower operation - water availability and ecosystem services at the catchment scale. We combine a hydrological model to simulate future water availability and an electricity market model to simulate the future evolution of energy prices based on official Swiss and European energy roadmaps and CO2 price trends in the European Union. We use multi-objective optimization techniques to design alternative hydropower reservoir operation strategies, aiming to maximise the hydropower companies' income or to provide reliable energy supply with respect to the energy demand. This integrated model allows analysing to which extent global low-carbon policies impact reservoir operation at the local scale, and provides insight on how to prioritise compensation measures and/or adaptation strategies to mitigate the impact of VRS on hydropower companies in increasingly water-constrained settings. Numerical results are shown for a real-world case study in the Swiss Alps.
An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane
NASA Technical Reports Server (NTRS)
Lu, Ping
1992-01-01
The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.
NASA Technical Reports Server (NTRS)
Sauer, Carl G., Jr.
1989-01-01
A patched-conic trajectory optimization program, MIDAS, is described that was developed to investigate a wide variety of complex ballistic heliocentric transfer trajectories. MIDAS includes the capability of optimizing trajectory event times such as departure date, arrival date, and intermediate planetary flyby dates, and is able to both add and delete deep-space maneuvers when dictated by the optimization process. Both powered and unpowered flyby or gravity-assist trajectories of intermediate bodies can be handled, and capability is included to optimize trajectories having a rendezvous with an intermediate body, such as for a sample return mission. Capability is included in the optimization process to constrain launch energy and launch vehicle parking orbit parameters.
Regional reanalysis without local data: Exploiting the downscaling paradigm
NASA Astrophysics Data System (ADS)
von Storch, Hans; Feser, Frauke; Geyer, Beate; Klehmet, Katharina; Li, Delei; Rockel, Burkhardt; Schubert-Frisius, Martina; Tim, Nele; Zorita, Eduardo
2017-08-01
This paper demonstrates two important aspects of regional dynamical downscaling of multidecadal atmospheric reanalyses. First, in this way skillful regional descriptions of multidecadal climate variability may be constructed in regions with little or no local data. Second, the concept of large-scale constraining allows global downscaling, so that global reanalyses may be completed by the addition of consistent detail in all regions of the world. Global reanalyses suffer from inhomogeneities. However, their large-scale components are mostly homogeneous; therefore, the concept of downscaling may be applied to homogeneously complement the large-scale state of the reanalyses with regional detail, wherever the condition of homogeneity of the description of large scales is fulfilled. Technically, this can be done by dynamical downscaling using a regional or global climate model whose large scales are constrained by spectral nudging. This approach has been developed and tested for the region of Europe, and a skillful representation of regional weather risks, in particular marine risks, was identified. We have run this system in regions with reduced or absent local data coverage, such as Central Siberia, the Bohai and Yellow Sea, Southwestern Africa, and the South Atlantic. Also, a global simulation was computed, which adds regional features to prescribed global dynamics. Our cases demonstrate that spatially detailed reconstructions of the climate state and its change in the recent three to six decades add useful supplementary information to existing observational data for midlatitude and subtropical regions of the world.
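For reference, spectral nudging is commonly written as an extra relaxation term acting only on the large-scale spectral components of the model state; the following is a standard textbook form, not necessarily the exact coefficients used in this work:

```latex
% Spectral nudging: the model tendency M(psi) is supplemented by a
% relaxation of the large-scale (|k| <= k_c) spectral coefficients toward
% those of the driving reanalysis psi^ana.
\frac{\partial \psi}{\partial t} = \mathcal{M}(\psi)
  - \sum_{|\mathbf{k}| \le k_c} \eta_{\mathbf{k}}
    \left( \hat{\psi}_{\mathbf{k}} - \hat{\psi}^{\mathrm{ana}}_{\mathbf{k}} \right)
    e^{\, i\,\mathbf{k}\cdot\mathbf{x}}
```

Because only wavenumbers below the cutoff $k_c$ are relaxed, the model remains free to develop its own small-scale detail while staying consistent with the homogeneous large-scale state of the reanalysis.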
NASA Technical Reports Server (NTRS)
Spangelo, Sara
2015-01-01
The goal of this paper is to explore the mission opportunities that are uniquely enabled by U-class Solar Electric Propulsion (SEP) technologies. Small SEP thrusters offer significant advantages relative to existing technologies and will revolutionize the class of mission architectures that small spacecraft can accomplish, by enabling trajectory maneuvers with significant change-in-velocity requirements and reaction-wheel-free attitude control. This paper aims to develop and apply a common system-level modeling framework to evaluate these thrusters for relevant upcoming mission scenarios, taking into account the mass, power, volume, and operational constraints of small, highly constrained missions. We will identify the optimal technology for broad classes of mission applications for different U-class spacecraft sizes and provide insights into what constrains system performance, to identify technology areas where improvements are needed.
NASA Technical Reports Server (NTRS)
Rogers, J. L.; Barthelemy, J.-F. M.
1986-01-01
An expert system called EXADS has been developed to aid users of the Automated Design Synthesis (ADS) general-purpose optimization program. ADS has approximately 100 combinations of strategy, optimizer, and one-dimensional search options from which to choose. It is difficult for a nonexpert to make this choice. This expert system aids the user in choosing the best combination of options based on the user's knowledge of the problem and the expert knowledge stored in the knowledge base. The knowledge base is divided into three categories: constrained problems, unconstrained problems, and constrained problems being treated as unconstrained problems. The inference engine and rules are written in LISP; the system contains about 200 rules and executes on DEC-VAX (with Franz-LISP) and IBM PC (with IQ-LISP) computers.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
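The algorithmic skeleton underneath such methods is a fixed-point iteration that alternates two projection/proximity operators; the toy version below alternates projections onto a hyperplane and the nonnegative orthant, standing in for the data-fit and constraint steps of the full method (it is not PAPA itself):

```python
# Alternating projections onto two convex sets: the hyperplane {x: a.x = 1}
# and the nonnegative orthant. The iteration converges to a point in their
# intersection, illustrating the fixed-point structure described above.
import numpy as np

a = np.array([1.0, 2.0, -1.0])
x = np.array([-3.0, 0.5, 2.0])
for _ in range(100):
    x = x + (1.0 - a @ x) / (a @ a) * a   # project onto the hyperplane
    x = np.maximum(x, 0.0)                # project onto the orthant
print(np.round(x, 4), round(float(a @ x), 4))   # nonneg, with a.x ~= 1
```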
A comparison of optimization algorithms for localized in vivo B0 shimming.
Nassirpour, Sahar; Chang, Paul; Fillmer, Ariane; Henning, Anke
2018-02-01
To compare several different optimization algorithms currently used for localized in vivo B0 shimming, and to introduce a novel, fast, and robust constrained regularized algorithm (ConsTru) for this purpose. Ten different optimization algorithms (including samples from both generic and dedicated least-squares solvers, and a novel constrained regularized inversion method) were implemented and compared for shimming in five different shimming volumes on 66 in vivo data sets from both 7 T and 9.4 T. The best algorithm was chosen to perform single-voxel spectroscopy at 9.4 T in the frontal cortex of the brain on 10 volunteers. The results of the performance tests showed that a shimming algorithm is prone to unstable solutions if it depends on the value of a starting point and is not regularized to handle ill-conditioned problems. The ConsTru algorithm proved to be the most robust, fast, and efficient of all the algorithms considered. It enabled acquisition of spectra of reproducibly high quality in the frontal cortex at 9.4 T. For localized in vivo B0 shimming, the use of a dedicated linear least-squares solver instead of a generic nonlinear one is highly recommended. Among all of the linear solvers, the constrained regularized method (ConsTru) was found to be both fast and most robust. Magn Reson Med 79:1145-1156, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
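A schematic of a constrained, regularized linear shim inversion in the spirit described above (the exact ConsTru algorithm is not specified in the abstract, and all data below are synthetic): find shim currents minimizing the residual field energy plus a Tikhonov term, subject to hardware current limits:

```python
# Constrained regularized shim inversion sketch:
#   min ||A x + b||^2 + alpha*||x||^2   s.t.   -5 <= x_i <= 5,
# solved with a dedicated linear least-squares solver by folding the
# Tikhonov term into an augmented system.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)
n_vox, n_coils, alpha = 500, 8, 0.01
A = rng.normal(size=(n_vox, n_coils))      # shim-coil field maps (Hz/A)
b = rng.normal(size=n_vox)                 # measured B0 inhomogeneity (Hz)

A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n_coils)])
b_aug = np.concatenate([b, np.zeros(n_coils)])

res = lsq_linear(A_aug, -b_aug, bounds=(-5.0, 5.0))   # +/- 5 A current limits
print(np.round(res.x, 3), round(float(np.std(A @ res.x + b)), 3))
```

The regularization term is what guards against the ill-conditioned, starting-point-sensitive behavior the comparison above warns about.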
Constrained Null Space Component Analysis for Semiblind Source Separation Problem.
Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn
2018-02-01
The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.
Image coding using entropy-constrained residual vector quantization
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Barnes, Christopher F.
1993-01-01
The residual vector quantization (RVQ) structure is exploited to produce a variable-length-codeword RVQ. Necessary conditions for the optimality of this RVQ are presented, and a new entropy-constrained RVQ (EC-RVQ) design algorithm is shown to be very effective in designing RVQ codebooks over a wide range of bit rates and vector sizes. The new EC-RVQ has several important advantages. It can outperform entropy-constrained VQ (ECVQ) in terms of peak signal-to-noise ratio (PSNR), memory, and computation requirements. It can also be used to design high-rate codebooks and codebooks with relatively large vector sizes. Experimental results indicate that when the new EC-RVQ is applied to image coding, very high quality is achieved at relatively low bit rates.
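The selection rule at the heart of entropy-constrained quantizer designs is to pick the codeword minimizing distortion plus a Lagrangian rate penalty; a toy one-dimensional illustration (codebook, lengths, and multiplier invented):

```python
# Entropy-constrained codeword selection: minimize distortion + lam * rate,
# so a short codeword can win even when it is not the nearest neighbor.
import numpy as np

codebook = np.array([-2.0, 0.0, 2.0])
lengths = np.array([3, 1, 3])         # variable codeword lengths in bits
lam = 0.5                             # rate-distortion trade-off multiplier

def encode(x):
    cost = (x - codebook) ** 2 + lam * lengths
    return int(np.argmin(cost))

# Nearest codeword to 1.2 is 2.0, but once rate is priced in, the cheap
# one-bit code for 0.0 wins: index 1 is returned.
print(encode(1.2))
```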
MacBean, Natasha; Maignan, Fabienne; Bacour, Cédric; Lewis, Philip; Peylin, Philippe; Guanter, Luis; Köhler, Philipp; Gómez-Dans, Jose; Disney, Mathias
2018-01-31
Accurate terrestrial biosphere model (TBM) simulations of gross carbon uptake (gross primary productivity - GPP) are essential for reliable future terrestrial carbon sink projections. However, uncertainties in TBM GPP estimates remain. Newly available satellite-derived sun-induced chlorophyll fluorescence (SIF) data offer a promising direction for addressing this issue by constraining regional-to-global scale modelled GPP. Here, we use monthly 0.5° GOME-2 SIF data from 2007 to 2011 to optimise GPP parameters of the ORCHIDEE TBM. The optimisation reduces GPP magnitude across all vegetation types except C4 plants. Global mean annual GPP therefore decreases from 194 ± 57 PgC yr-1 to 166 ± 10 PgC yr-1, bringing the model more in line with an up-scaled flux tower estimate of 133 PgC yr-1. The strongest reductions in GPP are seen in boreal forests: the result is a shift in the global GPP distribution, with a ~50% increase in the tropical-to-boreal productivity ratio. The optimisation resulted in a greater reduction in GPP than similar ORCHIDEE parameter optimisation studies using satellite-derived NDVI from MODIS and eddy covariance measurements of net CO2 fluxes from the FLUXNET network. Our study shows that SIF data will be instrumental in constraining TBM GPP estimates, with a consequent improvement in global carbon cycle projections.
Airborne Remote sensing of the OH tropospheric column with an Integrated Path Differential LIDAR.
NASA Astrophysics Data System (ADS)
Hanisco, T. F.; Liang, Q.; Nicely, J. M.; Brune, W. H.; Miller, D. O.; Thames, A. B.
2017-12-01
The hydroxyl radical, OH, is central to the photochemistry that controls tropospheric oxidation, including the removal of atmospheric methane. Measurements of this important species are thus critical for testing our understanding and for constraining model results. Until now, tropospheric measurements have been limited to airborne or ground-based in situ instruments best suited to testing photochemical box models. However, because of the growing recognition of the importance of the global methane abundance, there is a growing need to better quantify OH at the regional to global scales that are best sampled with airborne or space-based remote sensing instruments. To address this need, we have developed an instrument concept and have begun work on a laser transmitter for an airborne integrated path differential absorption LIDAR for the detection of OH. We will describe the instrument and present the expected performance characteristics. As a demonstration, we will use measurements from the recent ATOM-1 NASA airborne campaign to show that measured OH columns can be used to constrain regional and global models.
Climate mitigation: sustainable preferences and cumulative carbon
NASA Astrophysics Data System (ADS)
Buckle, Simon
2010-05-01
We develop a stylized AK growth model with both climate damages to ecosystem goods and services and sustainable preferences that allow trade-offs between present discounted utility and long-run climate damages. The simplicity of the model permits analytical solutions. Concern for the long-term provides a strong driver for mitigation action. One plausible specification of sustainable preferences leads to the result that, for a range of initial parameter values, an optimizing agent would choose a level of cumulative carbon dioxide (CO2) emissions independent of initial production capital endowment and CO2 levels. There is no technological change so, for economies with sufficiently high initial capital and CO2 endowments, optimal mitigation will lead to disinvestment. For lower values of initial capital and/or CO2 levels, positive investment can be optimal, but still within the same overall level of cumulative emissions. One striking aspect of the model is the complexity of possible outcomes, in addition to these optimal solutions. We also identify a resource constrained region and several regions where climate damages exceed resources available for consumption. Other specifications of sustainable preferences are discussed, as is the case of a hard constraint on long-run damages. Scientists are currently highlighting the potential importance of the cumulative carbon emissions concept as a robust yet flexible target for climate policymakers. This paper shows that it also has an ethical interpretation: it embodies an implicit trade off in global welfare between present discounted welfare and long-term climate damages. We hope that further development of the ideas presented here might contribute to the research and policy debate on the critical areas of intra- and intergenerational welfare.
NASA Astrophysics Data System (ADS)
Wang, C.; Gordon, R. G.; Zheng, L.
2016-12-01
Hotspot tracks are widely used to estimate the absolute velocities of plates, i.e., relative to the lower mantle. Knowledge of current motion between hotspots is important for both plate kinematics and mantle dynamics and informs the discussion on the origin of the Hawaiian-Emperor Bend. Following Morgan & Morgan (2007), we focus only on the trends of young hotspot tracks and omit volcanic propagation rates. The dispersion of the trends can be partitioned into between-plate and within-plate dispersion. Applying the method of Gripp & Gordon (2002) to the hotspot trend data set of Morgan & Morgan (2007) constrained to the MORVEL relative plate angular velocities (DeMets et al., 2010) results in a standard deviation of the 56 hotspot trends of 22°. The largest angular misfits tend to occur on the slowest moving plates. Alternatively, estimation of best-fitting poles to hotspot tracks on the nine individual plates results in a standard deviation of trends of only 13°, a statistically significant reduction from the introduction of 15 additional adjustable parameters. If all of the between-plate misfit is due to motion of groups of hotspots (beneath different plates), nominal velocities relative to the mean hotspot reference frame range from 1 to 4 mm/yr, with the lower bounds ranging from 1 to 3 mm/yr and the greatest upper bound being 8 mm/yr. These are consistent with bounds on motion between Pacific and Indo-Atlantic hotspots over the past ≈50 Ma, which range from zero (lower bound) to 8 to 13 mm/yr (upper bounds) (Koivisto et al., 2014). We also determine HS4-MORVEL, a new global set of plate angular velocities relative to the hotspots constrained to consistency with the MORVEL relative plate angular velocities, using a two-tier analysis similar to that used by Zheng et al. (2014) to estimate the SKS-MORVEL global set of absolute plate velocities fit to the orientation of seismic anisotropy. We find that the 95% confidence limits of HS4-MORVEL and SKS-MORVEL overlap substantially and that the two sets of angular velocities differ insignificantly. Thus we combine the two sets of angular velocities to estimate ABS-MORVEL, an optimal set of global angular velocities consistent with both hotspot tracks and seismic anisotropy. ABS-MORVEL has more compact confidence limits than either SKS-MORVEL or HS4-MORVEL.
Correction method for stripe nonuniformity.
Qian, Weixian; Chen, Qian; Gu, Guohua; Guan, Zhiqiang
2010-04-01
Stripe nonuniformity is very typical in line infrared focal plane arrays (IR-FPA) and uncooled staring IR-FPA. In this paper, the mechanism of stripe nonuniformity is analyzed, and gray-scale co-occurrence matrix theory and optimization theory are studied. Through these efforts, the stripe nonuniformity correction problem is translated into an optimization problem, whose goal is to minimize the energy of the image's line gradient. After solving the constrained nonlinear optimization equation, the parameters of the stripe nonuniformity correction are obtained and the correction is achieved. The experiments indicate that this algorithm is effective and efficient.
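One simple instance of the idea above, not the authors' exact algorithm: model the stripes as a per-column offset and choose offsets that null the horizontal gradient energy, using median column-to-column differences for robustness to scene content. All data are synthetic:

```python
# Destriping sketch: estimate a per-column offset o[j] minimizing the
# energy of horizontal gradients, then subtract it from the image.
import numpy as np

rng = np.random.default_rng(3)
scene = np.outer(np.linspace(0, 1, 64), np.ones(64)) * 100   # smooth scene
stripes = rng.normal(0, 5, size=64)                          # column offsets
img = scene + stripes[None, :]

d = np.median(np.diff(img, axis=1), axis=0)    # robust column differences
offset = np.concatenate([[0.0], np.cumsum(d)]) # integrate to column offsets
offset -= offset.mean()                        # fix the free constant
corrected = img - offset[None, :]

print(round(float(np.std(np.diff(corrected, axis=1))), 3))  # residual ~0
```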
Constrained trajectory optimization for kinematically redundant arms
NASA Technical Reports Server (NTRS)
Carignan, Craig R.; Tarrant, Janice M.
1990-01-01
Two velocity optimization schemes for resolving redundant joint configurations are compared. The Extended Moore-Penrose Technique minimizes the joint velocities and avoids obstacles indirectly by adjoining a cost gradient to the solution. A new method can incorporate inequality constraints directly to avoid obstacles and singularities in the workspace. A four-link arm example is used to illustrate singularity avoidance while tracking desired end-effector paths.
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping
2018-01-01
An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), the mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal not only with nonlinearities in the objective function, but also with uncertainties presented as discrete intervals in the objective function, variables and left-hand-side constraints, and fuzziness in the right-hand-side constraints. Moreover, this model improves upon conventional fuzzy chance-constrained programming by introducing a linear combination of the possibility measure and the necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of the Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions for the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by specifying different confidence levels and preference parameters. The model can also reflect interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical applications. These results can provide a more reliable scientific basis for supporting irrigation water management in arid areas.
Ou, Guoliang; Tan, Shukui; Zhou, Min; Lu, Shasha; Tao, Yinghui; Zhang, Zuo; Zhang, Lu; Yan, Danping; Guan, Xingliang; Wu, Gang
2017-12-15
An interval chance-constrained fuzzy land-use allocation (ICCF-LUA) model is proposed in this study to support solving land resource management problems associated with various environmental and ecological constraints at the watershed level. The ICCF-LUA model is based on the ICCF (interval chance-constrained fuzzy) model, which couples an interval mathematical model, a chance-constrained programming model and a fuzzy linear programming model, and can be used to deal with uncertainties expressed as intervals, probabilities and fuzzy sets. Therefore, the ICCF-LUA model can reflect the tradeoff between decision makers and land stakeholders, and the tradeoff between economic benefits and eco-environmental demands. The ICCF-LUA model has been applied to the land-use allocation of the Wujiang watershed, Guizhou Province, China. The results indicate that under highly land-suitable conditions, the optimized areas of cultivated land, forest land, grass land, construction land, water land, unused land and landfill in the Wujiang watershed will be [5015, 5648] hm2, [7841, 7965] hm2, [1980, 2056] hm2, [914, 1423] hm2, [70, 90] hm2, [50, 70] hm2 and [3.2, 4.3] hm2 respectively, and the corresponding system economic benefit will be between 6831 and 7219 billion yuan. Consequently, the ICCF-LUA model can effectively support optimized land-use allocation under various complicated conditions which include uncertainties, risks, economic objectives and eco-environmental constraints. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Lu, Yanrong; Liao, Fucheng; Deng, Jiamei; Liu, Huiyang
2017-09-01
This paper investigates the cooperative global optimal preview tracking problem of linear multi-agent systems under the assumption that the output of a leader is a previewable periodic signal and the topology graph contains a directed spanning tree. First, a type of distributed internal model is introduced, and the cooperative preview tracking problem is converted to a global optimal regulation problem of an augmented system. Second, an optimal controller, which guarantees the asymptotic stability of the augmented system, is obtained by means of standard linear quadratic optimal preview control theory. Third, on the basis of proving the existence conditions of the controller, sufficient conditions are given for the original problem to be solvable, and a cooperative global optimal controller with error-integral and preview compensation is derived. Finally, the validity of the theoretical results is demonstrated by a numerical simulation.
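The second step above is ordinary LQ design on the augmented system; a sketch of that step alone, assuming the augmentation (plant state, error integral, previewed reference samples) has already been stacked into A and B (the matrices below are hypothetical):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete-time LQR gain K = (R + B'PB)^(-1) B'PA via the Riccati equation.

    In preview control, A and B describe the augmented system; the optimal
    preview compensator then falls out of standard LQ theory.
    """
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Hypothetical 2-state augmented system, for illustration only
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = lqr_gain(A, B, Q=np.eye(2), R=np.array([[1.0]]))  # control law u = -K @ x_aug
```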
Dispositional optimism and sleep quality: a test of mediating pathways
Cribbet, Matthew; Kent de Grey, Robert G.; Cronan, Sierra; Trettevik, Ryan; Smith, Timothy W.
2016-01-01
Dispositional optimism has been related to beneficial influences on physical health outcomes. However, its links to global sleep quality and the psychological mediators responsible for such associations are less studied. This study thus examined if trait optimism predicted global sleep quality, and if measures of subjective well-being were statistical mediators of such links. A community sample of 175 participants (93 men, 82 women) completed measures of trait optimism, depression, and life satisfaction. Global sleep quality was assessed using the Pittsburgh Sleep Quality Index. Results indicated that trait optimism was a strong predictor of better PSQI global sleep quality. Moreover, this association was mediated by depression and life satisfaction in both single and multiple mediator models. These results highlight the importance of optimism for the restorative process of sleep, as well as the utility of multiple mediator models in testing distinct psychological pathways. PMID:27592128
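The single-mediator test the study relies on reduces to two regressions and a bootstrap of the product of paths; a sketch on synthetic data with invented effect sizes (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 175                                             # matches the study's sample size
optimism = rng.normal(size=n)
depression = -0.5 * optimism + rng.normal(size=n)   # hypothetical a-path
sleep_quality = -0.4 * depression + 0.1 * optimism + rng.normal(size=n)

def ols(y, *xs):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

def indirect(idx):
    a = ols(depression[idx], optimism[idx])[1]                       # X -> M
    b = ols(sleep_quality[idx], depression[idx], optimism[idx])[1]   # M -> Y given X
    return a * b

boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # mediation if CI excludes 0
```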
Xia, Yangkun; Fu, Zhuo; Pan, Lijun; Duan, Fenghua
2018-01-01
The vehicle routing problem (VRP) has a wide range of applications in the field of logistics distribution. In order to reduce the cost of logistics distribution, the distance-constrained and capacitated VRP with split deliveries by order (DCVRPSDO) is studied. Unlike the classical VRP model, in which customer demand cannot be split, demand here may be split into discrete deliveries by order. A double-objective programming model is constructed, taking the minimum number of vehicles used as the first objective and the minimum vehicle traveling cost as the second. The model carries a series of constraints: a single depot, a single vehicle type, distance and load-capacity limits, and split delivery by order. DCVRPSDO is a new type of VRP. A new tabu search algorithm is designed to solve the problem, and test examples show the efficiency of the proposed algorithm. This paper focuses on constructing the double-objective mathematical programming model for DCVRPSDO and on designing an adaptive tabu search algorithm (ATSA) that solves it well. The performance of the ATSA is improved by adding several strategies to the search process: (a) a strategy of discrete split deliveries by order is used to split the customer demand; (b) a multi-neighborhood structure is designed to enhance the ability of global optimization; (c) two levels of evaluation objectives are set to select the current solution and the best solution; (d) a discriminating strategy, under which the best solution must be feasible while the current solution may be infeasible, helps to balance solution quality against the diversity of the neighborhood; (e) an adaptive penalty mechanism drives candidate solutions toward the feasible region; and (f) a tabu-releasing strategy moves the current solution into a new neighborhood of a better solution.
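A skeleton of a tabu search with the two-level evaluation described in (c) and (d) might look as follows; all helper callables are assumptions supplied by the caller, and this is a sketch rather than the authors' ATSA:

```python
def tabu_search(init, neighbours, penalised_cost, is_feasible,
                iters=500, tenure=15):
    """Tabu-search skeleton with two evaluation levels.

    The current solution is ranked by a penalised cost, so it may wander
    through infeasible territory, while the incumbent best is only ever
    updated by feasible solutions. `neighbours(s)` yields (solution, move)
    pairs; `penalised_cost` folds the adaptive infeasibility penalty into
    the objective.
    """
    current, best = init, init
    tabu = {}                                   # move -> iteration when it expires
    for it in range(iters):
        candidates = sorted(neighbours(current),
                            key=lambda sm: penalised_cost(sm[0]))
        for sol, move in candidates:
            aspiration = is_feasible(sol) and \
                penalised_cost(sol) < penalised_cost(best)
            if tabu.get(move, -1) < it or aspiration:
                current = sol
                tabu[move] = it + tenure        # forbid reversing the move
                break
        if is_feasible(current) and penalised_cost(current) < penalised_cost(best):
            best = current                      # best must stay feasible
    return best
```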
Orbit design and optimization based on global telecommunication performance metrics
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Lee, Charles H.; Kerridge, Stuart; Cheung, Kar-Ming; Edwards, Charles D.
2006-01-01
The orbit selection of telecommunications orbiters is one of the critical design processes and should be guided by global telecom performance metrics and mission-specific constraints. To aid orbit selection, we have coupled the Telecom Orbit Analysis and Simulation Tool (TOAST) with genetic optimization algorithms. As a demonstration, we have applied the resulting tool to select an optimal orbit for a general Mars telecommunications orbiter, with the constraint of being a frozen orbit. While a typical optimization goal is to minimize telecommunications downtime, several relevant performance metrics are examined: 1) area-weighted average gap time, 2) global maximum of local maximum gap time, and 3) global maximum of local minimum gap time. Optimal solutions are found with each of the metrics. Features common to and distinguishing the optimal solutions, as well as the advantages and disadvantages of each metric, are presented. The optimal solutions are compared with several candidate orbits that were considered during the development of the Mars Telecommunications Orbiter.
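The coupling is generic: the simulator scores an orbit, and the genetic algorithm evolves orbital-element vectors against that score. A skeleton of such a loop, with `fitness` standing in for a TOAST-style gap-time evaluation (all names are placeholders, not the project's code):

```python
import random

def genetic_optimise(fitness, bounds, pop=40, gens=100, mut=0.1):
    """Minimal real-coded GA minimising `fitness`.

    Each individual is a vector of orbit elements (e.g. a, e, i, RAAN)
    bounded by `bounds`; `fitness` would wrap the coverage simulation,
    e.g. area-weighted average gap time.
    """
    P = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)                     # lower gap time is better
        elite = P[: pop // 2]                   # truncation selection
        children = []
        while len(children) < pop - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]              # crossover
            child = [min(hi, max(lo, g + random.gauss(0, mut * (hi - lo))))
                     for g, (lo, hi) in zip(child, bounds)]          # mutation
            children.append(child)
        P = elite + children
    return min(P, key=fitness)
```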
Transport and percolation in complex networks
NASA Astrophysics Data System (ADS)
Li, Guanliang
To design complex networks with optimal transport properties such as flow efficiency, we consider three approaches to understanding transport and percolation in complex networks. We analyze the effects of randomizing the strengths of connections, randomly adding long-range connections to regular lattices, and percolation of spatially constrained networks. Various real-world networks have links that are differentiated in terms of their strength, intensity, or capacity. We study the distribution P(σ) of the equivalent conductance for Erdős–Rényi (ER) and scale-free (SF) weighted resistor networks with N nodes, in which each link is assigned a conductance σᵢ ≡ e^(−a·xᵢ), where xᵢ is a random variable with 0 < xᵢ < 1. We find, both analytically and numerically, that P(σ) for ER networks exhibits two regimes: (i) for σ < e^(−a·p_c), P(σ) is independent of N and scales as a power law, P(σ) ∼ σ^(k/a − 1), where p_c = 1/k is the percolation threshold and k the average degree of the network.
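P(σ) can be sampled numerically from the weighted graph Laplacian, whose pseudoinverse gives the two-point resistance R_ij = L⁺_ii + L⁺_jj − 2·L⁺_ij; a small sketch with arbitrarily chosen parameters:

```python
import numpy as np
import networkx as nx

a, N, p = 10.0, 200, 0.03                       # illustrative parameter values
rng = np.random.default_rng(1)
G = nx.erdos_renyi_graph(N, p, seed=1)
for u, v in G.edges:
    G[u][v]["weight"] = np.exp(-a * rng.random())   # sigma_i = e^(-a x_i)

L = nx.laplacian_matrix(G, weight="weight").toarray()
Lp = np.linalg.pinv(L)                          # pseudoinverse of the weighted Laplacian
i, j = sorted(max(nx.connected_components(G), key=len))[:2]
R_ij = Lp[i, i] + Lp[j, j] - 2 * Lp[i, j]       # two-point resistance
print("equivalent conductance between i and j:", 1.0 / R_ij)
```

Repeating this over many node pairs and network realizations builds up the histogram of equivalent conductances whose two regimes the abstract describes.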
Kesselman, Andrew; Soroosh, Garshasb; Mollura, Daniel J
2016-09-01
Radiology in low- and middle-income (developing) countries continues to make progress. Research and international outreach projects presented at the 2015 annual RAD-AID conference emphasize important global themes, including (1) recent slowing of emerging market growth that threatens to constrain the advance of radiology, (2) increasing global noncommunicable diseases (such as cancer and cardiovascular disease) needing radiology for detection and management, (3) strategic prioritization for pediatric radiology in global public health initiatives, (4) continuous expansion of global health curricula at radiology residencies and the RAD-AID Chapter Network's participating institutions, and (5) technologic innovation for recently accelerated implementation of PACS in low-resource countries.
A Measure Approximation for Distributionally Robust PDE-Constrained Optimization Problems
Kouri, Drew Philip
2017-12-19
In numerous applications, scientists and engineers acquire varied forms of data that partially characterize the inputs to an underlying physical system. These data are then used to inform decisions such as controls and designs. Consequently, it is critical that the resulting control or design be robust to the inherent uncertainties associated with the unknown probabilistic characterization of the model inputs. In this work, we consider optimal control and design problems constrained by partial differential equations with uncertain inputs. We do not assume a known probabilistic model for the inputs; rather, we formulate the problem as a distributionally robust optimization problem in which the outer minimization determines the control or design, while the inner maximization determines the worst-case probability measure that matches desired characteristics of the data. We analyze the inner maximization problem in the space of measures and introduce a novel measure approximation technique, based on the approximation of continuous functions, to discretize the unknown probability measure. Finally, we prove consistency of our approximated min-max problem and conclude with numerical results.
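On a fixed scenario grid, the inner maximization becomes a linear program over the weights of the discretized measure; a crude sketch of that step under illustrative moment-matching constraints (a stand-in for, not a reproduction of, the report's scheme):

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_measure(J_samples, moment_rows, moment_targets):
    """Worst-case discrete measure on a scenario grid.

    Picks weights w_k >= 0 with sum w_k = 1 that maximise the expected
    objective sum_k w_k J_k while matching the given moments of the data.
    """
    A_eq = np.vstack([np.ones_like(J_samples), moment_rows])
    b_eq = np.concatenate([[1.0], np.atleast_1d(moment_targets)])
    res = linprog(-J_samples, A_eq=A_eq, b_eq=b_eq,      # linprog minimises
                  bounds=[(0, None)] * len(J_samples))
    return res.x

# Toy use: scalar uncertain input on a grid, matching only its sample mean
xs = np.linspace(0.0, 1.0, 21)        # scenario grid
J = (xs - 0.3) ** 2                   # objective per scenario, for a fixed control
w_worst = worst_case_measure(J, xs[None, :], [0.5])
```

The outer problem would then adjust the control to reduce the worst-case expectation, alternating with this inner solve.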
Gao, Yuan; Zhou, Weigui; Ao, Hong; Chu, Jian; Zhou, Quan; Zhou, Bo; Wang, Kang; Li, Yi; Xue, Peng
2016-01-01
With increasing demands for higher transmission speed and robust quality of service (QoS), the capacity-constrained backhaul gradually becomes a bottleneck in cooperative wireless networks, e.g., in Internet of Things (IoT) scenarios under the joint processing mode of LTE-Advanced Pro. This paper focuses on resource allocation under a capacity-constrained backhaul in uplink cooperative wireless networks, where two single-antenna base stations (BSs) serve multiple single-antenna users via multi-carrier transmission. We propose a novel cooperative transmission scheme based on compress-and-forward with user pairing to solve the joint mixed-integer programming problem. To maximize the system capacity under the limited backhaul, we formulate the joint optimization problem of user sorting, subcarrier mapping, and backhaul resource sharing among different pairs (subcarriers for users). A robust and efficient centralized algorithm based on an alternating optimization strategy and perfect mapping is proposed. Simulations show that the method improves the system capacity significantly under the backhaul resource constraint compared with blind alternatives. PMID:27077865
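One half of an alternating scheme of this kind is re-pairing users to subcarriers with the backhaul shares held fixed, which is an assignment problem; a sketch (not the authors' algorithm) using the Hungarian method:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_users(rate_matrix):
    """Re-map users to subcarriers for fixed backhaul shares.

    rate_matrix[u, s] is the achievable rate of user u on subcarrier s
    under the current allocation; the Hungarian algorithm plays the role
    of a 'perfect mapping' by maximising the total rate.
    """
    users, subcarriers = linear_sum_assignment(-rate_matrix)  # negate to maximise
    return list(zip(users, subcarriers))
```

The full method would alternate this step with re-optimising the backhaul resource shares for the fixed mapping, stopping when the system capacity no longer improves.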
An approximation function for frequency constrained structural optimization
NASA Technical Reports Server (NTRS)
Canfield, R. A.
1989-01-01
The purpose is to examine a function for approximating natural frequency constraints during structural optimization. The nonlinearity of frequencies has posed a barrier to constructing approximations of frequency constraints of high enough quality to permit efficient solutions. A new function to represent frequency constraints, called the Rayleigh Quotient Approximation (RQA), is presented. Its ability to represent the actual frequency constraint results in stable convergence with effectively no move limits. The objective of the optimization problem is to minimize structural weight subject to a minimum (or maximum) allowable frequency, and perhaps to other constraints such as stress, displacement, and gage size as well. One reason for constraining natural frequencies during design is to avoid potential resonant frequencies due to machinery or actuators on the structure. Another is to satisfy the requirements of an aircraft's or spacecraft's control law. Equipment the structure supports may be sensitive to a frequency band that must be avoided. Any of these situations may require the designer to ensure the satisfaction of frequency constraints. A further motivation for considering accurate approximations of natural frequencies is that they are fundamental to dynamic response constraints.
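The RQA rests on the stationarity of the Rayleigh quotient: with the mode shape frozen from the last full eigensolution, ω² ≈ φᵀK(x)φ / φᵀM(x)φ stays accurate as the sizing variables change K and M. A minimal sketch (matrices and mode shape below are hypothetical):

```python
import numpy as np

def rayleigh_quotient_freq_sq(K, M, phi):
    """Rayleigh-quotient estimate of a squared natural frequency.

    Because the Rayleigh quotient is stationary about eigenvectors,
    holding phi fixed while K and M vary with the design gives a
    high-quality approximation of the frequency constraint.
    """
    return float(phi @ K @ phi) / float(phi @ M @ phi)

K = np.array([[2.0, -1.0], [-1.0, 1.0]])   # hypothetical stiffness matrix
M = np.eye(2)                               # hypothetical mass matrix
phi = np.array([1.0, 1.6])                  # mode shape from the last eigensolution
print(np.sqrt(rayleigh_quotient_freq_sq(K, M, phi)))  # approximate frequency
```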
UAV path planning using artificial potential field method updated by optimal control theory
NASA Astrophysics Data System (ADS)
Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long
2016-04-01
The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) path planning method, the problem is recast as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained one with the help of slack variables, and the functional optimisation method is applied to reformulate it as an optimal control problem. The whole transformation process is deduced in detail from a discrete UAV dynamic model, and the path planning problem is then solved with the optimal control method. A path-following process based on a six-degrees-of-freedom quadrotor simulation model is introduced to verify the practicability of the method. Finally, the simulation results show that the improved method plans paths more effectively: the calculated path is shorter and smoother than that of the traditional APF method. In addition, the improved method solves the dead-point problem effectively.
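The baseline the paper improves on is the classical attractive/repulsive potential field; one gradient step of that baseline, sketched with illustrative gains:

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=100.0, rho0=2.0, step=0.05):
    """One step of a classical artificial potential field planner.

    The attractive force pulls toward the goal; each obstacle inside its
    influence radius rho0 adds a repulsive force derived from the standard
    potential 0.5*k*(1/d - 1/rho0)^2. Gains are illustrative only.
    """
    f = -k_att * (q - goal)                              # attractive force
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if d < rho0:                                     # repulsion only inside rho0
            f += k_rep * (1/d - 1/rho0) / d**2 * (q - obs) / d
    return q + step * f / (np.linalg.norm(f) + 1e-9)     # normalised step
```

Local minima of the summed potential, where the forces cancel before the goal is reached, are the "dead points" that the paper's additional optimal-control force is designed to escape.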
NASA Astrophysics Data System (ADS)
Akmaev, R. A.
1999-04-01
In Part 1 of this work (Akmaev, 1999), an overview is presented of the theory of optimal interpolation (OI) (Gandin, 1963) and related data assimilation techniques based on linear optimal estimation (Liebelt, 1967; Catlin, 1989; Mendel, 1995). The approach implies the use in data analysis of additional statistical information in the form of statistical moments, e.g., the mean and covariance (correlation). The a priori statistical characteristics, if available, make it possible to constrain expected errors and to obtain estimates of the true state, optimal in some sense, from a set of observations in a given domain in space and/or time. The primary objective of OI is to provide estimates away from the observations, i.e., to fill in data voids in the domain under consideration. Additionally, OI performs smoothing, suppressing the noise, i.e., the spectral components that are presumably not present in the true signal. Usually the criterion of optimality is minimum variance of the expected errors, so the whole approach may be considered constrained least squares, or least squares with a priori information. Obviously, data assimilation techniques capable of incorporating any additional information are potentially superior to techniques that have no access to such information, such as conventional least squares (e.g., Liebelt, 1967; Weisberg, 1985; Press et al., 1992; Mendel, 1995).
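The OI update itself is the familiar minimum-variance formula x_a = x_b + K(y − Hx_b) with gain K = BHᵀ(HBHᵀ + R)⁻¹; a direct sketch:

```python
import numpy as np

def oi_analysis(xb, B, y, H, R):
    """Optimal-interpolation (minimum-variance) analysis update.

    xb: background state; B: background error covariance;
    y: observations; H: observation operator; R: observation error
    covariance. Returns the analysis xa = xb + K (y - H xb).
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal gain
    return xb + K @ (y - H @ xb)
```

The covariances B and R are exactly the "additional statistical information" discussed above: setting R large relative to B discounts noisy observations, while the off-diagonal structure of B spreads information into the data voids.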
Multiple-copy state discrimination: Thinking globally, acting locally
NASA Astrophysics Data System (ADS)
Higgins, B. L.; Doherty, A. C.; Bartlett, S. D.; Pryde, G. J.; Wiseman, H. M.
2011-05-01
We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies N, and in the asymptotic limit as N→∞. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here an adaptive measurement is one in which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite N) and the scaling of this error in the asymptotic limit. In the asymptotic limit, it is known analytically (and we verify numerically) that adaptive schemes have no advantage over the optimal fixed local scheme. Here we show moreover that, in this limit, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme except for states with less than 2% mixture. For finite N, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme for any degree of mixture.
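The collective-measurement benchmark in such comparisons is the Helstrom bound, computable directly for small N by diagonalizing the weighted difference of the N-copy states; a sketch:

```python
import numpy as np

def helstrom_error(rho0, rho1, p0=0.5, N=1):
    """Minimum error probability for discriminating N copies of two states.

    Helstrom bound: P_err = (1 - || p0 rho0^{(x)N} - p1 rho1^{(x)N} ||_1) / 2,
    achieved by the optimal collective measurement over all copies.
    Exponential in N, so only practical for small copy numbers.
    """
    R0, R1 = rho0, rho1
    for _ in range(N - 1):
        R0, R1 = np.kron(R0, rho0), np.kron(R1, rho1)   # N-fold tensor powers
    gamma = p0 * R0 - (1 - p0) * R1
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 - trace_norm)

rho0 = np.array([[0.9, 0.0], [0.0, 0.1]])   # illustrative mixed qubit states
rho1 = 0.5 * np.eye(2)
for N in (1, 2, 4):
    print(N, helstrom_error(rho0, rho1, N=N))
```

The local and adaptive schemes in the abstract are compared against exactly this quantity, which they can approach but, for mixed states and finite N, generally cannot reach.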