Sample records for optimal control framework

  1. Intelligent and robust optimization frameworks for smart grids

    NASA Astrophysics Data System (ADS)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Given the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met while giving higher priority to renewable energy sources. Hence, power generation from renewable energy sources should be optimized while generation from nonrenewable sources is minimized. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits while circumventing nonlinear model complexities and handling uncertainties for superior real-time operation. The proposed intelligent system framework optimizes smart grid power generation for maximum economic and ecological benefit under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrating various energy sources in real-time smart grid implementations. The robust optimization results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economic and ecological performance objectives. The proposed framework therefore offers a new worst-case deterministic optimization algorithm for smart grid automatic generation control.

  2. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

    An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, an approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.
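
    To make the abstract setting concrete, the following sketch writes a generic constrained problem and its first-order (Lagrange multiplier) optimality system in notation chosen here for illustration; the symbols J, N, u, g, and lambda are not taken from the paper itself.

        \min_{(u,g)\in X\times G} \; \mathcal{J}(u,g)
        \quad\text{subject to}\quad N(u,g)=0,

        \text{optimality system:}\quad
        \mathcal{J}_u(u,g) + N_u(u,g)^{*}\lambda = 0,\qquad
        \mathcal{J}_g(u,g) + N_g(u,g)^{*}\lambda = 0,\qquad
        N(u,g)=0,

    where u denotes the state, g the control, and lambda the Lagrange multiplier enforcing the nonlinear state equation; the finite element approximation replaces X and G by finite dimensional subspaces.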

  3. A Framework for Optimal Control Allocation with Structural Load Constraints

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Taylor, Brian R.; Jutte, Christine V.; Burken, John J.; Trinh, Khanh V.; Bodson, Marc

    2010-01-01

    Conventional aircraft generally employ mixing algorithms or lookup tables to determine control surface deflections needed to achieve moments commanded by the flight control system. Control allocation is the problem of converting desired moments into control effector commands. Next generation aircraft may have many multipurpose, redundant control surfaces, adding considerable complexity to the control allocation problem. These issues can be addressed with optimal control allocation. Most optimal control allocation algorithms have control surface position and rate constraints. However, these constraints are insufficient to ensure that the aircraft's structural load limits will not be exceeded by commanded surface deflections. In this paper, a framework is proposed to enable a flight control system with optimal control allocation to incorporate real-time structural load feedback and structural load constraints. A proof of concept simulation that demonstrates the framework in a simulation of a generic transport aircraft is presented.
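
    As a rough illustration of the idea (not the authors' algorithm), the sketch below poses control allocation as a constrained least-squares problem: match commanded moments while respecting position limits and a hypothetical linear structural-load constraint. All matrices and limits are invented for the example.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical 3-moment, 5-effector example; every number here is illustrative.
        B = np.array([[ 0.8, -0.8,  0.1, -0.1, 0.0],    # roll effectiveness
                      [ 0.2,  0.2,  0.9,  0.9, 0.3],    # pitch effectiveness
                      [ 0.1, -0.1,  0.0,  0.0, 0.7]])   # yaw effectiveness
        d = np.array([0.5, -0.2, 0.1])                  # commanded moments
        L = np.array([[1.0, 1.0, 0.5, 0.5, 0.0]])       # assumed load sensitivity (load per deflection)
        load_max = np.array([1.2])                      # structural load limit at a monitored point
        u_min, u_max = -0.5, 0.5                        # surface position limits (rad)

        def moment_error(u):
            r = B @ u - d
            return r @ r                                # squared error between achieved and commanded moments

        constraints = [{"type": "ineq", "fun": lambda u: load_max - L @ u}]   # keep L u <= load_max
        bounds = [(u_min, u_max)] * B.shape[1]
        result = minimize(moment_error, np.zeros(B.shape[1]),
                          bounds=bounds, constraints=constraints, method="SLSQP")
        print("allocated surface deflections:", np.round(result.x, 3))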

  4. An optimal control framework for dynamic induction control of wind farms and their interaction with the atmospheric boundary layer.

    PubMed

    Munters, W; Meyers, J

    2017-04-13

    Complex turbine wake interactions play an important role in overall energy extraction in large wind farms. Current control strategies optimize individual turbine power, and lead to significant energy losses in wind farms compared with lone-standing wind turbines. In recent work, an optimal coordinated control framework was introduced (Goit & Meyers 2015 J. Fluid Mech. 768, 5-50 (doi:10.1017/jfm.2015.70)). Here, we further elaborate on this framework, quantify the influence of optimization parameters and introduce new simulation results for which gains in power production of up to 21% are observed. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Authors.

  5. An optimal control framework for dynamic induction control of wind farms and their interaction with the atmospheric boundary layer

    PubMed Central

    Munters, W.

    2017-01-01

    Complex turbine wake interactions play an important role in overall energy extraction in large wind farms. Current control strategies optimize individual turbine power, and lead to significant energy losses in wind farms compared with lone-standing wind turbines. In recent work, an optimal coordinated control framework was introduced (Goit & Meyers 2015 J. Fluid Mech. 768, 5–50 (doi:10.1017/jfm.2015.70)). Here, we further elaborate on this framework, quantify the influence of optimization parameters and introduce new simulation results for which gains in power production of up to 21% are observed. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265024

  6. Comparing, optimizing, and benchmarking quantum-control algorithms in a unifying programming framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Machnes, S. (Institute for Theoretical Physics, University of Ulm, D-89069 Ulm); Sander, U.

    2011-08-15

    For paving the way to novel applications in quantum simulation, computation, and technology, increasingly large quantum systems have to be steered with high precision. It is a typical task amenable to numerical optimal control to turn the time course of pulses, i.e., piecewise constant control amplitudes, iteratively into an optimized shape. Here, we present a comparative study of optimal-control algorithms for a wide range of finite-dimensional applications. We focus on the most commonly used algorithms: GRAPE methods which update all controls concurrently, and Krotov-type methods which do so sequentially. Guidelines for their use are given and open research questions are pointed out. Moreover, we introduce a unifying algorithmic framework, DYNAMO (dynamic optimization platform), designed to provide the quantum-technology community with a convenient matlab-based tool set for optimal control. In addition, it gives researchers in optimal-control techniques a framework for benchmarking and comparing newly proposed algorithms with the state of the art. It allows a mix-and-match approach with various types of gradients, update and step-size methods as well as subspace choices. Open-source code including examples is made available at http://qlib.info.
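
    The sketch below is a minimal stand-in for the kind of piecewise-constant pulse optimization the paper benchmarks: a single-qubit gate is shaped by gradient ascent on fidelity, with the gradient taken by finite differences rather than the analytic GRAPE or Krotov updates (DYNAMO itself is a MATLAB toolkit; this Python toy only illustrates the problem structure, and all parameters are assumptions).

        import numpy as np
        from scipy.linalg import expm

        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)
        H0, Hc = 0.5 * sz, sx                      # assumed drift and control Hamiltonians
        U_target = expm(-1j * np.pi / 2 * sx)      # target gate: an X rotation
        N, dt = 20, 0.1                            # piecewise-constant pulse with N slices

        def propagator(u):
            U = np.eye(2, dtype=complex)
            for uk in u:                           # time-ordered product of slice propagators
                U = expm(-1j * dt * (H0 + uk * Hc)) @ U
            return U

        def fidelity(u):
            return abs(np.trace(U_target.conj().T @ propagator(u))) / 2

        u = np.zeros(N)
        for _ in range(200):                       # concurrent update of all slices per iteration
            grad = np.array([(fidelity(u + 1e-6 * e) - fidelity(u)) / 1e-6 for e in np.eye(N)])
            u += 0.5 * grad                        # fixed-step gradient ascent on gate fidelity
        print("final fidelity:", round(fidelity(u), 4))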

  7. An Optimization Framework for Driver Feedback Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malikopoulos, Andreas; Aguilar, Juan P.

    2013-01-01

    Modern vehicles have sophisticated electronic control units that can control engine operation with discretion to balance fuel economy, emissions, and power. These control units are designed for specific driving conditions (e.g., different speed profiles for highway and city driving). However, individual driving styles are different and rarely match the specific driving conditions for which the units were designed. In the research reported here, we investigate driving-style factors that have a major impact on fuel economy and construct an optimization framework to optimize individual driving styles with respect to these driving factors. In this context, we construct a set of polynomial metamodels to reflect the responses produced in fuel economy by changing the driving factors. Then, we compare the optimized driving styles to the original driving styles and evaluate the effectiveness of the optimization framework. Finally, we use this proposed framework to develop a real-time feedback system, including visual instructions, to enable drivers to alter their driving styles in response to actual driving conditions to improve fuel efficiency.
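
    A minimal sketch of the metamodel idea, under invented data: fit a quadratic polynomial response surface mapping two driving-style factors to fuel economy, then optimize the surface to suggest improved factor values. The factor names, data, and bounds are assumptions, not the report's.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        # Two hypothetical driving-style factors (e.g. normalized acceleration aggressiveness, cruise speed).
        X = rng.uniform(-1, 1, size=(50, 2))
        # Synthetic "observed" fuel economy used only to make the example runnable.
        y = 30 - 4 * X[:, 0]**2 - 2 * X[:, 1]**2 + X[:, 0] * X[:, 1] + rng.normal(0, 0.2, 50)

        def features(x):                       # quadratic polynomial basis
            x1, x2 = x[..., 0], x[..., 1]
            return np.stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2], axis=-1)

        coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)   # fit the metamodel

        # Optimize the metamodel: find driving factors that maximize predicted fuel economy.
        res = minimize(lambda x: -(features(x) @ coef), x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
        print("suggested driving-style factors:", np.round(res.x, 2))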

  8. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory to allow bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.

  9. A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at; Tuffaha, Amjad, E-mail: atufaha@aus.edu

    We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called “singular estimate control systems” (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004), which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.
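
    For orientation, the finite-horizon differential Riccati equation referred to here has the standard LQR form sketched below (written for a bounded control operator; the singular-estimate setting of the paper relaxes this), with the optimal feedback recovered from its solution.

        -\dot{P}(t) = A^{*}P(t) + P(t)A - P(t)BR^{-1}B^{*}P(t) + Q, \qquad P(T) = G,

        u^{*}(t) = -R^{-1}B^{*}P(t)\,x^{*}(t),

    where A generates the state dynamics on the Hilbert space, B is the control operator, and Q, R, G are the cost weights; the convergence results concern the approach of the finite-dimensional Riccati solutions to P(t).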

  10. Economic-Oriented Stochastic Optimization in Advanced Process Control of Chemical Processes

    PubMed Central

    Dobos, László; Király, András; Abonyi, János

    2012-01-01

    Finding the optimal operating region of chemical processes is an inevitable step toward improving economic performance. Usually the optimal operating region is situated close to process constraints related to product quality or process safety requirements. Higher profit can be realized only by assuring a relatively low frequency of violation of these constraints. A multilevel stochastic optimization framework is proposed to determine the optimal setpoint values of control loops with respect to predetermined risk levels, uncertainties, and costs of violation of process constraints. The proposed framework is realized as direct search-type optimization of Monte-Carlo simulation of the controlled process. The concept is illustrated throughout by a well-known benchmark problem related to the control of a linear dynamical system and the model predictive control of a more complex nonlinear polymerization process. PMID:23213298
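
    The sketch below mirrors the framework's structure at toy scale: a direct-search optimizer (Nelder-Mead here) adjusts a setpoint, and each candidate is scored by Monte-Carlo simulation of a noisy process output, penalizing constraint-violation frequencies above a chosen risk level. The process model, limits, and costs are all invented for illustration.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        noise = rng.normal(0.0, 0.5, 2000)           # common random numbers reused for every candidate

        def violation_probability(setpoint):
            y = 2.0 * setpoint + noise               # Monte-Carlo ensemble of the controlled output
            return np.mean(y > 10.0)                 # fraction violating a quality/safety constraint

        def cost(setpoint, risk_level=0.05, penalty=100.0):
            profit = 3.0 * setpoint                  # assumed: pushing the setpoint up raises profit
            excess_risk = max(violation_probability(setpoint) - risk_level, 0.0)
            return -profit + penalty * excess_risk   # maximize profit subject to the risk budget

        res = minimize(lambda s: cost(s[0]), x0=[3.0], method="Nelder-Mead")
        print("risk-aware optimal setpoint:", round(res.x[0], 2))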

  11. A duality framework for stochastic optimal control of complex systems

    DOE PAGES

    Malikopoulos, Andreas A.

    2016-01-01

    In this study, we address the problem of minimizing the long-run expected average cost of a complex system consisting of interactive subsystems. We formulate a multiobjective optimization problem of the one-stage expected costs of the subsystems and provide a duality framework to prove that the control policy yielding the Pareto optimal solution minimizes the average cost criterion of the system. We provide the conditions of existence and a geometric interpretation of the solution. For practical situations having constraints consistent with those studied here, our results imply that the Pareto control policy may be of value when we seek to derive online the optimal control policy in complex systems.
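
    Schematically (in notation chosen here, not the paper's), the multiobjective problem and the long-run average-cost criterion it is tied to can be written as below; the duality argument connects a Pareto point of the first to a minimizer of the second.

        \min_{\pi}\;\bigl(J_1(\pi),\ldots,J_N(\pi)\bigr)
        \;\;\longrightarrow\;\;
        \min_{\pi}\;\sum_{i=1}^{N}\alpha_i\,J_i(\pi),\qquad \alpha_i\ge 0,\;\sum_i\alpha_i=1,

        \bar J(\pi) = \limsup_{T\to\infty}\;\frac{1}{T}\,
        \mathbb{E}\Bigl[\sum_{t=0}^{T-1} c\bigl(x_t,\pi(x_t)\bigr)\Bigr],

    where J_i is the one-stage expected cost of subsystem i and \bar J the long-run expected average cost of the overall system.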

  12. Planning Framework for Mesolevel Optimization of Urban Runoff Control Schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Qianqian; Blohm, Andrew; Liu, Bo

    A planning framework is developed to optimize runoff control schemes at scales relevant for regional planning at an early stage. The framework employs less sophisticated modeling approaches to allow a practical application in developing regions with limited data sources and computing capability. The methodology contains three interrelated modules: (1) the geographic information system (GIS)-based hydrological module, which aims at assessing local hydrological constraints and potential for runoff control according to regional land-use descriptions; (2) the grading module, which is built upon the method of fuzzy comprehensive evaluation and is used to establish a priority ranking system to assist the allocation of runoff control targets at the subdivision level; and (3) the genetic algorithm-based optimization module, which is included to derive Pareto-based optimal solutions for mesolevel allocation with multiple competing objectives. The optimization approach describes the trade-off between different allocation plans and simultaneously ensures that all allocation schemes satisfy the minimum requirement on runoff control. Our results highlight the importance of considering the mesolevel allocation strategy in addition to measures at macrolevels and microlevels in urban runoff management. © 2016 American Society of Civil Engineers.

  13. Uncertainty, learning, and the optimal management of wildlife

    USGS Publications Warehouse

    Williams, B.K.

    2001-01-01

    Wildlife management is limited by uncontrolled and often unrecognized environmental variation, by limited capabilities to observe and control animal populations, and by a lack of understanding about the biological processes driving population dynamics. In this paper I describe a comprehensive framework for management that includes multiple models and likelihood values to account for structural uncertainty, along with stochastic factors to account for environmental variation, random sampling, and partial controllability. Adaptive optimization is developed in terms of the optimal control of incompletely understood populations, with the expected value of perfect information measuring the potential for improving control through learning. The framework for optimal adaptive control is generalized by including partial observability and non-adaptive, sample-based updating of model likelihoods. Passive adaptive management is derived as a special case of constrained adaptive optimization, representing a potentially efficient suboptimal alternative that nonetheless accounts for structural uncertainty.
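
    The expected value of perfect information used here as a measure of the potential gain from learning has the standard form sketched below (generic decision-theoretic notation, not necessarily the paper's symbols).

        \mathrm{EVPI} \;=\; \sum_{i} p_i \,\max_{a} V_i(a) \;-\; \max_{a} \sum_{i} p_i\, V_i(a),

    where p_i is the current likelihood (credibility weight) of model i and V_i(a) the value of management action a if model i were the true one; EVPI is zero when every model recommends the same action and grows with structural uncertainty that actually matters for the decision.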

  14. Autonomous Energy Grids | Grid Modernization | NREL

    Science.gov Websites

    ... control themselves using advanced machine learning and simulation to create resilient, reliable, and affordable optimized energy systems. Current frameworks to monitor, control, and optimize large-scale energy ... of optimization theory, control theory, big data analytics, and complex system theory and modeling to ...

  15. A novel framework for virtual prototyping of rehabilitation exoskeletons.

    PubMed

    Agarwal, Priyanshu; Kuo, Pei-Hsin; Neptune, Richard R; Deshpande, Ashish D

    2013-06-01

    Human-worn rehabilitation exoskeletons have the potential to make therapeutic exercises increasingly accessible to disabled individuals while reducing the cost and labor involved in rehabilitation therapy. In this work, we propose a novel human-model-in-the-loop framework for virtual prototyping (design, control and experimentation) of rehabilitation exoskeletons by merging computational musculoskeletal analysis with simulation-based design techniques. The framework allows one to iteratively optimize the design and control algorithm of an exoskeleton using simulation. We introduce biomechanical, morphological, and controller measures to quantify the performance of the device for the optimization study. Furthermore, the framework allows one to carry out virtual experiments for testing specific "what-if" scenarios to quantify device performance and recovery progress. To illustrate the application of the framework, we present a case study wherein the design and analysis of an index-finger exoskeleton is carried out using the proposed framework.

  16. Robustness Analysis and Optimally Robust Control Design via Sum-of-Squares

    NASA Technical Reports Server (NTRS)

    Dorobantu, Andrei; Crespo, Luis G.; Seiler, Peter J.

    2012-01-01

    A control analysis and design framework is proposed for systems subject to parametric uncertainty. The underlying strategies are based on sum-of-squares (SOS) polynomial analysis and nonlinear optimization to design an optimally robust controller. The approach determines a maximum uncertainty range for which the closed-loop system satisfies a set of stability and performance requirements. These requirements, defined as inequality constraints on several metrics, are restricted to polynomial functions of the uncertainty. To quantify robustness, SOS analysis is used to prove that the closed-loop system complies with the requirements for a given uncertainty range. The maximum uncertainty range, calculated by assessing a sequence of increasingly larger ranges, serves as a robustness metric for the closed-loop system. To optimize the control design, nonlinear optimization is used to enlarge the maximum uncertainty range by tuning the controller gains. Hence, the resulting controller is optimally robust to parametric uncertainty. This approach balances the robustness margins corresponding to each requirement in order to maximize the aggregate system robustness. The proposed framework is applied to a simple linear short-period aircraft model with uncertain aerodynamic coefficients.
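
    As a sketch of the certification step (standard S-procedure/SOS reasoning, not the paper's exact construction): to verify a polynomial requirement g(delta) >= 0 over an uncertainty ball of radius r, it suffices to exhibit sum-of-squares multipliers, and the largest certifiable r is the robustness metric being maximized.

        g(\delta) \;-\; \sigma(\delta)\,\bigl(r^{2}-\delta^{\mathsf T}\delta\bigr) \;=\; \sigma_0(\delta),
        \qquad \sigma,\;\sigma_0 \ \text{sum of squares},

    so that \delta^{\mathsf T}\delta \le r^{2} implies g(\delta) \ge 0; tuning the controller gains to enlarge the best certifiable r couples this SOS feasibility problem to the outer nonlinear optimization described in the abstract.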

  17. Optimality conditions for the numerical solution of optimization problems with PDE constraints :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro; Ridzal, Denis

    2014-03-01

    A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to the application.

  18. Optimal feedback control of infinite dimensional parabolic evolution systems: Approximation techniques

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Wang, C.

    1989-01-01

    A general approximation framework is discussed for computation of optimal feedback controls in linear quadratic regulator problems for nonautonomous parabolic distributed parameter systems. This is done in the context of a theoretical framework using general evolution systems in infinite dimensional Hilbert spaces. Conditions are discussed for preservation under approximation of stabilizability and detectability hypotheses on the infinite dimensional system. The special case of periodic systems is also treated.

  19. Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking.

    PubMed

    Yeoh, Ivan L; Reinhall, Per G; Berg, Martin C; Chizeck, Howard J; Seibel, Eric J

    2017-09-01

    A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward with exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided of a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.

  20. Hierarchical control framework for integrated coordination between distributed energy resources and demand response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Di; Lian, Jianming; Sun, Yannan

    Demand response represents a significant but largely untapped resource that can greatly enhance the flexibility and reliability of power systems. In this paper, a hierarchical control framework is proposed to facilitate the integrated coordination between distributed energy resources and demand response. The proposed framework consists of coordination and device layers. In the coordination layer, various resource aggregations are optimally coordinated in a distributed manner to achieve the system-level objectives. In the device layer, individual resources are controlled in real time to follow the optimal power generation or consumption dispatched from the coordination layer. For the purpose of practical applications, a method is presented to determine the utility functions of controllable loads by taking into account the real-time load dynamics and the preferences of individual customers. The effectiveness of the proposed framework is validated by detailed simulation studies.

  1. Existence of Optimal Controls for Compressible Viscous Flow

    NASA Astrophysics Data System (ADS)

    Doboszczak, Stefan; Mohan, Manil T.; Sritharan, Sivaguru S.

    2018-03-01

    We formulate a control problem for a distributed parameter system where the state is governed by the compressible Navier-Stokes equations. Introducing a suitable cost functional, the existence of an optimal control is established within the framework of strong solutions in three dimensions.

  2. Online optimal obstacle avoidance for rotary-wing autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Kang, Keeryun

    This thesis presents an integrated framework for online obstacle avoidance of rotary-wing unmanned aerial vehicles (UAVs), which can provide UAVs an obstacle field navigation capability in a partially or completely unknown obstacle-rich environment. The framework is composed of a LIDAR interface, a local obstacle grid generation, a receding horizon (RH) trajectory optimizer, a global shortest path search algorithm, and a climb rate limit detection logic. The key feature of the framework is the use of an optimization-based trajectory generation in which the obstacle avoidance problem is formulated as a nonlinear trajectory optimization problem with state and input constraints over the finite range of the sensor. This local trajectory optimization is combined with a global path search algorithm which provides a useful initial guess to the nonlinear optimization solver. Optimization is the natural process of finding the best trajectory that is dynamically feasible, safe within the vehicle's flight envelope, and collision-free at the same time. The optimal trajectory is continuously updated in real time by the numerical optimization solver, Nonlinear Trajectory Generation (NTG), which is a direct solver based on the spline approximation of trajectory for dynamically flat systems. In fact, the overall approach of this thesis to finding the optimal trajectory is similar to the model predictive control (MPC) or the receding horizon control (RHC), except that this thesis followed a two-layer design; thus, the optimal solution works as a guidance command to be followed by the controller of the vehicle. The framework is implemented in a real-time simulation environment, the Georgia Tech UAV Simulation Tool (GUST), and integrated in the onboard software of the rotary-wing UAV test-bed at Georgia Tech. Initially, the 2D vertical avoidance capability of real obstacles was tested in flight. The flight test evaluations were extended to the benchmark tests for 3D avoidance capability over the virtual obstacles, and finally it was demonstrated on real obstacles located at the McKenna MOUT site in Fort Benning, Georgia. Simulations and flight test evaluations demonstrate the feasibility of the developed framework for UAV applications involving low-altitude flight in an urban area.

  3. Optimal Wastewater Loading under Conflicting Goals and Technology Limitations in a Riverine System.

    PubMed

    Rafiee, Mojtaba; Lyon, Steve W; Zahraie, Banafsheh; Destouni, Georgia; Jaafarzadeh, Nemat

    2017-03-01

      This paper investigates a novel simulation-optimization (S-O) framework for identifying optimal treatment levels and treatment processes for multiple wastewater dischargers to rivers. A commonly used water quality simulation model, Qual2K, was linked to a Genetic Algorithm optimization model for exploration of relevant fuzzy objective-function formulations for addressing imprecision and conflicting goals of pollution control agencies and various dischargers. Results showed a dynamic flow dependence of optimal wastewater loading with good convergence to near global optimum. Explicit considerations of real-world technological limitations, which were developed here in a new S-O framework, led to better compromise solutions between conflicting goals than those identified within traditional S-O frameworks. The newly developed framework, in addition to being more technologically realistic, is also less complicated and converges on solutions more rapidly than traditional frameworks. This technique marks a significant step forward for development of holistic, riverscape-based approaches that balance the conflicting needs of the stakeholders.

  4. Surrogate assisted multidisciplinary design optimization for an all-electric GEO satellite

    NASA Astrophysics Data System (ADS)

    Shi, Renhe; Liu, Li; Long, Teng; Liu, Jian; Yuan, Bin

    2017-09-01

    State-of-the-art all-electric geostationary earth orbit (GEO) satellites use electric thrusters to execute all propulsive duties, which significantly differ from the traditional all-chemical ones in orbit-raising, station-keeping, radiation damage protection, and power budget. The design optimization task of an all-electric GEO satellite is therefore a complex multidisciplinary design optimization (MDO) problem involving unique design considerations. However, solving the all-electric GEO satellite MDO problem faces big challenges in disciplinary modeling techniques and efficient optimization strategy. To address these challenges, we present a surrogate assisted MDO framework consisting of several modules, i.e., MDO problem definition, multidisciplinary modeling, multidisciplinary analysis (MDA), and a surrogate assisted optimizer. Based on the proposed framework, the all-electric GEO satellite MDO problem is formulated to minimize the total mass of the satellite system under a number of practical constraints. Then considerable efforts are spent on multidisciplinary modeling involving geosynchronous transfer, GEO station-keeping, power, thermal control, attitude control, and structure disciplines. Since the orbit dynamics models and finite element structural model are computationally expensive, an adaptive response surface surrogate based optimizer is incorporated in the proposed framework to solve the satellite MDO problem with moderate computational cost, where a response surface surrogate is gradually refined to represent the computationally expensive MDA process. After optimization, the total mass of the studied GEO satellite is decreased by 185.3 kg (i.e., 7.3% of the total mass). Finally, the optimal design is further discussed to demonstrate the effectiveness of our proposed framework to cope with all-electric GEO satellite system design optimization problems. This proposed surrogate assisted MDO framework can also provide valuable references for other all-electric spacecraft system design.
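
    The adaptive response-surface idea can be sketched as the loop below: fit a quadratic surrogate to evaluated designs, minimize the surrogate, evaluate the true (expensive) analysis at the surrogate optimum, and refit. The objective function, bounds, and sample sizes are placeholders, not the satellite models from the paper.

        import numpy as np
        from scipy.optimize import minimize

        def expensive_analysis(x):
            """Stand-in for the coupled multidisciplinary analysis (illustrative only)."""
            return (x[0] - 1.5)**2 + 2.0 * (x[1] + 0.5)**2 + 0.3 * np.sin(5 * x[0])

        def fit_quadratic(X, y):
            x1, x2 = X[:, 0], X[:, 1]
            F = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
            c, *_ = np.linalg.lstsq(F, y, rcond=None)
            return lambda x: np.array([1.0, x[0], x[1], x[0]**2, x[1]**2, x[0] * x[1]]) @ c

        rng = np.random.default_rng(2)
        bounds = [(-3, 3), (-3, 3)]
        X = rng.uniform(-3, 3, size=(10, 2))                 # initial design of experiments
        y = np.array([expensive_analysis(x) for x in X])

        for _ in range(15):                                  # adaptive refinement of the surrogate
            surrogate = fit_quadratic(X, y)
            res = minimize(surrogate, x0=X[np.argmin(y)], bounds=bounds)
            X = np.vstack([X, res.x])                        # evaluate the true model at the surrogate optimum
            y = np.append(y, expensive_analysis(res.x))

        print("best design found:", np.round(X[np.argmin(y)], 3), "objective:", round(y.min(), 3))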

  5. Multimodel methods for optimal control of aeroacoustics.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guoquan; Collis, Samuel Scott

    2005-01-01

    A new multidomain/multiphysics computational framework for optimal control of aeroacoustic noise has been developed based on a near-field compressible Navier-Stokes solver coupled with a far-field linearized Euler solver, both based on a discontinuous Galerkin formulation. In this approach, the coupling of near- and far-field domains is achieved by weakly enforcing continuity of normal fluxes across a coupling surface that encloses all nonlinearities and noise sources. For optimal control, gradient information is obtained by the solution of an appropriate adjoint problem that involves the propagation of adjoint information from the far-field to the near-field. This computational framework has been successfully applied to study optimal boundary-control of blade-vortex interaction, which is a significant noise source for helicopters on approach to landing. In the model problem presented here, the noise propagated toward the ground is reduced by 12 dB.

  6. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is very complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains the preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization is fast when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme for reducing runoff and pollutant loads under a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme of storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Investigation of Optimal Control Allocation for Gust Load Alleviation in Flight Control

    NASA Technical Reports Server (NTRS)

    Frost, Susan A.; Taylor, Brian R.; Bodson, Marc

    2012-01-01

    Advances in sensors and avionics computation power suggest real-time structural load measurements could be used in flight control systems for improved safety and performance. A conventional transport flight control system determines the moments necessary to meet the pilot's command, while rejecting disturbances and maintaining stability of the aircraft. Control allocation is the problem of converting these desired moments into control effector commands. In this paper, a framework is proposed to incorporate real-time structural load feedback and structural load constraints in the control allocator. Constrained optimal control allocation can be used to achieve desired moments without exceeding specified limits on monitored load points. Minimization of structural loads by the control allocator is used to alleviate gust loads. The framework to incorporate structural loads in the flight control system and an optimal control allocation algorithm will be described and then demonstrated on a nonlinear simulation of a generic transport aircraft with flight dynamics and static structural loads.

  8. Multi-tasking arbitration and behaviour design for human-interactive robots

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yuichi; Onishi, Masaki; Hosoe, Shigeyuki; Luo, Zhiwei

    2013-05-01

    Robots that interact with humans in household environments are required to handle multiple real-time tasks simultaneously, such as carrying objects, collision avoidance and conversation with humans. This article presents a design framework for the control and recognition processes to meet these requirements, taking into account stochastic human behaviour. The proposed design method first introduces a Petri net for synchronisation of multiple tasks. The Petri net formulation is converted to Markov decision processes and processed in an optimal control framework. Three tasks (safety confirmation, object conveyance and conversation) interact and are expressed by the Petri net. Using the proposed framework, tasks that normally tend to be designed by integrating many if-then rules can be designed in a systematic manner in a state estimation and optimisation framework from the viewpoint of shortest-time optimal control. The proposed arbitration method was verified by simulations and experiments using RI-MAN, which was developed for interactive tasks with humans.

  9. Expected frontiers: Incorporating weather uncertainty into a policy analysis using an integrated bi-level multi-objective optimization framework

    EPA Science Inventory

    Weather is the main driver in both plant use of nutrients and fate and transport of nutrients in the environment. In previous work, we evaluated a green tax for control of agricultural nutrients in a bi-level optimization framework that linked deterministic models. In this study,...

  10. The optimal location of piezoelectric actuators and sensors for vibration control of plates

    NASA Astrophysics Data System (ADS)

    Kumar, K. Ramesh; Narayanan, S.

    2007-12-01

    This paper considers the optimal placement of collocated piezoelectric actuator-sensor pairs on a thin plate using a model-based linear quadratic regulator (LQR) controller. LQR performance is taken as the objective for finding the optimal location of sensor-actuator pairs. The problem is formulated using the finite element method (FEM) as multi-input multi-output (MIMO) model control. The discrete optimal sensor and actuator location problem is formulated in the framework of a zero-one optimization problem. A genetic algorithm (GA) is used to solve the zero-one optimization problem. Different classical control strategies, such as direct proportional feedback, constant-gain negative velocity feedback and the LQR optimal control scheme, are applied to study the control effectiveness.
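
    A compact sketch of the zero-one placement idea under invented modal data: each candidate actuator location contributes an assumed modal influence column, candidate subsets are scored by an LQR performance index (trace of the Riccati solution), and the best subset is picked. The exhaustive loop stands in for the genetic algorithm used in the paper for larger search spaces.

        import numpy as np
        from itertools import combinations
        from scipy.linalg import solve_continuous_are

        # Hypothetical 4-mode modal model of the plate; all numbers are illustrative.
        omega = np.array([1.0, 2.3, 3.1, 4.8])                # modal frequencies
        zeta = 0.02                                           # modal damping ratio
        n = omega.size
        A = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-np.diag(omega**2), -2 * zeta * np.diag(omega)]])
        B_all = np.random.default_rng(0).normal(size=(n, 8))  # assumed modal influence of 8 candidate sites
        Q = np.eye(2 * n)

        def lqr_index(sites):
            """LQR performance index trace(P) for actuators placed at the given candidate sites."""
            B = np.vstack([np.zeros((n, len(sites))), B_all[:, sites]])
            P = solve_continuous_are(A, B, Q, np.eye(len(sites)))
            return np.trace(P)

        # Zero-one placement of 2 actuators out of 8 candidates; a GA replaces this loop at scale.
        best = min(combinations(range(8), 2), key=lambda s: lqr_index(list(s)))
        print("best actuator locations:", best)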

  11. Transaction-Based Building Controls Framework, Volume 1: Reference Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somasundaram, Sriram; Pratt, Robert G.; Akyol, Bora A.

    This document proposes a framework concept to achieve the objectives of raising buildings’ efficiency and energy savings potential, benefiting building owners and operators. We call it a transaction-based framework, wherein mutually beneficial and cost-effective market-based transactions can be enabled between multiple players across different domains. Transaction-based building controls are one part of the transactional energy framework. While these controls realize benefits by enabling automatic, market-based intra-building efficiency optimizations, the transactional energy framework provides similar benefits using the same market-based structure, yet on a larger scale and beyond just buildings, to society at large.

  12. Development of a new integrated local trajectory planning and tracking control framework for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Li, Xiaohui; Sun, Zhenping; Cao, Dongpu; Liu, Daxue; He, Hangen

    2017-03-01

    This study proposes a novel integrated local trajectory planning and tracking control (ILTPTC) framework for autonomous vehicles driving along a reference path with obstacles avoidance. For this ILTPTC framework, an efficient state-space sampling-based trajectory planning scheme is employed to smoothly follow the reference path. A model-based predictive path generation algorithm is applied to produce a set of smooth and kinematically-feasible paths connecting the initial state with the sampling terminal states. A velocity control law is then designed to assign a speed value at each of the points along the generated paths. An objective function considering both safety and comfort performance is carefully formulated for assessing the generated trajectories and selecting the optimal one. For accurately tracking the optimal trajectory while overcoming external disturbances and model uncertainties, a combined feedforward and feedback controller is developed. Both simulation analyses and vehicle testing are performed to verify the effectiveness of the proposed ILTPTC framework, and future research is also briefly discussed.

  13. Hodograph analysis in aircraft trajectory optimization

    NASA Technical Reports Server (NTRS)

    Cliff, Eugene M.; Seywald, Hans; Bless, Robert R.

    1993-01-01

    An account is given of key geometrical concepts involved in the use of a hodograph as an optimal control theory resource which furnishes a framework for geometrical interpretation of the minimum principle. Attention is given to the effects of different convexity properties on the hodograph, which bear on the existence of solutions and such types of controls as chattering controls, 'bang-bang' control, and/or singular control. Illustrative aircraft trajectory optimization problems are examined in view of this use of the hodograph.

  14. Ecosystem Services and Environmental Markets in ...

    EPA Pesticide Factsheets

    This report contains two separate analyses, both of which make use of an optimization framework previously developed to evaluate trade-offs in alternative restoration strategies to achieve the Chesapeake Bay Total Maximum Daily Load (TMDL). The first analysis expands on model applications that examine how incorporating selected co-benefits of nutrient reductions into the optimization framework alters the optimal distribution of nutrient reductions in the watershed (U.S. EPA, 2011). In previous applications, the analyzed co-benefits included carbon sequestration and recreational hunting benefits from certain agricultural best management practices (BMPs). In this report we expand the optimization framework to also include benefits from water quality improvements in freshwater river and streams. We find that these nontidal water quality co-benefits are larger than the other co-benefits combined and would result in greater nutrient control efforts in upstream portions of the watershed. Compared to cost-minimization results that do not account for co-benefits, including all co-benefits in the optimization would increase annual nutrient control costs by $16 million in the Susquehanna River Basin in Pennsylvania; however, the co-benefits would increase by $31 million, for a net gain of $15 million per year. In the James River Basin in Virginia, considering monetized co-benefits results in an estimated increase in nutrient control costs of $17 million but an increase in

  15. Optimization-Based Robust Nonlinear Control

    DTIC Science & Technology

    2006-08-01

    ABSTRACT: New control algorithms were developed for robust stabilization of nonlinear dynamical systems. Novel, linear matrix inequality-based synthesis ... was to further advance optimization-based robust nonlinear control design, for general nonlinear systems (especially in discrete time), for linear ... Teel, IEEE Transactions on Control Systems Technology, vol. 14, no. 3, pp. 398-407, May 2006. 3. "A unified framework for input-to-state stability in ..."

  16. CO2 water-alternating-gas injection for enhanced oil recovery: Optimal well controls and half-cycle lengths

    DOE PAGES

    Chen, Bailian; Reynolds, Albert C.

    2018-03-11

    We report that CO2 water-alternating-gas (WAG) injection is an enhanced oil recovery method designed to improve sweep efficiency during CO2 injection, with the injected water used to control the mobility of CO2 and to stabilize the gas front. Optimization of CO2-WAG injection is widely regarded as a viable technique for controlling the CO2 and oil miscible process. Poor recovery from CO2-WAG injection can be caused by inappropriately designed WAG parameters. In a previous study (Chen and Reynolds, 2016), we proposed an algorithm to optimize the well controls which maximize the life-cycle net present value (NPV). However, the effect of injection half-cycle lengths for each injector on oil recovery or NPV has not been well investigated. In this paper, an optimization framework based on the augmented Lagrangian method and the newly developed stochastic-simplex-approximate-gradient (StoSAG) algorithm is proposed to explore the possibility of simultaneous optimization of the WAG half-cycle lengths together with the well controls. Finally, the proposed framework is demonstrated with three reservoir examples.
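
    A toy sketch of the ensemble-based (StoSAG-style) gradient step on which such a framework builds: the controls are perturbed by Gaussian samples, the NPV change of each perturbed ensemble member yields an approximate gradient, and a projected ascent step updates the controls. The NPV function, control dimension, and tuning constants are invented stand-ins for the reservoir-simulator quantities.

        import numpy as np

        def npv(u):
            """Stand-in for the reservoir simulator's net present value of a control vector (illustrative)."""
            return 10.0 - np.sum((u - 0.6)**2)

        rng = np.random.default_rng(3)
        u = np.full(12, 0.3)                         # e.g. scaled injection rates / half-cycle lengths in [0, 1]
        sigma, step, n_ens = 0.05, 0.5, 30

        for _ in range(50):
            base = npv(u)
            perturbations = rng.normal(0.0, sigma, size=(n_ens, u.size))
            # Ensemble (stochastic simplex) approximation of the NPV gradient w.r.t. the controls.
            grad = np.mean([(npv(u + d) - base) * d for d in perturbations], axis=0) / sigma**2
            u = np.clip(u + step * grad, 0.0, 1.0)   # projected ascent onto the bound constraints

        print("optimized controls:", np.round(u, 3), " NPV:", round(npv(u), 3))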

  17. CO2 water-alternating-gas injection for enhanced oil recovery: Optimal well controls and half-cycle lengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Bailian; Reynolds, Albert C.

    We report that CO2 water-alternating-gas (WAG) injection is an enhanced oil recovery method designed to improve sweep efficiency during CO2 injection, with the injected water used to control the mobility of CO2 and to stabilize the gas front. Optimization of CO2-WAG injection is widely regarded as a viable technique for controlling the CO2 and oil miscible process. Poor recovery from CO2-WAG injection can be caused by inappropriately designed WAG parameters. In a previous study (Chen and Reynolds, 2016), we proposed an algorithm to optimize the well controls which maximize the life-cycle net present value (NPV). However, the effect of injection half-cycle lengths for each injector on oil recovery or NPV has not been well investigated. In this paper, an optimization framework based on the augmented Lagrangian method and the newly developed stochastic-simplex-approximate-gradient (StoSAG) algorithm is proposed to explore the possibility of simultaneous optimization of the WAG half-cycle lengths together with the well controls. Finally, the proposed framework is demonstrated with three reservoir examples.

  18. Spline approximations for nonlinear hereditary control systems

    NASA Technical Reports Server (NTRS)

    Daniel, P. L.

    1982-01-01

    A spline-based approximation scheme is discussed for optimal control problems governed by nonlinear nonautonomous delay differential equations. The approximating framework reduces the original control problem to a sequence of optimization problems governed by ordinary differential equations. Convergence proofs, which appeal directly to dissipative-type estimates for the underlying nonlinear operator, are given and numerical findings are summarized.

  19. Efficient computation of optimal actions.

    PubMed

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress--as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
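
    The core of the "problem becomes linear" claim can be sketched as follows (a paraphrase of the linearly solvable MDP construction; boundary and discounting details are omitted): writing the desirability z(x) = exp(-v(x)) for the optimal cost-to-go v, the Bellman equation turns into a linear equation in z, and the optimal action is obtained by reweighting the passive dynamics.

        z(x) \;=\; e^{-q(x)} \sum_{x'} p(x' \mid x)\, z(x'),
        \qquad\text{i.e.}\qquad z = G\,P\,z,\;\; G = \operatorname{diag}\!\bigl(e^{-q}\bigr),

        u^{*}(x' \mid x) \;=\; \frac{p(x' \mid x)\, z(x')}{\sum_{x''} p(x'' \mid x)\, z(x'')},

    where q(x) is the state cost, p the passive (uncontrolled) transition law, and the control cost penalizes the divergence between the controlled and passive dynamics.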

  20. Discrete-time entropy formulation of optimal and adaptive control problems

    NASA Technical Reports Server (NTRS)

    Tsai, Yweting A.; Casiello, Francisco A.; Loparo, Kenneth A.

    1992-01-01

    The discrete-time version of the entropy formulation of optimal control of problems developed by G. N. Saridis (1988) is discussed. Given a dynamical system, the uncertainty in the selection of the control is characterized by the probability distribution (density) function which maximizes the total entropy. The equivalence between the optimal control problem and the optimal entropy problem is established, and the total entropy is decomposed into a term associated with the certainty equivalent control law, the entropy of estimation, and the so-called equivocation of the active transmission of information from the controller to the estimator. This provides a useful framework for studying the certainty equivalent and adaptive control laws.

  1. Real-Time Load-Side Control of Electric Power Systems

    NASA Astrophysics Data System (ADS)

    Zhao, Changhong

    Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load side resources. Specifically, it studies two problems. (1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control. (2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
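
    A simplified version of the optimal load control (OLC) formulation described here can be written as below (simplified from the thesis's more general statement; the exact disutility and network terms vary by chapter): loads choose demand deviations to absorb an imbalance at minimum total disutility, and the frequency deviation acts as the Lagrange multiplier of the balance constraint, which is what makes purely local frequency measurements sufficient for the distributed algorithm.

        \min_{\underline d_i \le d_i \le \overline d_i}\;\; \sum_{i} c_i(d_i)
        \qquad\text{subject to}\qquad \sum_{i} d_i = \Delta P,

    where d_i is the demand deviation of load i from its nominal usage, c_i its (convex) disutility, and \Delta P the supply-demand imbalance to be absorbed.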

  2. [Regulation framework of watershed landscape pattern for non-point source pollution control based on 'source-sink' theory: A case study in the watershed of Maluan Bay, Xiamen City, China].

    PubMed

    Huang, Ning; Wang, Hong Ying; Lin, Tao; Liu, Qi Ming; Huang, Yun Feng; Li, Jian Xiong

    2016-10-01

    Watershed landscape pattern regulation and optimization based on 'source-sink' theory for non-point source pollution control is a cost-effective measure that is still in the exploratory stage. Taking the whole watershed as the research object, and building on landscape ecology, related theories, and existing research results, a regulation framework of watershed landscape pattern for non-point source pollution control was developed at two levels based on 'source-sink' theory in this study: 1) at the watershed level, the reasonable basic combination and spatial pattern of 'source-sink' landscapes were analyzed, and a holistic regulation and optimization method of landscape pattern was constructed; 2) at the landscape patch level, key 'source' landscapes were taken as the focus of regulation and optimization. Firstly, four identification criteria for key 'source' landscapes were developed: landscape pollutant loading per unit area, landscape slope, long and narrow transfer 'source' landscape, and pollutant loading per unit length of 'source' landscape along the riverbank. Secondly, nine types of regulation and optimization methods for different key 'source' landscapes in rural and urban areas were established, according to three regulation and optimization rules: 'sink' landscape inlay, banding 'sink' landscape supplement, and enhancement of the pollutant capacity of the original 'sink' landscape. Finally, the regulation framework was applied to the watershed of Maluan Bay in Xiamen City. A holistic regulation and optimization mode of the watershed landscape pattern of Maluan Bay and key 'source' landscape regulation and optimization measures for the three zones were developed, based on GIS technology, remote sensing images, and a DEM model.

  3. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  4. A Multiobjective Optimization Framework for Online Stochastic Optimal Control in Hybrid Electric Vehicles

    DOE PAGES

    Malikopoulos, Andreas

    2015-01-01

    The increasing urgency to extract additional efficiency from hybrid propulsion systems has led to the development of advanced power management control algorithms. In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain and we show that the control policy yielding the Pareto optimal solution minimizes online the long-run expected average cost per unit time criterion. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion. Both solutions achieved the same cumulative fuel consumption demonstrating that the online Pareto control policy is an optimal control policy.

  5. Hybrid Optimization in Urban Traffic Networks

    DOT National Transportation Integrated Search

    1979-04-01

    The hybrid optimization problem is formulated to provide a general theoretical framework for the analysis of a class of traffic control problems which takes into account the role of individual drivers as independent decisionmakers. Different behavior...

  6. Encoder-Decoder Optimization for Brain-Computer Interfaces

    PubMed Central

    Merel, Josh; Pianto, Donald M.; Cunningham, John P.; Paninski, Liam

    2015-01-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages. PMID:26029919

  7. Encoder-decoder optimization for brain-computer interfaces.

    PubMed

    Merel, Josh; Pianto, Donald M; Cunningham, John P; Paninski, Liam

    2015-06-01

    Neuroprosthetic brain-computer interfaces are systems that decode neural activity into useful control signals for effectors, such as a cursor on a computer screen. It has long been recognized that both the user and decoding system can adapt to increase the accuracy of the end effector. Co-adaptation is the process whereby a user learns to control the system in conjunction with the decoder adapting to learn the user's neural patterns. We provide a mathematical framework for co-adaptation and relate co-adaptation to the joint optimization of the user's control scheme ("encoding model") and the decoding algorithm's parameters. When the assumptions of that framework are respected, co-adaptation cannot yield better performance than that obtainable by an optimal initial choice of fixed decoder, coupled with optimal user learning. For a specific case, we provide numerical methods to obtain such an optimized decoder. We demonstrate our approach in a model brain-computer interface system using an online prosthesis simulator, a simple human-in-the-loop psychophysics setup which provides a non-invasive simulation of the BCI setting. These experiments support two claims: that users can learn encoders matched to fixed, optimal decoders and that, once learned, our approach yields expected performance advantages.

  8. Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.

    2012-01-01

    This paper presents a new model-reference adaptive control method based on a bi-objective optimal control formulation for systems with input uncertainty. A parallel predictor model is constructed to relate the predictor error to the estimation error of the control effectiveness matrix. In this work, we develop an optimal control modification adaptive control approach that seeks to minimize a bi-objective linear quadratic cost function of both the tracking error norm and predictor error norm simultaneously. The resulting adaptive laws for the parametric uncertainty and control effectiveness uncertainty are dependent on both the tracking error and predictor error, while the adaptive laws for the feedback gain and command feedforward gain are only dependent on the tracking error. The optimal control modification term provides robustness to the adaptive laws naturally from the optimal control framework. Simulations demonstrate the effectiveness of the proposed adaptive control approach.

  9. Identification of optimal feedback control rules from micro-quadrotor and insect flight trajectories.

    PubMed

    Faruque, Imraan A; Muijres, Florian T; Macfarlane, Kenneth M; Kehlenbeck, Andrew; Humbert, J Sean

    2018-06-01

    This paper presents "optimal identification," a framework for using experimental data to identify the optimality conditions associated with the feedback control law implemented in the measurements. The technique compares closed loop trajectory measurements against a reduced order model of the open loop dynamics, and uses linear matrix inequalities to solve an inverse optimal control problem as a convex optimization that estimates the controller optimality conditions. In this study, the optimal identification technique is applied to two examples, that of a millimeter-scale micro-quadrotor with an engineered controller on board, and the example of a population of freely flying Drosophila hydei maneuvering about forward flight. The micro-quadrotor results show that the performance indices used to design an optimal flight control law for a micro-quadrotor may be recovered from the closed loop simulated flight trajectories, and the Drosophila results indicate that the combined effect of the insect longitudinal flight control sensing and feedback acts principally to regulate pitch rate.

  10. Optimal Regulation of Virtual Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall Anese, Emiliano; Guggilam, Swaroop S.; Simonetto, Andrea

    This paper develops a real-time algorithmic framework for aggregations of distributed energy resources (DERs) in distribution networks to provide regulation services in response to transmission-level requests. Leveraging online primal-dual-type methods for time-varying optimization problems and suitable linearizations of the nonlinear AC power-flow equations, we believe this work establishes the system-theoretic foundation to realize the vision of distribution-level virtual power plants. The optimization framework controls the output powers of dispatchable DERs such that, in aggregate, they respond to automatic-generation-control and/or regulation-services commands. This is achieved while concurrently regulating voltages within the feeder and maximizing customers' and utility's performance objectives. Convergence and tracking capabilities are analytically established under suitable modeling assumptions. Simulations are provided to validate the proposed approach.

  11. Common foundations of optimal control across the sciences: evidence of a free lunch

    NASA Astrophysics Data System (ADS)

    Russell, Benjamin; Rabitz, Herschel

    2017-03-01

    A common goal in the sciences is optimization of an objective function by selecting control variables such that a desired outcome is achieved. This scenario can be expressed in terms of a control landscape of an objective considered as a function of the control variables. At the most basic level, it is known that the vast majority of quantum control landscapes possess no traps, whose presence would hinder reaching the objective. This paper reviews and extends the quantum control landscape assessment, presenting evidence that the same highly favourable landscape features exist in many other domains of science. The implications of this broader evidence are discussed. Specifically, control landscape examples from quantum mechanics, chemistry and evolutionary biology are presented. Despite the obvious differences, commonalities between these areas are highlighted within a unified mathematical framework. This mathematical framework is driven by the wide-ranging experimental evidence on the ease of finding optimal controls (in terms of the required algorithmic search effort beyond the laboratory set-up overhead). The full scope and implications of this observed common control behaviour pose an open question for assessment in further work. This article is part of the themed issue 'Horizons of cybernetical physics'.

  12. Statistical estimation via convex optimization for trending and performance monitoring

    NASA Astrophysics Data System (ADS)

    Samar, Sikandar

    This thesis presents an optimization-based statistical estimation approach to find unknown trends in noisy data. A Bayesian framework is used to explicitly take into account prior information about the trends via trend models and constraints. The main focus is on convex formulation of the Bayesian estimation problem, which allows efficient computation of (globally) optimal estimates. There are two main parts of this thesis. The first part formulates trend estimation in systems described by known detailed models as a convex optimization problem. Statistically optimal estimates are then obtained by maximizing a concave log-likelihood function subject to convex constraints. We consider the problem of growing problem dimension as more measurements become available, and introduce a moving horizon framework to enable recursive estimation of the unknown trend by solving a fixed-size convex optimization problem at each horizon. We also present a distributed estimation framework, based on the dual decomposition method, for a system formed by a network of complex sensors with local (convex) estimation. Two specific applications of the convex optimization-based Bayesian estimation approach are described in the second part of the thesis. Batch estimation for parametric diagnostics in a flight control simulation of a space launch vehicle is shown to detect incipient fault trends despite the natural masking properties of feedback in the guidance and control loops. A moving horizon approach is used to estimate time-varying fault parameters in a detailed nonlinear simulation model of an unmanned aerial vehicle. Excellent performance is demonstrated in the presence of winds and turbulence.
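
    A minimal convex trend-estimation example in the spirit of the thesis is sketched below: a quadratic (Gaussian) negative log-likelihood plus an l1 penalty on second differences and simple bound constraints standing in for the trend-model prior, solved with cvxpy. The data generator, penalty weight, and bounds are hypothetical.

    ```python
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(1)
    T = 200
    t = np.arange(T)
    true_trend = 0.02 * np.minimum(t, 100) - 0.01 * np.maximum(t - 100, 0)  # piecewise linear
    y = true_trend + 0.15 * rng.standard_normal(T)                          # noisy measurements

    x = cp.Variable(T)
    D2 = np.diff(np.eye(T), n=2, axis=0)      # second-difference operator (piecewise-linear prior)
    lam = 5.0
    # Gaussian noise gives a quadratic negative log-likelihood; the l1 penalty and the
    # bound constraints stand in for the (convex) trend-model prior information.
    problem = cp.Problem(
        cp.Minimize(cp.sum_squares(y - x) + lam * cp.norm1(D2 @ x)),
        [x >= -1.0, x <= 3.0],
    )
    problem.solve()
    print("estimated trend at t = 0, 100, 199:", np.round(x.value[[0, 100, 199]], 3))
    ```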

  13. Optimality, stochasticity, and variability in motor behavior

    PubMed Central

    Guigon, Emmanuel; Baraduc, Pierre; Desmurget, Michel

    2008-01-01

    Recent theories of motor control have proposed that the nervous system acts as a stochastically optimal controller, i.e. it plans and executes motor behaviors taking into account the nature and statistics of noise. Detrimental effects of noise are converted into a principled way of controlling movements. Attractive aspects of such theories are their ability to explain not only characteristic features of single motor acts, but also statistical properties of repeated actions. Here, we present a critical analysis of stochastic optimality in motor control which reveals several difficulties with this hypothesis. We show that stochastic control may not be necessary to explain the stochastic nature of motor behavior, and we propose an alternative framework, based on the action of a deterministic controller coupled with an optimal state estimator, which relieves drawbacks of stochastic optimality and appropriately explains movement variability. PMID:18202922

  14. Modeling and Advanced Control for Sustainable Process Systems

    EPA Science Inventory

    This book chapter introduces a novel process systems engineering framework that integrates process control with sustainability assessment tools for the simultaneous evaluation and optimization of process operations. The implemented control strategy consists of a biologically-insp...

  15. Optimal control of underactuated mechanical systems: A geometric approach

    NASA Astrophysics Data System (ADS)

    Colombo, Leonardo; Martín De Diego, David; Zuccalli, Marcela

    2010-08-01

    In this paper, we consider a geometric formalism for optimal control of underactuated mechanical systems. Our techniques are an adaptation of the classical Skinner and Rusk approach for the case of Lagrangian dynamics with higher-order constraints. We study a regular case where it is possible to establish a symplectic framework and, as a consequence, to obtain a unique vector field determining the dynamics of the optimal control problem. These developments will allow us to develop a new class of geometric integrators based on discrete variational calculus.

  16. Evolutionary game based control for biological systems with applications in drug delivery.

    PubMed

    Li, Xiaobo; Lenaghan, Scott C; Zhang, Mingjun

    2013-06-07

    Control engineering and analysis of biological systems have become increasingly important for systems and synthetic biology. Unfortunately, no widely accepted control framework is currently available for these systems, especially at the cell and molecular levels. This is partially due to the lack of appropriate mathematical models to describe the unique dynamics of biological systems, and the lack of implementation techniques, such as ultra-fast and ultra-small devices and corresponding control algorithms. This paper proposes a control framework for biological systems subject to dynamics that exhibit adaptive behavior under evolutionary pressures. The control framework was formulated based on evolutionary game based modeling, which integrates both the internal dynamics and the population dynamics. In the proposed control framework, the adaptive behavior was characterized as an internal dynamic, and the external environment was regarded as an external control input. The proposed open-interface control framework can be integrated with additional control algorithms for control of biological systems. To demonstrate the effectiveness of the proposed framework, an optimal control strategy was developed and validated for drug delivery using the pathogen Giardia lamblia as a test case. In principle, the proposed control framework can be applied to any biological system exhibiting adaptive behavior under evolutionary pressures. Copyright © 2013 Elsevier Ltd. All rights reserved.
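
    As a toy illustration of evolutionary-game-based control, the sketch below simulates replicator dynamics for two phenotypes whose fitness is perturbed by an external "dose" signal acting as the control input; the payoff matrix, dosing schedule, and dose effect are assumptions, not the Giardia lamblia model used in the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Payoff (fitness) matrix for two phenotypes ("susceptible", "resistant");
    # all values, and the dose effect below, are illustrative assumptions.
    A = np.array([[1.0, 0.6],
                  [0.8, 0.9]])

    def drug_dose(t):
        """External control input u(t): a pulsed dose, playing the role of the environment."""
        return 0.8 if (t % 10.0) < 3.0 else 0.0

    def replicator(t, x):
        u = drug_dose(t)
        fitness = A @ x - u * np.array([1.0, 0.2])   # the dose hits the susceptible type harder
        return x * (fitness - x @ fitness)           # replicator equation with control input

    x0 = np.array([0.9, 0.1])                        # initial phenotype frequencies
    sol = solve_ivp(replicator, (0.0, 50.0), x0, max_step=0.1)
    print("final phenotype frequencies:", np.round(sol.y[:, -1], 3))
    ```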

  17. A roadmap for optimal control: the right way to commute.

    PubMed

    Ross, I Michael

    2005-12-01

    Optimal control theory is the foundation for many problems in astrodynamics. Typical examples are trajectory design and optimization, relative motion control of distributed space systems and attitude steering. Many such problems in astrodynamics are solved by an alternative route of mathematical analysis and deep physical insight, in part because of the perception that an optimal control framework generates hard problems. Although this is indeed true of the Bellman and Pontryagin frameworks, the covector mapping principle provides a neoclassical approach that renders hard problems easy. That is, although the origins of this philosophy can be traced back to Bernoulli and Euler, it is essentially modern as a result of the strong linkage between approximation theory, set-valued analysis and computing technology. Motivated by the broad success of this approach, mission planners are now conceiving and demanding higher performance from space systems. This has resulted in a new set of theoretical and computational problems. Recently, under the leadership of NASA-GRC, several workshops were held to address some of these problems. This paper outlines the theoretical issues stemming from practical problems in astrodynamics. Emphasis is placed on how it pertains to advanced mission design problems.

  18. Topology Optimization using the Level Set and eXtended Finite Element Methods: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Villanueva Perez, Carlos Hernan

    Computational design optimization provides designers with automated techniques to develop novel and non-intuitive optimal designs. Topology optimization is a design optimization technique that allows for the evolution of a broad variety of geometries in the optimization process. Traditional density-based topology optimization methods often lack a sufficient resolution of the geometry and physical response, which prevents direct use of the optimized design in manufacturing and the accurate modeling of the physical response of boundary conditions. The goal of this thesis is to introduce a unified topology optimization framework that uses the Level Set Method (LSM) to describe the design geometry and the eXtended Finite Element Method (XFEM) to solve the governing equations and measure the performance of the design. The methodology is presented as an alternative to density-based optimization approaches, and is able to accommodate a broad range of engineering design problems. The framework presents state-of-the-art methods for immersed boundary techniques to stabilize the systems of equations and enforce the boundary conditions, and is studied with applications in 2D and 3D linear elastic structures, incompressible flow, and energy and species transport problems to test the robustness and the characteristics of the method. A comparison of the framework against density-based topology optimization approaches is studied with regards to convergence, performance, and the capability to manufacture the designs. Furthermore, the ability to control the shape of the design to operate within manufacturing constraints is developed and studied. The analysis capability of the framework is validated quantitatively through comparison against previous benchmark studies, and qualitatively through its application to topology optimization problems. The design optimization problems converge to intuitive designs that closely resemble the results of previous 2D or density-based studies.

  19. CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.

  20. Effect of interaction strength on robustness of controlling edge dynamics in complex networks

    NASA Astrophysics Data System (ADS)

    Pang, Shao-Peng; Hao, Fei

    2018-05-01

    Robustness plays a critical role in the controllability of complex networks to withstand failures and perturbations. Recent advances in the edge controllability show that the interaction strength among edges plays a more important role than network structure. Therefore, we focus on the effect of interaction strength on the robustness of edge controllability. Using three categories of all edges to quantify the robustness, we develop a universal framework to evaluate and analyze the robustness in complex networks with arbitrary structures and interaction strengths. Applying our framework to a large number of model and real-world networks, we find that the interaction strength is a dominant factor for the robustness in undirected networks. Meanwhile, the strongest robustness and the optimal edge controllability in undirected networks can be achieved simultaneously. Different from the case of undirected networks, the robustness in directed networks is determined jointly by the interaction strength and the network's degree distribution. Moreover, a stronger robustness is usually associated with a larger number of driver nodes required to maintain full control in directed networks. This prompts us to provide an optimization method by adjusting the interaction strength to optimize the robustness of edge controllability.

  1. Application of controller partitioning optimization procedure to integrated flight/propulsion control design for a STOVL aircraft

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Schmidt, Phillip H.

    1993-01-01

    A parameter optimization framework has earlier been developed to solve the problem of partitioning a centralized controller into a decentralized, hierarchical structure suitable for integrated flight/propulsion control implementation. This paper presents results from the application of the controller partitioning optimization procedure to IFPC design for a Short Take-Off and Vertical Landing (STOVL) aircraft in transition flight. The controller partitioning problem and the parameter optimization algorithm are briefly described. Insight is provided into choosing various 'user' selected parameters in the optimization cost function such that the resulting optimized subcontrollers will meet the characteristics of the centralized controller that are crucial to achieving the desired closed-loop performance and robustness, while maintaining the desired subcontroller structure constraints that are crucial for IFPC implementation. The optimization procedure is shown to improve upon the initial partitioned subcontrollers and lead to performance comparable to that achieved with the centralized controller. This application also provides insight into the issues that should be addressed at the centralized control design level in order to obtain implementable partitioned subcontrollers.

  2. Global dynamic optimization approach to predict activation in metabolic pathways.

    PubMed

    de Hijas-Liste, Gundián M; Klipp, Edda; Balsa-Canto, Eva; Banga, Julio R

    2014-01-06

    During the last decade, a number of authors have shown that the genetic regulation of metabolic networks may follow optimality principles. Optimal control theory has been successfully used to compute optimal enzyme profiles considering simple metabolic pathways. However, applying this optimal control framework to more general networks (e.g. branched networks, or networks incorporating enzyme production dynamics) yields problems that are analytically intractable and/or numerically very challenging. Further, these previous studies have only considered a single-objective framework. In this work we consider a more general multi-objective formulation and we present solutions based on recent developments in global dynamic optimization techniques. We illustrate the performance and capabilities of these techniques considering two sets of problems. First, we consider a set of single-objective examples of increasing complexity taken from the recent literature. We analyze the multimodal character of the associated nonlinear optimization problems, and we also evaluate different global optimization approaches in terms of numerical robustness, efficiency and scalability. Second, we consider generalized multi-objective formulations for several examples, and we show how this framework results in more biologically meaningful results. The proposed strategy was used to solve a set of single-objective case studies related to unbranched and branched metabolic networks of different levels of complexity. All problems were successfully solved in reasonable computation times with our global dynamic optimization approach, reaching solutions which were comparable to or better than those reported in the previous literature. Further, we considered, for the first time, multi-objective formulations, illustrating how activation in metabolic pathways can be explained in terms of the best trade-offs between conflicting objectives. This new methodology can be applied to metabolic networks with arbitrary topologies, non-linear dynamics and constraints.
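
    A minimal sketch of the control-vector-parameterization idea for enzyme activation follows: a piecewise-constant enzyme allocation for a two-step unbranched pathway is optimized by a global, derivative-free search (scipy's differential evolution). The kinetics, horizon, and single objective are illustrative; the paper's models, constraints, and global solvers are considerably richer.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import differential_evolution

    k1, k2, tf, N = 1.0, 0.8, 10.0, 5        # illustrative kinetics and 5 control intervals

    def simulate(e1_profile):
        """Piecewise-constant enzyme allocation e1(t); e2 = 1 - e1 (fixed enzyme budget)."""
        def rhs(t, y, e1):
            S, I, P = y                      # substrate, intermediate, product
            e2 = 1.0 - e1
            return [-k1 * e1 * S, k1 * e1 * S - k2 * e2 * I, k2 * e2 * I]
        y = [1.0, 0.0, 0.0]
        edges = np.linspace(0.0, tf, N + 1)
        for e1, t0, t1 in zip(e1_profile, edges[:-1], edges[1:]):
            y = solve_ivp(rhs, (t0, t1), y, args=(e1,)).y[:, -1]
        return y

    def neg_final_product(e1_profile):
        return -simulate(e1_profile)[2]      # maximize product at the final time

    res = differential_evolution(neg_final_product, bounds=[(0.0, 1.0)] * N,
                                 seed=0, maxiter=30, tol=1e-6)
    print("enzyme-1 fraction per interval:", np.round(res.x, 2), " P(tf) =", -res.fun)
    ```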

  3. Optimization of the Controlled Evaluation of Closed Relational Queries

    NASA Astrophysics Data System (ADS)

    Biskup, Joachim; Lochner, Jan-Hendrik; Sonntag, Sebastian

    For relational databases, controlled query evaluation is an effective inference control mechanism preserving confidentiality regarding a previously declared confidentiality policy. Implementations of controlled query evaluation usually lack efficiency due to costly theorem prover calls. Suitably constrained controlled query evaluation can be implemented efficiently, but is not flexible enough from the perspective of database users and security administrators. In this paper, we propose an optimized framework for controlled query evaluation in relational databases, being efficiently implementable on the one hand and relaxing the constraints of previous approaches on the other hand.

  4. Nonlinear and non-Gaussian Bayesian based handwriting beautification

    NASA Astrophysics Data System (ADS)

    Shi, Cao; Xiao, Jianguo; Xu, Canhui; Jia, Wenhua

    2013-03-01

    A framework is proposed in this paper to effectively and efficiently beautify handwriting by means of a novel nonlinear and non-Gaussian Bayesian algorithm. In the proposed framework, the format and size of the handwriting image are first normalized, and then a typeface from the computer system is applied to optimize the visual effect of the handwriting. Bayesian statistics is exploited to characterize the handwriting beautification process as a Bayesian dynamic model. The model parameters that translate, rotate and scale the typeface are governed by the state equation, and the matching optimization between the handwriting and the transformed typeface is captured by the measurement equation. Finally, the new typeface, transformed from the original one with the best nonlinear and non-Gaussian optimization, is the beautification result of the handwriting. Experimental results demonstrate that the proposed framework provides a creative handwriting beautification methodology to improve visual acceptance.

  5. Development of Chemical Process Design and Control for ...

    EPA Pesticide Factsheets

    This contribution describes a novel process systems engineering framework that couples advanced control with sustainability evaluation and decision making for the optimization of process operations to minimize environmental impacts associated with products, materials, and energy. The implemented control strategy combines a biologically inspired method with optimal control concepts for finding more sustainable operating trajectories. The sustainability assessment of process operating points is carried out by using the U.S. E.P.A.’s Gauging Reaction Effectiveness for the ENvironmental Sustainability of Chemistries with a multi-Objective Process Evaluator (GREENSCOPE) tool that provides scores for the selected indicators in the economic, material efficiency, environmental and energy areas. The indicator scores describe process performance on a sustainability measurement scale, effectively determining which operating point is more sustainable when multiple steady states exist for manufacturing a specific product. Through comparisons between a representative benchmark and the optimal steady states obtained through implementation of the proposed controller, a systematic decision can be made as to whether the implementation of the controller is moving the process towards a more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous fermentation process for fuel production, whose materi

  6. Dynamic optimization and adaptive controller design

    NASA Astrophysics Data System (ADS)

    Inamdar, S. R.

    2010-10-01

    In this work I present a new type of controller, an adaptive tracking controller that employs dynamic optimization to compute the current value of the control action for the temperature control of a nonisothermal continuously stirred tank reactor (CSTR). We begin with a two-state model of the nonisothermal CSTR, consisting of the mass and heat balance equations, and then add cooling-system dynamics to eliminate input multiplicity. The initial design value is obtained using local stability of the steady states, where the approach temperature for the cooling action is specified both as a steady state and as a design specification. Later we correct the dynamics by manipulating the material balance to use the feed concentration as a system parameter, an adaptive control measure that avoids actuator saturation in the main control loop. The analysis leading to the design of the dynamic-optimization-based parameter-adaptive controller is presented. The important component of this mathematical framework is reference trajectory generation to form an adaptive control measure.

  7. Optimal intervention strategies for cholera outbreak by education and chlorination

    NASA Astrophysics Data System (ADS)

    Bakhtiar, Toni

    2016-01-01

    This paper discusses the control of infectious diseases within an optimal control framework. A case study on cholera control was studied by considering two control strategies, namely education and chlorination. We split the former control into one component addressing person-to-person behaviour and another concerning person-to-environment conduct. The model comprises two interacting populations: a human population, which follows an SIR model, and a pathogen population. Pontryagin's maximum principle was applied to derive the optimality conditions, a set of differential equations consisting of the state and adjoint systems. Then, the fourth-order Runge-Kutta method was exploited to numerically solve the equation system. An illustrative example was provided to assess the effectiveness of the control strategies across a set of control scenarios.
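
    The sketch below shows a forward-backward sweep of the kind described, but for a simplified SIR model with a single transmission-reducing control and a quadratic control cost; the model, parameters, and objective are stand-ins rather than the paper's cholera (SIR plus pathogen) system.

    ```python
    import numpy as np

    # Toy SIR model with one "education" control u(t) that scales down transmission.
    # Objective (assumed): minimize the integral of I(t) + (c/2) u(t)^2 over [0, T].
    beta, gamma, c, u_max = 0.5, 0.2, 1.0, 0.9
    T, M = 30.0, 600
    h = T / M

    def f_state(y, u):                       # forward dynamics of (S, I)
        S, I = y
        new_inf = (1.0 - u) * beta * S * I
        return np.array([-new_inf, new_inf - gamma * I])

    def f_adjoint(lam, y, u):                # costate dynamics from the Hamiltonian
        S, I = y
        lS, lI = lam
        d = (1.0 - u) * beta * (lS - lI)
        return np.array([d * I, -1.0 + d * S + gamma * lI])

    def rk4(f, x, dt, *args):                # one Runge-Kutta step, extra args held fixed
        k1 = f(x, *args); k2 = f(x + 0.5 * dt * k1, *args)
        k3 = f(x + 0.5 * dt * k2, *args); k4 = f(x + dt * k3, *args)
        return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

    y = np.zeros((M + 1, 2)); lam = np.zeros((M + 1, 2)); u = np.zeros(M + 1)
    y[0] = [0.95, 0.05]                      # initial susceptible / infected fractions

    for sweep in range(100):                 # forward-backward sweep iterations
        for k in range(M):                   # forward sweep for the states
            y[k + 1] = rk4(f_state, y[k], h, u[k])
        lam[M] = 0.0                         # transversality condition at the final time
        for k in range(M, 0, -1):            # backward sweep for the costates
            lam[k - 1] = rk4(f_adjoint, lam[k], -h, y[k], u[k])
        # Optimality condition dH/du = 0  =>  u = (lam_I - lam_S) * beta * S * I / c
        u_new = np.clip((lam[:, 1] - lam[:, 0]) * beta * y[:, 0] * y[:, 1] / c, 0.0, u_max)
        if np.max(np.abs(u_new - u)) < 1e-6:
            u = u_new
            break
        u = 0.5 * (u + u_new)                # relaxation for numerical stability

    print("peak infected fraction under the optimized control:", round(y[:, 1].max(), 4))
    ```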

  8. A framework for quantifying and optimizing the value of seismic monitoring of infrastructure

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr

    2017-04-01

    This paper outlines a framework for quantifying and optimizing the value of information from structural health monitoring (SHM) technology deployed on large infrastructure, which may sustain damage in a series of earthquakes (the main shock and the aftershocks). The evolution of the damage state of the infrastructure without or with SHM is presented as a time-dependent, stochastic, discrete-state, observable and controllable nonlinear dynamical system. The pre-posterior Bayesian analysis and the decision tree are used for quantifying and optimizing the value of SHM information. An optimization problem is then formulated to decide on the adoption of SHM and to optimally manage the usage, operations and repair schedule of the possibly damaged infrastructure using the information from SHM. The objective function to minimize is the expected total cost or risk.
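
    A minimal pre-posterior (value-of-information) calculation of the kind underlying this framework is sketched below for a single damage state, a binary monitoring outcome, and two actions; all probabilities and costs are hypothetical and much simpler than the paper's multi-earthquake formulation.

    ```python
    # Two-state, two-action value-of-information toy calculation.
    p_damaged = 0.2
    cost = {("repair", "ok"): 50.0, ("repair", "damaged"): 60.0,
            ("ignore", "ok"): 0.0,  ("ignore", "damaged"): 500.0}
    # SHM likelihoods: probability that the system issues an alarm in each true state.
    p_alarm = {"damaged": 0.9, "ok": 0.1}

    def expected_cost(action, p_dam):
        return p_dam * cost[(action, "damaged")] + (1 - p_dam) * cost[(action, "ok")]

    # Without monitoring: choose the best action against the prior.
    cost_prior = min(expected_cost(a, p_damaged) for a in ("repair", "ignore"))

    # With monitoring (pre-posterior): average over alarm / no-alarm outcomes,
    # choosing the best action against the corresponding posterior each time.
    cost_with_shm = 0.0
    for alarm in (True, False):
        like = {s: p_alarm[s] if alarm else 1 - p_alarm[s] for s in ("damaged", "ok")}
        p_evidence = like["damaged"] * p_damaged + like["ok"] * (1 - p_damaged)
        posterior = like["damaged"] * p_damaged / p_evidence
        cost_with_shm += p_evidence * min(expected_cost(a, posterior)
                                          for a in ("repair", "ignore"))

    print("expected value of SHM information:", round(cost_prior - cost_with_shm, 2))
    ```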

  9. Topology-Optimized Multilayered Metaoptics

    NASA Astrophysics Data System (ADS)

    Lin, Zin; Groever, Benedikt; Capasso, Federico; Rodriguez, Alejandro W.; Lončar, Marko

    2018-04-01

    We propose a general topology-optimization framework for metasurface inverse design that can automatically discover highly complex multilayered metastructures with increased functionalities. In particular, we present topology-optimized multilayered geometries exhibiting angular phase control, including a single-piece nanophotonic metalens with angular aberration correction, as well as an angle-convergent metalens that focuses light onto the same focal spot regardless of the angle of incidence.

  10. Dynamics and control of DNA sequence amplification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marimuthu, Karthikeyan; Chakrabarti, Raj, E-mail: raj@pmc-group.com, E-mail: rajc@andrew.cmu.edu; Division of Fundamental Research, PMC Advanced Technology, Mount Laurel, New Jersey 08054

    2014-10-28

    DNA amplification is the process of replication of a specified DNA sequence in vitro through time-dependent manipulation of its external environment. A theoretical framework for determination of the optimal dynamic operating conditions of DNA amplification reactions, for any specified amplification objective, is presented based on first-principles biophysical modeling and control theory. Amplification of DNA is formulated as a problem in control theory with optimal solutions that can differ considerably from strategies typically used in practice. Using the Polymerase Chain Reaction as an example, sequence-dependent biophysical models for DNA amplification are cast as control systems, wherein the dynamics of the reaction are controlled by a manipulated input variable. Using these control systems, we demonstrate that there exists an optimal temperature cycling strategy for geometric amplification of any DNA sequence and formulate optimal control problems that can be used to derive the optimal temperature profile. Strategies for the optimal synthesis of the DNA amplification control trajectory are proposed. Analogous methods can be used to formulate control problems for more advanced amplification objectives corresponding to the design of new types of DNA amplification reactions.

  11. Mixed H2/H-infinity control with output feedback compensators using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H2/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H2/H-infinity control problem. The approach developed in this research is based on a finite time cost functional that depicts an H-infinity bound control problem in an H2-optimization setting. The goal is to define a time-domain cost function that optimizes the H2-norm of a system with an H-infinity-constraint function.

  12. Mixed H2/H-infinity control with an output-feedback compensator using parameter optimization

    NASA Technical Reports Server (NTRS)

    Schoemig, Ewald; Ly, Uy-Loi

    1992-01-01

    Among the many possible norm-based optimization methods, the concept of H-infinity optimal control has gained enormous attention in the past few years. Here the H-infinity framework, based on the Small Gain Theorem and the Youla Parameterization, effectively treats system uncertainties in the control law synthesis. A design approach involving a mixed H2/H-infinity norm strives to combine the advantages of both methods. This advantage motivates researchers toward finding solutions to the mixed H2/H-infinity control problem. The approach developed in this research is based on a finite time cost functional that depicts an H-infinity bound control problem in an H2-optimization setting. The goal is to define a time-domain cost function that optimizes the H2-norm of a system with an H-infinity-constraint function.

  13. Adaptive critic designs for optimal control of uncertain nonlinear systems with unmatched interconnections.

    PubMed

    Yang, Xiong; He, Haibo

    2018-05-26

    In this paper, we develop a novel optimal control strategy for a class of uncertain nonlinear systems with unmatched interconnections. To begin with, we present a stabilizing feedback controller for the interconnected nonlinear systems by modifying an array of optimal control laws of auxiliary subsystems. We also prove that this feedback controller ensures a specified cost function to achieve optimality. Then, under the framework of adaptive critic designs, we use critic networks to solve the Hamilton-Jacobi-Bellman equations associated with auxiliary subsystem optimal control laws. The critic network weights are tuned through the gradient descent method combined with an additional stabilizing term. By using the newly established weight tuning rules, we no longer need the initial admissible control condition. In addition, we demonstrate that all signals in the closed-loop auxiliary subsystems are stable in the sense of uniform ultimate boundedness by using classic Lyapunov techniques. Finally, we provide an interconnected nonlinear plant to validate the present control scheme. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. A predictive control framework for optimal energy extraction of wind farms

    NASA Astrophysics Data System (ADS)

    Vali, M.; van Wingerden, J. W.; Boersma, S.; Petrović, V.; Kühn, M.

    2016-09-01

    This paper proposes an adjoint-based model predictive control for optimal energy extraction of wind farms. It employs the axial induction factor of wind turbines to influence their aerodynamic interactions through the wake. The performance index is defined here as the total power production of the wind farm over a finite prediction horizon. A medium-fidelity wind farm model is utilized to predict the inflow propagation in advance. The adjoint method is employed to solve the formulated optimization problem in a cost-effective way and the first part of the optimal solution is implemented over the control horizon. This procedure is repeated at the next controller sample time, providing feedback into the optimization. The effectiveness and some key features of the proposed approach are studied for a two-turbine test case through simulations.
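
    As a much-simplified stand-in for the receding-horizon induction control described above, the sketch below optimizes the axial induction factors of two turbines under a steady Jensen-type wake model using numerical (not adjoint) gradients; the wind speed, wake-expansion constant, and turbine spacing are assumptions, not the paper's medium-fidelity model.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    U_inf, D, k_wake, x_gap = 8.0, 100.0, 0.05, 500.0   # inflow, rotor diameter, wake constant, spacing

    def farm_power(a):
        a1, a2 = a
        cp = lambda a_: 4.0 * a_ * (1.0 - a_) ** 2          # actuator-disc power coefficient
        deficit = 2.0 * a1 * (D / (D + 2.0 * k_wake * x_gap)) ** 2   # Jensen-type wake deficit
        U2 = U_inf * (1.0 - deficit)                         # waked inflow at the downstream turbine
        return cp(a1) * U_inf ** 3 + cp(a2) * U2 ** 3        # proportional to total farm power

    res = minimize(lambda a: -farm_power(a), x0=[1 / 3, 1 / 3], bounds=[(0.0, 1 / 3)] * 2)
    print("optimal induction factors:", np.round(res.x, 3))  # upstream turbine derates below 1/3
    ```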

  15. A Framework for Modeling Emerging Diseases to Inform Management

    PubMed Central

    Katz, Rachel A.; Richgels, Katherine L.D.; Walsh, Daniel P.; Grant, Evan H.C.

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge. PMID:27983501

  16. A Framework for Modeling Emerging Diseases to Inform Management.

    PubMed

    Russell, Robin E; Katz, Rachel A; Richgels, Katherine L D; Walsh, Daniel P; Grant, Evan H C

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge.

  17. A framework for modeling emerging diseases to inform management

    USGS Publications Warehouse

    Russell, Robin E.; Katz, Rachel A.; Richgels, Katherine L. D.; Walsh, Daniel P.; Grant, Evan H. Campbell

    2017-01-01

    The rapid emergence and reemergence of zoonotic diseases requires the ability to rapidly evaluate and implement optimal management decisions. Actions to control or mitigate the effects of emerging pathogens are commonly delayed because of uncertainty in the estimates and the predicted outcomes of the control tactics. The development of models that describe the best-known information regarding the disease system at the early stages of disease emergence is an essential step for optimal decision-making. Models can predict the potential effects of the pathogen, provide guidance for assessing the likelihood of success of different proposed management actions, quantify the uncertainty surrounding the choice of the optimal decision, and highlight critical areas for immediate research. We demonstrate how to develop models that can be used as a part of a decision-making framework to determine the likelihood of success of different management actions given current knowledge.

  18. Development of Chemical Process Design and Control for Sustainability

    EPA Science Inventory

    This contribution describes a novel process systems engineering framework that couples advanced control with sustainability evaluation and decision making for the optimization of process operations to minimize environmental impacts associated with products, materials, and energy....

  19. Integrated structure/control law design by multilevel optimization

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.; Schmidt, David K.

    1989-01-01

    A new approach to integrated structure/control law design based on multilevel optimization is presented. This new approach is applicable to aircraft and spacecraft and allows for the independent design of the structure and control law. Integration of the designs is achieved through use of an upper level coordination problem formulation within the multilevel optimization framework. The method requires the use of structure and control law design sensitivity information. A general multilevel structure/control law design problem formulation is given, and the use of Linear Quadratic Gaussian (LQG) control law design and design sensitivity methods within the formulation is illustrated. Results of three simple integrated structure/control law design examples are presented. These results show the capability of structure and control law design tradeoffs to improve controlled system performance within the multilevel approach.

  20. Design of Optimally Robust Control Systems.

    DTIC Science & Technology

    1980-01-01

    approach is that the optimization framework is an artificial device. While some design constraints can easily be incorporated into a single cost function...indicating that that point was indeed the solution. Also, an intelligent initial guess for k was important in order to avoid being hung up at the double

  1. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip; Garg, Sanjay; Holowecky, Brian

    1992-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.

  2. A parameter optimization approach to controller partitioning for integrated flight/propulsion control application

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip H.; Garg, Sanjay; Holowecky, Brian R.

    1993-01-01

    A parameter optimization framework is presented to solve the problem of partitioning a centralized controller into a decentralized hierarchical structure suitable for integrated flight/propulsion control implementation. The controller partitioning problem is briefly discussed and a cost function to be minimized is formulated, such that the resulting 'optimal' partitioned subsystem controllers will closely match the performance (including robustness) properties of the closed-loop system with the centralized controller while maintaining the desired controller partitioning structure. The cost function is written in terms of parameters in a state-space representation of the partitioned sub-controllers. Analytical expressions are obtained for the gradient of this cost function with respect to parameters, and an optimization algorithm is developed using modern computer-aided control design and analysis software. The capabilities of the algorithm are demonstrated by application to partitioned integrated flight/propulsion control design for a modern fighter aircraft in the short approach to landing task. The partitioning optimization is shown to lead to reduced-order subcontrollers that match the closed-loop command tracking and decoupling performance achieved by a high-order centralized controller.

  3. Optimal and robust control of quantum state transfer by shaping the spectral phase of ultrafast laser pulses.

    PubMed

    Guo, Yu; Dong, Daoyi; Shu, Chuan-Cun

    2018-04-04

    Achieving fast and efficient quantum state transfer is a fundamental task in physics, chemistry and quantum information science. However, the successful implementation of the perfect quantum state transfer also requires robustness under practically inevitable perturbative defects. Here, we demonstrate how an optimal and robust quantum state transfer can be achieved by shaping the spectral phase of an ultrafast laser pulse in the framework of frequency domain quantum optimal control theory. Our numerical simulations of a single dibenzoterrylene molecule as well as of atomic rubidium show that optimal and robust quantum state transfer via spectral phase modulated laser pulses can be achieved by incorporating a filtering function of the frequency into the optimization algorithm, which in turn has potential applications for ultrafast robust control of photochemical reactions.

  4. Optimization of Driving Styles for Fuel Economy Improvement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malikopoulos, Andreas; Aguilar, Juan P.

    2012-01-01

    Modern vehicles have sophisticated electronic control units, particularly to control engine operation with respect to a balance between fuel economy, emissions, and power. These control units are designed for specific driving conditions and testing. However, each individual driving style is different and rarely meets those driving conditions. In the research reported here we investigate those driving-style factors that have a major impact on fuel economy. An optimization framework is proposed with the aim of optimizing driving styles with respect to these driving factors. A set of polynomial metamodels is constructed to reflect the responses produced by changes in the driving factors. Then we compare the optimized driving styles to the original ones and evaluate the efficiency and effectiveness of the optimization formulation.
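
    A minimal sketch of the metamodel-then-optimize workflow follows: a quadratic response surface is fitted to synthetic fuel-consumption data over two hypothetical driving-style factors and then minimized within bounds; the factors, data generator, and model form are assumptions, not the paper's metamodels.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Hypothetical driving-style factors: mean acceleration [m/s^2] and cruise speed [m/s].
    X = rng.uniform([0.5, 15.0], [3.0, 35.0], size=(200, 2))
    # Surrogate "measured" fuel consumption (L/100 km); the true response is unknown in
    # practice -- this quadratic-plus-noise stand-in is for illustration only.
    fuel = (4.0 + 1.2 * X[:, 0] ** 2 + 0.004 * (X[:, 1] - 22.0) ** 2
            + 0.1 * rng.standard_normal(200))

    def features(x):
        a, v = np.atleast_2d(x).T
        return np.column_stack([np.ones_like(a), a, v, a * v, a ** 2, v ** 2])

    coef, *_ = np.linalg.lstsq(features(X), fuel, rcond=None)   # quadratic metamodel
    metamodel = lambda x: features(x) @ coef

    res = minimize(lambda x: metamodel(x)[0], x0=[1.5, 25.0],
                   bounds=[(0.5, 3.0), (15.0, 35.0)])
    print("fuel-optimal driving-style factors:", np.round(res.x, 2))
    ```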

  5. Estimate the effective connectivity in multi-coupled neural mass model using particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Deng, Bin; Zhang, Zhen; Wei, Xile

    2017-03-01

    Assessment of the effective connectivity among different brain regions during seizures is a crucial problem in neuroscience today. Accordingly, a new model-inversion framework for brain functional imaging is introduced in this manuscript. This framework is based on approximating brain networks using a multi-coupled neural mass model (NMM). The NMM describes the excitatory and inhibitory neural interactions, capturing the mechanisms involved in seizure initiation, evolution and termination. A particle swarm optimization method is used to estimate the effective connectivity variation (the parameters of the NMM) and the epileptiform dynamics (the states of the NMM) that cannot be directly measured using electrophysiological measurement alone. The estimated effective connectivity includes both the local connectivity parameters within a single-region NMM and the remote connectivity parameters between multi-coupled NMMs. Once the epileptiform activities are estimated, a proportional-integral controller outputs a control signal so that the epileptiform spikes can be inhibited immediately. Numerical simulations are carried out to illustrate the effectiveness of the proposed framework. The framework and the results have a profound impact on the way we detect and treat epilepsy.
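
    The sketch below shows a bare-bones particle swarm optimizer fitting two parameters of a toy damped-oscillation signal, standing in for the estimation of neural-mass connectivity parameters from recordings; the surrogate model, parameter bounds, and PSO settings are all assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy "model inversion": recover two parameters of a damped oscillation from a noisy
    # recording, as a stand-in for estimating neural-mass connectivity parameters.
    t = np.linspace(0.0, 5.0, 200)

    def simulate(theta):
        freq, decay = theta
        return np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

    observed = simulate([1.3, 0.4]) + 0.05 * rng.standard_normal(t.size)
    cost = lambda th: np.mean((simulate(th) - observed) ** 2)

    # Minimal particle swarm optimizer (global, derivative-free).
    n_particles, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
    lo, hi = np.array([0.1, 0.0]), np.array([3.0, 2.0])
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()]

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()]

    print("estimated (frequency, decay):", np.round(gbest, 3))
    ```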

  6. Optimal Power Flow Pursuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea

    This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable onto low-cost microcontrollers that accompany power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method makes it possible to bypass traditional hierarchical setups where feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.
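
    A stripped-down sketch of the feedback primal-dual idea is given below: the setpoints of a few DERs are updated by a projected gradient step on a Lagrangian built from "measured" voltages and voltage-limit multipliers, using a made-up linear sensitivity model; the sensitivity matrix, limits, capacities, and cost are illustrative assumptions, not the paper's design.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    n = 4                                    # number of inverter-interfaced DERs
    R = 0.02 * (np.eye(n) + 0.3)             # assumed voltage sensitivity to injections (p.u./kW)
    v0 = np.full(n, 0.98)                    # no-injection voltage profile (p.u.)
    v_min, v_max = 0.95, 1.05
    p_max = np.array([5.0, 8.0, 6.0, 4.0])   # available power per DER (kW)
    cost_grad = lambda p: 0.1 * (p - p_max)  # quadratic cost favoring use of available power

    p = np.zeros(n)                          # power setpoints
    mu_up = np.zeros(n)                      # multipliers for the upper voltage limit
    mu_lo = np.zeros(n)                      # multipliers for the lower voltage limit
    alpha = 0.5

    for step in range(200):                  # one iteration per measurement cycle
        # "Measured" voltages: in the field these come from sensors, here from the model.
        v_meas = v0 + R @ p + 0.001 * rng.standard_normal(n)
        # Primal step: projected gradient on the Lagrangian, clipped to DER capability.
        grad = cost_grad(p) + R.T @ (mu_up - mu_lo)
        p = np.clip(p - alpha * grad, 0.0, p_max)
        # Dual step: gradient ascent on the voltage-limit multipliers.
        mu_up = np.maximum(0.0, mu_up + alpha * (v_meas - v_max))
        mu_lo = np.maximum(0.0, mu_lo + alpha * (v_min - v_meas))

    print("setpoints:", np.round(p, 2), "voltages:", np.round(v0 + R @ p, 3))
    ```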

  7. Optimization-based power management of hybrid power systems with applications in advanced hybrid electric vehicles and wind farms with battery storage

    NASA Astrophysics Data System (ADS)

    Borhan, Hoseinali

    Modern hybrid electric vehicles and many stationary renewable power generation systems combine multiple power generating and energy storage devices to achieve an overall system-level efficiency and flexibility which is higher than their individual components. The power or energy management control, "brain" of these "hybrid" systems, determines adaptively and based on the power demand the power split between multiple subsystems and plays a critical role in overall system-level efficiency. This dissertation proposes that a receding horizon optimal control (aka Model Predictive Control) approach can be a natural and systematic framework for formulating this type of power management controls. More importantly the dissertation develops new results based on the classical theory of optimal control that allow solving the resulting optimal control problem in real-time, in spite of the complexities that arise due to several system nonlinearities and constraints. The dissertation focus is on two classes of hybrid systems: hybrid electric vehicles in the first part and wind farms with battery storage in the second part. The first part of the dissertation proposes and fully develops a real-time optimization-based power management strategy for hybrid electric vehicles. Current industry practice uses rule-based control techniques with "else-then-if" logic and look-up maps and tables in the power management of production hybrid vehicles. These algorithms are not guaranteed to result in the best possible fuel economy and there exists a gap between their performance and a minimum possible fuel economy benchmark. Furthermore, considerable time and effort are spent calibrating the control system in the vehicle development phase, and there is little flexibility in real-time handling of constraints and re-optimization of the system operation in the event of changing operating conditions and varying parameters. In addition, a proliferation of different powertrain configurations may result in the need for repeated control system redesign. To address these shortcomings, we formulate the power management problem as a nonlinear and constrained optimal control problem. Solution of this optimal control problem in real-time on chronometric- and memory-constrained automotive microcontrollers is quite challenging; this computational complexity is due to the highly nonlinear dynamics of the powertrain subsystems, mixed-integer switching modes of their operation, and time-varying and nonlinear hard constraints that system variables should satisfy. The main contribution of the first part of the dissertation is that it establishes methods for systematic and step-by step improvements in fuel economy while maintaining the algorithmic computational requirements in a real-time implementable framework. More specifically a linear time-varying model predictive control approach is employed first which uses sequential quadratic programming to find sub-optimal solutions to the power management problem. Next the objective function is further refined and broken into a short and a long horizon segments; the latter approximated as a function of the state using the connection between the Pontryagin minimum principle and Hamilton-Jacobi-Bellman equations. The power management problem is then solved using a nonlinear MPC framework with a dynamic programming solver and the fuel economy is further improved. 
Typical simplifying academic assumptions are minimal throughout this work, thanks to close collaboration with research scientists at Ford research labs and their stringent requirement that the proposed solutions be tested on high-fidelity production models. Simulation results on a high-fidelity model of a hybrid electric vehicle over multiple standard driving cycles reveal the potential for substantial fuel economy gains. To address the control calibration challenges, we also present a novel and fast calibration technique utilizing parallel computing techniques. The second part of this dissertation presents an optimization-based control strategy for the power management of a wind farm with battery storage. The strategy seeks to minimize the error between the power delivered by the wind farm with battery storage and the power demand from an operator. In addition, the strategy attempts to maximize battery life. The control strategy has two main stages. The first stage produces a family of control solutions that minimize the power error subject to the battery constraints over an optimization horizon. These solutions are parameterized by a given value for the state of charge at the end of the optimization horizon. The second stage screens the family of control solutions to select one attaining an optimal balance between power error and battery life. The battery life model used in this stage is a weighted Amp-hour (Ah) throughput model. The control strategy is modular, allowing for more sophisticated optimization models in the first stage, or more elaborate battery life models in the second stage. The strategy is implemented in real-time in the framework of Model Predictive Control (MPC).

  8. A Framework for WWW Query Processing

    NASA Technical Reports Server (NTRS)

    Wu, Binghui Helen; Wharton, Stephen (Technical Monitor)

    2000-01-01

    Query processing is the most common operation in a DBMS. Sophisticated query processing has mainly been targeted at a single-enterprise environment providing centralized control over data and metadata. Queries submitted by anonymous users on the web differ in that load balancing and DBMS access control become the key issues. This paper provides a solution by introducing a framework for WWW query processing. The success of this framework lies in the utilization of query optimization techniques and the ontological approach. This methodology has proved to be cost effective at the NASA Goddard Space Flight Center Distributed Active Archive Center (GDAAC).

  9. Boundary control of bidomain equations with state-dependent switching source functions in the ionic model

    NASA Astrophysics Data System (ADS)

    Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl

    2014-09-01

    Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape calculus based treatment of the sensitivities of the discontinuous source terms and a marching cubes algorithm to track iso-surface of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.

  10. Optimal control for a tuberculosis model with undetected cases in Cameroon

    NASA Astrophysics Data System (ADS)

    Moualeu, D. P.; Weiser, M.; Ehrig, R.; Deuflhard, P.

    2015-03-01

    This paper considers the optimal control of tuberculosis through education, diagnosis campaigns and chemoprophylaxis of latently infected individuals. A mathematical model which includes important components such as undiagnosed infectious, diagnosed infectious, latently infected and lost-sight infectious individuals is formulated. The model combines a frequency-dependent and a density-dependent force of infection for TB transmission. Through optimal control theory and numerical simulations, a cost-effective balance of two different intervention methods is obtained. Seeking to minimize the amount of money the government spends while tuberculosis remains endemic in the Cameroonian population, Pontryagin's maximum principle is used to characterize the optimal control. The optimality system is derived and solved numerically using the forward-backward sweep method (FBSM). Results provide a framework for designing cost-effective strategies for diseases with multiple intervention methods. The results show that by combining chemoprophylaxis and education, the burden of TB can be reduced by 80% in 10 years.
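
    As a hedged illustration of the forward-backward sweep method mentioned above, the sketch below applies it to a scalar toy problem rather than the paper's TB model; the dynamics, cost weight B, and relaxation factor are assumptions, and only the generic structure (forward state sweep, backward adjoint sweep, control update from Pontryagin's characterization) carries over.

    ```python
    # Toy problem: minimize J = integral of (x^2 + B*u^2) dt,  x' = -x + u,  x(0) = 1.
    # Adjoint: lam' = -2*x + lam with lam(T) = 0; optimal control u* = -lam / (2*B).
    import numpy as np

    T, N, B = 5.0, 500, 0.5
    t = np.linspace(0.0, T, N + 1)
    dt = T / N
    u = np.zeros(N + 1)

    for _ in range(100):
        # Forward sweep for the state with the current control (explicit Euler).
        x = np.empty(N + 1)
        x[0] = 1.0
        for k in range(N):
            x[k + 1] = x[k] + dt * (-x[k] + u[k])
        # Backward sweep for the adjoint.
        lam = np.empty(N + 1)
        lam[-1] = 0.0
        for k in range(N, 0, -1):
            lam[k - 1] = lam[k] - dt * (-2.0 * x[k] + lam[k])
        # Control update from the optimality condition, with relaxation.
        u_new = -lam / (2.0 * B)
        if np.max(np.abs(u_new - u)) < 1e-6:
            break
        u = 0.5 * u + 0.5 * u_new

    print("approximate optimal cost:", np.trapz(x ** 2 + B * u ** 2, t))
    ```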

  11. Policy Iteration for H∞ Optimal Control of Polynomial Nonlinear Systems via Sum of Squares Programming.

    PubMed

    Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao

    2018-02-01

    Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. They can also act as approximators in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining the inequalities to be SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest attainable attenuation level and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.
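
    As a hedged illustration of the relaxation described above, one common form of the policy evaluation step for input-affine dynamics x' = f(x) + g(x)u + k(x)w with penalized output h(x) and attenuation level γ (an assumed standard setting, not necessarily the paper's exact formulation) asks, for fixed polynomial policies u_i and w_i, for a polynomial value function V_i whose Hamilton-Jacobi-Isaacs residual is nonpositive, enforced by requiring its negative to be SOS:

    ```latex
    % Illustrative SOS policy-iteration step (assumed standard form, not verbatim from the paper).
    \begin{aligned}
    &\text{find a polynomial } V_i(x)\ \text{with } V_i\ \text{SOS},\ V_i(0)=0,\ \text{such that}\\
    &-\Bigl(\nabla V_i^{\top}\bigl(f + g\,u_i + k\,w_i\bigr)
        + h^{\top}h + u_i^{\top}u_i - \gamma^{2}\, w_i^{\top}w_i\Bigr)\ \text{is SOS},\\
    &\text{then update}\quad
      u_{i+1} = -\tfrac{1}{2}\,g^{\top}\nabla V_i,\qquad
      w_{i+1} = \tfrac{1}{2\gamma^{2}}\,k^{\top}\nabla V_i .
    \end{aligned}
    ```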

  12. Safe-trajectory optimization and tracking control in ultra-close proximity to a failed satellite

    NASA Astrophysics Data System (ADS)

    Zhang, Jingrui; Chu, Xiaoyu; Zhang, Yao; Hu, Quan; Zhai, Guang; Li, Yanyan

    2018-03-01

    This paper presents a trajectory-optimization method for a chaser spacecraft operating in ultra-close proximity to a failed satellite. Based on the combination of active and passive trajectory protection, the constraints in the optimization framework are formulated for collision avoidance and successful docking in the presence of any thruster failure. The constraints are then handled by an adaptive Gauss pseudospectral method, in which the dynamic residuals are used as the metric to determine the distribution of collocation points. A finite-time feedback control is further employed in tracking the optimized trajectory. In particular, the stability and convergence of the controller are proved. Numerical results are given to demonstrate the effectiveness of the proposed methods.

  13. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which leads to a great improvement in system fault tolerance. Thus, a task within a multi-agent system can be completed in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel. Hence, the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of multicast under bandwidth limitations. Distributed algorithms have been applied to solve a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first project, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body; estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs. The proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.

  14. Optimal speech motor control and token-to-token variability: a Bayesian modeling approach.

    PubMed

    Patri, Jean-François; Diard, Julien; Perrier, Pascal

    2015-12-01

    The remarkable capacity of the speech motor system to adapt to various speech conditions is due to an excess of degrees of freedom, which enables producing similar acoustical properties with different sets of control strategies. To explain how the central nervous system selects one of the possible strategies, a common approach, in line with optimal motor control theories, is to model speech motor planning as the solution of an optimality problem based on cost functions. Despite the success of this approach, one of its drawbacks is the intrinsic contradiction between the concept of optimality and the observed experimental intra-speaker token-to-token variability. The present paper proposes an alternative approach by formulating feedforward optimal control in a probabilistic Bayesian modeling framework. This is illustrated by controlling a biomechanical model of the vocal tract for speech production and by comparing it with an existing optimal control model (GEPPETO). The essential elements of this optimal control model are presented first. From them the Bayesian model is constructed in a progressive way. Performance of the Bayesian model is evaluated based on computer simulations and compared to the optimal control model. This approach is shown to be appropriate for solving the speech planning problem while accounting for variability in a principled way.
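
    A minimal numerical sketch of the contrast drawn above is given below. It is not the paper's GEPPETO-based model: a scalar motor command u is mapped to an acoustic output through a toy forward map with noise, a deterministic planner returns a single command, and sampling the Bayesian posterior over commands that reach an assumed target region reproduces token-to-token variability. The forward map, noise level, prior, and target region are all assumptions.

    ```python
    # Hedged sketch: deterministic (single) plan vs. posterior sampling (variable tokens).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    f = lambda u: 1.5 * u + 0.2 * u ** 2          # toy motor-to-acoustic forward map
    sigma = 0.05                                   # production/sensory noise std
    target = (0.9, 1.1)                            # assumed acoustic target region

    u_grid = np.linspace(-1.0, 1.0, 2001)
    prior = norm.pdf(u_grid, loc=0.0, scale=0.5)   # preference for low-effort commands
    # Probability that the noisy output lands inside the target region, per command.
    lik = norm.cdf(target[1], f(u_grid), sigma) - norm.cdf(target[0], f(u_grid), sigma)
    post = prior * lik
    post /= np.trapz(post, u_grid)

    u_map = u_grid[np.argmax(post)]                # single "optimal-control-like" answer
    tokens = rng.choice(u_grid, size=10, p=post / post.sum())  # variable Bayesian tokens
    print("MAP command:", round(u_map, 3), "sampled commands:", np.round(tokens, 3))
    ```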

  15. Robust Constrained Optimization Approach to Control Design for International Space Station Centrifuge Rotor Auto Balancing Control System

    NASA Technical Reports Server (NTRS)

    Postma, Barry Dirk

    2005-01-01

    This thesis discusses the application of a robust constrained optimization approach to control design to develop an Auto Balancing Controller (ABC) for a centrifuge rotor to be implemented on the International Space Station. The design goal is to minimize a performance objective of the system while guaranteeing stability and proper performance for a range of uncertain plants. The performance objective is to minimize the translational response of the centrifuge rotor due to a fixed worst-case rotor imbalance. The robustness constraints are posed with respect to parametric uncertainty in the plant. The proposed approach to control design allows both of these objectives to be handled within the framework of constrained optimization. The resulting controller achieves acceptable performance and robustness characteristics.

  16. Towards a framework for selection of supervisory control for commercial buildings: HVAC system energy efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramachandran, Thiagarajan; Kundu, Soumya; Chen, Yan

    This paper develops and utilizes an optimization-based framework to investigate the maximal energy efficiency potentially attainable by HVAC system operation in a non-predictive context. Performance is evaluated relative to existing state-of-the-art set-point reset strategies. The expected efficiency increase driven by the relaxation of operational constraints is evaluated.

  18. Optimal Robust Motion Controller Design Using Multiobjective Genetic Algorithm

    PubMed Central

    Svečko, Rajko

    2014-01-01

    This paper describes the use of a multiobjective genetic algorithm for robust motion controller design. The motion controller structure is based on a disturbance observer in an RIC framework. The RIC approach is presented in a form with internal and external feedback loops, in which an internal disturbance rejection controller and an external performance controller must be synthesised. This paper introduces novel objectives for robustness and performance assessment of such an approach. The objective functions for the robustness property of RIC are based on simple even polynomials with nonnegativity conditions. A regional pole placement method is presented with the aims of simplifying the controllers' structures and allowing their additional arbitrary selection. Regional pole placement involves the arbitrary selection of central polynomials for both loops, together with an admissible region for the optimized pole locations. The polynomial deviation between the selected and optimized polynomials is measured with the derived performance objective functions. A multiobjective function is composed of different unrelated criteria such as robust stability, controllers' stability, and time-performance indexes of the closed loops. The design of the controllers and the multiobjective optimization procedure involve a set of objectives that are optimized simultaneously with a genetic algorithm (differential evolution). PMID:24987749
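
    The sketch below is only loosely related to the method above: the paper evolves controller parameters with a multiobjective genetic algorithm (differential evolution), whereas here a weighted-sum scalarization of two toy criteria (tracking error and control effort) is minimized with SciPy's differential_evolution for a first-order plant under PI control. The plant, criteria, gain bounds, and weights are assumptions, not those of the paper.

    ```python
    # Hedged scalarized stand-in for multiobjective controller tuning by evolution.
    import numpy as np
    from scipy.optimize import differential_evolution

    dt, T = 0.01, 5.0
    t = np.arange(0.0, T, dt)

    def closed_loop_cost(gains, w=(1.0, 0.05)):
        kp, ki = gains
        y, integ, cost_e, cost_u = 0.0, 0.0, 0.0, 0.0
        for _ in t:
            e = 1.0 - y                      # unit step reference
            integ += e * dt
            u = kp * e + ki * integ          # PI control law
            y += dt * (-y + u)               # toy plant y' = -y + u (Euler step)
            cost_e += abs(e) * dt
            cost_u += u * u * dt
        return w[0] * cost_e + w[1] * cost_u # weighted-sum scalarization

    result = differential_evolution(closed_loop_cost,
                                    bounds=[(0.0, 20.0), (0.0, 20.0)],
                                    seed=1, tol=1e-6)
    print("evolved PI gains:", np.round(result.x, 2), "cost:", round(result.fun, 3))
    ```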

  19. Two neural network algorithms for designing optimal terminal controllers with open final time

    NASA Technical Reports Server (NTRS)

    Plumer, Edward S.

    1992-01-01

    Multilayer neural networks, trained by the backpropagation through time algorithm (BPTT), have been used successfully as state-feedback controllers for nonlinear terminal control problems. Current BPTT techniques, however, are not able to deal systematically with open final-time situations such as minimum-time problems. Two approaches which extend BPTT to open final-time problems are presented. In the first, a neural network learns a mapping from initial-state to time-to-go. In the second, the optimal number of steps for each trial run is found using a line-search. Both methods are derived using Lagrange multiplier techniques. This theoretical framework is used to demonstrate that the derived algorithms are direct extensions of forward/backward sweep methods used in N-stage optimal control. The two algorithms are tested on a Zermelo problem and the resulting trajectories compare favorably to optimal control results.

  20. On Market-Based Coordination of Thermostatically Controlled Loads With User Preference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Sen; Zhang, Wei; Lian, Jianming

    2014-12-15

    This paper presents a market-based control framework to coordinate a group of autonomous Thermostatically Controlled Loads (TCL) to achieve system-level objectives with pricing incentives. The problem is formulated as maximizing the social welfare subject to a feeder power constraint. It allows the coordinator to affect the aggregated power of a group of dynamical systems, and creates an interactive market where the users and the coordinator cooperatively determine the optimal energy allocation and energy price. The optimal pricing strategy is derived, which maximizes social welfare while respecting the feeder power constraint. The bidding strategy is also designed to compute the optimal price in real time (e.g., every 5 minutes) based on local device information. The coordination framework is validated with realistic simulations in GridLab-D. Extensive simulation results demonstrate that the proposed approach effectively maximizes the social welfare and decreases power congestion at key times.
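
    A minimal sketch of the price-clearing idea under a feeder power cap is shown below; it is not the paper's bidding design or its GridLab-D implementation. Each load's quadratic utility, the feeder capacity, and the price bounds are illustrative assumptions; the coordinator simply bisects on the price until aggregate utility-maximizing demand meets the cap.

    ```python
    # Hedged market-clearing sketch: demand is a clipped linear function of price.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 50
    a = rng.uniform(0.5, 2.0, n)        # utility curvature per load
    b = rng.uniform(2.0, 6.0, n)        # marginal utility at zero consumption
    p_max = rng.uniform(2.0, 5.0, n)    # kW rating of each load
    feeder_cap = 100.0                  # kW feeder constraint

    def demand(price):
        # Utility-maximizing demand of each load: argmax_p  b*p - a/2*p^2 - price*p.
        return np.clip((b - price) / a, 0.0, p_max)

    lo, hi = 0.0, b.max()
    for _ in range(60):                 # bisection on the clearing price
        price = 0.5 * (lo + hi)
        if demand(price).sum() > feeder_cap:
            lo = price                  # too much demand -> raise the price
        else:
            hi = price
    print("clearing price:", round(price, 3), "total kW:", round(demand(price).sum(), 1))
    ```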

  1. Numerical approximation for the infinite-dimensional discrete-time optimal linear-quadratic regulator problem

    NASA Technical Reports Server (NTRS)

    Gibson, J. S.; Rosen, I. G.

    1986-01-01

    An abstract approximation framework is developed for the finite and infinite time horizon discrete-time linear-quadratic regulator problem for systems whose state dynamics are described by a linear semigroup of operators on an infinite dimensional Hilbert space. The schemes included in the framework yield finite dimensional approximations to the linear state feedback gains which determine the optimal control law. Convergence arguments are given. Examples involving hereditary and parabolic systems and the vibration of a flexible beam are considered. Spline-based finite element schemes for these classes of problems, together with numerical results, are presented and discussed.
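
    On each finite dimensional approximating subspace, the optimal gain can be computed with the standard discrete-time Riccati recursion; the sketch below shows only that finite dimensional step for an assumed toy system, not one of the paper's hereditary or parabolic examples.

    ```python
    # Hedged sketch: discrete-time LQR gain by iterating the Riccati map to a fixed point.
    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 0.95]])   # assumed toy state matrix
    B = np.array([[0.0], [0.1]])              # assumed toy input matrix
    Q = np.eye(2)                             # state weighting
    R = np.array([[0.1]])                     # input weighting

    P = Q.copy()
    for _ in range(500):                      # value iteration on the Riccati recursion
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P_next = Q + A.T @ P @ (A - B @ K)
        if np.max(np.abs(P_next - P)) < 1e-10:
            P = P_next
            break
        P = P_next

    print("optimal feedback gain K =", np.round(K, 4))   # control law u_k = -K x_k
    ```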

  2. Multidisciplinary Design Optimization of A Highly Flexible Aeroservoelastic Wing

    NASA Astrophysics Data System (ADS)

    Haghighat, Sohrab

    A multidisciplinary design optimization framework is developed that integrates control system design with aerostructural design for a highly deformable wing. The objective of this framework is to surpass the existing aircraft endurance limits through the use of an active load alleviation system designed concurrently with the rest of the aircraft. The novelty of this work is twofold. First, a unified dynamics framework is developed to represent the full six-degree-of-freedom rigid-body dynamics along with the structural dynamics. It allows for an integrated control design that accounts for both manoeuvrability (flying quality) and aeroelasticity criteria simultaneously. Second, by synthesizing the aircraft control system along with the structural sizing and aerodynamic shape design, the final design has the potential to exploit synergies among the three disciplines and yield higher-performing aircraft. A co-rotational structural framework featuring Euler-Bernoulli beam elements is developed to capture the wing's nonlinear deformations under the effect of aerodynamic and inertial loadings. In this work, a three-dimensional aerodynamic panel code, capable of calculating both steady and unsteady loadings, is used. Two different control methods, a model predictive controller (MPC) and a 2-DOF mixed-norm robust controller, are considered in this work to control a highly flexible aircraft. Both control techniques offer unique advantages that make them promising for controlling a highly flexible aircraft. The control system works towards executing time-dependent manoeuvres along with performing gust/manoeuvre load alleviation. The developed framework is investigated in two design cases: one in which the control system simply works towards achieving or maintaining a target altitude, and another where the control system also performs load alleviation. The use of the active load alleviation system results in a significant improvement in the aircraft performance relative to the optimum result without load alleviation. The results show that the inclusion of the control system discipline along with other disciplines at early stages of aircraft design improves aircraft performance. It is also shown that structural stresses due to gust excitations can be better controlled by the use of active structural control systems, which can improve the fatigue life of the structure.

  3. A robust model predictive control algorithm for uncertain nonlinear systems that guarantees resolvability

    NASA Technical Reports Server (NTRS)

    Acikmese, Ahmet Behcet; Carson, John M., III

    2006-01-01

    A robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems is developed that guarantees resolvability. With resolvability, initial feasibility of the finite-horizon optimal control problem implies future feasibility in a receding-horizon framework. The control consists of two components: (i) a feed-forward part and (ii) a feedback part. The feed-forward control is obtained by the online solution of a finite-horizon optimal control problem for the nominal system dynamics. The feedback control policy is designed off-line based on a bound on the uncertainty in the system model. The entire controller is shown to be robustly stabilizing with a region of attraction composed of initial states for which the finite-horizon optimal control problem is feasible. The controller design for this algorithm is demonstrated on a class of systems with uncertain nonlinear terms that have norm-bounded derivatives and derivatives in polytopes. An illustrative numerical example is also provided.
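
    The two-component structure described above can be sketched schematically as follows; this is not the paper's algorithm. A nominal feed-forward sequence is re-optimized at every step over a finite horizon, and an offline linear feedback (here an LQR gain as a stand-in) acts on the deviation between the perturbed state and the nominal prediction. The system matrices, costs, horizon, and disturbance bound are illustrative assumptions.

    ```python
    # Hedged feed-forward-plus-feedback receding-horizon sketch on a toy linear system.
    import numpy as np
    from scipy.linalg import solve_discrete_are
    from scipy.optimize import minimize

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.005], [0.1]])
    Q, R, N = np.eye(2), np.array([[0.1]]), 15
    rng = np.random.default_rng(7)

    # Offline feedback gain for the deviation dynamics (an LQR gain as a stand-in policy).
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

    def nominal_cost(u_seq, x0):
        # Finite-horizon quadratic cost of the nominal (disturbance-free) model.
        x, cost = x0.copy(), 0.0
        for u in u_seq:
            cost += x @ Q @ x + R[0, 0] * u * u
            x = A @ x + B[:, 0] * u
        return cost + x @ P @ x                          # terminal penalty

    x_true = np.array([2.0, 0.0])
    x_nom = x_true.copy()
    for _ in range(30):                                  # receding-horizon loop
        u_ff = minimize(nominal_cost, np.zeros(N), args=(x_nom,),
                        bounds=[(-1.0, 1.0)] * N).x[0]   # feed-forward component
        u = u_ff - float(K @ (x_true - x_nom))           # plus deviation feedback
        w = rng.uniform(-0.01, 0.01, size=2)             # bounded disturbance realization
        x_nom = A @ x_nom + B[:, 0] * u_ff               # nominal prediction
        x_true = A @ x_true + B[:, 0] * u + w            # perturbed system
    print("final true state:", np.round(x_true, 3))
    ```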

  4. Advanced Information Technology in Simulation Based Life Cycle Design

    NASA Technical Reports Server (NTRS)

    Renaud, John E.

    2003-01-01

    In this research a Collaborative Optimization (CO) approach for multidisciplinary systems design is used to develop a decision-based design framework for non-deterministic optimization. To date, CO strategies have been developed for application to deterministic systems design problems. In this research the decision-based design (DBD) framework proposed by Hazelrigg is modified for use in a collaborative optimization framework. The Hazelrigg framework as originally proposed provides a single-level strategy that combines engineering decisions with business decisions in one optimization. By transforming this framework for use in collaborative optimization, one can decompose the business and engineering decision-making processes. In the new multilevel framework of Decision Based Collaborative Optimization (DBCO), the business decisions are made at the system level. These business decisions result in a set of engineering performance targets that disciplinary engineering design teams seek to satisfy as part of subspace optimizations. The Decision Based Collaborative Optimization framework more accurately models the existing relationship between business and engineering in multidisciplinary systems design.

  5. Evaluation of optimal control type models for the human gunner in an Anti-Aircraft Artillery (AAA) system

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.; Kessler, K. M.

    1975-01-01

    The selection of the structure of optimal control type models for the human gunner in an anti-aircraft artillery system is considered. Several structures within the LQG framework may be formulated. Two basic types are considered: (1) kth-derivative controllers; and (2) proportional-integral-derivative (P-I-D) controllers. It is shown that a suitable criterion for model structure determination can be based on the ensemble statistics of the tracking error. In the case when the ensemble steady-state tracking error is zero, it is suggested that a P-I-D controller formulation be used in preference to the kth-derivative controller.

  6. Adaptive management for subsurface pressure and plume control in application to geological CO2 storage

    NASA Astrophysics Data System (ADS)

    Gonzalez-Nicolas, A.; Cihan, A.; Birkholzer, J. T.; Petrusak, R.; Zhou, Q.; Riestenberg, D. E.; Trautz, R. C.; Godec, M.

    2016-12-01

    Industrial-scale injection of CO2 into the subsurface can cause reservoir pressure increases that must be properly controlled to prevent any potential environmental impact. Excessive pressure buildup in the reservoir may result in groundwater contamination stemming from leakage through conductive pathways, such as improperly plugged abandoned wells or distant faults, and in the potential for fault reactivation and possibly seal breaching. Brine extraction is a viable approach for managing formation pressure, effective stress, and plume movement during industrial-scale CO2 injection projects. The main objective of this study is to investigate different suitable pressure management strategies involving active brine extraction and passive pressure relief wells. Adaptive optimized management of CO2 storage projects utilizes advanced automated optimization algorithms and suitable process models. The adaptive management integrates monitoring, forward modeling, inversion modeling and optimization through an iterative process. In this study, we employ an adaptive framework primarily to understand how the initial site characterization and the frequency of model updates (calibration) and optimization calculations for controlling extraction rates based on monitoring data affect the accuracy and success of the management, without violating pressure buildup constraints in the subsurface reservoir system. We present results of applying the adaptive framework to test the appropriateness of different management strategies for a realistic field injection project.

  7. Stochastic optimal control of ultradiffusion processes with application to dynamic portfolio management

    NASA Astrophysics Data System (ADS)

    Marcozzi, Michael D.

    2008-12-01

    We consider theoretical and approximation aspects of the stochastic optimal control of ultradiffusion processes in the context of a prototype model for the selling price of a European call option. Within a continuous-time framework, the dynamic management of a portfolio of assets is effected through continuous or point control, activation costs, and phase delay. The performance index is derived from the unique weak variational solution to the ultraparabolic Hamilton-Jacobi equation; the value function is the optimal realization of the performance index relative to all feasible portfolios. An approximation procedure based upon a temporal box scheme/finite element method is analyzed; numerical examples are presented in order to demonstrate the viability of the approach.

  8. Regulation of Dynamical Systems to Optimal Solutions of Semidefinite Programs: Algorithms and Applications to AC Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Giannakis, Georgios B.

    2015-07-01

    This paper considers a collection of networked nonlinear dynamical systems, and addresses the synthesis of feedback controllers that seek optimal operating points corresponding to the solution of pertinent network-wide optimization problems. Particular emphasis is placed on the solution of semidefinite programs (SDPs). The design of the feedback controller is grounded on a dual ε-subgradient approach, with the dual iterates utilized to dynamically update the dynamical-system reference signals. Global convergence is guaranteed for diminishing stepsize rules, even when the reference inputs are updated at a faster rate than the dynamical-system settling time. The application of the proposed framework to the control of power-electronic inverters in AC distribution systems is discussed. The objective is to bridge the time-scale separation between real-time inverter control and network-wide optimization. Optimization objectives assume the form of SDP relaxations of prototypical AC optimal power flow problems.
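
    The role of the dual iterates as price-like reference signals can be illustrated with a toy dual-subgradient loop; the sketch below is not the paper's SDP-based AC OPF design. It solves an equality-constrained quadratic resource allocation, with the local cost curvatures, the demand, and the (diminishing) step size all assumed.

    ```python
    # Hedged dual (sub)gradient sketch: minimize sum_i c_i x_i^2 s.t. sum_i x_i = d.
    import numpy as np

    c = np.array([1.0, 2.0, 4.0])     # assumed local cost curvatures
    d = 6.0                           # assumed network-wide balance requirement
    lam, alpha = 0.0, 0.4             # dual iterate and base step size

    for k in range(1, 201):
        x = lam / (2.0 * c)                            # local minimizers of c_i x^2 - lam*x
        lam += (alpha / np.sqrt(k)) * (d - x.sum())    # dual ascent on the constraint residual
    print("allocations:", np.round(x, 3), "price-like multiplier:", round(lam, 3))
    ```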

  9. Coupling between a multi-physics workflow engine and an optimization framework

    NASA Astrophysics Data System (ADS)

    Di Gallo, L.; Reux, C.; Imbeaux, F.; Artaud, J.-F.; Owsiak, M.; Saoutic, B.; Aiello, G.; Bernardi, P.; Ciraolo, G.; Bucalossi, J.; Duchateau, J.-L.; Fausser, C.; Galassi, D.; Hertout, P.; Jaboulay, J.-C.; Li-Puma, A.; Zani, L.

    2016-03-01

    A generic coupling method between a multi-physics workflow engine and an optimization framework is presented in this paper. The coupling architecture has been developed in order to preserve the integrity of the two frameworks. The objective is to provide the possibility to replace a framework, a workflow or an optimizer by another one without changing the whole coupling procedure or modifying the main content of each framework. The coupling is achieved by using a socket-based communication library for exchanging data between the two frameworks. Among the algorithms provided by optimization frameworks, Genetic Algorithms (GAs) have demonstrated their efficiency on single- and multiple-criteria optimization. In addition to their robustness, GAs can handle non-valid data which may appear during the optimization, so they work in the most general cases. A parallelized framework has been developed to reduce the time spent on optimizations and the evaluation of large samples. A test has shown good scaling efficiency of this parallelized framework. This coupling method has been applied to the case of SYCOMORE (SYstem COde for MOdeling tokamak REactor), a system code developed in the form of a modular workflow for designing magnetic fusion reactors. The coupling of SYCOMORE with the optimization platform URANIE enables design optimization along various figures of merit and constraints.

  10. Intelligent Control of Micro Grid: A Big Data-Based Control Center

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Wang, Yanping; Liu, Li; Wang, Zhiseng

    2018-01-01

    In this paper, a structure for a micro grid system with a big data-based control center is introduced. Energy data from distributed generation, storage and load are analyzed through the control center, and from the results new trends are predicted and applied as feedback to optimize the control. Therefore, each step in the micro grid can be adjusted and organized in a form of comprehensive management. A framework of real-time data collection, data processing and data analysis is proposed by employing big data technology. Consequently, integrated distributed generation and an optimized energy storage and transmission process can be implemented in the micro grid system.

  11. Virtual Factory Framework for Supporting Production Planning and Control.

    PubMed

    Kibira, Deogratias; Shao, Guodong

    2017-01-01

    Developing optimal production plans for smart manufacturing systems is challenging because shop floor events change dynamically. A virtual factory incorporating engineering tools, simulation, and optimization generates and communicates performance data to guide wise decision making for different control levels. This paper describes such a platform specifically for production planning. We also discuss verification and validation of the constituent models. A case study of a machine shop is used to demonstrate data generation for production planning in a virtual factory.

  12. Optimal design of green and grey stormwater infrastructure for small urban catchment based on life-cycle cost-effectiveness analysis

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Chui, T. F. M.

    2016-12-01

    Green infrastructure (GI) is identified as a sustainable and environmentally friendly alternative to conventional grey stormwater infrastructure. Commonly used GI (e.g. green roofs, bioretention, porous pavement) can provide multifunctional benefits, e.g. mitigation of urban heat island effects and improvements in air quality. Therefore, to optimize the design of GI and grey drainage infrastructure, it is essential to account for their benefits together with their costs. In this study, a comprehensive simulation-optimization modelling framework that considers the economic and hydro-environmental aspects of GI and grey infrastructure for small urban catchment applications is developed. Several modelling tools (i.e., the EPA SWMM model and the WERF BMP and LID Whole Life Cycle Cost Modelling Tools) and optimization solvers are coupled together to assess the life-cycle cost-effectiveness of GI and grey infrastructure, and to further develop optimal stormwater drainage solutions. A typical residential lot in New York City is examined as a case study. The life-cycle cost-effectiveness of various GI and grey infrastructure options is first examined at different investment levels. The results, together with the catchment parameters, are then provided to the optimization solvers to derive the optimal investment and contributing area of each type of stormwater control. The relationship between the investment and the optimized environmental benefit is found to be nonlinear. The optimized drainage solutions demonstrate that grey infrastructure is preferred at low total investments while more GI should be adopted at high investments. The sensitivity of the optimized solutions to the prices of the stormwater controls is evaluated and is found to be highly associated with their utilization in the base optimization case. The overall simulation-optimization framework can easily be applied to other sites worldwide and further developed into powerful decision support systems.

  13. Battery Energy Storage State-of-Charge Forecasting: Models, Optimization, and Accuracy

    DOE PAGES

    Rosewater, David; Ferreira, Summer; Schoenwald, David; ...

    2018-01-25

    Battery energy storage systems (BESS) are a critical technology for integrating high penetration renewable power on an intelligent electrical grid. As limited energy restricts the steady-state operational state-of-charge (SoC) of storage systems, SoC forecasting models are used to determine feasible charge and discharge schedules that supply grid services. Smart grid controllers use SoC forecasts to optimize BESS schedules to make grid operation more efficient and resilient. This study presents three advances in BESS state-of-charge forecasting. First, two forecasting models are reformulated to be conducive to parameter optimization. Second, a new method for selecting optimal parameter values based on operational data is presented. Last, a new framework for quantifying model accuracy is developed that enables a comparison between models, systems, and parameter selection methods. The accuracies achieved by both models, on two example battery systems, with each method of parameter selection are then compared in detail. The results of this analysis suggest variation in the suitability of these models for different battery types and applications. Finally, the proposed model formulations, optimization methods, and accuracy assessment framework can be used to improve the accuracy of SoC forecasts, enabling better control over BESS charge/discharge schedules.
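
    A minimal sketch in the spirit of the three steps above is given below; it is not the paper's forecasting models or assessment framework. A one-parameter coulomb-counting SoC forecaster is fitted to assumed operational data by least squares and then scored with an RMSE-style accuracy metric; the capacity, power trace, and noise level are assumptions.

    ```python
    # Hedged SoC-forecast sketch: fit a round-trip efficiency, then score forecast accuracy.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(11)
    dt, cap = 0.25, 100.0                          # hours per sample, kWh capacity
    power = rng.uniform(-20.0, 20.0, 96)           # measured charge(+)/discharge(-) kW

    def forecast_soc(eta, soc0=0.5):
        # Charging is derated by eta; discharging is inflated by 1/eta.
        step = np.where(power > 0, eta * power, power / eta) * dt / cap
        return np.clip(soc0 + np.cumsum(step), 0.0, 1.0)

    soc_meas = forecast_soc(0.92) + rng.normal(0.0, 0.005, 96)   # synthetic "operational" data

    fit = minimize_scalar(lambda eta: np.mean((forecast_soc(eta) - soc_meas) ** 2),
                          bounds=(0.7, 1.0), method="bounded")
    rmse = np.sqrt(np.mean((forecast_soc(fit.x) - soc_meas) ** 2))
    print(f"fitted efficiency: {fit.x:.3f}, forecast RMSE: {rmse:.4f}")
    ```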

  15. Framework Requirements for MDO Application Development

    NASA Technical Reports Server (NTRS)

    Salas, A. O.; Townsend, J. C.

    1999-01-01

    Frameworks or problem solving environments that support application development form an active area of research. The Multidisciplinary Optimization Branch at NASA Langley Research Center is investigating frameworks for supporting multidisciplinary analysis and optimization research. The Branch has generated a list of framework requirements, based on the experience gained from the Framework for Interdisciplinary Design Optimization project and the information acquired during a framework evaluation process. In this study, four existing frameworks are examined against these requirements. The results of this examination suggest several topics for further framework research.

  16. Mission Data System Java Edition Version 7

    NASA Technical Reports Server (NTRS)

    Reinholtz, William K.; Wagner, David A.

    2013-01-01

    The Mission Data System framework defines closed-loop control system abstractions from State Analysis including interfaces for state variables, goals, estimators, and controllers that can be adapted to implement a goal-oriented control system. The framework further provides an execution environment that includes a goal scheduler, execution engine, and fault monitor that support the expression of goal network activity plans. Using these frameworks, adapters can build a goal-oriented control system where activity coordination is verified before execution begins (plan time), and continually during execution. Plan failures including violations of safety constraints expressed in the plan can be handled through automatic re-planning. This version optimizes a number of key interfaces and features to minimize dependencies, performance overhead, and improve reliability. Fault diagnosis and real-time projection capabilities are incorporated. This version enhances earlier versions primarily through optimizations and quality improvements that raise the technology readiness level. Goals explicitly constrain system states over explicit time intervals to eliminate ambiguity about intent, as compared to command-oriented control that only implies persistent intent until another command is sent. A goal network scheduling and verification process ensures that all goals in the plan are achievable before starting execution. Goal failures at runtime can be detected (including predicted failures) and handled by adapted response logic. Responses can include plan repairs (try an alternate tactic to achieve the same goal), goal shedding, ignoring the fault, cancelling the plan, or safing the system.

  17. Forecasting Electricity Prices in an Optimization Hydrothermal Problem

    NASA Astrophysics Data System (ADS)

    Matías, J. M.; Bayón, L.; Suárez, P.; Argüelles, A.; Taboada, J.

    2007-12-01

    This paper presents an economic dispatch algorithm for a hydrothermal system within the framework of a competitive and deregulated electricity market. The optimization problem of one firm is described, whose objective function can be defined as its profit maximization. Since next-day price forecasting is a crucial aspect, this paper proposes a new, efficient yet highly accurate next-day price forecasting method using a functional time series approach that exploits the daily seasonal structure of the series of prices. For the optimization problem, an optimal control technique is applied and Pontryagin's theorem is employed.

  18. An Automated DAKOTA and VULCAN-CFD Framework with Application to Supersonic Facility Nozzle Flowpath Optimization

    NASA Technical Reports Server (NTRS)

    Axdahl, Erik L.

    2015-01-01

    Removing human interaction from design processes by using automation may lead to gains in both productivity and design precision. This memorandum describes efforts to incorporate high-fidelity numerical analysis tools into an automated framework and to apply that framework to problems of practical interest. The purpose of this effort was to integrate VULCAN-CFD into an automated, DAKOTA-enabled framework, with a proof-of-concept application being the optimization of supersonic test facility nozzles. It was shown that the optimization framework could be deployed on a high-performance computing cluster with the flow of information handled effectively to guide the optimization process. Furthermore, the application of the framework to supersonic test facility nozzle flowpath design and optimization was demonstrated using multiple optimization algorithms.

  19. Optimal control problems of epidemic systems with parameter uncertainties: application to a malaria two-age-classes transmission model with asymptomatic carriers.

    PubMed

    Mwanga, Gasper G; Haario, Heikki; Capasso, Vicenzo

    2015-03-01

    The main scope of this paper is to study the optimal control practices of malaria, by discussing the implementation of a catalog of optimal control strategies in the presence of parameter uncertainties, which is typical of infectious disease data. In this study we focus on a deterministic mathematical model for the transmission of malaria, including in particular asymptomatic carriers and two age classes in the human population. A partial qualitative analysis of the relevant ODE system has been carried out, leading to a realistic threshold parameter. For the deterministic model under consideration, four possible control strategies have been analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic and asymptomatic individuals. The numerical results show that using optimal control the disease can be brought to a stable disease-free equilibrium when all four controls are used. The Incremental Cost-Effectiveness Ratio (ICER) for all possible combinations of the disease-control measures is determined. The numerical simulations of the optimal control in the presence of parameter uncertainty demonstrate the robustness of the optimal control: the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty of model parameters. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. LMI-Based Generation of Feedback Laws for a Robust Model Predictive Control Algorithm

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet; Carson, John M., III

    2007-01-01

    This technical note provides a mathematical proof of Corollary 1 from the paper 'A Nonlinear Model Predictive Control Algorithm with Proven Robustness and Resolvability' that appeared in the 2006 Proceedings of the American Control Conference. The proof was omitted for brevity in the publication. The paper was based on algorithms developed for the FY2005 R&TD (Research and Technology Development) project for Small-body Guidance, Navigation, and Control [2]. The framework established by the Corollary is for a robustly stabilizing MPC (model predictive control) algorithm for uncertain nonlinear systems that guarantees the resolvability of the associated finite-horizon optimal control problem in a receding-horizon implementation. Additional details of the framework are available in the publication.

  1. Real-Time Optimal Flood Control Decision Making and Risk Propagation Under Multiple Uncertainties

    NASA Astrophysics Data System (ADS)

    Zhu, Feilin; Zhong, Ping-An; Sun, Yimeng; Yeh, William W.-G.

    2017-12-01

    Multiple uncertainties exist in the optimal flood control decision-making process, presenting risks involving flood control decisions. This paper defines the main steps in optimal flood control decision making that constitute the Forecast-Optimization-Decision Making (FODM) chain. We propose a framework for supporting optimal flood control decision making under multiple uncertainties and evaluate risk propagation along the FODM chain from a holistic perspective. To deal with uncertainties, we employ stochastic models at each link of the FODM chain. We generate synthetic ensemble flood forecasts via the martingale model of forecast evolution. We then establish a multiobjective stochastic programming with recourse model for optimal flood control operation. The Pareto front under uncertainty is derived via the constraint method coupled with a two-step process. We propose a novel SMAA-TOPSIS model for stochastic multicriteria decision making. We then propose a risk assessment model, together with measures of the risk of decision-making errors and the rank uncertainty degree, to quantify the risk propagation process along the FODM chain. We conduct numerical experiments to investigate the effects of flood forecast uncertainty on optimal flood control decision making and risk propagation. We apply the proposed methodology to a flood control system in the Daduhe River basin in China. The results indicate that the proposed method can provide valuable risk information in each link of the FODM chain and enable risk-informed decisions with higher reliability.
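
    The deterministic TOPSIS core used inside the stochastic SMAA-TOPSIS model can be sketched as follows (vector normalization, weighting, ideal and anti-ideal points, closeness ranking). The decision matrix of candidate release policies, the criteria, their senses, and the weights below are assumptions for illustration only.

    ```python
    # Hedged deterministic TOPSIS sketch; the paper embeds this in a stochastic SMAA setting.
    import numpy as np

    # Rows: candidate release policies; columns: criteria
    # (flood peak reduction [benefit], expected damage [cost], water shortage [cost]).
    X = np.array([[0.80, 120.0, 5.0],
                  [0.65,  90.0, 9.0],
                  [0.70, 100.0, 7.0]])
    weights = np.array([0.5, 0.3, 0.2])
    benefit = np.array([True, False, False])

    V = weights * X / np.linalg.norm(X, axis=0)          # vector normalization and weighting
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus = np.linalg.norm(V - ideal, axis=1)           # distance to the ideal point
    d_minus = np.linalg.norm(V - anti, axis=1)           # distance to the anti-ideal point
    closeness = d_minus / (d_plus + d_minus)
    print("closeness scores:", np.round(closeness, 3),
          "-> best alternative:", int(np.argmax(closeness)))
    ```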

  2. Framework for computationally efficient optimal irrigation scheduling using ant colony optimization

    USDA-ARS?s Scientific Manuscript database

    A general optimization framework is introduced with the overall goal of reducing search space size and increasing the computational efficiency of evolutionary algorithm application for optimal irrigation scheduling. The framework achieves this goal by representing the problem in the form of a decisi...

  3. Evaluation of Frameworks for HSCT Design Optimization

    NASA Technical Reports Server (NTRS)

    Krishnan, Ramki

    1998-01-01

    This report is an evaluation of engineering frameworks that could be used to augment, supplement, or replace the existing FIDO 3.5 (Framework for Interdisciplinary Design and Optimization Version 3.5) framework. The report begins with the motivation for this effort, followed by a description of an "ideal" multidisciplinary design and optimization (MDO) framework. The discussion then turns to how each candidate framework stacks up against this ideal. This report ends with recommendations as to the "best" frameworks that should be down-selected for detailed review.

  4. An Ensemble-Based Forecasting Framework to Optimize Reservoir Releases

    NASA Astrophysics Data System (ADS)

    Ramaswamy, V.; Saleh, F.

    2017-12-01

    The increasing frequency of extreme precipitation events is stressing the need to manage water resources on shorter timescales. Short-term management of water resources becomes proactive when inflow forecasts are available and this information can be effectively used in the control strategy. This work investigates the utility of short-term hydrological ensemble forecasts for operational decision making during extreme weather events. An advanced automated hydrologic prediction framework integrating a regional-scale hydrologic model, GIS datasets and the meteorological ensemble predictions from the European Center for Medium Range Weather Forecasting (ECMWF) was coupled to an implicit multi-objective dynamic programming model to optimize releases from a water supply reservoir. The proposed methodology was evaluated by retrospectively forecasting the inflows to the Oradell reservoir in the Hackensack River basin in New Jersey during the extreme hydrologic event Hurricane Irene. Additionally, the flexibility of the forecasting framework was investigated by forecasting the inflows from a moderate rainfall event to provide important perspectives on using the framework to assist reservoir operations during moderate events. The proposed forecasting framework seeks to provide a flexible, assistive tool to alleviate the complexity of operational decision making.

  5. An Inverse Optimal Control Approach to Explain Human Arm Reaching Control Based on Multiple Internal Models.

    PubMed

    Oguz, Ozgur S; Zhou, Zhehua; Glasauer, Stefan; Wollherr, Dirk

    2018-04-03

    Human motor control is highly efficient in generating accurate and appropriate motor behavior for a multitude of tasks. This paper examines how kinematic and dynamic properties of the musculoskeletal system are controlled to achieve such efficiency. Even though recent studies have shown that human motor control relies on multiple models, how the central nervous system (CNS) controls this combination is not fully addressed. In this study, we utilize an Inverse Optimal Control (IOC) framework in order to find the combination of those internal models and how this combination changes for different reaching tasks. We conducted an experiment where participants executed a comprehensive set of free-space reaching motions. The results show that there is a trade-off between kinematics-based and dynamics-based controllers depending on the reaching task. In addition, this trade-off depends on the initial and final arm configurations, which in turn affect the musculoskeletal load to be controlled. Given this insight, we further provide a discomfort metric to demonstrate its influence on the contribution of the different inverse internal models. This formulation together with our analysis not only supports the multiple internal models (MIMs) hypothesis but also suggests a hierarchical framework for the control of human reaching motions by the CNS.

  6. Bio-inspired Optimal Locomotion Reconfigurability of Quadruped Rovers using Central Pattern Generators

    NASA Astrophysics Data System (ADS)

    Bohra, Murtaza

    Legged rovers are often considered as viable solutions for traversing unknown terrain. This work addresses the optimal locomotion reconfigurability of quadruped rovers, which consists of obtaining optimal locomotion modes and transitioning between them. A 2D sagittal-plane rover model is considered, based on a domestic cat. Using a Genetic Algorithm, the gait, pose and control variables that minimize torque or maximize speed are found separately. The optimization approach takes into account the elimination of leg impact while considering the entire variable spectrum. The optimal solutions are consistent with other works on gait optimization, and are similar to gaits found in quadruped animals as well. An online, model-free gait planning framework based on Central Pattern Generators is also implemented. It is used to generate joint and control trajectories for any arbitrarily varying speed profile, and is shown to regulate locomotion transition and speed modulation, both endogenously and continuously.
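
    A minimal Central Pattern Generator sketch is given below; it is not the thesis's implementation. Four phase-coupled Hopf oscillators, one per leg, generate rhythmic joint set-points with a trot-like phase pattern; the frequency, coupling gain, and phase offsets are illustrative assumptions.

    ```python
    # Hedged CPG sketch: four coupled Hopf oscillators producing trot-like joint set-points.
    import numpy as np

    n, dt, steps = 4, 0.002, 5000
    mu, omega, k = 1.0, 2.0 * np.pi * 1.5, 2.0          # amplitude^2, rad/s, coupling gain
    phase_offsets = np.array([0.0, np.pi, np.pi, 0.0])  # trot: diagonal legs in phase

    x = 0.1 * np.ones(n)
    y = np.zeros(n)
    traj = np.zeros((steps, n))
    for t in range(steps):
        r2 = x ** 2 + y ** 2
        # Hopf dynamics drive each oscillator to a stable limit cycle of radius sqrt(mu).
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        # Phase coupling pulls oscillator i toward its desired offset from oscillator 0.
        theta = np.arctan2(y, x)
        dy += k * np.sin(theta[0] + phase_offsets - theta)
        x += dt * dx
        y += dt * dy
        traj[t] = x                                     # x is used as the joint set-point
    print("steady-state amplitudes:", np.round(np.max(traj[-1000:], axis=0), 2))
    ```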

  7. Data Transfer Advisor with Transport Profiling Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Liu, Qiang; Yun, Daqing

    The network infrastructures have been rapidly upgraded in many high-performance networks (HPNs). However, such infrastructure investment has not led to corresponding performance improvement in big data transfer, especially at the application layer, largely due to the complexity of optimizing transport control on end hosts. We design and implement ProbData, a PRofiling Optimization Based DAta Transfer Advisor, to help users determine the most effective data transfer method with the most appropriate control parameter values to achieve the best data transfer performance. ProbData employs a profiling optimization based approach to exploit the optimal operational zone of various data transfer methods in support of big data transfer in extreme-scale scientific applications. We present a theoretical framework of the optimized profiling approach employed in ProbData as well as its detailed design and implementation. The advising procedure and performance benefits of ProbData are illustrated and evaluated by proof-of-concept experiments in real-life networks.

  8. Correlations in state space can cause sub-optimal adaptation of optimal feedback control models.

    PubMed

    Aprasoff, Jonathan; Donchin, Opher

    2012-04-01

    Control of our movements is apparently facilitated by an adaptive internal model in the cerebellum. It was long thought that this internal model implemented an adaptive inverse model and generated motor commands, but recently many have rejected that idea in favor of a forward model hypothesis. In theory, the forward model predicts the upcoming state during reaching movements so the motor cortex can generate appropriate motor commands. Recent computational models of this process rely on the optimal feedback control (OFC) framework of control theory. Although OFC is a powerful tool for describing motor control, it does not describe adaptation. Some assume that adaptation of the forward model alone could explain motor adaptation, but this is widely understood to be overly simplistic. However, an adaptive optimal controller is difficult to implement. A reasonable alternative is to allow forward model adaptation to 're-tune' the controller. Our simulations show that, as expected, forward model adaptation alone does not produce optimal trajectories during reaching movements perturbed by force fields. However, they also show that re-optimizing the controller from the forward model can be sub-optimal. This is because, in a system with state correlations or redundancies, accurate prediction requires different information than optimal control. We find that adding noise to the movements that matches the noise found in human data is enough to overcome this problem. However, since the state space for control of real movements is far more complex than in our simple simulations, the effects of correlations on re-adaptation of the controller from the forward model cannot be overlooked.

  9. Finite burn maneuver modeling for a generalized spacecraft trajectory design and optimization system.

    PubMed

    Ocampo, Cesar

    2004-05-01

    The modeling, design, and optimization of finite burn maneuvers for a generalized trajectory design and optimization system is presented. A generalized trajectory design and optimization system is a system that uses a single unified framework to facilitate the modeling and optimization of complex spacecraft trajectories that may operate in complex gravitational force fields, use multiple propulsion systems, and involve multiple spacecraft. The modeling and optimization issues associated with the use of controlled engine burn maneuvers of finite thrust magnitude and duration are presented in the context of designing and optimizing a wide class of finite-thrust trajectories. Optimal control theory is used to examine the optimization of these maneuvers in arbitrary force fields that are generally position-, velocity-, mass-, and time-dependent. The associated numerical methods used to obtain these solutions involve either the solution of a system of nonlinear equations, an explicit parameter optimization method, or a hybrid parameter optimization that combines certain aspects of both. The theoretical and numerical methods presented here have been implemented in copernicus, a prototype trajectory design and optimization system under development at the University of Texas at Austin.

  10. Optimization of Process Parameters of Edge Robotic Deburring with Force Control

    NASA Astrophysics Data System (ADS)

    Burghardt, A.; Szybicki, D.; Kurc, K.; Muszyńska, M.

    2016-12-01

    The issues addressed in the paper present a part of the scientific research conducted within the framework of the automation of aircraft engine part manufacturing processes. The results of the research presented in the article provide information on the tolerances achievable in edge deburring when using a robotic control station with the force control option.

  11. Preface

    DTIC Science & Technology

    2016-09-13

    lems arising, for example, after discretization of optimal control problems. Lucien developed a general framework for quantifying near-optimality...Polak, E., Da Cunha, N.O.: Constrained minimization under vector-valued criteria in finite dimensional spaces. J. Math. Anal. Appl. 19(1), 103–124...1969) 12. Pironneau, O., Polak, E.: On the rate of convergence of certain methods of centers. Math. Program. 2(2), 230–258 (1972) 13. Polak, E., Sargent

  12. Toward a More Flexible Web-Based Framework for Multidisciplinary Design

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Salas, A. O.

    1999-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary design, is defined as a hardware-software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, monitoring, controlling, and displaying the design process. The objective of this research is to explore how Web technology can improve these areas of weakness and lead toward a more flexible framework. This article describes a Web-based system that optimizes and controls the execution sequence of design processes in addition to monitoring the project status and displaying the design results.

  13. Reinforcement learning for adaptive optimal control of unknown continuous-time nonlinear systems with input constraints

    NASA Astrophysics Data System (ADS)

    Yang, Xiong; Liu, Derong; Wang, Ding

    2014-03-01

    In this paper, an adaptive reinforcement learning-based solution is developed for the infinite-horizon optimal control problem of constrained-input continuous-time nonlinear systems in the presence of nonlinearities with unknown structures. Two different types of neural networks (NNs) are employed to approximate the Hamilton-Jacobi-Bellman equation. That is, a recurrent NN is constructed to identify the unknown dynamical system, and two feedforward NNs are used as the actor and the critic to approximate the optimal control and the optimal cost, respectively. Based on this framework, the action NN and the critic NN are tuned simultaneously, without the requirement for the knowledge of the system drift dynamics. Moreover, by using Lyapunov's direct method, the weights of the action NN and the critic NN are guaranteed to be uniformly ultimately bounded, while keeping the closed-loop system stable. To demonstrate the effectiveness of the present approach, simulation results are illustrated.

  14. Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach

    PubMed Central

    Girrbach, Fabian; Hol, Jeroen D.; Bellusci, Giovanni; Diehl, Moritz

    2017-01-01

    The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem. PMID:28534857
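
    A scalar toy moving-horizon estimator in the spirit of the scheme above is sketched below; it is not the paper's GNSS/IMU fusion algorithm. Known accelerations (standing in for IMU data) drive a position/velocity model, a sliding window of noisy position fixes (standing in for GNSS) is refit by least squares for the state at the start of the window, and the estimate is propagated to the current time. The window length, noise levels, and trajectory are assumptions.

    ```python
    # Hedged moving-horizon estimation sketch on a 1D position/velocity model.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(5)
    dt, T, M = 0.1, 120, 15                      # step, number of steps, window length
    acc = 0.5 * np.sin(0.05 * np.arange(T))      # known acceleration input ("IMU")
    p, v = np.zeros(T), np.zeros(T)
    for k in range(1, T):                        # ground-truth trajectory
        v[k] = v[k - 1] + dt * acc[k - 1]
        p[k] = p[k - 1] + dt * v[k - 1]
    z = p + rng.normal(0.0, 0.5, T)              # noisy position fixes ("GNSS")

    def rollout(x0, a_win):
        # Positions produced by the model from an initial [position, velocity] guess.
        pos, vel, traj = x0[0], x0[1], []
        for a in a_win:
            traj.append(pos)
            pos, vel = pos + dt * vel, vel + dt * a
        return np.array(traj)

    estimates = []
    for k in range(M, T):
        a_win, z_win = acc[k - M:k], z[k - M:k]
        sol = least_squares(lambda x0: rollout(x0, a_win) - z_win, x0=[z_win[0], 0.0])
        estimates.append(rollout(sol.x, np.r_[a_win, 0.0])[-1])   # propagate to time k
    print("final position error:", round(abs(estimates[-1] - p[T - 1]), 3))
    ```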

  15. Optimization-Based Sensor Fusion of GNSS and IMU Using a Moving Horizon Approach.

    PubMed

    Girrbach, Fabian; Hol, Jeroen D; Bellusci, Giovanni; Diehl, Moritz

    2017-05-19

    The rise of autonomous systems operating close to humans imposes new challenges in terms of robustness and precision on the estimation and control algorithms. Approaches based on nonlinear optimization, such as moving horizon estimation, have been shown to improve the accuracy of the estimated solution compared to traditional filter techniques. This paper introduces an optimization-based framework for multi-sensor fusion following a moving horizon scheme. The framework is applied to the often occurring estimation problem of motion tracking by fusing measurements of a global navigation satellite system receiver and an inertial measurement unit. The resulting algorithm is used to estimate position, velocity, and orientation of a maneuvering airplane and is evaluated against an accurate reference trajectory. A detailed study of the influence of the horizon length on the quality of the solution is presented and evaluated against filter-like and batch solutions of the problem. The versatile configuration possibilities of the framework are finally used to analyze the estimated solutions at different evaluation times exposing a nearly linear behavior of the sensor fusion problem.

  16. Optimal Network-based Intervention in the Presence of Undetectable Viruses.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2014-08-01

    This letter presents an optimal control framework to reduce the spread of viruses in networks. The network is modeled as an undirected graph of nodes and weighted links. We consider the spread of viruses in a network as a system, and the total number of infected nodes as the state of the system, while the control function is the link-weight reduction that slows/reduces the spread of viruses. Our epidemic model overcomes three assumptions that were extensively used in the literature and produced inaccurate results. We apply the optimal control formulation to crucial network structures. Numerical results show the dynamic weight reduction and reveal the role of the network structure and the epidemic model in reducing the infection size in the presence of indiscernible infected nodes.

  17. Optimal Network-based Intervention in the Presence of Undetectable Viruses

    PubMed Central

    Youssef, Mina; Scoglio, Caterina

    2014-01-01

    This letter presents an optimal control framework to reduce the spread of viruses in networks. The network is modeled as an undirected graph of nodes and weighted links. We consider the spread of viruses in a network as a system, and the total number of infected nodes as the state of the system, while the control function is the link-weight reduction that slows/reduces the spread of viruses. Our epidemic model overcomes three assumptions that were extensively used in the literature and produced inaccurate results. We apply the optimal control formulation to crucial network structures. Numerical results show the dynamic weight reduction and reveal the role of the network structure and the epidemic model in reducing the infection size in the presence of indiscernible infected nodes. PMID:25422579

  18. Controllable Construction of Core-Shell Polymer@Zeolitic Imidazolate Frameworks Fiber Derived Heteroatom-Doped Carbon Nanofiber Network for Efficient Oxygen Electrocatalysis.

    PubMed

    Zhao, Yingxuan; Lai, Qingxue; Zhu, Junjie; Zhong, Jia; Tang, Zeming; Luo, Yan; Liang, Yanyu

    2018-05-01

    Designing rational nanostructures of metal-organic framework-based carbon materials to promote the bifunctional catalytic activity of the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) is highly desired but still remains a great challenge. Herein, an in situ growth method to achieve 1D structure-controllable zeolitic imidazolate frameworks (ZIFs)/polyacrylonitrile (PAN) core/shell fibers (PAN@ZIFs) is developed. Subsequent pyrolysis of this precursor yields a heteroatom-doped carbon nanofiber network that serves as an efficient bifunctional oxygen electrocatalyst. The electrocatalytic performance of the derived carbon nanofibers is dominated by the structure of the PAN@ZIFs fiber, which is readily tuned by controlling the nucleation and growth of ZIFs on the surface of the polymer fiber as well as by optimizing the composition of the ZIFs. Benefiting from the core-shell structure with appropriate dopants and porosity, the as-prepared catalysts show excellent bifunctional ORR/OER catalytic activity and durability. Finally, the rechargeable Zn-air battery assembled from the optimized catalyst (CNF@Zn/CoNC) displays a peak power density of 140.1 mW cm⁻², an energy density of 878.9 Wh kgZn⁻¹, and excellent cyclic stability over 150 h, giving a promising performance in realistic applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. SoMIR framework for designing high-NDBP photonic crystal waveguides.

    PubMed

    Mirjalili, Seyed Mohammad

    2014-06-20

    This work proposes a modularized framework for designing the structure of photonic crystal waveguides (PCWs) and reducing human involvement during the design process. The proposed framework consists of three main modules: a parameters module, a constraints module, and an optimizer module. The first module is responsible for defining the structural parameters of a given PCW. The second module defines various limitations in order to achieve desirable optimum designs. The third module is the optimizer, in which a numerical optimization method is employed to perform the optimization. As case studies, two new structures called the Ellipse PCW (EPCW) and the Hypoellipse PCW (HPCW), each with a different hole shape in each row, are proposed and optimized by the framework. The calculation results show that the proposed framework is able to successfully optimize the structures of the new EPCW and HPCW. In addition, the results demonstrate the applicability of the proposed framework for optimizing different PCWs. The results of the comparative study show that the optimized EPCW and HPCW provide significant improvements in normalized delay-bandwidth product (NDBP) of 18% and 9%, respectively, compared to the ring-shape-hole PCW, which has the highest NDBP in the literature. Finally, simulations of pulse propagation confirm the manufacturing feasibility of both optimized structures.

  20. Autonomous Energy Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroposki, Benjamin D; Dall-Anese, Emiliano; Bernstein, Andrey

    With much higher levels of distributed energy resources - variable generation, energy storage, and controllable loads, to mention just a few - being deployed into power systems, the data deluge from pervasive metering of energy grids, and the shaping of multi-level ancillary-service markets, current frameworks for monitoring, controlling, and optimizing large-scale energy systems are becoming increasingly inadequate. This position paper outlines the concept of 'Autonomous Energy Grids' (AEGs) - systems that are supported by a scalable, reconfigurable, and self-organizing information and control infrastructure, can be extremely secure and resilient (self-healing), and self-optimize in real time for economic and reliable performance while systematically integrating energy in all forms. AEGs rely on scalable, self-configuring cellular building blocks that ensure that each 'cell' can self-optimize when isolated from a larger grid as well as partake in the optimal operation of a larger grid when interconnected. To realize this vision, this paper describes the concepts and key research directions in the broad domains of optimization theory, control theory, big-data analytics, and complex system modeling that will be necessary to realize the AEG vision.

  1. Decentralized hierarchical partitioning of centralized integrated controllers [for flight propulsion in STOVLs]

    NASA Technical Reports Server (NTRS)

    Schmidt, Phillip; Garg, Sanjay

    1991-01-01

    A framework for a decentralized hierarchical controller partitioning structure is developed. This structure allows for the design of separate airframe and propulsion controllers which, when assembled, will meet the overall design criterion for the integrated airframe/propulsion system. An algorithm based on parameter optimization of the state-space representation for the subsystem controllers is described. The algorithm is currently being applied to an integrated flight propulsion control design example.

  2. Deterministic Design Optimization of Structures in OpenMDAO Framework

    NASA Technical Reports Server (NTRS)

    Coroneos, Rula M.; Pai, Shantaram S.

    2012-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Several such algorithms have been implemented in the OpenMDAO framework developed at NASA Glenn Research Center (GRC). OpenMDAO is an open source engineering analysis framework, written in Python, for analyzing and solving Multi-Disciplinary Analysis and Optimization (MDAO) problems. It provides a number of solvers and optimizers, referred to as components and drivers, which users can leverage to build new tools and processes quickly and efficiently. Users may download, use, modify, and distribute the OpenMDAO software at no cost. This paper summarizes the process involved in analyzing and optimizing structural components by utilizing the framework's structural solvers and several gradient-based optimizers along with a multi-objective genetic algorithm. For comparison purposes, the same structural components were analyzed and optimized using CometBoards, a NASA GRC-developed code. The reliability and efficiency of the OpenMDAO framework were compared against CometBoards and are reported here.
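
    A minimal OpenMDAO sketch of the workflow this record describes (define a component, attach a gradient-based driver, declare design variables and an objective, run the optimization) is given below. The analytic paraboloid stands in for the paper's structural components, and API details may differ slightly between OpenMDAO versions.

    ```python
    # Minimal OpenMDAO sketch (illustrative, not the paper's structural test cases):
    # optimize a simple analytic "paraboloid" component with a gradient-based driver.
    import openmdao.api as om

    prob = om.Problem()
    prob.model.add_subsystem(
        'parab', om.ExecComp('f = (x - 3.0)**2 + x*y + (y + 4.0)**2 - 3.0'),
        promotes=['*'])

    prob.driver = om.ScipyOptimizeDriver()
    prob.driver.options['optimizer'] = 'SLSQP'   # gradient-based optimizer

    prob.model.add_design_var('x', lower=-50.0, upper=50.0)
    prob.model.add_design_var('y', lower=-50.0, upper=50.0)
    prob.model.add_objective('f')

    prob.setup()
    prob.set_val('x', 3.0)    # initial guesses
    prob.set_val('y', -4.0)
    prob.run_driver()
    print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))
    ```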

  3. Panaceas, uncertainty, and the robust control framework in sustainability science

    PubMed Central

    Anderies, John M.; Rodriguez, Armando A.; Janssen, Marco A.; Cifdaloz, Oguzhan

    2007-01-01

    A critical challenge faced by sustainability science is to develop strategies to cope with highly uncertain social and ecological dynamics. This article explores the use of the robust control framework toward this end. After briefly outlining the robust control framework, we apply it to the traditional Gordon–Schaefer fishery model to explore fundamental performance–robustness and robustness–vulnerability trade-offs in natural resource management. We find that the classic optimal control policy can be very sensitive to parametric uncertainty. By exploring a large class of alternative strategies, we show that there are no panaceas: even mild robustness properties are difficult to achieve, and increasing robustness to some parameters (e.g., biological parameters) results in decreased robustness with respect to others (e.g., economic parameters). On the basis of this example, we extract some broader themes for better management of resources under uncertainty and for sustainability science in general. Specifically, we focus attention on the importance of a continual learning process and the use of robust control to inform this process. PMID:17881574

  4. New MHD feedback control schemes using the MARTe framework in RFX-mod

    NASA Astrophysics Data System (ADS)

    Piron, Chiara; Manduchi, Gabriele; Marrelli, Lionello; Piovesan, Paolo; Zanca, Paolo

    2013-10-01

    Real-time feedback control of MHD instabilities is a topic of major interest in magnetic thermonuclear fusion, since it allows a device's performance to be optimized even beyond its stability bounds. The stability properties of different magnetic configurations are important test benches for real-time control systems. RFX-mod, a Reversed Field Pinch experiment that can also operate as a tokamak, is a well-suited device to investigate this topic. It is equipped with a sophisticated magnetic feedback system that controls MHD instabilities and error fields by means of 192 active coils and a corresponding grid of sensors. In addition, the RFX-mod control system has recently gained new potentialities thanks to the introduction of the MARTe framework and of a new CPU architecture. These capabilities allow the study of new feedback algorithms relevant to both RFP and tokamak operation and contribute to the debate on the optimal feedback strategy. This work focuses on the design of new feedback schemes. For this purpose new magnetic sensors have been explored, together with new algorithms that refine the de-aliasing computation of the radial sideband harmonics. The comparison of different sensor and feedback strategy performance is described in both RFP and tokamak experiments.

  5. A Multi-Scale, Multi-Physics Optimization Framework for Additively Manufactured Structural Components

    NASA Astrophysics Data System (ADS)

    El-Wardany, Tahany; Lynch, Mathew; Gu, Wenjiong; Hsu, Arthur; Klecka, Michael; Nardi, Aaron; Viens, Daniel

    This paper proposes an optimization framework enabling the integration of multi-scale / multi-physics simulation codes to perform structural optimization design for additively manufactured components. Cold spray was selected as the additive manufacturing (AM) process and its constraints were identified and included in the optimization scheme. The developed framework first utilizes topology optimization to maximize stiffness for conceptual design. The subsequent step applies shape optimization to refine the design for stress-life fatigue. The component weight was reduced by 20% while stresses were reduced by 75% and the rigidity was improved by 37%. The framework and analysis codes were implemented using Altair software as well as an in-house loading code. The optimized design was subsequently produced by the cold spray process.

  6. Optimal Dynamic Strategies for Index Tracking and Algorithmic Trading

    NASA Astrophysics Data System (ADS)

    Ward, Brian

    In this thesis we study dynamic strategies for index tracking and algorithmic trading. Tracking problems have become ever more important in Financial Engineering as investors seek to precisely control their portfolio risks and exposures over different time horizons. This thesis analyzes various tracking problems and elucidates the tracking errors and strategies one can employ to minimize those errors and maximize profit. In Chapters 2 and 3, we study the empirical tracking properties of exchange traded funds (ETFs), leveraged ETFs (LETFs), and futures products related to spot gold and the Chicago Board Options Exchange (CBOE) Volatility Index (VIX), respectively. These two markets provide interesting and differing examples for understanding index tracking. We find that static strategies work well in the nonleveraged case for gold, but fail to track well in the corresponding leveraged case. For VIX, tracking via neither ETFs nor futures portfolios succeeds, even in the nonleveraged case. This motivates the need for dynamic strategies, some of which we construct in these two chapters and further expand on in Chapter 4. There, we analyze a framework for index tracking and risk exposure control through financial derivatives. We derive a tracking condition that restricts our exposure choices and also define a slippage process that characterizes the deviations from the index over longer horizons. The framework is applied to a number of models, for example, the Black-Scholes and Heston models for equity index tracking, as well as the Square Root (SQR) model and the Concatenated Square Root (CSQR) model for VIX tracking. By specifying how each of these models falls into our framework, we are able to understand the tracking errors in each of them. Finally, Chapter 5 analyzes a tracking problem of a different kind that arises in algorithmic trading: schedule following for optimal execution. We formulate and solve a stochastic control problem to obtain the optimal trading rates using both market and limit orders. There is a quadratic terminal penalty to ensure complete liquidation, as well as a trade speed limiter and trade director to provide better control of the trading rates. The latter two penalties allow the trader to tailor the magnitude and sign (respectively) of the optimal trading rates. We demonstrate the applicability of the model to following a benchmark schedule. In addition, we identify conditions on the model parameters to ensure optimality of the controls and finiteness of the associated value functions. Throughout the chapter, numerical simulations are provided to demonstrate the properties of the optimal trading rates.

  7. Optimal control of quantum rings by terahertz laser pulses.

    PubMed

    Räsänen, E; Castro, A; Werschnik, J; Rubio, A; Gross, E K U

    2007-04-13

    Complete control of single-electron states in a two-dimensional semiconductor quantum-ring model is established, opening a path into coherent laser-driven single-gate qubits. The control scheme is developed in the framework of optimal-control theory for laser pulses of two-component polarization. In terms of pulse lengths and target-state occupations, the scheme is shown to be superior to conventional control methods that exploit Rabi oscillations generated by uniform circularly polarized pulses. Current-carrying states in a quantum ring can be used to manipulate a two-level subsystem at the ring center. Combining our results, we propose a realistic approach to construct a laser-driven single-gate qubit that has switching times in the terahertz regime.

  8. Water-based synthesis of zeolitic imidazolate framework-90 (ZIF-90) with a controllable particle size.

    PubMed

    Shieh, Fa-Kuen; Wang, Shao-Chun; Leo, Sin-Yen; Wu, Kevin C-W

    2013-08-19

    The ZIF code: ZIF-90 materials were successfully synthesized in an optimized water-based system. The particle size, ranging from micro- to nanoscales, could be controlled by varying the amount of polyvinylpyrrolidone (PVP), the Zn/imidazole-2-carboxaldehyde ratio, and the alcohol content. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. An Automated, Adaptive Framework for Optimizing Preprocessing Pipelines in Task-Based Functional MRI

    PubMed Central

    Churchill, Nathan W.; Spring, Robyn; Afshin-Pour, Babak; Dong, Fan; Strother, Stephen C.

    2015-01-01

    BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the “pipeline”) significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, due to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared to standard “fixed” preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers, and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization, it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets. PMID:26161667

  10. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB.

    PubMed

    Lee, Leng-Feng; Umberger, Brian R

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1-2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility.
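
    The direct-collocation transcription mentioned above can be illustrated on a toy problem: a minimum-effort, rest-to-rest motion of a 1-degree-of-freedom double integrator, with trapezoidal defect constraints enforcing the dynamics. This Python/scipy sketch only illustrates the technique; it is not the OpenSim/MATLAB musculoskeletal implementation described in the record.

    ```python
    # Direct-collocation sketch on a toy 1-DOF double integrator (illustrative only;
    # not the OpenSim/MATLAB musculoskeletal implementation described above).
    # The decision vector stacks states and controls at N+1 nodes; trapezoidal
    # defect constraints enforce the dynamics, and the objective is integrated effort.
    import numpy as np
    from scipy.optimize import minimize

    N, T = 30, 1.0
    h = T / N

    def unpack(z):
        x = z[:2 * (N + 1)].reshape(N + 1, 2)   # columns: position, velocity
        u = z[2 * (N + 1):]                     # control at each node
        return x, u

    def objective(z):
        _, u = unpack(z)
        return h * np.sum(0.5 * (u[:-1]**2 + u[1:]**2))   # trapezoidal effort integral

    def defects(z):
        x, u = unpack(z)
        f = np.column_stack([x[:, 1], u])                 # dx/dt = [velocity, control]
        return (x[1:] - x[:-1] - 0.5 * h * (f[1:] + f[:-1])).ravel()

    def boundary(z):
        x, _ = unpack(z)
        return np.concatenate([x[0] - [0.0, 0.0], x[-1] - [1.0, 0.0]])  # rest-to-rest

    z0 = np.zeros(3 * (N + 1))
    res = minimize(objective, z0, method='SLSQP',
                   constraints=[{'type': 'eq', 'fun': defects},
                                {'type': 'eq', 'fun': boundary}],
                   options={'maxiter': 500})
    x_opt, u_opt = unpack(res.x)
    print("final state:", x_opt[-1], "cost:", res.fun)
    ```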

  11. Generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB

    PubMed Central

    Lee, Leng-Feng

    2016-01-01

    Computer modeling, simulation and optimization are powerful tools that have seen increased use in biomechanics research. Dynamic optimizations can be categorized as either data-tracking or predictive problems. The data-tracking approach has been used extensively to address human movement problems of clinical relevance. The predictive approach also holds great promise, but has seen limited use in clinical applications. Enhanced software tools would facilitate the application of predictive musculoskeletal simulations to clinically-relevant research. The open-source software OpenSim provides tools for generating tracking simulations but not predictive simulations. However, OpenSim includes an extensive application programming interface that permits extending its capabilities with scripting languages such as MATLAB. In the work presented here, we combine the computational tools provided by MATLAB with the musculoskeletal modeling capabilities of OpenSim to create a framework for generating predictive simulations of musculoskeletal movement based on direct collocation optimal control techniques. In many cases, the direct collocation approach can be used to solve optimal control problems considerably faster than traditional shooting methods. Cyclical and discrete movement problems were solved using a simple 1 degree of freedom musculoskeletal model and a model of the human lower limb, respectively. The problems could be solved in reasonable amounts of time (several seconds to 1–2 hours) using the open-source IPOPT solver. The problems could also be solved using the fmincon solver that is included with MATLAB, but the computation times were excessively long for all but the smallest of problems. The performance advantage for IPOPT was derived primarily by exploiting sparsity in the constraints Jacobian. The framework presented here provides a powerful and flexible approach for generating optimal control simulations of musculoskeletal movement using OpenSim and MATLAB. This should allow researchers to more readily use predictive simulation as a tool to address clinical conditions that limit human mobility. PMID:26835184

  12. Using species distribution models to optimize vector control in the framework of the tsetse eradication campaign in Senegal

    PubMed Central

    Dicko, Ahmadou H.; Lancelot, Renaud; Seck, Momar T.; Guerrini, Laure; Sall, Baba; Lo, Mbargou; Vreysen, Marc J. B.; Lefrançois, Thierry; Fonta, William M.; Peck, Steven L.; Bouyer, Jérémy

    2014-01-01

    Tsetse flies are vectors of human and animal trypanosomoses in sub-Saharan Africa and are the target of the Pan African Tsetse and Trypanosomiasis Eradication Campaign (PATTEC). Glossina palpalis gambiensis (Diptera: Glossinidae) is a riverine species that is still present as an isolated metapopulation in the Niayes area of Senegal. It is targeted by a national eradication campaign combining a population reduction phase based on insecticide-treated targets (ITTs) and cattle and an eradication phase based on the sterile insect technique. In this study, we used species distribution models to optimize control operations. We compared the probability of the presence of G. p. gambiensis and habitat suitability using a regularized logistic regression and Maxent, respectively. Both models performed well, with an area under the curve of 0.89 and 0.92, respectively. Only the Maxent model predicted an expert-based classification of landscapes correctly. Maxent predictions were therefore used throughout the eradication campaign in the Niayes to make control operations more efficient in terms of deployment of ITTs, release density of sterile males, and location of monitoring traps used to assess program progress. We discuss how the models’ results informed about the particular ecology of tsetse in the target area. Maxent predictions allowed optimizing efficiency and cost within our project, and might be useful for other tsetse control campaigns in the framework of the PATTEC and, more generally, other vector or insect pest control programs. PMID:24982143
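
    One of the two modelling approaches compared in this record, regularized logistic regression on environmental covariates, can be sketched as follows with scikit-learn. The synthetic covariates and presence rule are illustrative assumptions, not the Senegal survey data.

    ```python
    # Minimal species-distribution-model sketch using regularized logistic regression
    # (one of the two approaches compared in the abstract). The synthetic covariates
    # and the "suitability" rule are illustrative assumptions, not the Senegal data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 2000
    ndvi = rng.uniform(0.0, 1.0, n)            # vegetation index (assumed covariate)
    dist_river = rng.uniform(0.0, 5.0, n)      # distance to river in km (assumed)
    logit = 4.0 * ndvi - 1.5 * dist_river + 0.5
    presence = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([ndvi, dist_river])
    Xtr, Xte, ytr, yte = train_test_split(X, presence, test_size=0.3, random_state=0)

    model = LogisticRegression(penalty='l2', C=1.0)    # C sets regularization strength
    model.fit(Xtr, ytr)
    prob = model.predict_proba(Xte)[:, 1]              # predicted probability of presence
    print("AUC:", round(roc_auc_score(yte, prob), 3))  # area under the ROC curve
    ```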

  13. Using species distribution models to optimize vector control in the framework of the tsetse eradication campaign in Senegal.

    PubMed

    Dicko, Ahmadou H; Lancelot, Renaud; Seck, Momar T; Guerrini, Laure; Sall, Baba; Lo, Mbargou; Vreysen, Marc J B; Lefrançois, Thierry; Fonta, William M; Peck, Steven L; Bouyer, Jérémy

    2014-07-15

    Tsetse flies are vectors of human and animal trypanosomoses in sub-Saharan Africa and are the target of the Pan African Tsetse and Trypanosomiasis Eradication Campaign (PATTEC). Glossina palpalis gambiensis (Diptera: Glossinidae) is a riverine species that is still present as an isolated metapopulation in the Niayes area of Senegal. It is targeted by a national eradication campaign combining a population reduction phase based on insecticide-treated targets (ITTs) and cattle and an eradication phase based on the sterile insect technique. In this study, we used species distribution models to optimize control operations. We compared the probability of the presence of G. p. gambiensis and habitat suitability using a regularized logistic regression and Maxent, respectively. Both models performed well, with an area under the curve of 0.89 and 0.92, respectively. Only the Maxent model predicted an expert-based classification of landscapes correctly. Maxent predictions were therefore used throughout the eradication campaign in the Niayes to make control operations more efficient in terms of deployment of ITTs, release density of sterile males, and location of monitoring traps used to assess program progress. We discuss how the models' results informed about the particular ecology of tsetse in the target area. Maxent predictions allowed optimizing efficiency and cost within our project, and might be useful for other tsetse control campaigns in the framework of the PATTEC and, more generally, other vector or insect pest control programs.

  14. Controlling cell-free metabolism through physiochemical perturbations.

    PubMed

    Karim, Ashty S; Heggestad, Jacob T; Crowe, Samantha A; Jewett, Michael C

    2018-01-01

    Building biosynthetic pathways and engineering metabolic reactions in cells can be time-consuming due to complexities in cellular metabolism. These complexities often convolute the combinatorial testing of biosynthetic pathway designs needed to define an optimal biosynthetic system. To simplify the optimization of biosynthetic systems, we recently reported a new cell-free framework for pathway construction and testing. In this framework, multiple crude-cell extracts are selectively enriched with individual pathway enzymes, which are then mixed to construct full biosynthetic pathways on the time scale of a day. This rapid approach to building pathways aids in the study of metabolic pathway performance by providing a unique freedom of design to modify and control biological systems for both fundamental and applied biotechnology. The goal of this work was to demonstrate the ability to probe biosynthetic pathway performance in our cell-free framework by perturbing physiochemical conditions, using n-butanol synthesis as a model. We carried out three unique case studies. First, we demonstrated the power of our cell-free approach to maximize biosynthesis yields by mapping physiochemical landscapes using a robotic liquid-handler. This allowed us to determine that NAD and CoA are the most important factors that govern cell-free n-butanol metabolism. Second, we compared metabolic profile differences between two different approaches for building pathways from enriched lysates: heterologous expression and cell-free protein synthesis. We discovered that phosphate released by PEP utilization, along with other physiochemical reagents present during operation of the cell-free protein synthesis-coupled, crude-lysate metabolic system, inhibits optimal cell-free n-butanol metabolism. Third, we show that non-phosphorylated secondary energy substrates can be used to fuel cell-free protein synthesis and n-butanol biosynthesis. Taken together, our work highlights the ease of using cell-free systems to explore physiochemical perturbations and suggests the need for a more controllable, multi-step, separated cell-free framework for future pathway prototyping and enzyme discovery efforts. Copyright © 2017 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  15. Engineering Change Management Method Framework in Mechanical Engineering

    NASA Astrophysics Data System (ADS)

    Stekolschik, Alexander

    2016-11-01

    Engineering changes have an impact on different process chains in and outside the company, and account for a large share of error costs and schedule shifts. In fact, 30 to 50 per cent of development costs result from technical changes. Controlling engineering change processes can help us to avoid errors and risks, and contributes to cost optimization and a shorter time to market. This paper presents a method framework for controlling engineering changes at mechanical engineering companies. The developed classification of engineering changes and the corresponding process requirements form the basis for the method framework. The method framework comprises two main areas: special data objects managed in different engineering IT tools, and a process framework. Objects from both areas are building blocks that can be selected for the overall business process based on the engineering process type and change classification. The process framework contains steps for the creation of change objects (both for the overall change and for parts), change implementation, and release. Companies can select single process building blocks from the framework, depending on the product development process and change impact. The developed change framework has been implemented at a division (10,000 employees) of a large German mechanical engineering company.

  16. Optimal control of a hybrid rhythmic-discrete task: the bouncing ball revisited.

    PubMed

    Ronsse, Renaud; Wei, Kunlin; Sternad, Dagmar

    2010-05-01

    Rhythmically bouncing a ball with a racket is a hybrid task that combines continuous rhythmic actuation of the racket with the control of discrete impact events between racket and ball. This study presents experimental data and a two-layered modeling framework that explicitly addresses the hybrid nature of control: a first discrete layer calculates the state to reach at impact and the second continuous layer smoothly drives the racket to this desired state, based on optimality principles. The testbed for this hybrid model is task performance at a range of increasingly slower tempos. When slowing the rhythm of the bouncing actions, the continuous cycles become separated into a sequence of discrete movements interspersed by dwell times and directed to achieve the desired impact. Analyses of human performance show increasing variability of performance measures with slower tempi, associated with a change in racket trajectories from approximately sinusoidal to less symmetrical velocity profiles. Matching results of model simulations give support to a hybrid control model based on optimality, and therefore suggest that optimality principles are applicable to the sensorimotor control of complex movements such as ball bouncing.

  17. An Analysis of the Hidden Costs of Competition in the Procurement of Spare Parts at the Navy Ships Parts Control Center: A Framework for Process Improvement

    DTIC Science & Technology

    1992-03-01

    setting of sub-optimal goals and quotas, barriers between departments, and awarding contracts primarily on price are all anti-TQM practices that hinder...customer focus, the setting of sub-optimal goals and quotas, barriers between departments, and awarding contracts primarily on price are all anti-TQM...surveys are often required to determine if a lower competitive price could be achieved before exercising options. This requirement is a sub-optimal

  18. Optimal placement of excitations and sensors for verification of large dynamical systems

    NASA Technical Reports Server (NTRS)

    Salama, M.; Rose, T.; Garba, J.

    1987-01-01

    The computationally difficult problem of the optimal placement of excitations and sensors to maximize the observed measurements is studied within the framework of combinatorial optimization, and is solved numerically using a variation of the simulated annealing heuristic algorithm. Results of numerical experiments, including a square plate and a 960-degree-of-freedom Control of Flexible Structures (COFS) truss structure, are presented. Though the algorithm produces suboptimal solutions, its generality and simplicity allow the treatment of complex dynamical systems which would otherwise be difficult to handle.
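
    The simulated-annealing selection idea can be sketched as follows: choose k of n candidate sensor locations to maximize a placement score, here the log-determinant of a Fisher-information-like matrix built from assumed mode shapes. The mode shapes, score, and cooling schedule are illustrative assumptions, not the plate or COFS truss cases.

    ```python
    # Simulated-annealing sketch for sensor placement (illustrative; not the COFS
    # truss study above). Pick k of n candidate locations to maximize the
    # log-determinant of Phi_S^T Phi_S built from assumed mode-shape rows Phi.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 60, 6, 10                      # candidate locations, modes, sensors
    Phi = rng.standard_normal((n, m))        # assumed mode shapes at candidate DOFs

    def score(subset):
        A = Phi[list(subset)]
        sign, logdet = np.linalg.slogdet(A.T @ A)
        return logdet if sign > 0 else -np.inf

    current = set(rng.choice(n, k, replace=False))
    best, best_score = set(current), score(current)
    T = 1.0
    for it in range(5000):
        # propose swapping one selected location for one unselected location
        out = rng.choice(list(current))
        inn = rng.choice(list(set(range(n)) - current))
        cand = (current - {out}) | {inn}
        d = score(cand) - score(current)
        if d > 0 or rng.random() < np.exp(d / T):   # accept uphill, sometimes downhill
            current = cand
            if score(current) > best_score:
                best, best_score = set(current), score(current)
        T *= 0.999                                  # geometric cooling schedule

    print("best sensor set:", sorted(best), "log-det:", round(best_score, 3))
    ```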

  19. A Hierarchical Framework for Demand-Side Frequency Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moya, Christian; Zhang, Wei; Lian, Jianming

    2014-06-02

    With large-scale plans to integrate renewable generation, more resources will be needed to compensate for the uncertainty associated with intermittent generation resources. Under such conditions, performing frequency control using only supply-side resources becomes not only prohibitively expensive but also technically difficult. It is therefore important to explore how a sufficient proportion of the loads could assume a routine role in frequency control to maintain the stability of the system at an acceptable cost. In this paper, a novel hierarchical decentralized framework for frequency-based load control is proposed. The framework involves two decision layers. The top decision layer determines the optimal droop gain required from the aggregated load response on each bus using a robust decentralized control approach. The second layer consists of a large number of devices, which switch probabilistically during contingencies so that the aggregated power change matches the desired droop amount according to the updated gains. The proposed framework is based on the classical nonlinear multi-machine power system model, and can deal with time-varying system operating conditions while respecting the physical constraints of individual devices. Realistic simulation results based on a 68-bus system are provided to demonstrate the effectiveness of the proposed strategy.
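
    The device layer of such a scheme can be sketched as follows: each device switches off independently with a probability chosen so that the expected aggregate power reduction matches the droop response requested by the top layer. All parameters below are illustrative assumptions, not the 68-bus study.

    ```python
    # Sketch of the device layer of a demand-side frequency-control scheme
    # (illustrative parameters, not the 68-bus study): each device switches off
    # independently with a probability chosen so that the expected aggregate
    # power reduction matches the requested droop response.
    import numpy as np

    rng = np.random.default_rng(2)
    n_dev = 10000
    p_dev = rng.uniform(1.0, 3.0, n_dev)      # per-device power draw in kW (assumed)
    total = p_dev.sum()

    droop_gain = 500.0                        # kW to shed per 0.1 Hz deviation (assumed)
    freq_dev = -0.25                          # Hz, under-frequency event

    desired_shed = min(total, droop_gain * abs(freq_dev) / 0.1)  # kW requested by top layer
    p_switch = desired_shed / total           # common switching probability

    switched = rng.random(n_dev) < p_switch   # each device draws independently
    actual_shed = p_dev[switched].sum()
    print(f"requested {desired_shed:.0f} kW, shed {actual_shed:.0f} kW "
          f"({switched.sum()} of {n_dev} devices)")
    ```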

  20. Removing Barriers for Effective Deployment of Intermittent Renewable Generation

    NASA Astrophysics Data System (ADS)

    Arabali, Amirsaman

    The stochastic nature of intermittent renewable resources is the main barrier to effective integration of renewable generation. This problem can be studied from feeder-scale and grid-scale perspectives. Two new stochastic methods are proposed to meet the feeder-scale controllable load with a hybrid renewable generation (including wind and PV) and energy storage system. For the first method, an optimization problem is developed whose objective function is the cost of the hybrid system including the cost of renewable generation and storage subject to constraints on energy storage and shifted load. A smart-grid strategy is developed to shift the load and match the renewable energy generation and controllable load. Minimizing the cost function guarantees minimum PV and wind generation installation, as well as storage capacity selection for supplying the controllable load. A confidence coefficient is allocated to each stochastic constraint which shows to what degree the constraint is satisfied. In the second method, a stochastic framework is developed for optimal sizing and reliability analysis of a hybrid power system including renewable resources (PV and wind) and an energy storage system. The hybrid power system is optimally sized to satisfy the controllable load with a specified reliability level. A load-shifting strategy is added to provide more flexibility for the system and decrease the installation cost. Load shifting strategies and their potential impacts on the hybrid system reliability/cost analysis are evaluated through different scenarios. Using a compromise-solution method, the best compromise between the reliability and cost will be realized for the hybrid system. For the second problem, a grid-scale stochastic framework is developed to examine the storage application and its optimal placement for the social cost and transmission congestion relief of wind integration. Storage systems are optimally placed and adequately sized to minimize the sum of operation and congestion costs over a scheduling period. A technical assessment framework is developed to enhance the efficiency of wind integration and evaluate the economics of storage technologies and conventional gas-fired alternatives. The proposed method is used to carry out a cost-benefit analysis for the IEEE 24-bus system and determine the most economical technology. In order to mitigate the financial and technical concerns of renewable energy integration into the power system, a stochastic framework is proposed for transmission grid reinforcement studies in a power system with wind generation. A multi-stage multi-objective transmission network expansion planning (TNEP) methodology is developed which considers the investment cost, absorption of private investment and reliability of the system as the objective functions. A Non-dominated Sorting Genetic Algorithm (NSGA II) optimization approach is used in combination with a probabilistic optimal power flow (POPF) to determine the Pareto optimal solutions considering the power system uncertainties. Using a compromise-solution method, the best final plan is then realized based on the decision maker's preferences. The proposed methodology is applied to the IEEE 24-bus Reliability Test System (RTS) to evaluate the feasibility and practicality of the developed planning strategy.

  1. A Web-Based System for Monitoring and Controlling Multidisciplinary Design Projects

    NASA Technical Reports Server (NTRS)

    Salas, Andrea O.; Rogers, James L.

    1997-01-01

    In today's competitive environment, both industry and government agencies are under enormous pressure to reduce the time and cost of multidisciplinary design projects. A number of frameworks have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. An examination of current frameworks reveals weaknesses in various areas such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, in conjunction with an existing framework, can improve these areas of weakness. This paper describes a system that executes a sequence of programs, monitors and controls the design process through a Web-based interface, and visualizes intermediate and final results through the use of Java(Tm) applets. A small sample problem, which includes nine processes with two analysis programs that are coupled to an optimizer, is used to demonstrate the feasibility of this approach.

  2. Design of clinical trials involving multiple hypothesis tests with a common control.

    PubMed

    Schou, I Manjula; Marschner, Ian C

    2017-07-01

    Randomized clinical trials comparing several treatments to a common control are often reported in the medical literature. For example, multiple experimental treatments may be compared with placebo, or in combination therapy trials, a combination therapy may be compared with each of its constituent monotherapies. Such trials are typically designed using a balanced approach in which equal numbers of individuals are randomized to each arm; however, this can result in an inefficient use of resources. We provide a unified framework and new theoretical results for optimal design of such single-control multiple-comparator studies. We consider variance optimal designs based on D-, A-, and E-optimality criteria, using a general model that allows for heteroscedasticity and a range of effect measures that include both continuous and binary outcomes. We demonstrate the sensitivity of these designs to the type of optimality criterion by showing that the optimal allocation ratios are systematically ordered according to the optimality criterion. Given this sensitivity to the optimality criterion, we argue that power optimality is a more suitable approach when designing clinical trials where testing is the objective. Weighted variance optimal designs are also discussed, which, like power optimal designs, allow the treatment difference to play a major role in determining allocation ratios. We illustrate our methods using two real clinical trial examples taken from the medical literature. Some recommendations on the use of optimal designs in single-control multiple-comparator trials are also provided. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
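
    A small numerical sketch of the allocation question is given below for the textbook special case of k experimental arms versus one control with equal variances, where A-optimality (minimizing the summed variances of the treatment-versus-control contrasts) allocates the control arm in proportion to the square root of k. This standard special case is only for illustration, not the paper's heteroscedastic, power-optimal framework.

    ```python
    # Numeric sketch of A-optimal allocation for k experimental arms versus one
    # common control with equal outcome variances (a textbook special case of the
    # single-control multiple-comparator setting, not the paper's full framework).
    import numpy as np
    from scipy.optimize import minimize

    k, N = 3, 400          # number of experimental arms, total sample size

    def total_contrast_variance(n):
        n0, nt = n[0], n[1:]
        return np.sum(1.0 / nt + 1.0 / n0)    # sum_i Var(mean_i - mean_0), sigma^2 = 1

    x0 = np.full(k + 1, N / (k + 1.0))        # start from the balanced design
    res = minimize(total_contrast_variance, x0,
                   constraints=[{'type': 'eq', 'fun': lambda n: n.sum() - N}],
                   bounds=[(1.0, None)] * (k + 1), method='SLSQP')

    n_opt = res.x
    print("optimal control:treatment ratio =", round(n_opt[0] / n_opt[1], 3),
          "(theory: sqrt(k) =", round(np.sqrt(k), 3), ")")
    ```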

  3. Optimal control of malaria: combining vector interventions and drug therapies.

    PubMed

    Khamis, Doran; El Mouden, Claire; Kura, Klodeta; Bonsall, Michael B

    2018-04-24

    The sterile insect technique and transgenic equivalents are considered promising tools for controlling vector-borne disease in an age of increasing insecticide and drug-resistance. Combining vector interventions with artemisinin-based therapies may achieve the twin goals of suppressing malaria endemicity while managing artemisinin resistance. While the cost-effectiveness of these controls has been investigated independently, their combined usage has not been dynamically optimized in response to ecological and epidemiological processes. An optimal control framework based on coupled models of mosquito population dynamics and malaria epidemiology is used to investigate the cost-effectiveness of combining vector control with drug therapies in homogeneous environments with and without vector migration. The costs of endemic malaria are weighed against the costs of administering artemisinin therapies and releasing modified mosquitoes using various cost structures. Larval density dependence is shown to reduce the cost-effectiveness of conventional sterile insect releases compared with transgenic mosquitoes with a late-acting lethal gene. Using drug treatments can reduce the critical vector control release ratio necessary to cause disease fadeout. Combining vector control and drug therapies is the most effective and efficient use of resources, and using optimized implementation strategies can substantially reduce costs.

  4. A Fuzzy Logic Optimal Control Law Solution to the CMMCA Tracking Problem

    DTIC Science & Technology

    1993-03-01

    or from a transfer function. Many times, however, the resulting algorithms are so complex as to be completely or essentially useless. Applications...implemented in a nearly real-time computer simulation. Located within the LQ framework are all the performance data for both the CMMCA and the CX...required nor desired. A more general and less exacting framework was used. In order to concentrate on the theory and problem solution, it was

  5. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
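
    The core iterative-learning update used to build reference-input primitives can be sketched on a toy single-input single-output plant: the task is repeated, and the stored input is corrected with the previous trial's tracking error. The plant, learning gain, and reference below are assumptions, and the sketch is neither model-free nor MIMO as in the paper.

    ```python
    # Minimal iterative-learning-control sketch: learn the reference-input
    # "primitive" that makes a discrete SISO plant track one desired output by
    # repeating the task and applying u_{j+1} = u_j + L * e_j. The plant, learning
    # gain and reference are illustrative assumptions (the paper's scheme is
    # model-free, MIMO, and uses virtual reference feedback tuning).
    import numpy as np

    def run_trial(u, a=0.9, b=0.2):
        """Simulate y(k+1) = a*y(k) + b*u(k) over one task execution."""
        y = np.zeros(len(u) + 1)
        for k in range(len(u)):
            y[k + 1] = a * y[k] + b * u[k]
        return y[1:]                     # plant output (one-step delay)

    N = 100
    ref = np.sin(np.linspace(0, 2 * np.pi, N))   # desired output for this task
    u = np.zeros(N)
    L = 2.0                                      # learning gain (assumed; |1 - L*b| < 1)

    for trial in range(30):
        y = run_trial(u)
        e = ref - y
        u = u + L * e                            # ILC update: reuse last trial's error
        if trial % 10 == 0:
            print(f"trial {trial:2d}, RMS tracking error = {np.sqrt(np.mean(e**2)):.4f}")
    ```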

  6. An ICA-based method for the identification of optimal FMRI features and components using combined group-discriminative techniques

    PubMed Central

    Sui, Jing; Adali, Tülay; Pearlson, Godfrey D.; Calhoun, Vince D.

    2013-01-01

    Extraction of relevant features from multitask functional MRI (fMRI) data in order to identify potential biomarkers for disease, is an attractive goal. In this paper, we introduce a novel feature-based framework, which is sensitive and accurate in detecting group differences (e.g. controls vs. patients) by proposing three key ideas. First, we integrate two goal-directed techniques: coefficient-constrained independent component analysis (CC-ICA) and principal component analysis with reference (PCA-R), both of which improve sensitivity to group differences. Secondly, an automated artifact-removal method is developed for selecting components of interest derived from CC-ICA, with an average accuracy of 91%. Finally, we propose a strategy for optimal feature/component selection, aiming to identify optimal group-discriminative brain networks as well as the tasks within which these circuits are engaged. The group-discriminating performance is evaluated on 15 fMRI feature combinations (5 single features and 10 joint features) collected from 28 healthy control subjects and 25 schizophrenia patients. Results show that a feature from a sensorimotor task and a joint feature from a Sternberg working memory (probe) task and an auditory oddball (target) task are the top two feature combinations distinguishing groups. We identified three optimal features that best separate patients from controls, including brain networks consisting of temporal lobe, default mode and occipital lobe circuits, which when grouped together provide improved capability in classifying group membership. The proposed framework provides a general approach for selecting optimal brain networks which may serve as potential biomarkers of several brain diseases and thus has wide applicability in the neuroimaging research community. PMID:19457398

  7. Short-term Operation of Multi-purpose Reservoir using Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Uysal, Gokcen; Schwanenberg, Dirk; Alvarado Montero, Rodolfo; Sensoy, Aynur; Arda Sorman, Ali

    2017-04-01

    Operation of water structures, especially with conflicting water supply and flood mitigation objectives, is under increasing stress owing to growing water demand and changing hydro-climatic conditions. Model Predictive Control (MPC) based optimal control solutions have been successfully applied to different water resources applications. In this study, Feedback Control (FBC) and MPC are combined and an improved joint optimization-simulation operating scheme is proposed. Water supply and flood control objectives are fulfilled by incorporating the long-term water supply objectives into a time-dependent variable guide curve policy, whereas extreme floods are attenuated by means of short-term optimization based on MPC. A final hindcasting experiment with imperfect, perturbed forecasts is carried out to assess the lead-time performance and the reliability of the forecasts. The framework is tested on the Yuvacık Dam reservoir, the main water supply reservoir of Kocaeli City in the northwestern part of Turkey (the Marmara region), which requires challenging gate operations due to restricted downstream flow conditions.
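
    A receding-horizon (MPC) step for a single reservoir can be sketched as follows: at each time step the releases over the horizon are optimized against forecast inflows to keep storage near a guide-curve target while penalizing releases above a downstream limit, and only the first release is applied. Inflows, limits, and weights are illustrative assumptions, not the Yuvacık data.

    ```python
    # Receding-horizon (MPC) sketch for a single reservoir (illustrative inflows,
    # limits and weights; not the Yuvacik Dam setup).
    import numpy as np
    from scipy.optimize import minimize

    T, H = 60, 10                      # simulation steps, prediction horizon
    inflow = 5.0 + 4.0 * np.sin(np.linspace(0, 3 * np.pi, T)) ** 2   # assumed inflow
    s_target, s_max, r_max, r_limit = 100.0, 150.0, 20.0, 8.0

    def cost(r, s0, q):
        s = s0 + np.cumsum(q - r)                       # mass balance over the horizon
        return (np.sum((s - s_target) ** 2)             # stay near the guide curve
                + 10.0 * np.sum(np.maximum(r - r_limit, 0.0) ** 2)   # downstream penalty
                + 50.0 * np.sum(np.maximum(s - s_max, 0.0) ** 2))    # flood-pool penalty

    storage, s_log, r_log = s_target, [], []
    for t in range(T - H):
        q_fc = inflow[t:t + H]                          # (perfect) inflow forecast
        res = minimize(cost, np.full(H, 5.0), args=(storage, q_fc),
                       bounds=[(0.0, r_max)] * H, method='SLSQP')
        r_now = res.x[0]                                # apply only the first move
        storage = storage + inflow[t] - r_now
        s_log.append(storage)
        r_log.append(r_now)

    print("peak storage:", round(max(s_log), 1),
          "mean release:", round(float(np.mean(r_log)), 2))
    ```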

  8. Optimal Navigation of Self-Propelled Colloids in Microstructured Mazes

    NASA Astrophysics Data System (ADS)

    Yang, Yuguang; Bevan, Michael

    Controlling the navigation of self-propelled microscopic 'robots' subject to random Brownian motion in complex microstructured environments (e.g., porous media, tumor vasculature) is important to many emerging applications (e.g., enhanced oil recovery, drug delivery). In this work, we design an optimal feedback policy to navigate an active self-propelled colloidal rod in complex mazes with various obstacle types. Actuation of the rods is modelled based on a light-controlled osmotic flow mechanism, which produces different propulsion velocities along the rod's long axis. Actuator-parameterized Langevin equations, with soft rod-obstacle repulsive interactions, are developed to describe the system dynamics. A Markov decision process (MDP) framework is used for optimal policy calculations, with the design goal of colloidal rods reaching target end points in minimum time. Simulations show that optimal MDP-based policies are able to control rod trajectories to reach target regions orders of magnitude faster than uncontrolled rods, a speed-up that grows as maze complexity increases. An efficient multi-graph based implementation of the MDP is also presented, which scales linearly with the maze dimension.
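
    The MDP policy computation can be illustrated with plain value iteration for minimum-expected-time navigation on a small grid maze with actuation noise; the grid, slip probability, and cost below are assumptions standing in for the Langevin rod model.

    ```python
    # Value-iteration sketch for minimum-expected-time navigation on a small grid
    # maze with actuation noise (illustrative stand-in for the actuated colloidal
    # rod; grid, obstacles and noise level are assumptions).
    import numpy as np

    grid = np.array([[0, 0, 0, 1, 0],
                     [1, 1, 0, 1, 0],
                     [0, 0, 0, 0, 0],
                     [0, 1, 1, 1, 0],
                     [0, 0, 0, 0, 0]])            # 1 = obstacle
    goal = (0, 4)
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
    p_slip = 0.2                                  # probability the move fails (stay put)

    def step(s, a):
        r, c = s[0] + a[0], s[1] + a[1]
        return (r, c) if 0 <= r < 5 and 0 <= c < 5 and grid[r, c] == 0 else s

    V = np.zeros(grid.shape)                      # expected steps-to-goal
    for _ in range(200):                          # value iteration to convergence
        V_new = V.copy()
        for r in range(5):
            for c in range(5):
                if grid[r, c] == 1 or (r, c) == goal:
                    continue
                q = [1.0 + (1 - p_slip) * V[step((r, c), a)] + p_slip * V[r, c]
                     for a in moves]
                V_new[r, c] = min(q)              # greedy (optimal) action value
        V = V_new

    print(np.round(np.where(grid == 1, np.nan, V), 1))
    ```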

  9. Lie theory and control systems defined on spheres

    NASA Technical Reports Server (NTRS)

    Brockett, R. W.

    1972-01-01

    It is shown that in constructing a theory for the most elementary class of control problems defined on spheres, some results from the Lie theory play a natural role. To understand controllability, optimal control, and certain properties of stochastic equations, Lie theoretic ideas are needed. The framework considered here is the most natural departure from the usual linear system/vector space problems which have dominated control systems literature. For this reason results are compared with those previously available for the finite dimensional vector space case.

  10. First-Order Frameworks for Managing Models in Engineering Optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Approximation/model management optimization (AMMO) is a rigorous methodology for attaining solutions of high-fidelity optimization problems with minimal expense in high-fidelity function and derivative evaluation. First-order AMMO frameworks allow for a wide variety of models and underlying optimization algorithms. Recent demonstrations with aerodynamic optimization achieved three-fold savings in terms of high-fidelity function and derivative evaluation in the case of variable-resolution models and five-fold savings in the case of variable-fidelity physics models. The savings are problem dependent but certain trends are beginning to emerge. We give an overview of the first-order frameworks, current computational results, and an idea of the scope of the first-order framework applicability.

  11. ProjectQ: Compiling quantum programs for various backends

    NASA Astrophysics Data System (ADS)

    Haener, Thomas; Steiger, Damian S.; Troyer, Matthias

    In order to control quantum computers beyond the current generation, a high level quantum programming language and optimizing compilers will be essential. Therefore, we have developed ProjectQ - an open source software framework to facilitate implementing and running quantum algorithms both in software and on actual quantum hardware. Here, we introduce the backends available in ProjectQ. This includes a high-performance simulator and emulator to test and debug quantum algorithms, tools for resource estimation, and interfaces to several small-scale quantum devices. We demonstrate the workings of the framework and show how easily it can be further extended to control upcoming quantum hardware.

  12. Multi-Disciplinary Analysis and Optimization Frameworks

    NASA Technical Reports Server (NTRS)

    Naiman, Cynthia Gutierrez

    2009-01-01

    Since July 2008, the Multidisciplinary Analysis & Optimization Working Group (MDAO WG) of the Systems Analysis Design & Optimization (SAD&O) discipline in the Fundamental Aeronautics Program's Subsonic Fixed Wing (SFW) project completed one major milestone, "Define Architecture & Interfaces for Next Generation Open Source MDAO Framework" (9/30/08), and is completing the Generation 1 Framework validation milestone, which is due December 2008. Included in the presentation are: details of progress on developing the Open MDAO framework, modeling and testing the Generation 1 Framework, progress toward establishing partnerships with external parties, and discussion of additional potential collaborations.

  13. Multiobjective optimization of temporal processes.

    PubMed

    Song, Zhe; Kusiak, Andrew

    2010-06-01

    This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework.
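
    The preference-aggregation route mentioned in this record can be sketched by scalarizing two competing objectives with fixed weights and minimizing the result with a simple (1+1) evolution strategy. The analytic objectives below are illustrative stand-ins for the data-driven boiler-efficiency and limestone-consumption models.

    ```python
    # Sketch of the preference-aggregation route: two competing objectives are
    # scalarized with weights and minimized by a simple (1+1) evolution strategy.
    # The analytic objectives are illustrative stand-ins for the data-driven
    # boiler-efficiency and limestone-consumption models.
    import numpy as np

    rng = np.random.default_rng(3)

    def f1(x):          # stand-in for (negative) boiler efficiency
        return np.sum((x - 1.0) ** 2)

    def f2(x):          # stand-in for limestone consumption
        return np.sum((x + 1.0) ** 2)

    def aggregated(x, w=0.7):
        return w * f1(x) + (1.0 - w) * f2(x)    # preference weights chosen a priori

    x = rng.standard_normal(4)                  # 4 controllable process settings (assumed)
    sigma = 0.5                                 # mutation step size
    for it in range(2000):
        cand = x + sigma * rng.standard_normal(x.size)   # mutate the current parent
        if aggregated(cand) < aggregated(x):
            x, sigma = cand, sigma * 1.1                  # success: accept and widen search
        else:
            sigma *= 0.98                                 # failure: narrow search

    print("settings:", np.round(x, 3), "objectives:", round(f1(x), 3), round(f2(x), 3))
    ```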

  14. Nanomaterials derived from metal-organic frameworks

    NASA Astrophysics Data System (ADS)

    Dang, Song; Zhu, Qi-Long; Xu, Qiang

    2018-01-01

    The thermal transformation of metal-organic frameworks (MOFs) generates a variety of nanostructured materials, including carbon-based materials, metal oxides, metal chalcogenides, metal phosphides and metal carbides. These derivatives of MOFs have characteristics such as high surface areas, permanent porosities and controllable functionalities that enable their good performance in sensing, gas storage, catalysis and energy-related applications. Although progress has been made to tune the morphologies of MOF-derived structures at the nanometre scale, it remains crucial to further our knowledge of the relationship between morphology and performance. In this Review, we summarize the synthetic strategies and optimized methods that enable control over the size, morphology, composition and structure of the derived nanomaterials. In addition, we compare the performance of materials prepared by the MOF-templated strategy and other synthetic methods. Our aim is to reveal the relationship between the morphology and the physico-chemical properties of MOF-derived nanostructures to optimize their performance for applications such as sensing, catalysis, and energy storage and conversion.

  15. Optimal control in microgrid using multi-agent reinforcement learning.

    PubMed

    Li, Fu-Dong; Wu, Min; He, Yong; Chen, Xin

    2012-11-01

    This paper presents an improved reinforcement learning method to minimize electricity costs on the premise of satisfying the power balance and generation limits of units in a grid-connected microgrid. Firstly, the microgrid control requirements are analyzed and the objective function of optimal control for the microgrid is proposed. Then, a state variable, "Average Electricity Price Trend", which expresses the most likely transitions of the system, is developed so as to reduce the complexity and randomness of the microgrid, and a multi-agent architecture including agents, state variables, action variables and a reward function is formulated. Furthermore, dynamic hierarchical reinforcement learning, based on the change rate of the key state variable, is established to carry out optimal policy exploration. The analysis shows that the proposed method helps to handle the "curse of dimensionality" and speeds up learning in large, unknown environments. Finally, simulation results under JADE (Java Agent Development Framework) demonstrate the validity of the presented method for optimal control of a grid-connected microgrid. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
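
    The sketch below is a toy tabular Q-learning loop with a discretized price-trend state and a dispatch action, intended only to illustrate the kind of state/action/reward formulation described here. The price levels, transition model, and reward are illustrative assumptions and do not reproduce the paper's dynamic hierarchical architecture or its JADE implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy setting: state = discretized "average electricity price trend"
    # (0 = falling, 1 = flat, 2 = rising); action = dispatch level of a
    # controllable unit (0..2).  Reward = negative electricity cost.
    n_states, n_actions = 3, 3
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.95, 0.1

    def step(state, action):
        price = [0.8, 1.0, 1.3][state]                 # assumed price levels per trend
        cost = price * (2 - action) + 0.2 * action     # buy less from the grid when dispatching more
        next_state = rng.integers(n_states)            # toy price-trend transition
        return next_state, -cost

    state = rng.integers(n_states)
    for _ in range(20000):
        action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
        nxt, reward = step(state, action)
        Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
        state = nxt

    print("greedy dispatch per price trend:", np.argmax(Q, axis=1))
    ```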

  16. Finding optimal vaccination strategies under parameter uncertainty using stochastic programming.

    PubMed

    Tanner, Matthew W; Sattenspiel, Lisa; Ntaimo, Lewis

    2008-10-01

    We present a stochastic programming framework for finding the optimal vaccination policy for controlling infectious disease epidemics under parameter uncertainty. Stochastic programming is a popular framework for including the effects of parameter uncertainty in a mathematical optimization model. The problem is initially formulated to find the minimum cost vaccination policy under a chance-constraint. The chance-constraint requires that the probability that R(*)
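
    A chance constraint of the kind used in this formulation can be illustrated with a scenario-based Monte Carlo check: sample the uncertain epidemiological parameters, then accept the cheapest vaccination coverage for which the constraint holds with the required probability. The distribution, efficacy, and threshold below are illustrative assumptions, not the paper's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Uncertain basic reproduction number R0 (illustrative distribution).
    R0 = rng.lognormal(mean=np.log(1.8), sigma=0.2, size=20000)
    efficacy = 0.85
    target_prob = 0.95          # require P(effective R < 1) >= 95%

    # Effective reproduction number when a fraction v of the population is vaccinated.
    def effective_R(v):
        return R0 * (1.0 - efficacy * v)

    # Cost is proportional to coverage, so the cheapest feasible policy is the
    # smallest coverage v that satisfies the chance constraint.
    coverages = np.linspace(0.0, 1.0, 1001)
    feasible = [(effective_R(v) < 1.0).mean() >= target_prob for v in coverages]
    v_star = coverages[np.argmax(feasible)] if any(feasible) else None
    print("minimum feasible coverage:", v_star)
    ```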

  17. Dynamic programming methods for concurrent design and dynamic allocation of vehicles embedded in a system-of-systems

    NASA Astrophysics Data System (ADS)

    Nusawardhana

    2007-12-01

    Recent developments indicate a changing perspective on how systems or vehicles should be designed. Such a transition comes from the way decision makers in defense-related agencies address complex problems. Complex problems are now often posed in terms of the capabilities desired, rather than in terms of requirements for a single system. As a result, the way to provide a set of capabilities is through a collection of several individual, independent systems. This collection of individual independent systems is often referred to as a "System of Systems" (SoS). Because of the independent nature of the constituent systems in an SoS, approaches to design an SoS, and more specifically, approaches to design a new system as a member of an SoS, will likely be different from the traditional design approaches for complex, monolithic (meaning the constituent parts have no ability for independent operation) systems. Because a system of systems evolves over time, this simultaneous system design and resource allocation problem should be investigated in a dynamic context. Such dynamic optimization problems are similar to conventional control problems. However, this research considers problems which not only seek optimal policies but also seek the proper system or vehicle to operate under these policies. This thesis presents a framework and a set of analytical tools to solve a class of SoS problems that involves the simultaneous design of a new system and allocation of the new system along with existing systems. Such a class of problems belongs to the problems of concurrent design and control of a new system, with solutions consisting of both an optimal system design and an optimal control strategy. Rigorous mathematical arguments show that the proposed framework solves the concurrent design and control problems. Many results exist for dynamic optimization problems of linear systems. In contrast, results for nonlinear dynamic optimization problems are rare. The proposed framework is equipped with a set of analytical tools to solve several cases of nonlinear optimal control problems: continuous- and discrete-time nonlinear problems with applications to both optimal regulation and tracking. These tools are useful when mathematical descriptions of the dynamic systems are available. In the absence of such a mathematical model, it is often necessary to derive a solution based on computer simulation. For this case, a set of parameterized decision rules may constitute a solution. This thesis presents a method to adjust these parameters based on simultaneous perturbation stochastic approximation using continuous measurements. The set of tools developed here mostly employs the methods of exact dynamic programming. However, due to the complexity of SoS problems, this research also develops suboptimal solution approaches, collectively known as approximate dynamic programming solutions, for large-scale problems. The thesis presents, explores, and solves problems from the airline industry, in which a new aircraft is to be designed and allocated along with an existing fleet of aircraft. Because the life cycle of an aircraft is on the order of 10 to 20 years, this problem is to be addressed dynamically so that the new aircraft design is the best design for the fleet over a given time horizon.
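
    The simulation-based parameter adjustment mentioned above follows the standard simultaneous perturbation stochastic approximation (SPSA) recipe. The sketch below applies generic SPSA gain sequences to a noisy illustrative objective standing in for a fleet-allocation simulation; it is not the thesis's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Noisy simulation output to be minimized (a stand-in for a fleet-allocation
    # simulation; the quadratic form is purely illustrative).
    def simulate(theta):
        return np.sum((theta - np.array([1.0, -2.0, 0.5])) ** 2) + 0.1 * rng.normal()

    theta = np.zeros(3)
    for k in range(1, 2001):
        a_k = 0.1 / k ** 0.602          # standard SPSA gain sequences
        c_k = 0.1 / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Rademacher perturbation
        # Two-sided simultaneous perturbation gradient estimate.
        g_hat = (simulate(theta + c_k * delta) - simulate(theta - c_k * delta)) / (2 * c_k * delta)
        theta -= a_k * g_hat

    print("estimated parameters:", np.round(theta, 2))
    ```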

  18. A reliable algorithm for optimal control synthesis

    NASA Technical Reports Server (NTRS)

    Vansteenwyk, Brett; Ly, Uy-Loi

    1992-01-01

    In recent years, powerful design tools for linear time-invariant multivariable control systems have been developed based on direct parameter optimization. In this report, an algorithm for reliable optimal control synthesis using parameter optimization is presented. Specifically, a robust numerical algorithm is developed for the evaluation of the H(sup 2)-like cost functional and its gradients with respect to the controller design parameters. The method is specifically designed to handle defective degenerate systems and is based on the well-known Padé series approximation of the matrix exponential. Numerical test problems in control synthesis for simple mechanical systems and for a flexible structure with densely packed modes clearly illustrate the reliability of this method when compared to a method based on diagonalization. Several types of cost functions have been considered: a cost function for robust control consisting of a linear combination of quadratic objectives for deterministic and random disturbances, and one representing an upper bound on the quadratic objective for worst-case initial conditions. Finally, a framework for multivariable control synthesis has been developed combining the concept of closed-loop transfer recovery with numerical parameter optimization. The procedure enables designers to synthesize not only observer-based controllers but also controllers of arbitrary order and structure. Numerical design solutions rely heavily on the robust algorithm due to the high order of the synthesis model and the presence of near-overlapping modes. The design approach is successfully applied to the design of a high-bandwidth control system for a rotorcraft.
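
    As a small numerical illustration of evaluating a quadratic ("H2-like") cost for a linear time-invariant system via the matrix exponential, the sketch below uses SciPy's expm (itself a Padé-based implementation) and trapezoidal quadrature on an illustrative second-order system. It sketches only the cost-evaluation idea, not the report's gradient computation or synthesis algorithm.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Illustrative stable LTI system x' = A x, cost J = integral of x'Qx over [0, T].
    A = np.array([[0.0, 1.0],
                  [-2.0, -0.5]])
    Q = np.eye(2)
    x0 = np.array([1.0, 0.0])
    T, n = 10.0, 2000

    # Trapezoidal quadrature of x(t)' Q x(t) with x(t) = expm(A t) x0.
    ts = np.linspace(0.0, T, n + 1)
    vals = []
    for t in ts:
        x = expm(A * t) @ x0
        vals.append(x @ Q @ x)
    vals = np.asarray(vals)
    J = float(np.sum((vals[1:] + vals[:-1]) * np.diff(ts)) / 2.0)
    print("finite-horizon quadratic cost:", round(J, 4))
    ```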

  19. Charging, power management, and battery degradation mitigation in plug-in hybrid electric vehicles: A unified cost-optimal approach

    NASA Astrophysics Data System (ADS)

    Hu, Xiaosong; Martinez, Clara Marina; Yang, Yalian

    2017-03-01

    Holistic energy management of plug-in hybrid electric vehicles (PHEVs) in a smart grid environment constitutes an enormous control challenge. This paper responds to this challenge by investigating the interactions among three important control tasks in PHEVs, i.e., charging, on-road power management, and battery degradation mitigation. Three notable original contributions distinguish our work from existing endeavors. First, a new convex programming (CP)-based cost-optimal control framework is constructed to minimize the daily operational expense of a PHEV, which seamlessly integrates the costs of the three tasks. Second, a straightforward but useful sensitivity assessment of the optimization outcome is executed with respect to price changes of battery and energy carriers. Third, the potential impact of vehicle-to-grid (V2G) power flow on the PHEV economy is analyzed through a multitude of comparative studies.
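
    The sketch below shows the general shape such a convex program can take, using the CVXPY modeling library: a daily horizon, grid and fuel energy as decision variables, battery state-of-charge dynamics, and an objective mixing electricity cost, fuel cost, and a throughput-based degradation penalty. All prices, limits, and model forms are illustrative assumptions rather than the paper's formulation.

    ```python
    import cvxpy as cp
    import numpy as np

    T = 24                                        # hourly slots over one day
    peak = (np.arange(T) >= 17) & (np.arange(T) <= 21)
    price = 0.10 + 0.15 * peak                    # $/kWh grid tariff, peak 17:00-21:00 (illustrative)
    demand = np.full(T, 2.0)                      # kWh of driving/house demand per slot (illustrative)
    fuel_price = 0.30                             # $/kWh-equivalent of engine energy (illustrative)
    degr_coeff = 0.02                             # degradation penalty on battery throughput

    grid = cp.Variable(T, nonneg=True)            # energy drawn from the grid (kWh)
    fuel = cp.Variable(T, nonneg=True)            # energy supplied by the engine (kWh)
    soc = cp.Variable(T + 1)                      # battery state of charge (kWh)

    constraints = [soc[0] == 5.0, soc[T] >= 5.0, soc >= 1.0, soc <= 10.0, grid <= 3.3]
    for t in range(T):
        # Battery absorbs grid and engine energy and covers the demand.
        constraints.append(soc[t + 1] == soc[t] + grid[t] + fuel[t] - demand[t])

    throughput = cp.sum(cp.abs(soc[1:] - soc[:-1]))
    cost = price @ grid + fuel_price * cp.sum(fuel) + degr_coeff * throughput
    problem = cp.Problem(cp.Minimize(cost), constraints)
    problem.solve()
    print("daily cost:", round(problem.value, 2))
    ```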

  20. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for the subsystems is essential for developing the required higher-level services (the components that provide decision support and some degree of automated decisions) and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.

  1. A Novel Strategy Using Factor Graphs and the Sum-Product Algorithm for Satellite Broadcast Scheduling Problems

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh

    This paper presents a low-complexity algorithmic framework for finding a broadcasting schedule in a low-altitude satellite system, i.e., the satellite broadcast scheduling (SBS) problem, based on the recent modeling and computational methodology of factor graphs. Inspired by the huge success of low-density parity-check (LDPC) codes in the field of error control coding, we transform the SBS problem into an LDPC-like problem through a factor graph instead of using the conventional neural network approaches. Within this factor graph framework, the soft information, describing the probability that each satellite will broadcast to a terminal at a specific time slot, is exchanged among the local processing nodes via the sum-product algorithm to iteratively optimize the satellite broadcasting schedule. Numerical results show that the proposed approach not only obtains optimal solutions but also has low complexity, making it suitable for integrated-circuit implementation.
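
    For readers unfamiliar with the sum-product algorithm, the sketch below runs it on a tiny tree-structured factor graph with two binary variables and checks the resulting marginal against brute-force enumeration. It illustrates only the message-passing mechanics, not the LDPC-like construction for the SBS problem.

    ```python
    import numpy as np

    # Tiny tree-structured factor graph: binary variables x1, x2 with unary
    # factors f1, f2 and a pairwise factor f12.  On a tree, sum-product is exact.
    f1 = np.array([0.6, 0.4])
    f2 = np.array([0.3, 0.7])
    f12 = np.array([[1.0, 0.2],
                    [0.2, 1.0]])

    # Variable-to-factor message from x2 is just its unary factor; the
    # factor-to-variable message to x1 sums x2 out of f12 times that message.
    m_x2_to_f12 = f2
    m_f12_to_x1 = f12 @ m_x2_to_f12
    belief_x1 = f1 * m_f12_to_x1
    belief_x1 /= belief_x1.sum()

    # Brute-force check of the marginal of x1.
    joint = f1[:, None] * f2[None, :] * f12
    brute = joint.sum(axis=1)
    brute /= brute.sum()

    print("sum-product:", np.round(belief_x1, 4), "brute force:", np.round(brute, 4))
    ```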

  2. Design and development of bio-inspired framework for reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Asvini, M. Sakthi; Amudha, T.

    2017-12-01

    Frameworks for optimal reservoir operation play an important role in the management of water resources and delivery of economic benefits. Effective utilization and conservation of water from reservoirs helps to manage water deficit periods. The main challenge in reservoir optimization is to design operating rules that can be used to inform real-time decisions on reservoir release. We develop a bio-inspired framework for the optimization of reservoir release to satisfy the diverse needs of various stakeholders. In this work, single-objective optimization and multiobjective optimization problems are formulated using an algorithm known as "strawberry optimization" and tested with actual reservoir data. Results indicate that well planned reservoir operations lead to efficient deployment of the reservoir water with the help of optimal release patterns.

  3. Design of crashworthy structures with controlled behavior in HCA framework

    NASA Astrophysics Data System (ADS)

    Bandi, Punit

    The field of crashworthiness design is gaining more interest and attention from automakers around the world due to increasing competition and tighter safety norms. In the last two decades, topology and topometry optimization methods from structural optimization have been widely explored to improve existing designs or conceive new designs with better crashworthiness. Although many gradient-based and heuristic methods for topology- and topometry-based crashworthiness design are available these days, most of them result in stiff structures that are suitable only for a set of vehicle components in which maximizing the energy absorption or minimizing the intrusion is the main concern. However, there are some other components in a vehicle structure that should have characteristics of both stiffness and flexibility. Moreover, the load paths within the structure and potential buckle modes also play an important role in efficient functioning of such components. For example, the front bumper, side frame rails, steering column, and occupant protection devices like the knee bolster should all exhibit controlled deformation and collapse behavior. The primary objective of this research is to develop new methodologies to design crashworthy structures with controlled behavior. The well established Hybrid Cellular Automaton (HCA) method is used as the basic framework for the new methodologies, and compliant mechanism-type (sub)structures are the highlight of this research. The ability of compliant mechanisms to efficiently transfer force and/or motion from points of application of input loads to desired points within the structure is used to design solid and tubular components that exhibit controlled deformation and collapse behavior under crash loads. In addition, a new methodology for controlling the behavior of a structure under multiple crash load scenarios by adaptively changing the contributions from individual load cases is developed. Applied to practical design problems, the results demonstrate that the methodologies provide a practical tool to aid the design engineer in generating design concepts for crashworthy structures with controlled behavior. Although developed in the HCA framework, the basic ideas behind these methods are generic and can be easily implemented with other available topology- and topometry-based optimization methods.

  4. A Data Driven Pre-cooling Framework for Energy Cost Optimization in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishwanath, Arun; Chandan, Vikas; Mendoza, Cameron

    Commercial buildings consume a significant amount of energy. Facility managers are increasingly grappling with the problem of reducing their buildings' peak power, overall energy consumption and energy bills. In this paper, we first develop an optimization framework – based on a gray box model for zone thermal dynamics – to determine a pre-cooling strategy that simultaneously shifts the peak power to low energy tariff regimes, and reduces both the peak power and overall energy consumption by exploiting the flexibility in a building's thermal comfort range. We then evaluate the efficacy of the pre-cooling optimization framework by applying it to building management system data, spanning several days, obtained from a large commercial building located in a tropical region of the world. The results from simulations show that optimal pre-cooling reduces peak power by over 50%, energy consumption by up to 30% and energy bills by up to 37%. Next, to enable ease of use of our framework, we also propose a shortest-path-based heuristic algorithm for solving the optimization problem and show that it has comparable performance with the optimal solution. Finally, we describe an application of the proposed optimization framework for developing countries to reduce the dependency on expensive fossil fuels, which are often used as a source of energy backup. We conclude by highlighting our real-world deployment of the optimal pre-cooling framework via a software service on the cloud platform of a major provider. Our pre-cooling methodology, based on the gray box optimization framework, incurs no capital expense and relies on data readily available from a building management system, thus enabling facility managers to take informed decisions for improving the energy and cost footprints of their buildings.
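
    A stripped-down version of the load-shifting idea can be written as a linear program: choose hourly cooling energy to minimize tariff-weighted cost subject to a chiller capacity limit, a daily cooling requirement, and a crude comfort proxy that forces part of the cooling into pre-peak hours. The sketch below, using SciPy's linprog, is an illustrative stand-in for the gray-box thermal model, with made-up tariffs and limits.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    T = 24
    tariff = np.where((np.arange(T) >= 9) & (np.arange(T) <= 18), 0.25, 0.10)  # $/kWh, peak 09:00-18:00
    total_cooling = 60.0                 # kWh of cooling to deliver over the day (illustrative)
    chiller_cap = 5.0                    # kWh deliverable per hour
    precool_window = np.arange(T) < 9    # hours in which pre-cooling "banks" comfort

    # Decision variable: cooling energy q_t delivered each hour.
    # Minimize tariff . q  s.t.  sum(q) = total_cooling, 0 <= q_t <= cap,
    # and (comfort proxy) at least 40% of the cooling delivered before 09:00.
    A_eq = np.ones((1, T))
    b_eq = [total_cooling]
    A_ub = [-precool_window.astype(float)]
    b_ub = [-0.4 * total_cooling]
    res = linprog(tariff, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, chiller_cap)] * T, method="highs")
    print("optimal daily cooling cost:", round(res.fun, 2))
    print("hourly schedule:", np.round(res.x, 2))
    ```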

  5. Near Optimal Event-Triggered Control of Nonlinear Discrete-Time Systems Using Neurodynamic Programming.

    PubMed

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2016-09-01

    This paper presents an event-triggered near optimal control of uncertain nonlinear discrete-time systems. Event-driven neurodynamic programming (NDP) is utilized to design the control policy. A neural network (NN)-based identifier, with event-based state and input vectors, is utilized to learn the system dynamics. An actor-critic framework is used to learn the cost function and the optimal control input. The NN weights of the identifier, the critic, and the actor NNs are tuned aperiodically once every triggered instant. An adaptive event-trigger condition to decide the trigger instants is derived. Thus, a suitable number of events are generated to ensure a desired accuracy of approximation. A near optimal performance is achieved without using value and/or policy iterations. A detailed analysis of nontrivial inter-event times with an explicit formula to show the reduction in computation is also derived. The Lyapunov technique is used in conjunction with the event-trigger condition to guarantee the ultimate boundedness of the closed-loop system. The simulation results are included to verify the performance of the controller. The net result is the development of event-driven NDP.

  6. Assessment of Optimal Flexibility in Ensemble of Frequency Responsive Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kundu, Soumya; Hansen, Jacob; Lian, Jianming

    2018-04-19

    The potential of electrical loads in providing grid ancillary services is often limited due to the uncertainties associated with load behavior. Knowledge of the expected uncertainties in a load control program would invariably yield better informed control policies, opening up the possibility of extracting the maximal load control potential without affecting grid operations. In the context of frequency responsive load control, a probabilistic uncertainty analysis framework is presented to quantify the expected error between the target and actual load response, under uncertainties in the load dynamics. A closed-form expression of an optimal demand flexibility, minimizing the expected error between actual and committed flexibility, is provided. Analytical results are validated through Monte Carlo simulations of ensembles of electric water heaters.

  7. Fully probabilistic control design in an adaptive critic framework.

    PubMed

    Herzallah, Randa; Kárný, Miroslav

    2011-12-01

    An optimal stochastic controller pushes the closed-loop behavior as close as possible to the desired one. The fully probabilistic design (FPD) uses a probabilistic description of the desired closed loop and minimizes the Kullback-Leibler divergence of the closed-loop description to the desired one. Practical exploitation of fully probabilistic design control theory continues to be hindered by the computational complexities involved in numerically solving the associated stochastic dynamic programming problem; in particular, very hard multivariate integration and approximate interpolation of the involved multivariate functions. This paper proposes a new fully probabilistic control algorithm that uses adaptive critic methods to circumvent the need for explicitly evaluating the optimal value function, thereby dramatically reducing computational requirements. This is the main contribution of this paper. Copyright © 2011 Elsevier Ltd. All rights reserved.
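
    The quantity at the heart of FPD is a Kullback-Leibler divergence between the achieved and the desired closed-loop distributions. The minimal sketch below computes it for two illustrative discrete distributions (the numbers are made up for the example).

    ```python
    import numpy as np

    # Discrete distributions over closed-loop outcomes: p is the achieved
    # closed-loop distribution, q the ideal (desired) one.
    p = np.array([0.70, 0.20, 0.10])
    q = np.array([0.80, 0.15, 0.05])

    kl = float(np.sum(p * np.log(p / q)))   # D_KL(p || q); smaller means closer to the ideal loop
    print("KL divergence of achieved vs. desired closed loop:", round(kl, 4))
    ```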

  8. Sustainable development of process facilities: state-of-the-art review of pollution prevention frameworks.

    PubMed

    Hossain, Khandoker A; Khan, Faisal I; Hawboldt, Kelly

    2008-01-15

    The pollution prevention (P2) strategy is receiving significant attention in industries all over the world, in preference to end-of-pipe pollution control and management strategies. This paper reviews the existing pollution prevention frameworks. The reviewed frameworks contributed significantly to bringing the P2 approach into practice and gradually improved it towards a sustainable solution; nevertheless, some objectives are yet to be achieved. In this context, the paper proposes a P2 framework, 'IP2M', addressing these limitations to enable systematic implementation of P2 programs in industries at the design as well as retrofit stages. The main features of the proposed framework are that, firstly, it integrates a cradle-to-gate life cycle assessment (LCA) tool with other P2 opportunity analysis tools in the P2 opportunity analysis phase and, secondly, it re-uses the risk-based cradle-to-gate LCA during the environmental evaluation of different P2 options. Furthermore, in the multi-objective optimization phase, it simultaneously considers the P2 options with available end-of-pipe control options in order to select the sustainable environmental management option.

  9. BlueSky Cloud Framework: An E-Learning Framework Embracing Cloud Computing

    NASA Astrophysics Data System (ADS)

    Dong, Bo; Zheng, Qinghua; Qiao, Mu; Shu, Jian; Yang, Jie

    Currently, E-Learning has grown into a widely accepted way of learning. With the huge growth of users, services, education contents and resources, E-Learning systems are facing the challenges of optimizing resource allocation, dealing with dynamic concurrency demands, handling rapid storage growth requirements and controlling costs. In this paper, an E-Learning framework based on cloud computing is presented, namely the BlueSky cloud framework. In particular, the architecture and core components of the BlueSky cloud framework are introduced. In the BlueSky cloud framework, physical machines are virtualized and allocated on demand for E-Learning systems. Moreover, the BlueSky cloud framework combines traditional middleware functions (such as load balancing and data caching) to serve E-Learning systems within a general architecture. It delivers reliable, scalable and cost-efficient services to E-Learning systems, and E-Learning organizations can establish systems through these services in a simple way. The BlueSky cloud framework addresses the challenges faced by E-Learning and improves the performance, availability and scalability of E-Learning systems.

  10. Design of a high altitude long endurance flying-wing solar-powered unmanned air vehicle

    NASA Astrophysics Data System (ADS)

    Alsahlani, A. A.; Johnston, L. J.; Atcliffe, P. A.

    2017-06-01

    The low-Reynolds number environment of high-altitude flight places severe demands on the aerodynamic design and stability and control of a high altitude, long endurance (HALE) unmanned air vehicle (UAV). The aerodynamic efficiency of a flying-wing configuration makes it an attractive design option for such an application and is investigated in the present work. The proposed configuration has a high-aspect ratio, swept-wing planform, the wing sweep being necessary to provide an adequate moment arm for outboard longitudinal and lateral control surfaces. A design optimization framework is developed under a MATLAB environment, combining aerodynamic, structural, and stability analysis. Low-order analysis tools are employed to facilitate efficient computations, which is important when there are multiple optimization loops for the various engineering analyses. In particular, a vortex-lattice method is used to compute the wing planform aerodynamics, coupled to a two-dimensional (2D) panel method to derive aerofoil sectional characteristics. Integral boundary-layer methods are coupled to the panel method in order to predict flow separation boundaries during the design iterations. A quasi-analytical method is adapted for application to flying-wing configurations to predict the wing weight and a linear finite-beam element approach is used for structural analysis of the wing-box. Stability is a particular concern in the low-density environment of high-altitude flight for flying-wing aircraft and so provision of adequate directional stability and control power forms part of the optimization process. At present, a modified Genetic Algorithm is used in all of the optimization loops. Each of the low-order engineering analysis tools is validated using higher-order methods to provide confidence in the use of these computationally-efficient tools in the present design-optimization framework. This paper includes the results of employing the present optimization tools in the design of a HALE, flying-wing UAV to indicate that this is a viable design configuration option.

  11. The free-energy principle: a unified brain theory?

    PubMed

    Friston, Karl

    2010-02-01

    A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories - optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.

  12. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan

    This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully automated tuning to semi-automated development and to manual programmable control.

  13. Explicit optimization of plan quality measures in intensity-modulated radiation therapy treatment planning.

    PubMed

    Engberg, Lovisa; Forsgren, Anders; Eriksson, Kjell; Hårdemark, Björn

    2017-06-01

    The aim of this work is to formulate convex planning objectives for treatment plan multicriteria optimization with explicit relationships to the dose-volume histogram (DVH) statistics used in plan quality evaluation. Conventional planning objectives are designed to minimize the violation of DVH statistics thresholds using penalty functions. Although successful in guiding the DVH curve towards these thresholds, conventional planning objectives offer limited control of the individual points on the DVH curve (doses-at-volume) used to evaluate plan quality. In this study, we abandon the usual penalty-function framework and propose planning objectives that more closely relate to DVH statistics. The proposed planning objectives are based on mean-tail-dose, resulting in convex optimization. We also demonstrate how to adapt a standard optimization method to the proposed formulation in order to obtain a substantial reduction in computational cost. We investigated the potential of the proposed planning objectives as tools for optimizing DVH statistics through juxtaposition with the conventional planning objectives on two patient cases. Sets of treatment plans with differently balanced planning objectives were generated using either the proposed or the conventional approach. Dominance in the sense of better distributed doses-at-volume was observed in plans optimized within the proposed framework. The initial computational study indicates that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives than using the conventional approach. © 2017 American Association of Physicists in Medicine.
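
    Mean-tail-dose objectives admit the same convex reformulation as conditional value-at-risk. As a rough illustration, the CVXPY sketch below minimizes an upper mean-tail-dose of an organ-at-risk under a simple linear dose model with a mean-dose requirement on the target. The dose-influence matrices, volume fraction, and dose levels are illustrative assumptions, not clinical data or the authors' exact formulation.

    ```python
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(4)

    n_beamlets, n_oar, n_target = 40, 200, 150
    A_oar = rng.uniform(0.0, 0.1, size=(n_oar, n_beamlets))     # illustrative dose-influence matrices
    A_tgt = rng.uniform(0.0, 0.1, size=(n_target, n_beamlets))

    x = cp.Variable(n_beamlets, nonneg=True)    # beamlet weights
    d_oar = A_oar @ x
    d_tgt = A_tgt @ x

    alpha = 0.10                                # hottest 10% of the organ-at-risk
    t = cp.Variable()
    # Rockafellar-Uryasev style convex expression for the upper mean-tail-dose.
    mean_tail_dose = t + cp.sum(cp.pos(d_oar - t)) / (alpha * n_oar)

    constraints = [cp.sum(d_tgt) / n_target >= 60.0,   # mean target dose (arbitrary units)
                   d_tgt <= 80.0]                      # loose per-voxel target cap
    problem = cp.Problem(cp.Minimize(mean_tail_dose), constraints)
    problem.solve()
    print("optimized upper mean-tail-dose of the OAR:", round(problem.value, 2))
    ```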

  14. Optimal control of vancomycin-resistant enterococci using preventive care and treatment of infections.

    PubMed

    Lowden, Jonathan; Miller Neilan, Rachael; Yahdi, Mohammed

    2014-03-01

    The rising prevalence of vancomycin-resistant enterococci (VRE) is a major health problem in intensive care units (ICU) because of its association with increased mortality and high health care costs. We present a mathematical framework for determining cost-effective strategies for prevention and treatment of VRE in the ICU. A system of five ordinary differential equations describes the movement of ICU patients in and out of five VRE-related states. Two control variables representing the prevention and treatment of VRE are incorporated into the system. The basic reproductive number is derived and calculated for different levels of the two controls. An optimal control problem is formulated to minimize VRE-related deaths and costs associated with prevention and treatment controls over a finite time period. Numerical solutions illustrate optimal single and dual allocations of the controls for various cost values. Results show that preventive care has the greatest impact in reducing the basic reproductive number, while treatment of VRE infections has the most impact on reducing VRE-related deaths. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Efficient Parallel Video Processing Techniques on GPU: From Framework to Implementation

    PubMed Central

    Su, Huayou; Wen, Mei; Wu, Nan; Ren, Ju; Zhang, Chunyuan

    2014-01-01

    By reorganizing the execution order and optimizing the data structures, we propose an efficient parallel framework for the H.264/AVC encoder based on a massively parallel architecture. We implemented the proposed framework with CUDA on NVIDIA's GPU. Not only are the compute-intensive components of the H.264 encoder parallelized, but the control-intensive components, such as CAVLC and the deblocking filter, are also realized effectively. In addition, we propose serial optimization methods, including a multiresolution multiwindow scheme for motion estimation, a multilevel parallel strategy to enhance the parallelism of intracoding as much as possible, component-based parallel CAVLC, and a direction-priority deblocking filter. More than 96% of the workload of the H.264 encoder is offloaded to the GPU. Experimental results show that the parallel implementation achieves a speedup of about 20 times over the serial program and satisfies the requirement of real-time HD encoding at 30 fps. The loss of PSNR is from 0.14 dB to 0.77 dB when keeping the same bitrate. Through analysis of the kernels, we found that the speedup ratios of the compute-intensive algorithms are proportional to the computational power of the GPU. However, the performance of the control-intensive parts (CAVLC) is closely related to the memory bandwidth, which gives insight for new architecture designs. PMID:24757432

  16. Hierarchical multistage MCMC follow-up of continuous gravitational wave candidates

    NASA Astrophysics Data System (ADS)

    Ashton, G.; Prix, R.

    2018-05-01

    Leveraging Markov chain Monte Carlo optimization of the F statistic, we introduce a method for the hierarchical follow-up of continuous gravitational wave candidates identified by wide-parameter space semicoherent searches. We demonstrate parameter estimation for continuous wave sources and develop a framework and tools to understand and control the effective size of the parameter space, critical to the success of the method. Monte Carlo tests of simulated signals in noise demonstrate that this method is close to the theoretical optimal performance.
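
    The sketch below is a generic random-walk Metropolis sampler over a two-parameter space, with a Gaussian peak standing in for the F-statistic surface; it illustrates only the MCMC machinery, not the hierarchical multistage follow-up itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy log-likelihood over a 2D "frequency / spindown" parameter space
    # (a Gaussian peak stands in for the F-statistic surface).
    def log_like(theta):
        return -0.5 * np.sum(((theta - np.array([100.0, -1e-9])) / np.array([0.01, 1e-10])) ** 2)

    theta = np.array([100.02, 0.0])
    step = np.array([0.005, 5e-11])
    samples = []
    for _ in range(20000):
        prop = theta + step * rng.normal(size=2)
        # Metropolis acceptance rule on the log scale.
        if np.log(rng.random()) < log_like(prop) - log_like(theta):
            theta = prop
        samples.append(theta.copy())

    samples = np.array(samples[5000:])          # discard burn-in
    print("posterior mean:", samples.mean(axis=0))
    ```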

  17. Automated Design Framework for Synthetic Biology Exploiting Pareto Optimality.

    PubMed

    Otero-Muras, Irene; Banga, Julio R

    2017-07-21

    In this work we consider Pareto optimality for automated design in synthetic biology. We present a generalized framework based on a mixed-integer dynamic optimization formulation that, given design specifications, allows the computation of Pareto-optimal sets of designs, that is, the set of best trade-offs for the metrics of interest. We show how this framework can be used for (i) forward design, that is, finding the Pareto-optimal set of synthetic designs for implementation, and (ii) reverse design, that is, analyzing and inferring motifs and/or design principles of gene regulatory networks from the Pareto set of optimal circuits. Finally, we illustrate the capabilities and performance of this framework on four case studies. In the first problem we consider the forward design of an oscillator. In the remaining problems, we illustrate how to apply the reverse design approach to find motifs for stripe formation, rapid adaptation, and fold-change detection, respectively.
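
    A basic building block of any such Pareto-based workflow is extracting the non-dominated subset from a set of scored candidate designs. The short sketch below does this for randomly generated two-objective scores (purely illustrative data), independently of the mixed-integer dynamic optimization used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Candidate designs scored on two metrics to be minimized
    # (e.g., metabolic cost and response time of a synthetic circuit).
    scores = rng.uniform(size=(50, 2))

    def pareto_front(points):
        """Return indices of non-dominated points (both objectives minimized)."""
        keep = []
        for i, p in enumerate(points):
            # p is dominated if some point is no worse everywhere and strictly better somewhere.
            dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    front = pareto_front(scores)
    print("Pareto-optimal designs:", front)
    ```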

  18. Decoupling Coupled Constraints Through Utility Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, N; Marden, JR

    2014-08-01

    Several multiagent systems exemplify the need for establishing distributed control laws that ensure the resulting agents' collective behavior satisfies a given coupled constraint. This technical note focuses on the design of such control laws through a game-theoretic framework. In particular, this technical note provides two systematic methodologies for the design of local agent objective functions that guarantee all resulting Nash equilibria optimize the system-level objective while also satisfying a given coupled constraint. Furthermore, the designed local agent objective functions fit into the framework of state-based potential games. Consequently, one can appeal to existing results in game-theoretic learning to derive a distributed process that guarantees the agents will reach such an equilibrium.

  19. Adaptive GSA-based optimal tuning of PI controlled servo systems with reduced process parametric sensitivity, robust stability and controller robustness.

    PubMed

    Precup, Radu-Emil; David, Radu-Codrut; Petriu, Emil M; Radac, Mircea-Bogdan; Preitl, Stefan

    2014-11-01

    This paper suggests a new generation of optimal PI controllers for a class of servo systems characterized by saturation and dead zone static nonlinearities and second-order models with an integral component. The objective functions are expressed as the integral of time multiplied by absolute error plus the weighted sum of the integrals of output sensitivity functions of the state sensitivity models with respect to two process parametric variations. The PI controller tuning conditions applied to a simplified linear process model involve a single design parameter specific to the extended symmetrical optimum (ESO) method which offers the desired tradeoff to several control system performance indices. An original back-calculation and tracking anti-windup scheme is proposed in order to prevent the integrator wind-up and to compensate for the dead zone nonlinearity of the process. The minimization of the objective functions is carried out in the framework of optimization problems with inequality constraints which guarantee the robust stability with respect to the process parametric variations and the controller robustness. An adaptive gravitational search algorithm (GSA) solves the optimization problems focused on the optimal tuning of the design parameter specific to the ESO method and of the anti-windup tracking gain. A tuning method for PI controllers is proposed as an efficient approach to the design of resilient control systems. The tuning method and the PI controllers are experimentally validated by the adaptive GSA-based tuning of PI controllers for the angular position control of a laboratory servo system.

  20. A LAGRANGIAN GAUSS-NEWTON-KRYLOV SOLVER FOR MASS- AND INTENSITY-PRESERVING DIFFEOMORPHIC IMAGE REGISTRATION.

    PubMed

    Mang, Andreas; Ruthotto, Lars

    2017-01-01

    We present an efficient solver for diffeomorphic image registration problems in the framework of Large Deformations Diffeomorphic Metric Mappings (LDDMM). We use an optimal control formulation, in which the velocity field of a hyperbolic PDE needs to be found such that the distance between the final state of the system (the transformed/transported template image) and the observation (the reference image) is minimized. Our solver supports both stationary and non-stationary (i.e., transient or time-dependent) velocity fields. As transformation models, we consider both the transport equation (assuming intensities are preserved during the deformation) and the continuity equation (assuming mass-preservation). We consider the reduced form of the optimal control problem and solve the resulting unconstrained optimization problem using a discretize-then-optimize approach. A key contribution is the elimination of the PDE constraint using a Lagrangian hyperbolic PDE solver. Lagrangian methods rely on the concept of characteristic curves. We approximate these curves using a fourth-order Runge-Kutta method. We also present an efficient algorithm for computing the derivatives of the final state of the system with respect to the velocity field. This allows us to use fast Gauss-Newton based methods. We present quickly converging iterative linear solvers using spectral preconditioners that render the overall optimization efficient and scalable. Our method is embedded into the image registration framework FAIR and, thus, supports the most commonly used similarity measures and regularization functionals. We demonstrate the potential of our new approach using several synthetic and real world test problems with up to 14.7 million degrees of freedom.
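
    As a small illustration of the characteristic-tracing step, the sketch below integrates a point along the characteristic of a stationary velocity field with a classical fourth-order Runge-Kutta scheme; the rotational velocity field is an illustrative choice, not part of the registration solver.

    ```python
    import numpy as np

    # Stationary 2D velocity field (a rigid rotation, purely illustrative).
    def velocity(x):
        return np.array([-x[1], x[0]])

    def rk4_step(x, h):
        # Classical fourth-order Runge-Kutta update for dx/dt = velocity(x).
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * h * k1)
        k3 = velocity(x + 0.5 * h * k2)
        k4 = velocity(x + h * k3)
        return x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

    # Trace the characteristic passing through (1, 0) over unit time.
    x = np.array([1.0, 0.0])
    n_steps = 100
    for _ in range(n_steps):
        x = rk4_step(x, 1.0 / n_steps)
    print("endpoint of characteristic:", np.round(x, 4))   # approximately (cos 1, sin 1)
    ```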

  1. Assessing Concentrations and Health Impacts of Air Quality Management Strategies: Framework for Rapid Emissions Scenario and Health impact ESTimation (FRESH-EST)

    PubMed Central

    Milando, Chad W.; Martenies, Sheena E.; Batterman, Stuart A.

    2017-01-01

    In air quality management, reducing emissions from pollutant sources often forms the primary response to attaining air quality standards and guidelines. Despite the broad success of air quality management in the US, challenges remain. As examples: allocating emissions reductions among multiple sources is complex and can require many rounds of negotiation; health impacts associated with emissions, the ultimate driver for the standards, are not explicitly assessed; and long dispersion model run-times, which result from the increasing size and complexity of model inputs, limit the number of scenarios that can be evaluated, thus increasing the likelihood of missing an optimal strategy. A new modeling framework, called the "Framework for Rapid Emissions Scenario and Health impact ESTimation" (FRESH-EST), is presented to respond to these challenges. FRESH-EST estimates concentrations and health impacts of alternative emissions scenarios at the urban scale, providing efficient computations from emissions to health impacts at the Census block or other desired spatial scale. In addition, FRESH-EST can optimize emission reductions to meet specified environmental and health constraints, and a convenient user interface and graphical displays are provided to facilitate scenario evaluation. The new framework is demonstrated in an SO2 non-attainment area in southeast Michigan with two optimization strategies: the first minimizes emission reductions needed to achieve a target concentration; the second minimizes concentrations while holding constant the cumulative emissions across local sources (e.g., an emissions floor). The optimized strategies match outcomes in the proposed SO2 State Implementation Plan without the proposed stack parameter modifications or shutdowns. In addition, the lower health impacts estimated for these strategies suggest the potential for FRESH-EST to identify pollution control alternatives for air quality management planning. PMID:27318620

  2. A Framework for the Optimization of Discrete-Event Simulation Models

    NASA Technical Reports Server (NTRS)

    Joshi, B. D.; Unal, R.; White, N. H.; Morris, W. D.

    1996-01-01

    With the growing use of computer modeling and simulation in all aspects of engineering, the scope of traditional optimization has to be extended to include simulation models. Some unique aspects have to be addressed while optimizing via stochastic simulation models. The optimization procedure has to explicitly account for the randomness inherent in the stochastic measures predicted by the model. This paper outlines a general-purpose framework for the optimization of terminating discrete-event simulation models. The methodology combines a chance-constraint approach for problem formulation together with standard statistical estimation and analysis techniques. The applicability of the optimization framework is illustrated by minimizing the operation and support resources of a launch vehicle through a simulation model.

  3. The Business Change Initiative: A Novel Approach to Improved Cost and Schedule Management

    NASA Technical Reports Server (NTRS)

    Shinn, Stephen A.; Bryson, Jonathan; Klein, Gerald; Lunz-Ruark, Val; Majerowicz, Walt; McKeever, J.; Nair, Param

    2016-01-01

    Goddard Space Flight Center's Flight Projects Directorate employed a Business Change Initiative (BCI) to infuse a series of activities coordinated to drive improved cost and schedule performance across Goddard's missions. This sustaining change framework provides a platform to manage and implement cost and schedule control techniques throughout the project portfolio. The BCI concluded in December 2014, deploying over 100 cost and schedule management changes including best practices, tools, methods, training, and knowledge sharing. The new business approach has driven the portfolio to improved programmatic performance. The last eight launched GSFC missions have optimized cost, schedule, and technical performance on a sustained basis to deliver on time and within budget, returning funds in many cases. While not every future mission will boast such strong performance, improved cost and schedule tools, management practices, and ongoing comprehensive evaluations of program planning and control methods to refine and implement best practices will continue to provide a framework for sustained performance. This paper will describe the tools, techniques, and processes developed during the BCI and the utilization of collaborative content management tools to disseminate project planning and control techniques to ensure continuous collaboration and optimization of cost and schedule management in the future.

  4. Teaching and Learning Numerical Analysis and Optimization: A Didactic Framework and Applications of Inquiry-Based Learning

    ERIC Educational Resources Information Center

    Lappas, Pantelis Z.; Kritikos, Manolis N.

    2018-01-01

    The main objective of this paper is to propose a didactic framework for teaching Applied Mathematics in higher education. After describing the structure of the framework, several applications of inquiry-based learning in teaching numerical analysis and optimization are provided to illustrate the potential of the proposed framework. The framework…

  5. An approximation theory for nonlinear partial differential equations with applications to identification and control

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Kunisch, K.

    1982-01-01

    Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.

  6. Probabilistic models in human sensorimotor control

    PubMed Central

    Wolpert, Daniel M.

    2009-01-01

    Sensory and motor uncertainty form a fundamental constraint on human sensorimotor control. Bayesian decision theory (BDT) has emerged as a unifying framework to understand how the central nervous system performs optimal estimation and control in the face of such uncertainty. BDT has two components: Bayesian statistics and decision theory. Here we review Bayesian statistics and show how it applies to estimating the state of the world and our own body. Recent results suggest that when learning novel tasks we are able to learn the statistical properties of both the world and our own sensory apparatus so as to perform estimation using Bayesian statistics. We review studies which suggest that humans can combine multiple sources of information to form maximum likelihood estimates, can incorporate prior beliefs about possible states of the world so as to generate maximum a posteriori estimates and can use Kalman filter-based processes to estimate time-varying states. Finally, we review Bayesian decision theory in motor control and how the central nervous system processes errors to determine loss functions and optimal actions. We review results that suggest we plan movements based on statistics of our actions that result from signal-dependent noise on our motor outputs. Taken together these studies provide a statistical framework for how the motor system performs in the presence of uncertainty. PMID:17628731
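
    The maximum-likelihood cue combination mentioned in the review has a simple closed form for two Gaussian cues: precision-weighted averaging. The sketch below computes it for illustrative visual and proprioceptive estimates (the numbers are made up).

    ```python
    # Precision-weighted (maximum-likelihood) combination of two Gaussian cues,
    # e.g., visual and proprioceptive estimates of hand position.
    mu_v, var_v = 10.0, 4.0      # visual estimate and its variance (illustrative)
    mu_p, var_p = 12.0, 1.0      # proprioceptive estimate and its variance

    w_v = (1 / var_v) / (1 / var_v + 1 / var_p)       # weight on the visual cue
    mu_comb = w_v * mu_v + (1 - w_v) * mu_p
    var_comb = 1 / (1 / var_v + 1 / var_p)            # combined variance is always smaller
    print(f"combined estimate {mu_comb:.2f} with variance {var_comb:.2f}")
    ```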

  7. Enablers and barriers for women with gestational diabetes mellitus to achieve optimal glycaemic control - a qualitative study using the theoretical domains framework.

    PubMed

    Martis, Ruth; Brown, Julie; McAra-Couper, Judith; Crowther, Caroline A

    2018-04-11

    Glycaemic target recommendations vary widely between international professional organisations for women with gestational diabetes mellitus (GDM). Some studies have reported women's experiences of having GDM, but little is known how this relates to their glycaemic targets. The aim of this study was to identify enablers and barriers for women with GDM to achieve optimal glycaemic control. Women with GDM were recruited from two large, geographically different, hospitals in New Zealand to participate in a semi-structured interview to explore their views and experiences focusing on enablers and barriers to achieving optimal glycaemic control. Final thematic analysis was performed using the Theoretical Domains Framework. Sixty women participated in the study. Women reported a shift from their initial negative response to accepting their diagnosis but disliked the constant focus on numbers. Enablers and barriers were categorised into ten domains across the three study questions. Enablers included: the ability to attend group teaching sessions with family and hear from women who have had GDM; easy access to a diabetes dietitian with diet recommendations tailored to a woman's context including ethnic food and financial considerations; free capillary blood glucose (CBG) monitoring equipment, health shuttles to take women to appointments; child care when attending clinic appointments; and being taught CBG testing by a community pharmacist. Barriers included: lack of health information, teaching sessions, consultations, and food diaries in a woman's first language; long waiting times at clinic appointments; seeing a different health professional every clinic visit; inconsistent advice; no tailored physical activities assessments; not knowing where to access appropriate information on the internet; unsupportive partners, families, and workplaces; and unavailability of social media or support groups for women with GDM. Perceived judgement by others led some women only to share their GDM diagnosis with their partners. This created social isolation. Women with GDM report multiple enablers and barriers to achieving optimal glycaemic control. The findings of this study may assist health professionals and diabetes in pregnancy services to improve their care for women with GDM and support them to achieve optimal glycaemic control.

  8. Neural Meta-Memes Framework for Combinatorial Optimization

    NASA Astrophysics Data System (ADS)

    Song, Li Qin; Lim, Meng Hiot; Ong, Yew Soon

    In this paper, we present a Neural Meta-Memes Framework (NMMF) for combinatorial optimization. NMMF is a framework which models basic optimization algorithms as memes and manages them dynamically when solving combinatorial problems. NMMF encompasses neural networks which serve as the overall planner/coordinator to balance the workload between memes. We show the efficacy of the proposed NMMF through an empirical study on a class of combinatorial problems, the quadratic assignment problem (QAP).
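
    For concreteness, the sketch below evaluates the QAP cost of an assignment and improves it with a plain pairwise-swap local search, the kind of basic optimizer that could serve as one "meme"; the flow and distance matrices are randomly generated for illustration, and the search is not the NMMF itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n = 8
    flow = rng.integers(0, 10, size=(n, n))       # illustrative flow and distance matrices
    dist = rng.integers(0, 10, size=(n, n))

    def qap_cost(perm):
        # Cost of assigning facility i to location perm[i]:
        # sum over i, j of flow[i, j] * dist[perm[i], perm[j]].
        return int(np.sum(flow * dist[np.ix_(perm, perm)]))

    # Simple pairwise-swap local search (one possible basic "meme").
    perm = rng.permutation(n)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            for j in range(i + 1, n):
                cand = perm.copy()
                cand[i], cand[j] = cand[j], cand[i]
                if qap_cost(cand) < qap_cost(perm):
                    perm, improved = cand, True

    print("assignment:", perm, "cost:", qap_cost(perm))
    ```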

  9. A Web-Based Monitoring System for Multidisciplinary Design Projects

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Salas, Andrea O.; Weston, Robert P.

    1998-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary computational environments, is defined as a hardware and software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, displaying, monitoring, and controlling the design process. The objective of this research is to explore how Web technology, integrated with an existing framework, can improve these areas of weakness. This paper describes a Web-based system that optimizes and controls the execution sequence of design processes; and monitors the project status and results. The three-stage evolution of the system with increasingly complex problems demonstrates the feasibility of this approach.

  10. Look on the bright side: do the benefits of optimism depend on the social nature of the stressor?

    PubMed

    Terrill, Alexandra L; Ruiz, John M; Garofalo, John P

    2010-10-01

    Growing evidence suggests that a number of personality traits associated with physical disease risk tend to be social in nature and selectively responsive to social as opposed to non-social stimuli. The current aim was to examine dispositional optimism within this framework. In Study 1, optimism was projected into the Interpersonal Circumplex and Five Factor Model revealing significant interpersonal representation characterized by high control and affiliation. Study 2 demonstrated that higher dispositional optimism attenuated cardiovascular responses to a social (speech) but not non-social stressor (cold pressor) task. Optimism-related attenuation of reactivity to the social vs. non-social stressor contributes further evidence to an emerging picture of psychosocial risk as largely reflecting person x social environment interactions.

  11. Optimal Fault-Tolerant Control for Discrete-Time Nonlinear Strict-Feedback Systems Based on Adaptive Critic Design.

    PubMed

    Wang, Zhanshan; Liu, Lei; Wu, Yanming; Zhang, Huaguang

    2018-06-01

    This paper investigates the problem of optimal fault-tolerant control (FTC) for a class of unknown nonlinear discrete-time systems with actuator faults in the framework of adaptive critic design (ACD). A pivotal highlight is the adaptive auxiliary signal of the actuator fault, which is designed to offset the effect of the fault. The considered systems are in strict-feedback form and involve unknown nonlinear functions, which results in the causal problem. To solve this problem, the original nonlinear systems are transformed into a novel system by employing diffeomorphism theory. Besides, action neural networks (ANNs) are utilized to approximate a predefined unknown function in the backstepping design procedure. Combining the strategic utility function and the ACD technique, a reinforcement learning algorithm is proposed to set up an optimal FTC, in which critic neural networks (CNNs) provide an approximate structure of the cost function. In this case, the approach not only guarantees the stability of the systems but also achieves optimal control performance. In the end, two simulation examples are used to show the effectiveness of the proposed optimal FTC strategy.

  12. Coupled attitude-orbit dynamics and control for an electric sail in a heliocentric transfer mission.

    PubMed

    Huo, Mingying; Zhao, Jun; Xie, Shaobiao; Qi, Naiming

    2015-01-01

    The paper discusses the coupled attitude-orbit dynamics and control of an electric-sail-based spacecraft in a heliocentric transfer mission. The mathematical model characterizing the propulsive thrust is first described as a function of the orbital radius and the sail angle. Since the solar wind dynamic pressure acceleration is induced by the sail attitude, the orbital and attitude dynamics of electric sails are coupled, and are discussed together. Based on the coupled equations, the flight control is investigated, wherein the orbital control is studied in an optimal framework via a hybrid optimization method and the attitude controller is designed based on feedback linearization control. To verify the effectiveness of the proposed control strategy, a transfer problem from Earth to Mars is considered. The numerical results show that the proposed strategy can control the coupled system very well, and a small control torque can control both the attitude and orbit. The study in this paper will contribute to the theory and application of electric sails.

  13. Coupled Attitude-Orbit Dynamics and Control for an Electric Sail in a Heliocentric Transfer Mission

    PubMed Central

    Huo, Mingying; Zhao, Jun; Xie, Shaobiao; Qi, Naiming

    2015-01-01

    The paper discusses the coupled attitude-orbit dynamics and control of an electric-sail-based spacecraft in a heliocentric transfer mission. The mathematical model characterizing the propulsive thrust is first described as a function of the orbital radius and the sail angle. Since the solar wind dynamic pressure acceleration is induced by the sail attitude, the orbital and attitude dynamics of electric sails are coupled, and are discussed together. Based on the coupled equations, the flight control is investigated, wherein the orbital control is studied in an optimal framework via a hybrid optimization method and the attitude controller is designed based on feedback linearization control. To verify the effectiveness of the proposed control strategy, a transfer problem from Earth to Mars is considered. The numerical results show that the proposed strategy can control the coupled system very well, and a small control torque can control both the attitude and orbit. The study in this paper will contribute to the theory and application of electric sails. PMID:25950179

  14. Optimal protocols for slowly driven quantum systems.

    PubMed

    Zulkowski, Patrick R; DeWeese, Michael R

    2015-09-01

    The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.

  15. Optimal integration of a hybrid solar-battery power source into smart home nanogrid with plug-in electric vehicle

    NASA Astrophysics Data System (ADS)

    Wu, Xiaohua; Hu, Xiaosong; Teng, Yanqiong; Qian, Shide; Cheng, Rui

    2017-09-01

    A hybrid solar-battery power source is essential at the nexus of plug-in electric vehicles (PEVs), renewables, and smart buildings. This paper devises an optimization framework for efficient energy management and component sizing of a single smart home with a home battery, a PEV, and photovoltaic (PV) arrays. We seek to maximize the home economy while satisfying the home power demand and PEV driving needs. Based on the structure and system models of the smart home nanogrid, a convex programming (CP) problem is formulated to rapidly and efficiently optimize both the control decisions and the parameters of the home battery energy storage system (BESS). The parameters of the home BESS and the electricity cost are systematically investigated for different optimization time horizons, home BESS prices, and PEV types and control modes. Under the developed CP control law in home-to-vehicle (H2V) and vehicle-to-home (V2H) modes, the home with a BESS does not buy electric energy from the grid during peak-price periods.
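
    A home energy-management problem of this general shape is convenient to prototype as a convex program. The sketch below is a minimal illustration only, not the authors' model: the tariff, PV profile, demand profile, battery capacity, power limit, and efficiency are invented values, and the formulation is a plain one-day linear program written in Python with CVXPY.

        import numpy as np
        import cvxpy as cp

        T = 24                                                # hourly steps over one day
        price = np.array([0.10]*7 + [0.25]*12 + [0.10]*5)     # $/kWh, assumed tariff
        pv = np.clip(np.sin(np.linspace(0, np.pi, T)), 0, None) * 3.0  # kW, assumed PV output
        load = np.full(T, 1.2); load[18:22] = 3.0             # kW, assumed home demand

        cap, p_max, eta = 10.0, 3.0, 0.95                     # kWh capacity, kW limit, efficiency

        grid = cp.Variable(T)                                 # power bought from the grid (kW)
        chg = cp.Variable(T, nonneg=True)                     # battery charging power (kW)
        dis = cp.Variable(T, nonneg=True)                     # battery discharging power (kW)
        soc = cp.Variable(T + 1)                              # battery state of charge (kWh)

        cons = [soc[0] == 0.5 * cap, soc[T] >= 0.5 * cap,
                chg <= p_max, dis <= p_max,
                soc >= 0, soc <= cap, grid >= 0]
        for t in range(T):
            cons += [soc[t + 1] == soc[t] + eta * chg[t] - dis[t] / eta,
                     grid[t] + pv[t] + dis[t] == load[t] + chg[t]]   # hourly power balance

        cost = cp.sum(cp.multiply(price, grid))               # electricity bill to minimize
        prob = cp.Problem(cp.Minimize(cost), cons)
        prob.solve()
        print("optimal daily cost: %.2f $" % prob.value)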

  16. Multidisciplinary Optimization Branch Experience Using iSIGHT Software

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Korte, J. J.; Dunn, H. J.; Salas, A. O.

    1999-01-01

    The Multidisciplinary Optimization (MDO) Branch at NASA Langley Research Center is investigating frameworks for supporting multidisciplinary analysis and optimization research. An optimization framework can improve the design process while reducing time and costs. A framework provides software and system services to integrate computational tasks and allows the researcher to concentrate more on the application and less on the programming details. A framework also provides a common working environment and a full range of optimization tools, and so increases the productivity of multidisciplinary research teams. Finally, a framework enables staff members to develop applications for use by disciplinary experts in other organizations. Since the release of version 4.0, the MDO Branch has gained experience with the iSIGHT framework developed by Engineous Software, Inc. This paper describes experiences with four aerospace applications: (1) reusable launch vehicle sizing, (2) aerospike nozzle design, (3) low-noise rotorcraft trajectories, and (4) acoustic liner design. All applications have been successfully tested using the iSIGHT framework, except for the aerospike nozzle problem, which is in progress. Brief overviews of each problem are provided. The problem descriptions include the number and type of disciplinary codes, as well as an estimate of the multidisciplinary analysis execution time. In addition, the optimization methods, objective functions, design variables, and design constraints are described for each problem. Discussions of the experience gained and lessons learned are provided for each problem. These discussions include the advantages and disadvantages of using the iSIGHT framework for each case as well as the ease of use of various advanced features. Potential areas of improvement are identified.

  17. Multiobjective optimization for Groundwater Nitrate Pollution Control. Application to El Salobral-Los Llanos aquifer (Spain).

    NASA Astrophysics Data System (ADS)

    Llopis-Albert, C.; Peña-Haro, S.; Pulido-Velazquez, M.; Molina, J.

    2012-04-01

    Water quality management is complex due to the inter-relations between socio-political, environmental and economic constraints and objectives. In order to choose an appropriate policy to reduce nitrate pollution in groundwater it is necessary to consider different objectives, often in conflict. In this paper, a hydro-economic modeling framework, based on a non-linear optimization (CONOPT) technique, which embeds simulation of groundwater mass transport through concentration response matrices, is used to study optimal policies for groundwater nitrate pollution control under different objectives and constraints. Three objectives were considered: recovery time (for meeting the environmental standards, as required by the EU Water Framework Directive and Groundwater Directive), maximum nitrate concentration in groundwater, and net benefits in agriculture. Another criterion was added: the reliability of meeting the nitrate concentration standards. The approach allows deriving the trade-offs between the reliability of meeting the standard, the net benefits from agricultural production and the recovery time. Two different policies were considered: spatially distributed fertilizer standards or quotas (obtained through multi-objective optimization) and fertilizer prices. The multi-objective analysis allows the achievement of the different policies to be compared, yielding Pareto fronts (or efficiency frontiers) and tradeoffs for the set of mutually conflicting objectives. The constraint method is applied to generate the set of non-dominated solutions. The multi-objective framework can be used to design groundwater management policies that take into consideration different stakeholders' interests (e.g., policy makers, farmers or environmental groups). The methodology was applied to the El Salobral-Los Llanos aquifer in Spain. Over the past 30 years the area has undergone significant socioeconomic development, mainly due to intensive groundwater use for irrigated crops, which has caused a steady decline of groundwater levels as well as high nitrate concentrations at certain locations (above 50 mg/l). The results showed the usefulness of this multi-objective hydro-economic approach for designing sustainable nitrate pollution control policies (such as fertilizer quotas or efficient fertilizer pricing policies) with insight into the economic cost of satisfying the environmental constraints and the tradeoffs over different time horizons.
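
    The trade-off curves described here can be generated with the constraint (epsilon-constraint) method: maximize agricultural net benefit while tightening an upper bound on nitrate concentration. The sketch below is a toy illustration under assumed data (three zones, quadratic benefit, linear concentration response), not the paper's hydro-economic model.

        import numpy as np
        from scipy.optimize import minimize, LinearConstraint

        # Illustrative data: 3 agricultural zones with fertilizer rates x (kg/ha).
        # Net benefit per zone is a concave quadratic; nitrate concentration at a
        # control well is assumed to respond linearly to the applied rates.
        a = np.array([4.0, 3.0, 5.0])          # marginal benefit coefficients (assumed)
        b = np.array([0.010, 0.008, 0.012])    # diminishing-returns coefficients (assumed)
        r = np.array([0.15, 0.25, 0.10])       # mg/l per kg/ha response (assumed)

        def neg_benefit(x):
            return -np.sum(a * x - b * x ** 2)

        x_max = 300.0                          # agronomic upper bound on fertilizer rate
        pareto = []
        for eps in np.linspace(20.0, 60.0, 9): # nitrate standard levels (mg/l) to scan
            conc_cap = LinearConstraint(r, lb=0.0, ub=eps)
            res = minimize(neg_benefit, x0=np.full(3, 50.0), method="SLSQP",
                           bounds=[(0.0, x_max)] * 3, constraints=[conc_cap])
            pareto.append((eps, -res.fun, res.x))

        for eps, benefit, x in pareto:
            print(f"max conc <= {eps:4.1f} mg/l -> benefit {benefit:8.1f}, rates {np.round(x, 1)}")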

  18. Optimal initiation of electronic excited state mediated intramolecular H-transfer in malonaldehyde by UV-laser pulses

    NASA Astrophysics Data System (ADS)

    Nandipati, K. R.; Singh, H.; Nagaprasad Reddy, S.; Kumar, K. A.; Mahapatra, S.

    2014-12-01

    Optimally controlled initiation of intramolecular H-transfer in malonaldehyde is accomplished by designing a sequence of ultrashort (~80 fs) down-chirped pump-dump ultraviolet (UV) laser pulses through an optically bright electronic excited S2(ππ*) state as a mediator. The sequence of such laser pulses is theoretically synthesized within the framework of optimal control theory (OCT), employing the well-known pump-dump scheme of Tannor and Rice [D.J. Tannor, S.A. Rice, J. Chem. Phys. 83, 5013 (1985)]. In OCT, the control task is framed as the maximization of a cost functional defined in terms of an objective function together with constraints on the field intensity and the system dynamics. The latter is monitored by solving the time-dependent Schrödinger equation. The initial guess field, the laser-driven dynamics, the optimized pulse structure (i.e., the spectral content and temporal profile), and the associated mechanism by which the control task is fulfilled are examined in detail and discussed. A comparative account of the dynamical outcomes within the Condon approximation for the transition dipole moment versus its more realistic value calculated ab initio is also presented.
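
    For orientation, the OCT cost functional referred to here is typically written in a form similar to the following (shown schematically; the exact objective operator and fluence penalty used by the authors may differ):

        \[
          J[E(t)] \;=\; \langle \psi(T) \,|\, \hat{O} \,|\, \psi(T) \rangle
          \;-\; \lambda_0 \int_0^{T} \frac{|E(t)|^{2}}{s(t)}\, \mathrm{d}t ,
          \qquad \text{subject to} \quad
          i\hbar\, \partial_t \psi(t) \;=\; \hat{H}[E(t)]\, \psi(t),
        \]

    where \hat{O} projects onto the target (H-transferred) state, \lambda_0/s(t) penalizes the field fluence through a smooth switching envelope s(t), and the dynamical constraint is the time-dependent Schrödinger equation mentioned in the abstract.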

  19. Frameworks and Tools for High-Confidence Design of Adaptive, Distributed Embedded Control Systems. Multi-University Research Initiative on High-Confidence Design for Distributed Embedded Systems

    DTIC Science & Technology

    2009-01-01

    controllers (currently using the Robostix+Gumstix pair). The interface between the plant simulator and the controller is ‘hard real-time’, and the xPC box... simulation) on aerobatic maneuver design for the STARMAC quadrotor helicopter testbed. In related work, we have developed a new optimization scheme... for scheduling hybrid systems, and have demonstrated the results on an autonomous car simulation testbed. We are focusing efforts this summer for

  20. A Systematic Approach for Quantitative Analysis of Multidisciplinary Design Optimization Framework

    NASA Astrophysics Data System (ADS)

    Kim, Sangho; Park, Jungkeun; Lee, Jeong-Oog; Lee, Jae-Woo

    An efficient Multidisciplinary Design and Optimization (MDO) framework for an aerospace engineering system should use and integrate distributed resources such as various analysis codes, optimization codes, Computer Aided Design (CAD) tools, Data Base Management Systems (DBMS), etc. in a heterogeneous environment, and needs to provide user-friendly graphical user interfaces. In this paper, we propose a systematic approach for determining a reference MDO framework and for evaluating MDO frameworks. The proposed approach incorporates two well-known methods, the Analytic Hierarchy Process (AHP) and Quality Function Deployment (QFD), in order to provide a quantitative analysis of the qualitative criteria of MDO frameworks. The identification and hierarchy of the framework requirements and the corresponding solutions for the reference MDO frameworks (a general one and an aircraft-oriented one) were carefully investigated. The reference frameworks were also quantitatively identified using AHP and QFD. An assessment of three in-house frameworks was then performed. The results produced clear and useful guidelines for improving the in-house MDO frameworks and showed the feasibility of the proposed approach for evaluating an MDO framework without human interference.
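
    The AHP step of such an evaluation reduces to computing priority weights from a pairwise comparison matrix. The sketch below uses invented criteria and judgments rather than the paper's data, and shows the principal-eigenvector weights and the consistency ratio in Python/NumPy.

        import numpy as np

        # Illustrative AHP step: pairwise comparison of three hypothetical framework
        # criteria (ease of integration, GUI quality, optimization capability) on
        # Saaty's 1-9 scale.  The entries are assumptions, not the paper's data.
        A = np.array([[1.0, 3.0, 0.5],
                      [1/3, 1.0, 0.25],
                      [2.0, 4.0, 1.0]])

        # Priority weights = principal right eigenvector, normalized to sum to 1.
        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()

        # Consistency ratio (CR) checks whether the judgments are acceptably coherent.
        n = A.shape[0]
        ci = (vals[k].real - n) / (n - 1)
        ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
        print("weights:", np.round(w, 3), " CR = %.3f" % (ci / ri))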

  1. Multiple target sound quality balance for hybrid electric powertrain noise

    NASA Astrophysics Data System (ADS)

    Mosquera-Sánchez, J. A.; Sarrazin, M.; Janssens, K.; de Oliveira, L. P. R.; Desmet, W.

    2018-01-01

    The integration of the electric motor into the powertrain of hybrid electric vehicles (HEVs) produces acoustic stimuli that elicit new perceptions. The large number of spectral components, as well as the wider bandwidth of this sort of noise, pose new challenges to current noise, vibration and harshness (NVH) approaches. This paper presents a framework for enhancing the sound quality (SQ) of the hybrid electric powertrain noise perceived inside the passenger compartment. Compared with current active sound quality control (ASQC) schemes, where the SQ improvement is just an effect of the control actions, the proposed technique features an optimization stage, which enables the NVH specialist to actively implement the amplitude balance of the tones that best fits the auditory expectations. Since Loudness, Roughness, Sharpness and Tonality are the most relevant SQ metrics for interior HEV noise, they are used as performance metrics in the concurrent optimization analysis, which ultimately drives the control design method. Thus, multichannel active sound profiling systems that feature cross-channel compensation schemes are guided by the multi-objective optimization stage, by means of optimal sets of amplitude gain factors that can be implemented at each sensor location, while minimizing cross-channel effects that could either degrade the original SQ condition or hinder the implementation of independent SQ targets. The proposed framework is verified experimentally with realistic stationary hybrid electric powertrain noise, showing SQ enhancement at multiple locations within a scaled vehicle mock-up. The results show total success rates in excess of 90%, which indicates that the proposed method is promising, not only for improving the SQ of HEV noise, but also for a variety of periodic disturbances with similar features.

  2. Event-Triggered Adaptive Dynamic Programming for Continuous-Time Systems With Control Constraints.

    PubMed

    Dong, Lu; Zhong, Xiangnan; Sun, Changyin; He, Haibo

    2016-08-31

    In this paper, an event-triggered near-optimal control structure is developed for nonlinear continuous-time systems with control constraints. Because of the saturating actuators, a nonquadratic cost function is introduced and the Hamilton-Jacobi-Bellman (HJB) equation for constrained nonlinear continuous-time systems is formulated. In order to solve the HJB equation, an actor-critic framework is presented, in which the critic network approximates the cost function and the action network estimates the optimal control law. In addition, in the proposed method the control signal is transmitted in an aperiodic manner to reduce the computational and transmission costs. Both networks are updated only at the trigger instants determined by the event-triggering condition. A detailed Lyapunov analysis is provided to guarantee that the closed-loop event-triggered system is ultimately bounded. Three case studies are used to demonstrate the effectiveness of the proposed method.
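
    The aperiodic transmission idea can be illustrated independently of the ADP design: recompute the control only when the gap between the current state and the last sampled state exceeds a threshold. The sketch below uses a toy linear plant and a fixed, assumed feedback gain, not the paper's actor-critic networks or its triggering condition.

        import numpy as np

        # Toy illustration of event-triggered feedback (not the paper's ADP design).
        A = np.array([[0.0, 1.0], [-2.0, -0.5]])      # assumed plant dynamics
        B = np.array([[0.0], [1.0]])
        K = np.array([[1.0, 1.5]])                    # assumed stabilizing gain
        dt, T = 0.01, 10.0

        x = np.array([1.0, 0.0])
        x_k = x.copy()                                # state at the last trigger instant
        u = -K @ x_k
        events = 0
        for step in range(int(T / dt)):
            e = x - x_k                               # measurement gap since last trigger
            if np.linalg.norm(e) > 0.1 * np.linalg.norm(x) + 1e-4:
                x_k = x.copy()                        # trigger: sample state, update control
                u = -K @ x_k
                events += 1
            x = x + dt * (A @ x + (B @ u).ravel())    # forward-Euler plant update

        print("updates: %d of %d steps, final |x| = %.3e"
              % (events, int(T / dt), np.linalg.norm(x)))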

  3. Scheduling algorithms for rapid imaging using agile Cubesat constellations

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Li, Alan S.; Merrick, James H.

    2018-02-01

    Distributed space missions, such as formation flight and constellations, are being recognized as important Earth observation solutions for increasing measurement samples over space and time. Cubesats are increasing in size (27U, ∼40 kg in development) with increasing capabilities to host imager payloads. Given the precise attitude control systems emerging in the commercial market, Cubesats now have the ability to slew and capture images within short notice. We propose a modular framework that combines orbital mechanics, attitude control and scheduling optimization to plan the time-varying, full-body orientation of agile Cubesats in a constellation such that they maximize the number of observed images and the observation time, within the constraints of Cubesat hardware specifications. The attitude control strategy combines bang-bang and PD control, with constraints such as power consumption, response time, and stability factored into the optimality computations, and a possible extension to PID control to account for disturbances. Schedule optimization is performed using dynamic programming with two levels of heuristics, verified and improved upon using mixed integer linear programming. The automated scheduler is expected to run on ground station resources, with the resultant schedules uplinked to the satellites for execution; however, it can be adapted for onboard scheduling, contingent on Cubesat hardware and software upgrades. The framework is generalizable over small steerable spacecraft, sensor specifications, imaging objectives and regions of interest, and is demonstrated using multiple 20 kg satellites in Low Earth Orbit for two case studies: rapid imaging of Landsat's land and coastal images, and extended imaging of global, warm-water coral reefs. The proposed algorithm captures up to 161% more Landsat images than nadir-pointing sensors with the same field of view, on a 2-satellite constellation over a 12-h simulation. Integer programming verified that the dynamic programming solution for single satellites was within 10% of optimal, and found solutions up to 5% better. The optimality gap for constellations was found to be 22% at worst, but the dynamic programming schedules were computed nearly four orders of magnitude faster than with integer programming. The algorithm can include cloud cover predictions, ground downlink windows or any other spatial, temporal or angular constraints in the orbital module, and can be integrated into planning tools for agile constellations.
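
    The scheduling layer can be sketched as a small dynamic program over discrete time slots with a slew-time penalty for re-pointing. The targets, values, visibility windows and slew cost below are invented, and the recursion is far simpler than the authors' coupled orbital/attitude formulation.

        import functools

        # Simplified single-satellite scheduling: in each time slot the satellite may
        # image one visible target (at most once per target) or idle; switching
        # between targets costs SLEW slots.  All data below are illustrative.
        slots = 12
        targets = {                      # target -> (value, set of visible slots)
            "A": (5, {0, 1, 2, 3}),
            "B": (3, {2, 3, 4, 5, 6}),
            "C": (4, {5, 6, 7, 8}),
            "D": (6, {8, 9, 10, 11}),
        }
        SLEW = 2                         # slots needed to re-point between targets

        @functools.lru_cache(maxsize=None)
        def best(slot, last, done):
            """Max value obtainable from 'slot' onward, given the last target and the set done."""
            if slot >= slots:
                return 0, ()
            value, plan = best(slot + 1, last, done)          # option 1: idle this slot
            for name, (val, vis) in targets.items():          # option 2: image a target
                if name in done:
                    continue
                start = slot if (last is None or last == name) else slot + SLEW
                if start < slots and start in vis:
                    v, p = best(start + 1, name, done | {name})
                    if val + v > value:
                        value, plan = val + v, ((start, name),) + p
            return value, plan

        total, schedule = best(0, None, frozenset())
        print("total value:", total)
        print("schedule (slot, target):", schedule)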

  4. Parameter optimization of a hydrologic model in a snow-dominated basin using a modular Python framework

    NASA Astrophysics Data System (ADS)

    Volk, J. M.; Turner, M. A.; Huntington, J. L.; Gardner, M.; Tyler, S.; Sheneman, L.

    2016-12-01

    Many distributed models that simulate watershed hydrologic processes require a collection of multi-dimensional parameters as input, some of which need to be calibrated before the model can be applied. The Precipitation Runoff Modeling System (PRMS) is a physically based and spatially distributed hydrologic model that contains a considerable number of parameters that often need to be calibrated. Modelers can also benefit from uncertainty analysis of these parameters. To meet these needs, we developed a modular framework in Python to conduct PRMS parameter optimization, uncertainty analysis, interactive visual inspection of parameters and outputs, and other common modeling tasks. Here we present results for multi-step calibration of sensitive parameters controlling solar radiation, potential evapotranspiration, and streamflow in a PRMS model applied to the snow-dominated Dry Creek watershed in Idaho. We also demonstrate how our modular approach enables the user to apply a variety of parameter optimization and uncertainty methods, or easily define their own, including Monte Carlo random sampling, uniform sampling, and optimization methods such as the downhill simplex method or its commonly used, more robust counterpart, shuffled complex evolution.
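
    The calibration pattern described here (wrap a model run in an objective function and hand it to an optimizer) can be sketched with SciPy's downhill simplex. The run_model function below is a hypothetical stand-in for a PRMS run, and the observations and parameters are invented.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        obs = rng.gamma(2.0, 3.0, size=365)        # stand-in for observed daily streamflow

        def run_model(params, n=365):
            """Hypothetical stand-in for a PRMS run: returns simulated streamflow."""
            degree_day, recession = params
            t = np.arange(n)
            return degree_day * (1.5 + np.sin(2 * np.pi * t / 365)) * np.exp(-recession * (t % 90) / 90)

        def objective(params):
            """1 - Nash-Sutcliffe efficiency: lower is better."""
            sim = run_model(params)
            return np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        res = minimize(objective, x0=[2.0, 0.5], method="Nelder-Mead",
                       options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 500})
        print("calibrated parameters:", np.round(res.x, 3), " objective:", round(res.fun, 3))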

  5. Optimal digital dynamical decoupling for general decoherence via Walsh modulation

    NASA Astrophysics Data System (ADS)

    Qi, Haoyu; Dowling, Jonathan P.; Viola, Lorenza

    2017-11-01

    We provide a general framework for constructing digital dynamical decoupling sequences based on Walsh modulation—applicable to arbitrary qubit decoherence scenarios. By establishing equivalence between decoupling design based on Walsh functions and on concatenated projections, we identify a family of optimal Walsh sequences, which can be exponentially more efficient, in terms of the required total pulse number, for fixed cancellation order, than known digital sequences based on concatenated design. Optimal sequences for a given cancellation order are highly non-unique—their performance depending sensitively on the control path. We provide an analytic upper bound to the achievable decoupling error and show how sequences within the optimal Walsh family can substantially outperform concatenated decoupling in principle, while respecting realistic timing constraints.
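
    For orientation, Walsh functions are conveniently generated as products of Rademacher square waves, and in Walsh dynamical decoupling the pulse times of a sequence correspond to the sign switches of the chosen Walsh function. The sketch below uses one common (Paley-ordering) convention and is illustrative only.

        import numpy as np

        def rademacher(k, x):
            """r_k(x) = sign of sin(2^k * pi * x) on [0, 1): a +/-1 square wave."""
            return (-1) ** np.floor((2 ** k) * x).astype(int)

        def walsh_paley(n, x):
            """Walsh function of index n (Paley ordering) as a product of Rademachers."""
            w = np.ones_like(x, dtype=int)
            k = 0
            while n:
                if n & 1:
                    w = w * rademacher(k + 1, x)
                n >>= 1
                k += 1
            return w

        # Sign switches of w_n over [0, T] give the (normalized) pulse times of the
        # corresponding Walsh dynamical-decoupling sequence.
        x = np.linspace(0.0, 1.0, 2 ** 10, endpoint=False)
        w = walsh_paley(5, x)                       # n = 5 = binary 101 -> r_1 * r_3
        switches = x[1:][np.diff(w) != 0]
        print("number of pulses:", switches.size)
        print("pulse times (fraction of T):", np.round(switches, 4))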

  6. Multidisciplinary Optimization Branch Experience Using iSIGHT Software

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Korte, J. J.; Dunn, H. J.; Salas, A. O.

    1999-01-01

    The Multidisciplinary Optimization (MDO) Branch at NASA Langley is investigating frameworks for supporting multidisciplinary analysis and optimization research. A framework provides software and system services to integrate computational tasks and allows the researcher to concentrate more on the application and less on the programming details. A framework also provides a common working environment and a full range of optimization tools, and so increases the productivity of multidisciplinary research teams. Finally, a framework enables staff members to develop applications for use by disciplinary experts in other organizations. This year, the MDO Branch has gained experience with the iSIGHT framework. This paper describes experiences with four aerospace applications, including: (1) reusable launch vehicle sizing, (2) aerospike nozzle design, (3) low-noise rotorcraft trajectories, and (4) acoustic liner design. Brief overviews of each problem are provided, including the number and type of disciplinary codes and computation time estimates. In addition, the optimization methods, objective functions, design variables, and constraints are described for each problem. For each case, discussions on the advantages and disadvantages of using the iSIGHT framework are provided as well as notes on the ease of use of various advanced features and suggestions for areas of improvement.

  7. Trends in Process Analytical Technology: Present State in Bioprocessing.

    PubMed

    Jenzsch, Marco; Bell, Christian; Buziol, Stefan; Kepert, Felix; Wegele, Harald; Hakemeyer, Christian

    2017-08-04

    Process analytical technology (PAT), the regulatory initiative for incorporating quality in pharmaceutical manufacturing, is an area of intense research and interest. If PAT is effectively applied to bioprocesses, it can increase process understanding and control, and mitigate the risk from substandard drug products to both manufacturer and patient. To optimize the benefits of PAT, the entire PAT framework must be considered and each element of PAT must be carefully selected, including sensor and analytical technology, data analysis techniques, control strategies and algorithms, and process optimization routines. This chapter discusses the current state of PAT in the biopharmaceutical industry, including several case studies demonstrating the degree of maturity of various PAT tools. Graphical Abstract: Hierarchy of QbD components.

  8. Linear quadratic tracking problems in Hilbert space - Application to optimal active noise suppression

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.

    1989-01-01

    A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
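
    Schematically, the finite-energy tracking problems discussed here minimize a quadratic functional of the tracking error and the control effort, subject to the linear evolution equation (the finite-dimensional caricature below omits the operator-theoretic details treated in the paper):

        \[
          J(u) \;=\; \int_{0}^{\infty}
          \Big[ \big(Cx(t) - r(t)\big)^{\top} Q\, \big(Cx(t) - r(t)\big)
                \;+\; u(t)^{\top} R\, u(t) \Big]\, \mathrm{d}t ,
          \qquad
          \dot{x}(t) \;=\; A x(t) + B u(t) + f(t),
        \]

    where r(t) is the reference signal to be tracked, f(t) stands for the nonhomogeneous component (which in the paper depends linearly on the control), and Q, R are symmetric positive weighting operators.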

  9. Towards a hierarchical optimization framework for spatially targeting incentive policies to promote green infrastructure amidst multiple objectives and uncertainty

    EPA Science Inventory

    We introduce a hierarchical optimization framework for spatially targeting green infrastructure (GI) incentive policies in order to meet objectives related to cost and environmental effectiveness. The framework explicitly simulates the interaction between multiple levels of polic...

  10. An ounce of prevention or a pound of cure: bioeconomic risk analysis of invasive species.

    PubMed

    Leung, Brian; Lodge, David M; Finnoff, David; Shogren, Jason F; Lewis, Mark A; Lamberti, Gary

    2002-12-07

    Numbers of non-indigenous species (species introduced from elsewhere) are increasing rapidly worldwide, causing both environmental and economic damage. However, rigorous quantitative risk-analysis frameworks for invasive species are lacking. We need to evaluate the risks posed by invasive species and quantify the relative merits of different management strategies (e.g. allocation of resources between prevention and control). We present a quantitative bioeconomic modelling framework to analyse risks from non-indigenous species to economic activity and the environment. The model identifies the optimal allocation of resources to prevention versus control, acceptable invasion risks and the consequences of invasion for optimal investments (e.g. labour and capital). We apply the model to zebra mussels (Dreissena polymorpha), and show that society could benefit by spending up to US$324 000 year(-1) to prevent invasions into a single lake with a power plant. By contrast, the US Fish and Wildlife Service spent US$825 000 in 2001 to manage all aquatic invaders in all US lakes. Thus, greater investment in prevention is warranted.

  11. Evaluation of linearly solvable Markov decision process with dynamic model learning in a mobile robot navigation task.

    PubMed

    Kinjo, Ken; Uchibe, Eiji; Doya, Kenji

    2013-01-01

    The linearly solvable Markov decision process (LMDP) is a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments on a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and size of a battery in the robot's camera view and two neck joint angles. The action is the velocities of the two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
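
    The discrete-state version of the transformation described here can be written in a few lines: the exponentiated value function (desirability) z satisfies a linear equation, and the optimal policy reweights the passive dynamics by z. The chain, costs and passive dynamics below are invented; the robot tasks in the paper are continuous.

        import numpy as np

        # Discrete LMDP on a 5-state chain: passive dynamics P (uncontrolled random
        # walk), state costs q, and an absorbing zero-cost goal state.
        n = 5
        P = np.zeros((n, n))
        for s in range(n - 1):
            P[s, max(s - 1, 0)] += 0.5
            P[s, s + 1] += 0.5
        P[n - 1, n - 1] = 1.0                 # goal state is absorbing
        q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])

        # Linear Bellman equation: z(s) = exp(-q(s)) * sum_s' P(s'|s) z(s').
        z = np.ones(n)
        for _ in range(1000):
            z_new = np.exp(-q) * (P @ z)
            z_new[n - 1] = 1.0                # fix z at the absorbing goal
            if np.max(np.abs(z_new - z)) < 1e-12:
                z = z_new
                break
            z = z_new

        v = -np.log(z)                        # optimal cost-to-go
        pi = P * z[None, :]                   # optimal controlled transitions, unnormalized
        pi /= pi.sum(axis=1, keepdims=True)
        print("value function:", np.round(v, 3))
        print("controlled transition matrix:\n", np.round(pi, 3))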

  12. Evidence for composite cost functions in arm movement planning: an inverse optimal control approach.

    PubMed

    Berret, Bastien; Chiovetto, Enrico; Nori, Francesco; Pozzo, Thierry

    2011-10-01

    An important issue in motor control is understanding the basic principles underlying the accomplishment of natural movements. According to optimal control theory, the problem can be stated in these terms: what cost function do we optimize to coordinate the many more degrees of freedom than necessary to fulfill a specific motor goal? This question has not yet received a final answer, since what is optimized partly depends on the requirements of the task. Many cost functions were proposed in the past, and most of them were found to be in agreement with experimental data. Therefore, the actual principles on which the brain relies to achieve a certain motor behavior are still unclear. Existing results suggest that movements may not result from the minimization of a single cost function but rather of composite cost functions. In order to clarify this point, we consider an innovative experimental paradigm characterized by arm reaching with target redundancy. Within this framework, we make use of an inverse optimal control technique to automatically infer the (combination of) optimality criteria that best fit the experimental data. Results show that the subjects exhibited consistent behavior during each experimental condition, even though the target point was not prescribed in advance. Inverse and direct optimal control together reveal that the average arm trajectories were best replicated when optimizing the combination of two cost functions, namely a mix between the absolute work of torques and the integrated squared joint acceleration. Our results thus support the cost-combination hypothesis and demonstrate that the recorded movements were closely linked to the combination of two complementary functions related to mechanical energy expenditure and joint-level smoothness.

  13. Towards a hierarchical optimization modeling framework for ...

    EPA Pesticide Factsheets

    Background: Bilevel optimization has been recognized as a two-player Stackelberg game where players are represented as leaders and followers, each pursuing their own set of objectives. Hierarchical optimization problems, which are a generalization of bilevel problems, are especially difficult because the optimization is nested, meaning that the objectives of one level depend on solutions to the other levels. We introduce a hierarchical optimization framework for spatially targeting multiobjective green infrastructure (GI) incentive policies under uncertainties related to policy budget, compliance, and GI effectiveness. We demonstrate the utility of the framework using a hypothetical urban watershed, where the levels are characterized by multiple levels of policy makers (e.g., local, regional, national) and policy followers (e.g., landowners, communities), and objectives include minimization of policy cost, implementation cost, and risk; reduction of combined sewer overflow (CSO) events; and improvement in environmental benefits such as reduced nutrient run-off and water availability. Conclusions: While computationally expensive, this hierarchical optimization framework explicitly simulates the interaction between multiple levels of policy makers (e.g., local, regional, national) and policy followers (e.g., landowners, communities) and is especially useful for constructing and evaluating environmental and ecological policy. Using the framework with a hypothetical urba

  14. A framework for optimization and quantification of uncertainty and sensitivity for developing carbon capture systems

    DOE PAGES

    Eslick, John C.; Ng, Brenda; Gao, Qianwen; ...

    2014-12-31

    Under the auspices of the U.S. Department of Energy’s Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative-free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.

  15. Optimal control of native predators

    USGS Publications Warehouse

    Martin, Julien; O'Connell, Allan F.; Kendall, William L.; Runge, Michael C.; Simons, Theodore R.; Waldstein, Arielle H.; Schulte, Shiloh A.; Converse, Sarah J.; Smith, Graham W.; Pinion, Timothy; Rikard, Michael; Zipkin, Elise F.

    2010-01-01

    We apply decision theory in a structured decision-making framework to evaluate how control of raccoons (Procyon lotor), a native predator, can promote the conservation of a declining population of American Oystercatchers (Haematopus palliatus) on the Outer Banks of North Carolina. Our management objective was to maintain Oystercatcher productivity above a level deemed necessary for population recovery while minimizing raccoon removal. We evaluated several scenarios including no raccoon removal, and applied an adaptive optimization algorithm to account for parameter uncertainty. We show how adaptive optimization can be used to account for uncertainties about how raccoon control may affect Oystercatcher productivity. Adaptive management can reduce this type of uncertainty and is particularly well suited for addressing controversial management issues such as native predator control. The case study also offers several insights that may be relevant to the optimal control of other native predators. First, we found that stage-specific removal policies (e.g., yearling versus adult raccoon removals) were most efficient if the reproductive values among stage classes were very different. Second, we found that the optimal control of raccoons would result in higher Oystercatcher productivity than the minimum levels recommended for this species. Third, we found that removing more raccoons initially minimized the total number of removals necessary to meet long term management objectives. Finally, if for logistical reasons managers cannot sustain a removal program by removing a minimum number of raccoons annually, managers may run the risk of creating an ecological trap for Oystercatchers.

  16. Coarse-graining errors and numerical optimization using a relative entropy framework

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2011-03-01

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
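
    For orientation, the relative entropy functional minimized in this approach is commonly written (in the notation of Shell and co-workers; shown here schematically) as

        \[
          S_{\mathrm{rel}} \;=\; \sum_{i} p_{T}(i)\,
             \ln\!\frac{p_{T}(i)}{p_{M}\big(M(i)\big)}
          \;+\; \big\langle S_{\mathrm{map}} \big\rangle_{T},
        \]

    where p_T and p_M are the configurational distributions of the target (fine-grained) and model (coarse-grained) ensembles, M is the coarse-graining map, and the mapping-entropy average accounts for the degeneracy of fine-grained configurations that map onto the same coarse-grained one.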

  17. Effective algorithm for solving complex problems of production control and of material flows control of industrial enterprise

    NASA Astrophysics Data System (ADS)

    Mezentsev, Yu A.; Baranova, N. V.

    2018-05-01

    A universal economic and mathematical model designed to determine optimal strategies for managing the production and logistics subsystems (and their components) of enterprises is considered. The declared universality makes it possible to account, at the system level, both for production components, including limitations on the ways raw materials and components are converted into sold goods, and for resource and logical restrictions on input and output material flows. The presented model and the generated control problems are developed within a unified approach that allows logical conditions of any complexity to be implemented and the corresponding formal optimization tasks to be defined. The conceptual meaning of the criteria and constraints used is explained. The generated mixed-programming problems are shown to belong to the class NP. An approximate polynomial-time algorithm is proposed for solving the resulting mixed-programming optimization problems of realistic dimension and high computational complexity. Results of testing the algorithm on problems spanning a wide range of dimensions are presented.

  18. Attention control learning in the decision space using state estimation

    NASA Astrophysics Data System (ADS)

    Gharaee, Zahra; Fatehi, Alireza; Mirian, Maryam S.; Nili Ahmadabadi, Majid

    2016-05-01

    The main goal of this paper is to model attention while using it for efficient path planning of mobile robots. The key challenge in pursuing these two goals concurrently is how to make an optimal, or near-optimal, decision in spite of the time and processing-power limitations that inherently exist in a typical multi-sensor real-world robotic application. To recognise the environment efficiently under these two limitations, the attention of an intelligent agent is controlled by employing the reinforcement learning framework. We propose an estimation method that combines mixture-of-experts task learning and attention learning in the perceptual space. An agent learns how to employ its sensory resources, and when to stop observing, by estimating its perceptual space. In this paper, static estimation of the state space is performed for a learning task examined in the WebotsTM simulator. Simulation results show that the robot learns how to achieve an optimal policy with a controlled cost by estimating the state space instead of continually updating sensory information.

  19. Microgrid energy dispatching for industrial zones with renewable generations and electric vehicles via stochastic optimization and learning

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Li, Jingzhi; He, Zhubin; Yan, Wanfeng

    2018-07-01

    In this paper, a stochastic optimization framework is proposed to address the microgrid energy dispatching problem with random renewable generation and vehicle activity patterns, which is closer to practical applications. The patterns of energy generation, consumption, and storage availability are all random and unknown at the beginning, and the microgrid controller design (MCD) is formulated as a Markov decision process (MDP). Hence, an online learning-based control algorithm is proposed for the microgrid, which adapts the control policy as knowledge of the system dynamics increases and converges to the optimal policy. We adopt a linear approximation to decompose the original value function into a sum of per-battery value functions. As a consequence, the computational complexity is significantly reduced from exponential to linear growth with respect to the size of the battery state space. Monte Carlo simulation of different scenarios demonstrates the effectiveness and efficiency of our algorithm.

  20. Optimization and real-time control for laser treatment of heterogeneous soft tissues.

    PubMed

    Feng, Yusheng; Fuentes, David; Hawkins, Andrea; Bass, Jon M; Rylander, Marissa Nichole

    2009-01-01

    Predicting the outcome of thermotherapies in cancer treatment requires an accurate characterization of the bioheat transfer processes in soft tissues. Due to the biological and structural complexity of tumor (soft tissue) composition and vasculature, it is often very difficult to obtain reliable tissue properties, which are among the key factors for accurate prediction of treatment outcome. Efficient algorithms that employ in vivo thermal measurements to determine heterogeneous thermal tissue properties, in conjunction with a detailed sensitivity analysis, can produce essential information for model development and optimal control. The goals of this paper are to present a general formulation of the bioheat transfer equation for heterogeneous soft tissues; to review models and algorithms developed for cell damage, heat shock proteins, and soft tissues with nanoparticle inclusions; and to demonstrate an overall computational strategy for developing a laser treatment framework with the ability to perform real-time robust calibration and optimal control. This computational strategy can be applied to other thermotherapies that use heat sources such as radio frequency or high-intensity focused ultrasound.
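
    As background, the classical (homogeneous) Pennes form of the bioheat equation that such formulations generalize can be written, with an added laser source term, as

        \[
          \rho c \,\frac{\partial T}{\partial t}
          \;=\; \nabla \!\cdot\! \big(k \nabla T\big)
          \;+\; \omega_{b}\, \rho_{b} c_{b}\, \big(T_{a} - T\big)
          \;+\; Q_{m} \;+\; Q_{\mathrm{laser}}(\mathbf{x}, t),
        \]

    where ρc and k are the (possibly spatially varying) volumetric heat capacity and thermal conductivity of the tissue, ω_b is the blood perfusion rate, T_a the arterial blood temperature, Q_m the metabolic heat generation, and Q_laser the laser heating term.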

  1. An auto-adaptive optimization approach for targeting nonpoint source pollution control practices.

    PubMed

    Chen, Lei; Wei, Guoyuan; Shen, Zhenyao

    2015-10-21

    To address the computationally intensive and technically complex control of nonpoint source pollution, the traditional genetic algorithm was modified into an auto-adaptive form, and a new framework was proposed by integrating this new algorithm with a watershed model and an economic module. While conceptually simple and comprehensive, the proposed algorithm automatically searches for Pareto-optimal solutions without a complex calibration of optimization parameters. The model was applied in a case study in a typical watershed of the Three Gorges Reservoir area, China. The results indicated that the evolutionary process of optimization was improved by the incorporation of auto-adaptive parameters. In addition, the proposed algorithm outperformed state-of-the-art existing algorithms in terms of convergence ability and computational efficiency. At the same cost level, solutions with greater pollutant reductions could be identified. From a scientific viewpoint, the proposed algorithm could be extended to other watersheds to provide cost-effective configurations of BMPs.
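
    An auto-adaptive genetic algorithm of the general kind described here adjusts its mutation (and/or crossover) rates from population fitness statistics instead of fixing them a priori. The sketch below applies that idea to a toy BMP-selection problem with invented costs and pollutant reductions; the watershed and economic modules of the paper are not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy BMP selection: each of 20 candidate sites receives a practice (1) or not (0).
        n_sites = 20
        cost = rng.uniform(1.0, 5.0, n_sites)          # assumed implementation costs
        reduction = rng.uniform(0.5, 3.0, n_sites)     # assumed pollutant reductions
        budget = 30.0

        def fitness(pop):
            """Reward pollutant reduction, penalize budget violations."""
            red = pop @ reduction
            over = np.maximum(pop @ cost - budget, 0.0)
            return red - 10.0 * over

        pop = rng.integers(0, 2, size=(60, n_sites))
        for gen in range(200):
            f = fitness(pop)
            spread = max(f.max() - f.mean(), 1e-9)
            # Auto-adaptive rates: individuals close to the current best are perturbed
            # less, poor individuals more (a common adaptive-GA heuristic).
            p_mut = np.clip(0.5 * (f.max() - f) / spread, 0.01, 0.5)

            # Tournament selection.
            idx = rng.integers(0, len(pop), size=(len(pop), 2))
            winners = np.where(f[idx[:, 0]] >= f[idx[:, 1]], idx[:, 0], idx[:, 1])
            parents = pop[winners]

            # Uniform crossover between consecutive parents.
            mask = rng.integers(0, 2, size=parents.shape).astype(bool)
            children = np.where(mask, parents, np.roll(parents, 1, axis=0))

            # Bit-flip mutation with the individual-specific adaptive rate.
            flips = rng.random(children.shape) < p_mut[winners][:, None]
            pop = np.where(flips, 1 - children, children)

        best = pop[np.argmax(fitness(pop))]
        print("best reduction: %.2f at cost %.2f (budget %.1f)"
              % (best @ reduction, best @ cost, budget))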

  2. A variable-gain output feedback control design methodology

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.

    1989-01-01

    A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.

  3. Optimal control, investment and utilization schemes for energy storage under uncertainty

    NASA Astrophysics Data System (ADS)

    Mirhosseini, Niloufar Sadat

    Energy storage has the potential to offer new means for added flexibility in electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources, and energy bill savings for end users. However, uncertainty about system states and volatility in system dynamics can complicate the questions of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resource output, storage technology cost, and day-ahead and spot electricity prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model predictive control and discretized dynamic programming, along with a new decomposition algorithm, are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two-stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to interact optimally with the energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here is the basis for a framework which extends from long-term investments in storage capacity to short-term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment in storage capacity over time to maximize savings during normal and emergency operations; (ii) optimal market strategy of buying and selling over 24-hour periods; (iii) optimal storage charge and discharge in much shorter time intervals.
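
    One of the listed methodologies, discretized dynamic programming for the charge/discharge decision, can be sketched compactly for price arbitrage against a known day-ahead price. The price curve, battery capacity, power limit and efficiency below are invented.

        import numpy as np

        # Discretized DP for storage arbitrage over 24 hours against an assumed
        # day-ahead price curve.  State: state of charge (kWh) on a grid; action:
        # energy charged (+) or discharged (-) during the hour.
        price = np.array([30, 28, 27, 27, 29, 35, 45, 60, 55, 48, 42, 40,
                          38, 37, 40, 50, 70, 90, 85, 65, 50, 42, 36, 32], dtype=float)  # $/MWh
        cap, p_max, eta = 10.0, 2.5, 0.9
        soc_grid = np.linspace(0.0, cap, 41)                 # 0.25-kWh resolution
        actions = np.linspace(-p_max, p_max, 21)             # kWh exchanged per hour

        T = len(price)
        V = np.zeros((T + 1, len(soc_grid)))                 # profit-to-go
        policy = np.zeros((T, len(soc_grid)))

        for t in range(T - 1, -1, -1):
            for i, soc in enumerate(soc_grid):
                best_val, best_a = -np.inf, 0.0
                for a in actions:
                    soc_next = soc + (eta * a if a >= 0 else a / eta)
                    if soc_next < -1e-9 or soc_next > cap + 1e-9:
                        continue                             # infeasible action
                    j = int(round(soc_next / cap * (len(soc_grid) - 1)))
                    profit = -price[t] * a / 1000.0          # buy when a > 0, sell when a < 0
                    val = profit + V[t + 1, j]
                    if val > best_val:
                        best_val, best_a = val, a
                V[t, i] = best_val
                policy[t, i] = best_a

        print("expected arbitrage profit from half-full battery: %.2f $"
              % V[0, len(soc_grid) // 2])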

  4. Online Optimal Control of Connected Vehicles for Efficient Traffic Flow at Merging Roads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios-Torres, Jackeline; Malikopoulos, Andreas; Pisu, Pierluigi

    2015-01-01

    This paper addresses the problem of coordinating online connected vehicles at merging roads to achieve a smooth traffic flow without stop-and-go driving. We present a framework and a closed-form solution that optimize the acceleration profile of each vehicle in terms of fuel economy while avoiding collision with other vehicles at the merging zone. The proposed solution is validated through simulation, and it is shown that coordination of connected vehicles can significantly reduce fuel consumption and travel time at merging roads.

  5. Controlling herding in minority game systems

    NASA Astrophysics Data System (ADS)

    Zhang, Ji-Qiang; Huang, Zi-Gang; Wu, Zhi-Xi; Su, Riqi; Lai, Ying-Cheng

    2016-02-01

    Resource allocation takes place in various types of real-world complex systems, such as urban traffic, social service institutions, economic systems and ecosystems. Mathematically, the dynamical process of resource allocation can be modeled as a minority game. Spontaneous evolution of the resource allocation dynamics, however, often leads to a harmful herding behavior accompanied by strong fluctuations, in which a large majority of agents crowd temporarily for a few resources, leaving many others unused. Developing effective control methods to suppress and eliminate herding is an important but open problem. Here we develop a pinning control method; the observation that the fluctuations of the system consist of intrinsic and systematic components allows us to design a control scheme with separated control variables. A striking finding is the universal existence of an optimal pinning fraction that minimizes the variance of the system, regardless of the pinning pattern and the network topology. We develop a generally applicable theory to explain the emergence of optimal pinning and to predict the dependence of the optimal pinning fraction on the network topology. Our work represents a general framework for dealing with the broader problem of controlling collective dynamics in complex systems, with potential applications in social, economic and political systems.

  6. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework

    EPA Science Inventory

    Best management practices (BMPs) are perceived as being effective in reducing nutrient loads transported from non-point sources (NPS) to receiving water bodies. The objective of this study was to develop a modeling-optimization framework that can be used by watershed management p...

  7. A Workflow for Subsurface Pressure Control in Geological CO2 Storage: Optimization of Brine Extraction

    NASA Astrophysics Data System (ADS)

    Birkholzer, J. T.; Gonzalez-Nicolas, A.; Cihan, A.

    2017-12-01

    Industrial-scale injection of CO2 into the subsurface increases the fluid pressure in the reservoir, sometimes to the point that the resulting stress increases must be properly controlled to prevent potentially damaging impacts such as fault activation, leakage through abandoned wells, or caprock fracturing. Brine extraction is one approach for managing formation pressure, effective stress, and plume movement in response to CO2 injection. However, managing the extracted brine adds cost to carbon capture and sequestration operations; therefore, optimizing (minimizing) the extraction volume of brine is of great importance. In this study, we apply an adaptive management approach that optimizes brine extraction rates for pressure control in an integrated optimization framework involving site monitoring, model calibration, and optimization. We investigate how the optimization performance is affected by the initial site characterization data and by the introduction of newly acquired data during the injection phase. More accurate initial reservoir characterization data reduce the risk of damaging pressure buildup through better estimates of the initial extraction rates, which results in better control of pressure over the entire injection period. Results also show that infrequent model calibration and optimization with the new data, especially during early injection periods, may lead to problems in which either pressure-buildup constraints are violated or excessively high extraction rates are proposed. These problems can be eliminated if more frequent data collection and model calibration are conducted, especially during early injection periods. Approaches such as adaptive pressure management may constitute an effective tool for managing pressure buildup under uncertain and unknown reservoir conditions by minimizing brine extraction volumes while not exceeding critical pressure buildups in the reservoir.

  8. GPU-based optimal control for RWM feedback in tokamaks

    DOE PAGES

    Clement, Mitchell; Hanson, Jeremy; Bialek, Jim; ...

    2017-08-23

    The design and implementation of a Graphics Processing Unit (GPU) based Resistive Wall Mode (RWM) controller that performs feedback control of the RWM using Linear Quadratic Gaussian (LQG) control is reported. The control algorithm is based on a simplified DIII-D VALEN model. By using NVIDIA’s GPUDirect RDMA framework, the digitizer and output module write and read directly to and from GPU memory, eliminating memory transfers between the host and the GPU. The system and algorithm reduced the plasma response excited by externally applied fields by 32% during development experiments.
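
    The LQG structure itself (an LQR gain combined with a steady-state Kalman filter) can be sketched on a toy discrete-time plant; the matrices below are invented and bear no relation to the DIII-D VALEN model.

        import numpy as np
        from scipy.linalg import solve_discrete_are

        # Toy discrete-time plant: x_{k+1} = A x_k + B u_k + w_k, y_k = C x_k + v_k.
        A = np.array([[1.01, 0.10], [0.00, 0.98]])
        B = np.array([[0.0], [0.1]])
        C = np.array([[1.0, 0.0]])
        Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])        # LQR weights (assumed)
        W, V = 1e-4 * np.eye(2), np.array([[1e-3]])           # process / measurement noise

        # LQR gain from the discrete algebraic Riccati equation.
        P = solve_discrete_are(A, B, Q, R)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

        # Steady-state Kalman filter gain (dual Riccati equation).
        S = solve_discrete_are(A.T, C.T, W, V)
        L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)

        rng = np.random.default_rng(0)
        x = np.array([1.0, 0.0])
        xhat_prior = np.zeros(2)                              # a priori state estimate
        for k in range(200):
            y = (C @ x)[0] + rng.normal(0.0, np.sqrt(V[0, 0]))
            # Measurement update: correct the a priori estimate with the innovation.
            xhat = xhat_prior + (L * (y - (C @ xhat_prior)[0])).ravel()
            u = -K @ xhat                                     # LQR acting on the estimate
            # Time update (prediction) and true plant propagation.
            xhat_prior = A @ xhat + (B @ u).ravel()
            x = A @ x + (B @ u).ravel() + rng.multivariate_normal(np.zeros(2), W)

        print("final state norm: %.3e  estimate error: %.3e"
              % (np.linalg.norm(x), np.linalg.norm(x - xhat)))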

  9. GPU-based optimal control for RWM feedback in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clement, Mitchell; Hanson, Jeremy; Bialek, Jim

    The design and implementation of a Graphics Processing Unit (GPU) based Resistive Wall Mode (RWM) controller that performs feedback control of the RWM using Linear Quadratic Gaussian (LQG) control is reported. The control algorithm is based on a simplified DIII-D VALEN model. By using NVIDIA’s GPUDirect RDMA framework, the digitizer and output module write and read directly to and from GPU memory, eliminating memory transfers between the host and the GPU. The system and algorithm reduced the plasma response excited by externally applied fields by 32% during development experiments.

  10. Proxy functions for turbulent transport optimization of stellarators

    NASA Astrophysics Data System (ADS)

    Rorvig, Mordechai; Hegna, Chris; Mynick, Harry; Xanthopoulos, Pavlos

    2012-10-01

    The design freedom of toroidal confinement shaping suggests the possibility of optimizing the magnetic geometry for turbulent transport, particularly in stellarators. The framework for implementing such an optimization was recently established [1] using a proxy function as a measure of the ITG-induced turbulent transport associated with a given geometry. Working in the framework of local 3-D equilibrium [2], we investigate the theory and implications of such proxy functions by analyzing the linear instability dependence on curvature and local shear, and the associated quasilinear transport estimates. Simple analytic models suggest the beneficial effect of local shear enters through polarization effects, which can be controlled by field torsion in small net-current regimes. We test the proxy functions with local, electrostatic gyrokinetic calculations [3] of ITG modes for experimentally motivated local 3-D equilibria. [1] H. E. Mynick, N. Pomphrey, and P. Xanthopoulos, Phys. Rev. Lett. 105, 095004 (2010). [2] C. C. Hegna, Phys. Plasmas 7, 3921 (2000). [3] F. Jenko, W. Dorland, M. Kotschenreuther, and B. N. Rogers, Phys. Plasmas 7, 1904 (2000).

  11. Optimizing Compliance and Thermal Conductivity of Plasma Sprayed Thermal Barrier Coatings via Controlled Powders and Processing Strategies

    NASA Astrophysics Data System (ADS)

    Tan, Yang; Srinivasan, Vasudevan; Nakamura, Toshio; Sampath, Sanjay; Bertrand, Pierre; Bertrand, Ghislaine

    2012-09-01

    The properties and performance of plasma-sprayed thermal barrier coatings (TBCs) are strongly dependent on microstructural defects, which are affected by the starting powder morphology and the processing conditions. Of particular interest is the use of hollow powders, which not only allow for efficient melting of zirconia ceramics but also produce lower-conductivity and more compliant coatings. Typical industrial hollow spray powders have an assortment of densities, which masks the potential advantages of the hollow morphology. In this study, we have applied process mapping strategies using a novel uniform-shell-thickness hollow powder to control the defect microstructure and properties. Correlations among coating properties, microstructure, and processing reveal the feasibility of producing highly compliant, low-conductivity TBCs through a combination of optimized feedstock and processing conditions. The results are presented through the framework of process maps, establishing correlations among process, microstructure, and properties and providing opportunities for optimization of TBCs.

  12. Efficiency Management in Spaceflight Systems

    NASA Technical Reports Server (NTRS)

    Murphy, Karen

    2016-01-01

    Efficiency in spaceflight is often approached as “faster, better, cheaper – pick two”. The high levels of performance and reliability required for each mission suggest that planners can only control two of the three. True efficiency comes from optimizing a system across all three parameters. The functional processes of spaceflight become technical requirements on three operational groups during mission planning: payload, vehicle, and launch operations. Given the interrelationships among the functions performed by the operational groups, reallocating function resources from one operational group to the others affects the efficiency of those groups and therefore of the mission overall. This paper outlines this framework and creates a context in which to understand the effects of resource trades on the overall system, improving the efficiency of the operational groups and of the mission as a whole. This allows insight into, and optimization of, the controlling factors earlier in the mission planning stage.

  13. Optimal deployment of resources for maximizing impact in spreading processes

    PubMed Central

    2017-01-01

    The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of “influential spreaders” for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. We show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on a variety of real-world examples. PMID:28900013

  14. An Optimization Framework for Dynamic Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis

    A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.
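
    As a rough illustration of the outer loop described above (a dynamic simulator wrapped as a cost function and driven by a general-purpose optimizer), the following Python sketch substitutes a toy first-order microgrid surrogate for the Modelica/Dymola model and SciPy for the MATLAB/FMI tooling; all parameter names and dynamics are invented for illustration.

        # Sketch of the outer optimization-loop pattern: a dynamic simulator is
        # wrapped as a cost function and driven by a general-purpose optimizer.
        # The simulator here is a toy stand-in, not the Modelica/Dymola microgrid
        # model; parameter names (battery/reactor/generator time constants) are
        # illustrative only.
        import numpy as np
        from scipy.optimize import minimize

        def simulate_hes(params, horizon=500, dt=0.1):
            """Toy MIMO surrogate: three first-order subsystems tracking a
            fluctuating net load; returns the time series of grid imbalance."""
            tau_batt, tau_reac, tau_gen = params
            rng = np.random.default_rng(0)
            load = 1.0 + 0.2 * np.sin(0.05 * np.arange(horizon)) + 0.05 * rng.standard_normal(horizon)
            batt = reac = gen = 1.0 / 3.0
            imbalance = np.empty(horizon)
            for k in range(horizon):
                err = load[k] - (batt + reac + gen)
                # each subsystem absorbs a share of the error with its own time constant
                batt += dt / tau_batt * err / 3.0
                reac += dt / tau_reac * err / 3.0
                gen  += dt / tau_gen  * err / 3.0
                imbalance[k] = err
            return imbalance

        def variability(params):
            return float(np.std(simulate_hes(params)))   # objective: minimize imbalance variability

        res = minimize(variability, x0=[1.0, 5.0, 2.0],
                       bounds=[(0.1, 10.0)] * 3, method="L-BFGS-B")
        print("optimal time constants:", res.x, "residual variability:", res.fun)

    The reported framework follows the same pattern, with the FMI-exported Dymola model standing where the toy surrogate is here.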

  15. Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness

    PubMed Central

    Pimentel-Niño, M. A.; Saxena, Paresh; Vazquez-Castro, M. A.

    2015-01-01

    A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is streamed live video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, so a novel quality of experience (QoE) framework is proposed that allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show homogeneous high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, by complying with the information-centric-networking philosophy and architecture. PMID:26247057

  16. Spatial optimization of watershed management practices for nitrogen load reduction using a modeling-optimization framework.

    PubMed

    Yang, Guoxiang; Best, Elly P H

    2015-09-15

    Best management practices (BMPs) can be used effectively to reduce nutrient loads transported from non-point sources to receiving water bodies. However, methodologies of BMP selection and placement in a cost-effective way are needed to assist watershed management planners and stakeholders. We developed a novel modeling-optimization framework that can be used to find cost-effective solutions of BMP placement to attain nutrient load reduction targets. This was accomplished by integrating a GIS-based BMP siting method, a WQM-TMDL-N modeling approach to estimate total nitrogen (TN) loading, and a multi-objective optimization algorithm. Wetland restoration and buffer strip implementation were the two BMP categories used to explore the performance of this framework, both differing greatly in complexity of spatial analysis for site identification. Minimizing TN load and BMP cost were the two objective functions for the optimization process. The performance of this framework was demonstrated in the Tippecanoe River watershed, Indiana, USA. Optimized scenario-based load reduction indicated that the wetland subset selected by the minimum scenario had the greatest N removal efficiency. Buffer strips were more effective for load removal than wetlands. The optimized solutions provided a range of trade-offs between the two objective functions for both BMPs. This framework can be expanded conveniently to a regional scale because the NHDPlus catchment serves as its spatial computational unit. The present study demonstrated the potential of this framework to find cost-effective solutions to meet a water quality target, such as a 20% TN load reduction, under different conditions. Copyright © 2015 Elsevier Ltd. All rights reserved.
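
    The cost versus nitrogen-load trade-off at the heart of the framework can be sketched with a much simpler stand-in than the multi-objective search used in the study: ranking synthetic candidate BMP sites by cost per unit of TN removed and accumulating them until a reduction target is met. Site counts, removal rates and costs below are invented; only the shape of the trade-off analysis is meant to carry over.

        # Greedy cost-effectiveness ranking over synthetic candidate BMP sites
        # (wetlands, buffer strips).  Real applications would use site-specific
        # removal estimates from the watershed model; all numbers are illustrative.
        import numpy as np

        rng = np.random.default_rng(1)
        n_sites = 40
        removal_kg = rng.uniform(50, 500, n_sites)      # TN removed per site (kg/yr)
        cost_usd   = rng.uniform(5e3, 8e4, n_sites)     # implementation cost per site

        order = np.argsort(cost_usd / removal_kg)       # cheapest nitrogen removal first
        cum_removal = np.cumsum(removal_kg[order])
        cum_cost    = np.cumsum(cost_usd[order])

        # a 20% cut of the maximum attainable removal stands in for the watershed TN target
        target = 0.20 * removal_kg.sum()
        k = int(np.searchsorted(cum_removal, target)) + 1
        print(f"{k} sites meet the reduction target at ~${cum_cost[k-1]:,.0f}")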

  17. An Agent-Based Optimization Framework for Engineered Complex Adaptive Systems with Application to Demand Response in Electricity Markets

    NASA Astrophysics Data System (ADS)

    Haghnevis, Moeed

    The main objective of this research is to develop an integrated method to study emergent behavior and consequences of evolution and adaptation in engineered complex adaptive systems (ECASs). A multi-layer conceptual framework and modeling approach including behavioral and structural aspects is provided to describe the structure of a class of engineered complex systems and predict their future adaptive patterns. The approach allows the examination of complexity in the structure and the behavior of components as a result of their connections and in relation to their environment. This research describes and uses the major differences between natural complex adaptive systems (CASs) and artificial/engineered CASs to build a framework and platform for ECASs. While this framework focuses on the critical factors of an engineered system, it also enables one to synthetically employ engineering and mathematical models to analyze and measure complexity in such systems. In this way, concepts of complex systems science are adapted to management science and system of systems engineering. In particular, an integrated consumer-based optimization and agent-based modeling (ABM) platform is presented that enables managers to predict and partially control patterns of behaviors in ECASs. Demonstrated on the U.S. electricity markets, ABM is integrated with normative and subjective decision behavior recommended by the U.S. Department of Energy (DOE) and Federal Energy Regulatory Commission (FERC). The approach integrates social networks, social science, complexity theory, and diffusion theory. Furthermore, it makes a unique and significant contribution by exploring and representing concrete managerial insights for ECASs and by offering new optimized actions and modeling paradigms in agent-based simulation.

  18. Linked population and economic models: some methodological issues in forecasting, analysis, and policy optimization.

    PubMed

    Madden, M.; Batey, P. W. J.

    1983-05-01

    Some problems associated with demographic-economic forecasting include finding models appropriate for a declining economy with unemployment, using a multiregional approach in an interregional model, finding a way to show differential consumption while endogenizing unemployment, and avoiding unemployment inconsistencies. The solution to these problems involves the construction of an activity-commodity framework, locating it within a group of forecasting models, and indicating possible routes towards dynamization of the framework. The authors demonstrate the range of impact multipliers that can be derived from the framework and show how these multipliers relate to Leontief input-output multipliers. It is shown that a desired population distribution may be obtained by selecting instruments from the economic sphere to produce, through the constraints vector of an activity-commodity framework, targets selected from demographic activities. The next step in this process, empirical exploitation, was carried out by the authors in the United Kingdom, linking an input-output model with a wide selection of demographic and demographic-economic variables. The generally tenuous control which government has over any variables in systems of this type, especially in market economies, makes application of the optimization approach in the policy field a partly conjectural exercise, although the analytic capacity of the approach can provide clear indications of policy directions.

  19. Carbon fibre versus metal framework in full-arch immediate loading rehabilitations of the maxilla - a cohort clinical study.

    PubMed

    Pera, F; Pesce, P; Solimano, F; Tealdo, T; Pera, P; Menini, M

    2017-05-01

    Frameworks made of carbon fibre-reinforced composites (CFRC) seem to be a viable alternative to traditional metal frameworks in implant prosthodontics. CFRC provide stiffness, rigidity and optimal biocompatibility. The aim of the present prospective study was to compare carbon fibre frameworks versus metal frameworks used to rigidly splint implants in full-arch immediate loading rehabilitations. Forty-two patients (test group) were rehabilitated with full-arch immediate loading rehabilitations of the upper jaw (total: 170 implants) following the Columbus Bridge Protocol with four to six implants, including distally tilted implants. All patients were treated with resin screw-retained full-arch prostheses endowed with carbon fibre frameworks. The mean follow-up was 22 months (range: 18-24). Differences in the absolute change of bone resorption over time between the two implant sides (mesial and distal) were assessed performing a Mann-Whitney U-test. The outcomes were statistically compared with those of patients rehabilitated following the same protocol but using metal frameworks (control group: 34 patients with 163 implants - data reported in Tealdo, Menini, Bevilacqua, Pera, Pesce, Signori, Pera, Int J Prosthodont, 27, 2014, 207). Ten implants failed in the control group (6·1%); none failed in the test group (P = 0·002). A statistically significant difference in the absolute change of bone resorption around the implants was found between the two groups (P = 0·004), with greater mean peri-implant bone resorption in the control group (1 mm) compared to the test group (0·8 mm). Carbon fibre frameworks may be considered as a viable alternative to the metal ones and showed less marginal bone loss around implants and a greater implant survival rate during the observation period. © 2017 John Wiley & Sons Ltd.

  20. Determining effective forecast horizons for multi-purpose reservoirs with short- and long-term operating objectives

    NASA Astrophysics Data System (ADS)

    Luchner, Jakob; Anghileri, Daniela; Castelletti, Andrea

    2017-04-01

    Real-time control of multi-purpose reservoirs can benefit significantly from hydro-meteorological forecast products. Because of their reliability, the most used forecasts range on time scales from hours to a few days and are suitable for short-term operation targets such as flood control. In recent years, hydro-meteorological forecasts have become more accurate and reliable on longer time scales, which are more relevant to long-term reservoir operation targets such as water supply. While the forecast quality of such products has been studied extensively, the forecast value, i.e. the operational effectiveness of using forecasts to support water management, has been explored comparatively little. It is comparatively easy to identify the most effective forecasting information needed to design reservoir operation rules for flood control, but it is not straightforward to identify which forecast variable and lead time is needed to define effective hedging rules for operational targets with slow dynamics such as water supply. The task is even more complex when multiple targets, with diverse slow and fast dynamics, are considered at the same time. In these cases, the relative importance of different pieces of information, e.g. magnitude and timing of peak flow rate and accumulated inflow on different time lags, may vary depending on the season or the hydrological conditions. In this work, we analyze the relationship between operational forecast value and streamflow forecast horizon for different multi-purpose reservoir trade-offs. We use the Information Selection and Assessment (ISA) framework to identify the most effective forecast variables and horizons for informing multi-objective reservoir operation over short- and long-term temporal scales. The ISA framework is an automatic iterative procedure to discriminate the information with the highest potential to improve multi-objective reservoir operating performance. Forecast variables and horizons are selected using a feature selection technique. The technique determines the combination of variables that is most informative in a multi-variate regression model of the optimal reservoir releases, based on perfect information at a fixed objective trade-off. The improved reservoir operation is evaluated against optimal reservoir operation conditioned upon perfect information on future disturbances and basic reservoir operation using only the day of the year and the reservoir level. Different objective trade-offs are selected for analyzing resulting differences in improved reservoir operation and selected forecast variables and horizons. For comparison, the effective streamflow forecast horizon determined by the ISA framework is benchmarked against the performances obtained with a deterministic model predictive control (MPC) optimization scheme. Both the ISA framework and the MPC optimization scheme are applied to the real-world case study of Lake Como, Italy, using perfect streamflow forecast information. The principal operation targets for Lake Como are flood control and downstream water supply, which makes its operation a suitable case study. Results provide critical feedback to reservoir operators on the use of long-term streamflow forecasts and to the hydro-meteorological forecasting community with respect to the forecast horizon needed from reliable streamflow forecasts.
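
    The core selection step, picking the forecast variables and lead times that are most informative about the perfect-information optimal releases, can be illustrated with a simple greedy forward selection over accumulated-inflow features. The synthetic inflow series, the stand-in "optimal release" target and the plain R^2 criterion below are all assumptions for illustration and are not the ISA procedure itself.

        # Greedy forward selection of streamflow-forecast lead times against a
        # synthetic perfect-information release target, loosely in the spirit of
        # the ISA idea described above.  Data and the R^2 criterion are stand-ins.
        import numpy as np

        rng = np.random.default_rng(2)
        T, max_lead = 1000, 30
        inflow = np.convolve(rng.gamma(2.0, 1.0, T + max_lead), np.ones(5) / 5, mode="same")
        # candidate features: accumulated inflow over lead times 1..max_lead days
        features = {h: np.array([inflow[t + 1:t + 1 + h].sum() for t in range(T)])
                    for h in range(1, max_lead + 1)}
        # stand-in "optimal release": depends mostly on ~1-day and ~14-day outlooks
        release = 0.6 * features[1] + 0.4 * features[14] + rng.normal(0, 0.5, T)

        def r2(X, y):
            X1 = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return 1.0 - (y - X1 @ beta).var() / y.var()

        selected, remaining = [], list(features)
        for _ in range(3):                               # pick the three best horizons
            best = max(remaining,
                       key=lambda h: r2(np.column_stack([features[k] for k in selected + [h]]), release))
            selected.append(best); remaining.remove(best)
            print("selected horizons:", selected,
                  "R^2 =", round(r2(np.column_stack([features[k] for k in selected]), release), 3))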

  1. Program and Project Management Framework

    NASA Technical Reports Server (NTRS)

    Butler, Cassandra D.

    2002-01-01

    The primary objective of this project was to develop a framework and system architecture for integrating program and project management tools that may be applied consistently throughout Kennedy Space Center (KSC) to optimize planning, cost estimating, risk management, and project control. Project management methodology used in building interactive systems to accommodate the needs of the project managers is applied as a key component in assessing the usefulness and applicability of the framework and tools developed. Research for the project included investigation and analysis of industrial practices, KSC standards, policies, and techniques, Systems Management Office (SMO) personnel, and other documented experiences of project management experts. In addition, this project documents best practices derived from the literature as well as new or developing project management models, practices, and techniques.

  2. The linear regulator problem for parabolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Kunisch, K.

    1983-01-01

    An approximation framework is presented for computation (in finite dimensional spaces) of Riccati operators that can be guaranteed to converge to the Riccati operator in feedback controls for abstract evolution systems in a Hilbert space. It is shown how these results may be used in the linear optimal regulator problem for a large class of parabolic systems.
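
    A concrete finite-dimensional instance of the regulator problem for a parabolic system can be written down directly: discretize a 1-D heat equation in space and solve the resulting algebraic Riccati equation for the feedback gain. The grid size, diffusion coefficient and weighting matrices in the sketch below are arbitrary choices, not values from the report.

        # Finite-dimensional illustration of the linear-quadratic regulator for a
        # parabolic system: a 1-D heat equation discretized by finite differences,
        # with actuation at the left boundary.  All parameters are illustrative.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        n, L, nu = 50, 1.0, 0.1
        h = L / (n + 1)
        main = -2.0 * np.ones(n); off = np.ones(n - 1)
        A = nu / h**2 * (np.diag(main) + np.diag(off, 1) + np.diag(off, -1))   # discrete Laplacian
        B = np.zeros((n, 1)); B[0, 0] = nu / h**2                              # boundary actuation
        Q = h * np.eye(n)                                                      # state weight ~ L^2 norm
        R = np.array([[1.0]])

        P = solve_continuous_are(A, B, Q, R)      # Riccati solution (finite-dimensional approximation)
        K = np.linalg.solve(R, B.T @ P)           # optimal feedback gain, u = -K x
        closed_loop = A - B @ K
        print("max real part of closed-loop spectrum:",
              np.linalg.eigvals(closed_loop).real.max())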

  3. Development of the framework for a water quality monitoring system : controlling MoDOT's contribution to 303(d) listed streams in the state of Missouri, final report, February 2010.

    DOT National Transportation Integrated Search

    2010-02-01

    By utilizing ArcGIS to quickly visualize the location of any impaired waterbody in relation to its projects/activities, MoDOT will : be able to allocate resources optimally. Additionally, the Water Quality Impact Database (WQID) will allow easy trans...

  4. On the control of riverbed incision induced by run-of-river power plant

    NASA Astrophysics Data System (ADS)

    Bizzi, Simone; Dinh, Quang; Bernardi, Dario; Denaro, Simona; Schippa, Leonardo; Soncini-Sessa, Rodolfo

    2015-07-01

    Water resource management (WRM) through dams or reservoirs is necessary worldwide to support key human-related activities, ranging from hydropower production to water allocation and flood risk mitigation. The design of reservoir operations aims primarily to fulfill the main purpose (or purposes) for which the structure has been built. However, it is well known that reservoirs strongly influence river geomorphic processes, causing sediment deficits downstream, altering water and sediment fluxes, leading to riverbed incision and causing infrastructure instability and ecological degradation. We propose a framework that, by combining physically based modeling, surrogate modeling techniques, and multiobjective (MO) optimization, allows fluvial geomorphology to be included in an MO optimization whose main objectives are the maximization of hydropower revenue and the minimization of riverbed degradation. The case study is a run-of-the-river power plant on the River Po (Italy). A 1-D mobile-bed hydro-morphological model simulated the riverbed evolution over a 10 year horizon for alternative operation rules of the power plant. The knowledge provided by such a physically based model is integrated into an MO optimization routine via surrogate modeling using the response surface methodology. Hence, this framework overcomes the high computational costs that have so far hindered the integration of river geomorphology into WRM. We provided numerical proof that river morphologic processes and hydropower production are indeed in conflict but that the conflict may be mitigated with appropriate control strategies.
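
    The surrogate-modeling step can be illustrated in miniature: fit a cheap response surface to a handful of "expensive" simulator runs and then sweep the hydropower/degradation trade-off on the surrogate alone. The toy revenue and incision functions below stand in for the 1-D morphological model and carry no physical meaning.

        # Quadratic response surfaces fitted to a few "expensive" simulator runs,
        # then a weighted-sum trade-off sweep performed on the cheap surrogate.
        import numpy as np

        def expensive_model(q):            # q: fraction of flow diverted to the power plant
            revenue = 10.0 * q**0.8        # hydropower revenue (arbitrary units)
            incision = 2.5 * q**2          # riverbed degradation (arbitrary units)
            return revenue, incision

        # design of experiments: a few simulator evaluations
        q_train = np.linspace(0.1, 0.9, 7)
        rev_t, inc_t = np.array([expensive_model(q) for q in q_train]).T

        rev_coef = np.polyfit(q_train, rev_t, 2)     # response surface for revenue
        inc_coef = np.polyfit(q_train, inc_t, 2)     # response surface for incision

        # trade-off sweep on the surrogate only (no further expensive runs)
        q_grid = np.linspace(0.1, 0.9, 500)
        for w in (0.2, 0.5, 0.8):          # weight on revenue vs. degradation
            score = w * np.polyval(rev_coef, q_grid) - (1 - w) * np.polyval(inc_coef, q_grid)
            q_best = q_grid[np.argmax(score)]
            print(f"w={w:.1f}: q*={q_best:.2f}, "
                  f"surrogate revenue={np.polyval(rev_coef, q_best):.2f}, "
                  f"incision={np.polyval(inc_coef, q_best):.2f}")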

  5. Modeling and Advanced Control for Sustainable Process ...

    EPA Pesticide Factsheets

    This book chapter introduces a novel process systems engineering framework that integrates process control with sustainability assessment tools for the simultaneous evaluation and optimization of process operations. The implemented control strategy consists of a biologically-inspired, multi-agent-based method. The sustainability and performance assessment of process operating points is carried out using the U.S. E.P.A.’s GREENSCOPE assessment tool that provides scores for the selected economic, material management, environmental and energy indicators. The indicator results supply information on whether the implementation of the controller is moving the process towards a more sustainable operation. The effectiveness of the proposed framework is illustrated through a case study of a continuous bioethanol fermentation process whose dynamics are characterized by steady-state multiplicity and oscillatory behavior. This book chapter contribution demonstrates the application of novel process control strategies for sustainability by increasing material management, energy efficiency, and pollution prevention, as needed for SHC Sustainable Uses of Wastes and Materials Management.

  6. Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kouri, Drew Philip; Surowiec, Thomas M.

    Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. Here in this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.

  7. Existence and Optimality Conditions for Risk-Averse PDE-Constrained Optimization

    DOE PAGES

    Kouri, Drew Philip; Surowiec, Thomas M.

    2018-06-05

    Uncertainty is ubiquitous in virtually all engineering applications, and, for such problems, it is inadequate to simulate the underlying physics without quantifying the uncertainty in unknown or random inputs, boundary and initial conditions, and modeling assumptions. Here in this paper, we introduce a general framework for analyzing risk-averse optimization problems constrained by partial differential equations (PDEs). In particular, we postulate conditions on the random variable objective function as well as the PDE solution that guarantee existence of minimizers. Furthermore, we derive optimality conditions and apply our results to the control of an environmental contaminant. Lastly, we introduce a new risk measure, called the conditional entropic risk, that fuses desirable properties from both the conditional value-at-risk and the entropic risk measures.

  8. Optimal Environmental Conditions and Anomalous Ecosystem Responses: Constraining Bottom-up Controls of Phytoplankton Biomass in the California Current System

    PubMed Central

    Jacox, Michael G.; Hazen, Elliott L.; Bograd, Steven J.

    2016-01-01

    In Eastern Boundary Current systems, wind-driven upwelling drives nutrient-rich water to the ocean surface, making these regions among the most productive on Earth. Regulation of productivity by changing wind and/or nutrient conditions can dramatically impact ecosystem functioning, though the mechanisms are not well understood beyond broad-scale relationships. Here, we explore bottom-up controls during the California Current System (CCS) upwelling season by quantifying the dependence of phytoplankton biomass (as indicated by satellite chlorophyll estimates) on two key environmental parameters: subsurface nitrate concentration and surface wind stress. In general, moderate winds and high nitrate concentrations yield maximal biomass near shore, while offshore biomass is positively correlated with subsurface nitrate concentration. However, due to nonlinear interactions between the influences of wind and nitrate, bottom-up control of phytoplankton cannot be described by either one alone, nor by a combined metric such as nitrate flux. We quantify optimal environmental conditions for phytoplankton, defined as the wind/nitrate space that maximizes chlorophyll concentration, and present a framework for evaluating ecosystem change relative to environmental drivers. The utility of this framework is demonstrated by (i) elucidating anomalous CCS responses in 1998–1999, 2002, and 2005, and (ii) providing a basis for assessing potential biological impacts of projected climate change. PMID:27278260

  9. Reduced Design Load Basis for Ultimate Blade Loads Estimation in Multidisciplinary Design Optimization Frameworks

    NASA Astrophysics Data System (ADS)

    Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth

    2016-09-01

    The aim is to provide a fast and reliable approach to estimate ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large number of computationally expensive simulations, which cannot be run efficiently at each cost function evaluation of an MDO process. This work describes a method that allows integrating the calculation of the blade load envelopes inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and a deterministic reduced DLB. Ultimate loads extracted from the two DLBs for each of the two blade designs are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap, and that can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.

  10. Texas two-step: a framework for optimal multi-input single-output deconvolution.

    PubMed

    Neelamani, Ramesh; Deffenbaugh, Max; Baraniuk, Richard G

    2007-11-01

    Multi-input single-output deconvolution (MISO-D) aims to extract a deblurred estimate of a target signal from several blurred and noisy observations. This paper develops a new two-step framework--Texas Two-Step--to solve MISO-D problems with known blurs. Texas Two-Step first reduces the MISO-D problem to a related single-input single-output deconvolution (SISO-D) problem by invoking the concept of sufficient statistics (SSs) and then solves the simpler SISO-D problem using an appropriate technique. The two-step framework enables new MISO-D techniques (both optimal and suboptimal) based on the rich suite of existing SISO-D techniques. In fact, the properties of SSs imply that a MISO-D algorithm is mean-squared-error optimal if and only if it can be rearranged to conform to the Texas Two-Step framework. Using this insight, we construct new wavelet- and curvelet-based MISO-D algorithms with asymptotically optimal performance. Simulated and real data experiments verify that the framework is indeed effective.
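
    Under white Gaussian noise, the reduction to a SISO problem can be made explicit in the frequency domain: the matched-filter combination of the observations is a sufficient statistic, and the remaining SISO problem can then be handed to any deconvolution routine (a plain Wiener-type regularized inverse is used below, not the wavelet or curvelet estimators of the paper). Signals, blurs and the noise level are synthetic.

        # Two-step MISO deconvolution sketch: (1) matched-filter combination of the
        # channels into one sufficient-statistic channel, (2) Wiener-type SISO
        # deconvolution of the combined channel.  All data are synthetic.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 256
        x = np.zeros(n); x[[40, 90, 170]] = [1.0, -0.7, 0.5]           # sparse target signal
        blurs = [np.exp(-0.5 * ((np.arange(n) - n // 2) / s) ** 2) for s in (2.0, 5.0)]
        blurs = [b / b.sum() for b in blurs]
        sigma = 0.01

        H = [np.fft.rfft(np.fft.ifftshift(b)) for b in blurs]          # centered blurs -> zero phase
        X = np.fft.rfft(x)
        Y = [h * X + np.fft.rfft(sigma * rng.standard_normal(n)) for h in H]   # blurred, noisy channels

        # Step 1: matched-filter combination -> equivalent SISO problem whose
        # "blur" is sum_i |H_i|^2 (a sufficient statistic under white Gaussian noise)
        num = sum(np.conj(h) * y for h, y in zip(H, Y))
        den = sum(np.abs(h) ** 2 for h in H)

        # Step 2: Wiener-type regularized SISO deconvolution
        lam = 1e-3                                                     # acts like a noise-to-signal ratio
        x_hat = np.fft.irfft(num / (den + lam), n)
        print("reconstruction SNR (dB):",
              round(10 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2)), 1))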

  11. A framework for designing and analyzing binary decision-making strategies in cellular systems†

    PubMed Central

    Porter, Joshua R.; Andrews, Burton W.; Iglesias, Pablo A.

    2015-01-01

    Cells make many binary (all-or-nothing) decisions based on noisy signals gathered from their environment and processed through noisy decision-making pathways. Reducing the effect of noise to improve the fidelity of decision-making comes at the expense of increased complexity, creating a tradeoff between performance and metabolic cost. We present a framework based on rate distortion theory, a branch of information theory, to quantify this tradeoff and design binary decision-making strategies that balance low cost and accuracy in optimal ways. With this framework, we show that several observed behaviors of binary decision-making systems, including random strategies, hysteresis, and irreversibility, are optimal in an information-theoretic sense for various situations. This framework can also be used to quantify the goals around which a decision-making system is optimized and to evaluate the optimality of cellular decision-making systems by a fundamental information-theoretic criterion. As proof of concept, we use the framework to quantify the goals of the externally triggered apoptosis pathway. PMID:22370552

  12. Variable cycle control model for intersection based on multi-source information

    NASA Astrophysics Data System (ADS)

    Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan

    2018-05-01

    In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on a cyber-physical system is proposed. Secondly, taking into account variable cell lengths, the hysteresis of traffic flow, and the characteristics of lane groups, a Lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper level model is put forward for cycle length optimization considering traffic capacity and delay. The lower level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.

  13. Optimal population prediction of sandhill crane recruitment based on climate-mediated habitat limitations

    USGS Publications Warehouse

    Gerber, Brian D.; Kendall, William L.; Hooten, Mevin B.; Dubovsky, James A.; Drewien, Roderick C.

    2015-01-01

    Prediction is fundamental to scientific enquiry and application; however, ecologists tend to favour explanatory modelling. We discuss a predictive modelling framework to evaluate ecological hypotheses and to explore novel/unobserved environmental scenarios to assist conservation and management decision-makers. We apply this framework to develop an optimal predictive model for juvenile (<1 year old) sandhill crane Grus canadensis recruitment of the Rocky Mountain Population (RMP). We consider spatial climate predictors motivated by hypotheses of how drought across multiple time-scales and spring/summer weather affects recruitment. Our predictive modelling framework focuses on developing a single model that includes all relevant predictor variables, regardless of collinearity. This model is then optimized for prediction by controlling model complexity using a data-driven approach that marginalizes or removes irrelevant predictors from the model. Specifically, we highlight two approaches of statistical regularization, Bayesian least absolute shrinkage and selection operator (LASSO) and ridge regression. Our optimal predictive Bayesian LASSO and ridge regression models were similar and on average 37% superior in predictive accuracy to an explanatory modelling approach. Our predictive models confirmed a priori hypotheses that drought and cold summers negatively affect juvenile recruitment in the RMP. The effects of long-term drought can be alleviated by short-term wet spring–summer months; however, the alleviation of long-term drought has a much greater positive effect on juvenile recruitment. The number of freezing days and snowpack during the summer months can also negatively affect recruitment, while spring snowpack has a positive effect. Breeding habitat, mediated through climate, is a limiting factor on population growth of sandhill cranes in the RMP, which could become more limiting with a changing climate (i.e. increased drought). These effects are likely not unique to cranes. The alteration of hydrological patterns and water levels by drought may impact many migratory, wetland nesting birds in the Rocky Mountains and beyond. Generalizable predictive models (trained by out-of-sample fit and based on ecological hypotheses) are needed by conservation and management decision-makers. Statistical regularization improves predictions and provides a general framework for fitting models with a large number of predictors, even those with collinearity, to simultaneously identify an optimal predictive model while conducting rigorous Bayesian model selection. Our framework is important for understanding population dynamics under a changing climate and has direct applications for making harvest and habitat management decisions.
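
    The regularization idea is easy to reproduce with ordinary (non-Bayesian) ridge and LASSO fits on deliberately collinear predictors, which is all the sketch below attempts; the data are synthetic and the study's Bayesian implementation and crane recruitment data are not used.

        # Ridge and LASSO on collinear synthetic "climate" predictors, compared
        # against an unpenalized fit by out-of-sample error.
        import numpy as np
        from sklearn.linear_model import LinearRegression, Ridge, Lasso
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(4)
        n, p = 80, 12
        drought = rng.normal(size=(n, 1))
        X = np.hstack([drought + 0.3 * rng.normal(size=(n, 1)) for _ in range(p)])   # collinear predictors
        y = -0.8 * drought[:, 0] + 0.4 * rng.normal(size=n)                          # recruitment index

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)
        for name, model in [("OLS", LinearRegression()),
                            ("ridge", Ridge(alpha=5.0)),
                            ("LASSO", Lasso(alpha=0.05))]:
            model.fit(X_tr, y_tr)
            err = np.mean((model.predict(X_te) - y_te) ** 2)
            print(f"{name:6s} test MSE = {err:.3f}")

    With correlated predictors, the penalized fits typically generalize better than the unpenalized one, which is the behaviour the study exploits for prediction.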

  14. The economics of optimal urban groundwater management in southwestern USA

    NASA Astrophysics Data System (ADS)

    Hansen, Jason K.

    2012-08-01

    Groundwater serves as the primary water source for approximately 80% of public water systems in the United States, and for many more as a secondary source. Traditionally, management relies on groundwater to meet rising demand by increasing supply, but climate uncertainty and population growth require more judicious management to achieve efficiency and sustainability. Over-pumping leads to groundwater overdraft and jeopardizes the ability of future users to depend on the resource. Optimal urban groundwater pumping can play a role in solving this conundrum. This paper investigates to what extent and under what circumstances controlled pumping improves social welfare. It considers management in a hydro-economic framework and finds the optimal pumping path and the optimal price path. These allow for the identification of the social benefit of controlled pumping, and the scarcity rent, which is one tool to sustainably manage groundwater resources. The model is numerically illustrated with a case study from Albuquerque, New Mexico (USA). The Albuquerque results indicate that, in the presence of strong demand growth, controlled pumping improves social welfare by 22%, extends use of the resource, and provides planners with a mechanism to advance the economic sustainability of groundwater.

  15. Inter and intra-modal deformable registration: continuous deformations meet efficient optimal linear programming.

    PubMed

    Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir

    2007-01-01

    In this paper, we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels that are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighborhood control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced to the objective function through the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from that of the control points. Efficient linear programming that can guarantee the optimal solution up to (a user-defined) bound is considered to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (simple changes on the graph construction), can guarantee a globally sub-optimal solution and is computationally tractable. Experimental validation using simulated data with known deformations, as well as manually segmented data, demonstrates the strong potential of our approach.

  16. A high-fidelity airbus benchmark for system fault detection and isolation and flight control law clearance

    NASA Astrophysics Data System (ADS)

    Goupil, Ph.; Puyou, G.

    2013-12-01

    This paper presents a high-fidelity generic twin engine civil aircraft model developed by Airbus for advanced flight control system research. The main features of this benchmark are described to make the reader aware of the model complexity and representativeness. It is a complete representation including the nonlinear rigid-body aircraft model with a full set of control surfaces, actuator models, sensor models, flight control laws (FCL), and pilot inputs. Two applications of this benchmark in the framework of European projects are presented: FCL clearance using optimization and advanced fault detection and diagnosis (FDD).

  17. Optimism and well-being: a prospective multi-method and multi-dimensional examination of optimism as a resilience factor following the occurrence of stressful life events.

    PubMed

    Kleiman, Evan M; Chiara, Alexandra M; Liu, Richard T; Jager-Hyman, Shari G; Choi, Jimmy Y; Alloy, Lauren B

    2017-02-01

    Optimism has been conceptualised variously as positive expectations (PE) for the future, optimistic attributions, illusion of control, and self-enhancing biases. Relatively little research has examined these multiple dimensions of optimism in relation to psychological and physical health. The current study assessed the multi-dimensional nature of optimism within a prospective vulnerability-stress framework. Initial principal component analyses revealed the following dimensions: PEs, Inferential Style (IS), Sense of Invulnerability (SI), and Overconfidence (O). Prospective follow-up analyses demonstrated that PE was associated with fewer depressive episodes and moderated the effect of stressful life events on depressive symptoms. SI also moderated the effect of life stress on anxiety symptoms. Generally, our findings indicated that optimism is a multifaceted construct and not all forms of optimism have the same effects on well-being. Specifically, our findings indicated that PE may be the most relevant to depression, whereas SI may be the most relevant to anxiety.

  18. Nonlinear Model Predictive Control for Cooperative Control and Estimation

    NASA Astrophysics Data System (ADS)

    Ru, Pengkai

    Recent advances in computational power have made it possible to do expensive online computations for control systems. It is becoming more realistic to perform computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. As one of the most important optimization-based control approaches, model predictive control (MPC) has attracted a lot of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched and its stability can be guaranteed in the majority of its application scenarios. However, one issue that still remains with linear MPC is that it completely ignores the system's inherent nonlinearities, thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder if there is a middle ground between the two. We tried to strike a balance in this dissertation by employing a state representation technique, namely, the state-dependent coefficient (SDC) representation. This new technique would render an improved performance in terms of optimality compared to linear MPC while still keeping the problem tractable. In fact, the computational power required is bounded only by a constant factor of the completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.
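
    One common flavor of SDC-based receding-horizon control can be sketched on a damped pendulum: factor the dynamics as xdot = A(x)x + Bu, freeze A at the current state, run a finite-horizon Riccati recursion, and apply the first control move. The model, weights and horizon below are illustrative and this is not the quadcopter formulation developed in the dissertation.

        # SDC-based receding-horizon step on a damped pendulum: the dynamics are
        # factored with a state-dependent A(x), frozen at the current state, and a
        # finite-horizon discrete Riccati recursion gives the first control move.
        import numpy as np

        g_l, c, dt, N = 9.81, 0.5, 0.02, 50
        Q, R = np.diag([10.0, 1.0]), np.array([[0.1]])

        def sdc(x):
            th = x[0]
            A = np.array([[0.0, 1.0],
                          [-g_l * np.sinc(th / np.pi), -c]])   # sin(th)/th via sinc, well-defined at 0
            B = np.array([[0.0], [1.0]])
            return np.eye(2) + dt * A, dt * B                  # Euler discretization

        def sdc_mpc_step(x):
            Ad, Bd = sdc(x)
            P = Q.copy()
            for _ in range(N):                                 # backward Riccati recursion (frozen coefficients)
                K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
                P = Q + Ad.T @ P @ (Ad - Bd @ K)
            return float(-(K @ x)[0])                          # first control of the horizon

        x = np.array([2.5, 0.0])                               # large initial swing angle
        for k in range(400):
            u = sdc_mpc_step(x)
            x = x + dt * np.array([x[1], -g_l * np.sin(x[0]) - c * x[1] + u])
        print("final state:", np.round(x, 4))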

  19. Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ampomah, William; Balch, Robert; Will, Robert

    This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water alternating gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model comprising the Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio among others were selected. The most significant variables were selected as control variables to be used for the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a scenario baseline case that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. The optimization process predicted more than 94% of CO2 storage and most importantly about 28% of incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions were proved to be a robust approach to co-optimize oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.

  20. Co-optimization of CO2-EOR and Storage Processes under Geological Uncertainty

    DOE PAGES

    Ampomah, William; Balch, Robert; Will, Robert; ...

    2017-07-01

    This paper presents an integrated numerical framework to co-optimize EOR and CO2 storage performance in the Farnsworth field unit (FWU), Ochiltree County, Texas. The framework includes a field-scale compositional reservoir flow model, an uncertainty quantification model and a neural network optimization process. The reservoir flow model has been constructed based on the field geophysical, geological, and engineering data. A laboratory fluid analysis was tuned to an equation of state and subsequently used to predict the thermodynamic minimum miscible pressure (MMP). A history match of primary and secondary recovery processes was conducted to estimate the reservoir and multiphase flow parameters as the baseline case for analyzing the effect of recycling produced gas, infill drilling and water alternating gas (WAG) cycles on oil recovery and CO2 storage. A multi-objective optimization model was defined for maximizing both oil recovery and CO2 storage. The uncertainty quantification model comprising the Latin Hypercube sampling, Monte Carlo simulation, and sensitivity analysis, was used to study the effects of uncertain variables on the defined objective functions. Uncertain variables such as bottom hole injection pressure, WAG cycle, injection and production group rates, and gas-oil ratio among others were selected. The most significant variables were selected as control variables to be used for the optimization process. A neural network optimization algorithm was utilized to optimize the objective function both with and without geological uncertainty. The vertical permeability anisotropy (Kv/Kh) was selected as one of the uncertain parameters in the optimization process. The simulation results were compared to a scenario baseline case that predicted CO2 storage of 74%. The results showed an improved approach for optimizing oil recovery and CO2 storage in the FWU. The optimization process predicted more than 94% of CO2 storage and most importantly about 28% of incremental oil recovery. The sensitivity analysis reduced the number of control variables to decrease computational time. A risk aversion factor was used to represent results at various confidence levels to assist management in the decision-making process. The defined objective functions were proved to be a robust approach to co-optimize oil recovery and CO2 storage. The Farnsworth CO2 project will serve as a benchmark for future CO2-EOR or CCUS projects in the Anadarko basin or geologically similar basins throughout the world.

  1. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    NASA Astrophysics Data System (ADS)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3 DOE), in context of aircraft wing optimization. M3 DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) combination of a series of structural and aerodynamic analyses. The modularity of M3 DOE allows it to be a part of other inclusive optimization frameworks. The M3 DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization, and cruise range maximization are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3 DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3 DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of candidate population is updated iteratively using evolutionary algorithm technique of fitness-driven retention. This strategy capitalizes on the advantages of evolutionary algorithm as well as POD-based reduced order modeling, while overcoming the shortcomings inherent with these techniques. When linked with M3 DOE, this strategy offers a computationally efficient methodology for problems with high level of complexity and a challenging design-space. This newly developed framework is demonstrated for its robustness on a nonconventional supersonic tailless air vehicle wing shape optimization problem.
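
    The POD portion of the hybrid strategy reduces, in its simplest form, to an SVD of the mean-centred snapshot matrix of candidate designs, with new candidates generated in the span of the leading modes and retained by fitness. The sketch below uses a toy quadratic objective over a vector of "thickness" parameters; the objective, dimensions and evolutionary operators are all stand-ins.

        # POD-based design-space reduction with fitness-driven retention on a toy
        # design problem.  SVD of the mean-centred snapshots supplies the dominant
        # modes; new candidates are perturbed in the reduced coordinates only.
        import numpy as np

        rng = np.random.default_rng(5)
        n_params, n_pop, n_modes = 30, 40, 4
        target = np.sin(np.linspace(0, np.pi, n_params))            # "ideal" thickness distribution

        def fitness(d):                                              # lower is better
            return float(np.sum((d - target) ** 2))

        pop = rng.uniform(0, 1, (n_pop, n_params))                   # initial snapshot ensemble
        for gen in range(30):
            mean = pop.mean(axis=0)
            U, S, Vt = np.linalg.svd(pop - mean, full_matrices=False)
            modes = Vt[:n_modes]                                      # dominant POD modes of the snapshots
            coeffs = (pop - mean) @ modes.T                           # designs in reduced coordinates
            scores = np.array([fitness(d) for d in pop])
            keep = np.argsort(scores)[: n_pop // 2]                   # fitness-driven retention
            children = coeffs[keep] + 0.1 * rng.standard_normal((len(keep), n_modes))
            pop = np.vstack([pop[keep], mean + children @ modes])     # refresh the snapshot ensemble
        print("best objective after reduction-assisted search:",
              round(min(fitness(d) for d in pop), 4))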

  2. Design Optimization of Hybrid FRP/RC Bridge

    NASA Astrophysics Data System (ADS)

    Papapetrou, Vasileios S.; Tamijani, Ali Y.; Brown, Jeff; Kim, Daewon

    2018-04-01

    The hybrid bridge consists of a Reinforced Concrete (RC) slab supported by U-shaped Fiber Reinforced Polymer (FRP) girders. Previous studies on similar hybrid bridges constructed in the United States and Europe seem to substantiate these hybrid designs for lightweight, high strength, and durable highway bridge construction. In the current study, computational and optimization analyses were carried out to investigate six composite material systems consisting of E-glass and carbon fibers. Optimization constraints are determined by stress, deflection and manufacturing requirements. Finite Element Analysis (FEA) and optimization software were utilized, and a framework was developed to run the complete analyses in an automated fashion. Prior to that, FEA validation of previous studies on similar U-shaped FRP girders that were constructed in Poland and Texas is presented. A finer optimization analysis is performed for the case of the Texas hybrid bridge. The optimization outcome of the hybrid FRP/RC bridge shows the appropriate composite material selection and cross-section geometry that satisfies all the applicable Limit States (LS) and, at the same time, results in the lightest design. Critical limit states show that shear stress criteria determine the optimum design for bridge spans less than 15.24 m and deflection criteria control for longer spans. Increased side wall thickness can reduce maximum observed shear stresses, but leads to a high weight penalty. A taller cross-section and a thicker girder base can efficiently lower the observed deflections and normal stresses. Finally, substantial weight savings can be achieved by the optimization framework if base and side-wall thickness are treated as independent variables.

  3. A personalized medicine approach to the design of dry powder inhalers: Selecting the optimal amount of bypass.

    PubMed

    Kopsch, Thomas; Murnane, Darragh; Symons, Digby

    2017-08-30

    In dry powder inhalers (DPIs), the patient's inhalation manoeuvre strongly influences the release of drug. Drug release from a DPI may also be influenced by the size of any air bypass incorporated in the device. If the amount of bypass is high, less air flows through the entrainment geometry and the release rate is lower. In this study, we propose to reduce the intra- and inter-patient variations of drug release by controlling the amount of air bypass in a DPI. A fast computational method is proposed that can predict how much bypass is needed for a specified drug delivery rate for a particular patient. This method uses a meta-model which was constructed using multiphase computational fluid dynamic (CFD) simulations. The meta-model is applied in an optimization framework to predict the required amount of bypass needed for drug delivery that is similar to a desired target release behaviour. The meta-model was successfully validated by comparing its predictions to results from additional CFD simulations. The optimization framework has been applied to identify the optimal amount of bypass needed for fictitious sample inhalation manoeuvres in order to deliver a target powder release profile for two patients. Copyright © 2017 Elsevier B.V. All rights reserved.
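
    The meta-model step can be pictured as fitting a cheap interpolant of release rate versus bypass fraction to a few CFD-like data points and then inverting it with a root finder to hit a patient-specific target. The release curve, targets and patient labels below are invented for illustration.

        # Meta-model inversion sketch: a spline of release rate vs. bypass fraction
        # (standing in for multiphase CFD results) is inverted with a root finder.
        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.optimize import brentq

        # "CFD" training data: more bypass -> less flow through the entrainment
        # geometry -> lower release rate (arbitrary monotone surrogate data)
        bypass_frac = np.linspace(0.0, 0.8, 9)
        release_rate = 1.0 / (1.0 + 6.0 * bypass_frac)
        meta_model = CubicSpline(bypass_frac, release_rate)

        def bypass_for_target(target):
            return brentq(lambda b: float(meta_model(b)) - target, 0.0, 0.8)

        for patient, target in [("weak inhaler", 0.55), ("strong inhaler", 0.25)]:
            print(f"{patient}: set bypass fraction to {bypass_for_target(target):.2f}")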

  4. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criteria with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
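
    The FIM-based criteria the paper compares can be computed directly for the logistic example: build the sensitivity matrix by finite differences and evaluate, e.g., the D- and E-criteria for competing sampling grids. The parameter values, noise level and candidate grids below are arbitrary, and the paper's SE-optimal criterion is not reproduced.

        # Fisher Information Matrix for the Verhulst-Pearl logistic model
        # x(t) = K x0 / (x0 + (K - x0) exp(-r t)), with finite-difference
        # sensitivities, compared across two candidate sampling grids.
        import numpy as np

        K, r, x0, sigma = 17.5, 0.7, 0.1, 0.1
        theta = np.array([K, r, x0])

        def logistic(t, th):
            K_, r_, x0_ = th
            return K_ * x0_ / (x0_ + (K_ - x0_) * np.exp(-r_ * t))

        def fim(times, th, sigma):
            eps = 1e-6
            S = np.empty((len(times), len(th)))
            for j in range(len(th)):
                d = np.zeros_like(th); d[j] = eps * max(1.0, abs(th[j]))
                S[:, j] = (logistic(times, th + d) - logistic(times, th - d)) / (2 * d[j])
            return S.T @ S / sigma**2

        uniform_grid = np.linspace(0.5, 25.0, 15)
        early_grid   = np.linspace(0.5, 10.0, 15)        # concentrates samples in the growth phase
        for name, grid in [("uniform", uniform_grid), ("early-heavy", early_grid)]:
            F = fim(grid, theta, sigma)
            print(f"{name:12s} D-criterion (log det FIM) = {np.log(np.linalg.det(F)):7.2f}, "
                  f"E-criterion (min eigenvalue) = {np.linalg.eigvalsh(F).min():.3e}")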

  5. A junction-tree based learning algorithm to optimize network wide traffic control: A coordinated multi-agent framework

    DOE PAGES

    Zhu, Feng; Aziz, H. M. Abdul; Qian, Xinwu; ...

    2015-01-31

    Our study develops a novel reinforcement learning algorithm for the challenging coordinated signal control problem. Traffic signals are modeled as intelligent agents interacting with the stochastic traffic environment. The model is built on the framework of coordinated reinforcement learning. The Junction Tree Algorithm (JTA) based reinforcement learning is proposed to obtain an exact inference of the best joint actions for all the coordinated intersections. Moreover, the algorithm is implemented and tested with a network containing 18 signalized intersections in VISSIM. Finally, our results show that the JTA based algorithm outperforms independent learning (Q-learning), real-time adaptive learning, and fixed timing plans in terms of average delay, number of stops, and vehicular emissions at the network level.
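
    The independent Q-learning baseline mentioned in the comparison is easy to sketch for a single toy intersection with discretized queue states; the coordinated junction-tree inference over the 18-intersection VISSIM network is well beyond a short example. Arrival rates, discretization and learning parameters below are arbitrary.

        # Tabular Q-learning for a single toy intersection: the state is the
        # discretized queue pair plus the current phase, the action keeps or
        # switches the green, and the reward is the negative total queue.
        import numpy as np

        rng = np.random.default_rng(6)
        n_bins, n_actions, alpha, gamma, eps = 6, 2, 0.1, 0.95, 0.1
        Q = np.zeros((n_bins, n_bins, 2, n_actions))             # queues x phase x action

        def step(q_ns, q_ew, phase, action):
            phase = phase if action == 0 else 1 - phase          # 0: keep phase, 1: switch
            q_ns += rng.poisson(0.7) - (3 if phase == 0 else 0)  # arrivals minus departures on green
            q_ew += rng.poisson(0.7) - (3 if phase == 1 else 0)
            q_ns, q_ew = max(q_ns, 0), max(q_ew, 0)
            return q_ns, q_ew, phase, -(q_ns + q_ew)             # reward: negative total queue

        q_ns = q_ew = 0; phase = 0
        for t in range(50000):
            s = (min(q_ns, n_bins - 1), min(q_ew, n_bins - 1), phase)
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
            q_ns, q_ew, phase, reward = step(q_ns, q_ew, phase, a)
            s2 = (min(q_ns, n_bins - 1), min(q_ew, n_bins - 1), phase)
            Q[s][a] += alpha * (reward + gamma * Q[s2].max() - Q[s][a])
        print("learned policy (0=keep, 1=switch) for empty EW queue:",
              [int(np.argmax(Q[(i, 0, 0)])) for i in range(n_bins)])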

  6. Localized modelling and feedback control of linear instabilities in 2-D wall bounded shear flows

    NASA Astrophysics Data System (ADS)

    Tol, Henry; Kotsonis, Marios; de Visser, Coen

    2016-11-01

    A new approach is presented for control of instabilities in 2-D wall bounded shear flows described by the linearized Navier-Stokes equations (LNSE). The control design accounts both for spatially localized actuators/sensors and the dominant perturbation dynamics in an optimal control framework. An inflow disturbance model is proposed for streamwise instabilities that drive laminar-turbulent transition. The perturbation modes that contribute to the transition process can be selected and are included in the control design. A reduced order model is derived from the LNSE that captures the input-output behavior and the dominant perturbation dynamics. This model is used to design an optimal controller for suppressing the instability growth. A 2-D channel flow and a 2-D boundary layer flow over a flat plate are considered as application cases. Disturbances are generated upstream of the control domain and the resulting flow perturbations are estimated/controlled using wall shear measurements and localized unsteady blowing and suction at the wall. It will be shown that the controller is able to cancel the perturbations and is robust to unmodelled disturbances.

  7. A mixed methods approach to assess animal vaccination programmes: The case of rabies control in Bamako, Mali.

    PubMed

    Mosimann, Laura; Traoré, Abdallah; Mauti, Stephanie; Léchenne, Monique; Obrist, Brigit; Véron, René; Hattendorf, Jan; Zinsstag, Jakob

    2017-01-01

    In the framework of the research network on integrated control of zoonoses in Africa (ICONZ) a dog rabies mass vaccination campaign was carried out in two communes of Bamako (Mali) in September 2014. A mixed method approach, combining quantitative and qualitative tools, was developed to evaluate the effectiveness of the intervention towards optimization for future scale-up. Actions to control rabies occur on one level in households when individuals take the decision to vaccinate their dogs. However, control also depends on provision of vaccination services and community participation at the intermediate level of social resilience. Mixed methods seem necessary as the problem-driven transdisciplinary project includes epidemiological components in addition to social dynamics and cultural, political and institutional issues. Adapting earlier effectiveness models for health intervention to rabies control, we propose a mixed method assessment of individual effectiveness parameters like availability, affordability, accessibility, adequacy or acceptability. Triangulation of quantitative methods (household survey, empirical coverage estimation and spatial analysis) with qualitative findings (participant observation, focus group discussions) facilitate a better understanding of the weight of each effectiveness determinant, and the underlying reasons embedded in the local understandings, cultural practices, and social and political realities of the setting. Using this method, a final effectiveness of 33% for commune Five and 28% for commune Six was estimated, with vaccination coverage of 27% and 20%, respectively. Availability was identified as the most sensitive effectiveness parameter, attributed to lack of information about the campaign. We propose a mixed methods approach to optimize intervention design, using an "intervention effectiveness optimization cycle" with the aim of maximizing effectiveness. Empirical vaccination coverage estimation is compared to the effectiveness model with its determinants. In addition, qualitative data provide an explanatory framework for deeper insight, validation and interpretation of results which should improve the intervention design while involving all stakeholders and increasing community participation. This work contributes vital information for the optimization and scale-up of future vaccination campaigns in Bamako, Mali. The proposed mixed method, although incompletely applied in this case study, should be applicable to similar rabies interventions targeting elimination in other settings. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Assessing concentrations and health impacts of air quality management strategies: Framework for Rapid Emissions Scenario and Health impact ESTimation (FRESH-EST).

    PubMed

    Milando, Chad W; Martenies, Sheena E; Batterman, Stuart A

    2016-09-01

    In air quality management, reducing emissions from pollutant sources often forms the primary response to attaining air quality standards and guidelines. Despite the broad success of air quality management in the US, challenges remain. As examples: allocating emissions reductions among multiple sources is complex and can require many rounds of negotiation; health impacts associated with emissions, the ultimate driver for the standards, are not explicitly assessed; and long dispersion model run-times, which result from the increasing size and complexity of model inputs, limit the number of scenarios that can be evaluated, thus increasing the likelihood of missing an optimal strategy. A new modeling framework, called the "Framework for Rapid Emissions Scenario and Health impact ESTimation" (FRESH-EST), is presented to respond to these challenges. FRESH-EST estimates concentrations and health impacts of alternative emissions scenarios at the urban scale, providing efficient computations from emissions to health impacts at the Census block or other desired spatial scale. In addition, FRESH-EST can optimize emission reductions to meet specified environmental and health constraints, and a convenient user interface and graphical displays are provided to facilitate scenario evaluation. The new framework is demonstrated in an SO2 non-attainment area in southeast Michigan with two optimization strategies: the first minimizes emission reductions needed to achieve a target concentration; the second minimizes concentrations while holding constant the cumulative emissions across local sources (e.g., an emissions floor). The optimized strategies match outcomes in the proposed SO2 State Implementation Plan without the proposed stack parameter modifications or shutdowns. In addition, the lower health impacts estimated for these strategies suggest that FRESH-EST could be used to identify potentially more desirable pollution control alternatives in air quality management planning. Copyright © 2016 Elsevier Ltd. All rights reserved.
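
    As a rough, self-contained illustration of the first optimization mode described above (minimize the emission reductions needed to reach a target concentration), the sketch below solves a tiny version of that problem as a linear program. The transfer matrix, emissions, and target are invented placeholders, not FRESH-EST inputs or its actual formulation.

      # Toy "minimum emission reduction to meet a concentration target" LP.
      # All numbers are illustrative assumptions.
      import numpy as np
      from scipy.optimize import linprog

      T = np.array([[0.8, 0.3, 0.1],     # T[j, i]: concentration at receptor j per unit emission from source i
                    [0.2, 0.6, 0.4]])
      e = np.array([10.0, 8.0, 12.0])    # current emissions
      target = 6.0                       # concentration standard at every receptor

      baseline = T @ e
      # Decision variables: reductions r_i with 0 <= r_i <= e_i.
      # Requirement T @ (e - r) <= target is rewritten as -T @ r <= target - baseline.
      res = linprog(c=np.ones_like(e),
                    A_ub=-T, b_ub=target - baseline,
                    bounds=[(0.0, ei) for ei in e],
                    method="highs")

      print("reductions per source:", np.round(res.x, 2))
      print("resulting concentrations:", np.round(T @ (e - res.x), 2))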

  9. Interactive Genetic Algorithm - An Adaptive and Interactive Decision Support Framework for Design of Optimal Groundwater Monitoring Plans

    NASA Astrophysics Data System (ADS)

    Babbar-Sebens, M.; Minsker, B. S.

    2006-12-01

    In the water resources management field, decision making encompasses many kinds of engineering, social, and economic constraints and objectives. Representing all of these problem-dependent criteria through models (analytical or numerical) and various formulations (e.g., objectives, constraints, etc.) within an optimization-simulation system can be a very non-trivial issue. Most models and formulations utilized for discerning desirable traits in a solution can only approximate the decision maker's (DM) true preference criteria, and they often fail to consider important qualitative and incomputable phenomena related to the management problem. In our research, we have proposed novel decision support frameworks that allow DMs to actively participate in the optimization process. The DMs explicitly indicate their true preferences based on their subjective criteria and the results of various simulation models and formulations. The feedback from the DMs is then used to guide the search process towards solutions that are "all-rounders" from the perspective of the DM. The two main research questions explored in this work are: a) Does interaction between the optimization algorithm and a DM assist the system in searching for groundwater monitoring designs that are robust from the DM's perspective? and b) How can an interactive search process be made more effective when human factors, such as human fatigue and cognitive learning processes, affect the performance of the algorithm? The application of these frameworks to a real-world groundwater long-term monitoring (LTM) case study in Michigan highlighted the following salient advantages: a) in contrast to the non-interactive optimization methodology, the proposed interactive frameworks were able to identify low-cost monitoring designs whose interpolation maps respected the expected spatial distribution of the contaminants, b) for many same-cost designs, the interactive methodologies were able to propose multiple alternatives that met the DM's preference criteria, therefore allowing the expert to select among several strong candidate designs depending on her/his LTM budget, and c) two of the methodologies - Case-Based Micro Interactive Genetic Algorithm (CBMIGA) and Interactive Genetic Algorithm with Mixed Initiative Interaction (IGAMII) - were also able to assist in controlling human fatigue and adapt to the DM's learning process.

  10. Beyond optimality: Multistakeholder robustness tradeoffs for regional water portfolio planning under deep uncertainty

    NASA Astrophysics Data System (ADS)

    Herman, Jonathan D.; Zeff, Harrison B.; Reed, Patrick M.; Characklis, Gregory W.

    2014-10-01

    While optimality is a foundational mathematical concept in water resources planning and management, "optimal" solutions may be vulnerable to failure if deeply uncertain future conditions deviate from those assumed during optimization. These vulnerabilities may produce severely asymmetric impacts across a region, making it vital to evaluate the robustness of management strategies as well as their impacts for regional stakeholders. In this study, we contribute a multistakeholder many-objective robust decision making (MORDM) framework that blends many-objective search and uncertainty analysis tools to discover key tradeoffs between water supply alternatives and their robustness to deep uncertainties (e.g., population pressures, climate change, and financial risks). The proposed framework is demonstrated for four interconnected water utilities representing major stakeholders in the "Research Triangle" region of North Carolina, U.S. The utilities supply well over one million customers and have the ability to collectively manage drought via transfer agreements and shared infrastructure. We show that water portfolios for this region that compose optimal tradeoffs (i.e., Pareto-approximate solutions) under expected future conditions may suffer significantly degraded performance with only modest changes in deeply uncertain hydrologic and economic factors. We then use the Patient Rule Induction Method (PRIM) to identify which uncertain factors drive the individual and collective vulnerabilities for the four cooperating utilities. Our framework identifies key stakeholder dependencies and robustness tradeoffs associated with cooperative regional planning, which are critical to understanding the tensions between individual versus regional water supply goals. Cooperative demand management was found to be the key factor controlling the robustness of regional water supply planning, dominating other hydroclimatic and economic uncertainties through the 2025 planning horizon. Results suggest that a modest reduction in the projected rate of demand growth (from approximately 3% per year to 2.4%) will substantially improve the utilities' robustness to future uncertainty and reduce the potential for regional tensions. The proposed multistakeholder MORDM framework offers critical insights into the risks and challenges posed by rising water demands and hydrological uncertainties, providing a planning template for regions now forced to confront rapidly evolving water scarcity risks.

  11. A dynamic modelling framework towards the solution of reduction in smoking prevalence

    NASA Astrophysics Data System (ADS)

    Halim, Tisya Farida Abdul; Sapiri, Hasimah; Abidin, Norhaslinda Zainal

    2016-10-01

    This paper presents a hypothetical framework towards a solution for the reduction of smoking prevalence in Malaysia. The framework is designed to assist in the decision-making process related to reducing smoking prevalence using system dynamics (SD) and optimal control theory (OCT). In general, the framework is developed using the SD approach, with OCT embedded in the policy evaluation process. Smoking prevalence is one of the determinants that plays an important role in measuring the successful implementation of anti-smoking strategies. Therefore, it is critical to determine the optimal value of smoking prevalence in order to reduce the hazardous effects of smoking on society. However, the smoking problem is increasingly complex, since many issues ranging from the behavioral to the economic need to be considered simultaneously. Thus, a hypothetical framework of the control model embedded in the SD methodology is expected to yield the minimum value of smoking prevalence, and its output will in turn provide a guideline for tobacco researchers as well as decision makers in policy design and evaluation.
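
    Purely as an illustration of the SD component described above (not the authors' model), the sketch below simulates a single smoker stock with initiation and cessation flows and a policy control u(t) that raises the cessation rate; all rates and the policy effect are invented assumptions.

      # Toy stock-and-flow simulation of smoking prevalence with a policy control.
      import numpy as np

      def simulate(u, years=30, dt=0.25, pop=1.0, s0=0.30,
                   init_rate=0.015, quit_rate=0.04):
          """Smoking-prevalence trajectory under a control u(t) in [0, 1] (hypothetical rates)."""
          steps = int(years / dt)
          s = np.empty(steps + 1)
          s[0] = s0 * pop
          for k in range(steps):
              inflow = init_rate * (pop - s[k])                  # initiation flow
              outflow = (quit_rate + 0.03 * u(k * dt)) * s[k]    # cessation flow, raised by the policy
              s[k + 1] = s[k] + dt * (inflow - outflow)          # Euler update of the stock
          return s / pop

      no_policy = simulate(lambda t: 0.0)
      full_policy = simulate(lambda t: 1.0)
      print("prevalence after 30 years: %.3f (no policy) vs %.3f (full policy)"
            % (no_policy[-1], full_policy[-1]))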

  12. Structural Technology Evaluation and Analysis Program (STEAP). Delivery Order 0037: Prognosis-Based Control Reconfiguration for an Aircraft with Faulty Actuator to Enable Performance in a Degraded State

    DTIC Science & Technology

    2010-12-01

    ...computers in 1953. HIL motion simulators were also built for the dynamic testing of vehicle components (e.g., suspensions, bodies) with hydraulic or... complex, comprehensive mechanical systems can be simulated in real time by parallel computers; examples include multibody systems, brake systems... hard constraints in a multivariable control framework. And the third aspect is the ability to perform online optimization. These aspects result in...

  13. Training trajectories by continuous recurrent multilayer networks.

    PubMed

    Leistritz, L; Galicki, M; Witte, H; Kochs, E

    2002-01-01

    This paper addresses the problem of training trajectories by means of continuous recurrent neural networks whose feedforward parts are multilayer perceptrons. Such networks can approximate a general nonlinear dynamic system with arbitrary accuracy. The learning process is transformed into an optimal control framework where the weights are the controls to be determined. A training algorithm based upon a variational formulation of Pontryagin's maximum principle is proposed for such networks. Computer examples demonstrating the efficiency of the given approach are also presented.
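
    The idea of treating the weights as controls and obtaining gradients from an adjoint (variational) computation can be made concrete on a small example. The sketch below trains an Euler-discretized continuous-time recurrent network to track a target trajectory, propagating an adjoint variable backward through time to form the weight gradient; it is a generic discrete-time analogue of the Pontryagin-based scheme, not the authors' algorithm, and the network size, targets, and step sizes are arbitrary choices.

      # Adjoint-based trajectory training for a tiny continuous-time recurrent net,
      # discretized with the explicit Euler scheme h_{k+1} = h_k + dt*(-h_k + tanh(W h_k + b)).
      import numpy as np

      rng = np.random.default_rng(0)
      n, K, dt = 3, 40, 0.1
      t = np.arange(1, K + 1) * dt
      targets = np.stack([np.sin(t), np.cos(t), 0.5 * np.sin(2 * t)], axis=1)

      W = 0.1 * rng.standard_normal((n, n))
      b = np.zeros(n)
      h0 = np.zeros(n)

      def forward(W, b):
          h = np.zeros((K + 1, n)); a = np.zeros((K, n))
          h[0] = h0
          for k in range(K):
              a[k] = W @ h[k] + b
              h[k + 1] = h[k] + dt * (-h[k] + np.tanh(a[k]))
          return h, a

      def loss_and_grads(W, b):
          h, a = forward(W, b)
          err = h[1:] - targets                    # tracking error at steps 1..K
          loss = 0.5 * np.sum(err ** 2)
          gW, gb = np.zeros_like(W), np.zeros_like(b)
          lam = err[K - 1].copy()                  # adjoint (costate) at the final step
          for k in range(K - 1, -1, -1):
              D = 1.0 - np.tanh(a[k]) ** 2         # derivative of tanh at step k
              gW += dt * np.outer(D * lam, h[k])   # step-k contribution to dL/dW
              gb += dt * D * lam
              if k > 0:                            # propagate the adjoint backward to step k
                  lam = err[k - 1] + (1 - dt) * lam + dt * (W.T @ (D * lam))
          return loss, gW, gb

      lr = 0.05
      for it in range(2000):
          loss, gW, gb = loss_and_grads(W, b)
          W -= lr * gW; b -= lr * gb
      print("final tracking loss: %.4f" % loss)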

  14. Design of Distributed Controllers Seeking Optimal Power Flow Solutions Under Communication Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.
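
    A minimal sketch of the kind of primal-dual feedback iteration described above is given below, for a generic surrogate problem: track preferred DER setpoints subject to linear upper limits on "voltages" V0 + R @ p. The sensitivity matrix, limits, and step sizes are invented placeholders; the actual controllers use linearized AC power-flow models and replace R @ p with measured voltages.

      # Projected primal-dual gradient controller for a regularized Lagrangian (toy data).
      import numpy as np

      rng = np.random.default_rng(1)
      n_der, n_bus = 4, 3
      R = 0.02 * rng.random((n_bus, n_der))    # voltage sensitivity to DER injections (made up)
      V0 = np.array([1.00, 1.01, 1.02])        # voltages with zero controllable injection (p.u.)
      Vmax = 1.04
      p_ref = np.array([0.8, 0.6, 0.9, 0.5])   # preferred setpoints (e.g., available PV power)
      p_min, p_max = 0.0, 1.0

      alpha, eps = 0.5, 1e-3                   # step size and dual regularization weight
      p = p_ref.copy()
      lam = np.zeros(n_bus)                    # multipliers for the upper voltage limits

      for k in range(500):
          # Primal step: descend on 0.5*||p - p_ref||^2 plus the constraint term R^T lam, then project.
          p = np.clip(p - alpha * ((p - p_ref) + R.T @ lam), p_min, p_max)
          # Dual step on the regularized Lagrangian; V0 + R @ p stands in for a voltage measurement.
          v = V0 + R @ p
          lam = np.maximum(0.0, lam + alpha * (v - Vmax - eps * lam))

      print("setpoints:", np.round(p, 3))
      print("voltages :", np.round(V0 + R @ p, 4))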

  15. Design of Distributed Controllers Seeking Optimal Power Flow Solutions under Communication Constraints: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.

  16. Optimal control of an invasive species using a reaction-diffusion model and linear programming

    USGS Publications Warehouse

    Bonneau, Mathieu; Johnson, Fred A.; Smith, Brian J.; Romagosa, Christina M.; Martin, Julien; Mazzotti, Frank J.

    2017-01-01

    Managing an invasive species is particularly challenging as little is generally known about the species’ biological characteristics in its new habitat. In practice, removal of individuals often starts before the species is studied to provide the information that will later improve control. Therefore, the locations and the amount of control have to be determined in the face of great uncertainty about the species characteristics and with a limited amount of resources. We propose framing spatial control as a linear programming optimization problem. This formulation, paired with a discrete reaction-diffusion model, permits calculation of an optimal control strategy that minimizes the remaining number of invaders for a fixed cost or that minimizes the control cost for containment or protecting specific areas from invasion. We propose computing the optimal strategy for a range of possible model parameters, representing current uncertainty on the possible invasion scenarios. Then, a best strategy can be identified depending on the risk attitude of the decision-maker. We use this framework to study the spatial control of the Argentine black and white tegus (Salvator merianae) in South Florida. There is uncertainty about tegu demography and we considered several combinations of model parameters, exhibiting various dynamics of invasion. For a fixed one-year budget, we show that the risk-averse strategy, which optimizes the worst-case scenario of tegus’ dynamics, and the risk-neutral strategy, which optimizes the expected scenario, both concentrated control close to the point of introduction. A risk-seeking strategy, which optimizes the best-case scenario, focuses more on models where eradication of the species in a cell is possible and consists of spreading control as much as possible. For the establishment of a containment area, assuming exponential growth, we show that with current control methods it might not be possible to implement such a strategy for some of the models that we considered. Including different possible models allows an examination of how the strategy is expected to perform in different scenarios. Then, a strategy that accounts for the risk attitude of the decision-maker can be designed.
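
    To make the linear-programming framing concrete, the sketch below solves a toy, single-period allocation: choose removal effort per cell to minimize the expected number of remaining invaders under a budget. The abundances, removal rates, costs, and budget are invented, and the dispersal dynamics are omitted; the paper couples this type of LP to a discrete reaction-diffusion model over multiple time steps.

      # Toy spatial-control LP: maximize removals (minimize remaining invaders) under a budget.
      import numpy as np
      from scipy.optimize import linprog

      N = np.array([50.0, 120.0, 30.0, 80.0, 10.0])   # expected invaders per cell (hypothetical)
      r = np.array([0.6, 0.4, 0.7, 0.5, 0.8])         # invaders removed per unit effort
      cost = np.array([1.0, 1.5, 1.2, 1.0, 2.0])      # cost per unit effort in each cell
      budget = 100.0

      # Remaining = sum(N) - sum(r * x), so minimizing it means maximizing r @ x.
      res = linprog(c=-r,
                    A_ub=cost[None, :], b_ub=[budget],                # budget constraint
                    bounds=[(0.0, Ni / ri) for Ni, ri in zip(N, r)],  # cannot remove more than present
                    method="highs")

      x = res.x
      print("effort per cell   :", np.round(x, 1))
      print("invaders remaining:", round(float(N.sum() - r @ x), 1))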

  17. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the Numerical Implementation of Asymptotic Homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to self-adaptively decide which hierarchy of the structure should be made equivalent, according to the critical buckling mode rapidly predicted by the NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are carried out to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.

  18. Fast cooling for a system of stochastic oscillators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yongxin, E-mail: chen2468@umn.edu; Georgiou, Tryphon T., E-mail: tryphon@umn.edu; Pavon, Michele, E-mail: pavon@math.unipd.it

    2015-11-15

    We study feedback control of coupled nonlinear stochastic oscillators in a force field. We first consider the problem of asymptotically driving the system to a desired steady state corresponding to reduced thermal noise. Among the feedback controls achieving the desired asymptotic transfer, we find that the most efficient one from an energy point of view is characterized by time-reversibility. We also extend the theory of Schrödinger bridges to this model, thereby steering the system in finite time and with minimum effort to a target steady-state distribution. The system can then be maintained in this state through the optimal steady-state feedback control. The solution, in the finite-horizon case, involves a space-time harmonic function φ, and −log φ plays the role of an artificial, time-varying potential in which the desired evolution occurs. This framework appears extremely general and flexible and can be viewed as a considerable generalization of existing active control strategies such as macromolecular cooling. In the case of a quadratic potential, the results assume a form particularly attractive from the algorithmic viewpoint as the optimal control can be computed via deterministic matricial differential equations. An example involving inertial particles illustrates both transient and steady state optimal feedback control.
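
    As a generic illustration of the "deterministic matricial differential equations" that arise in the quadratic case, the sketch below integrates a finite-horizon LQR Riccati equation backward in time for a small damped oscillator. This is a standard construction chosen for familiarity, not the paper's Schrödinger-bridge equations; the matrices and horizon are arbitrary.

      # Finite-horizon matrix Riccati ODE integrated backward in time (toy oscillator).
      import numpy as np
      from scipy.integrate import solve_ivp

      A = np.array([[0.0, 1.0], [-1.0, -0.1]])   # position/velocity dynamics
      B = np.array([[0.0], [1.0]])
      Q = np.eye(2); R = np.array([[0.1]]); Pf = np.eye(2)
      T = 5.0

      def riccati_rhs(tau, p_flat):
          # Reversed time tau = T - t turns the terminal condition P(T) = Pf into an initial one.
          P = p_flat.reshape(2, 2)
          dP = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q
          return dP.ravel()

      sol = solve_ivp(riccati_rhs, [0.0, T], Pf.ravel(), dense_output=True, rtol=1e-8)

      def gain(t):
          P = sol.sol(T - t).reshape(2, 2)        # map physical time back to reversed time
          return np.linalg.solve(R, B.T @ P)      # K(t) = R^{-1} B^T P(t)

      print("feedback gain at t = 0:", np.round(gain(0.0), 3))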

  19. Hollow carbon nanobubbles: monocrystalline MOF nanobubbles and their pyrolysis.

    PubMed

    Zhang, Wei; Jiang, Xiangfen; Zhao, Yanyi; Carné-Sánchez, Arnau; Malgras, Victor; Kim, Jeonghun; Kim, Jung Ho; Wang, Shaobin; Liu, Jian; Jiang, Ji-Sen; Yamauchi, Yusuke; Hu, Ming

    2017-05-01

    While bulk-sized metal-organic frameworks (MOFs) face limits to their utilization in various research fields such as energy storage applications, nanoarchitectonics is believed to be a possible solution. It is highly challenging to realize MOF nanobubbles with monocrystalline frameworks. Here, by a spatially controlled etching approach, we achieve the synthesis of zeolitic imidazolate framework (ZIF-8) nanobubbles with a uniform size of less than 100 nm. Interestingly, the ZIF-8 nanobubbles possess a monocrystalline nanoshell with a thickness of around 10 nm. Under optimal pyrolytic conditions, the ZIF-8 nanobubbles can be converted into hollow carbon nanobubbles while keeping their original shapes. The structure of the nanobubbles enhances fast Na+/K+ ion intercalation performance. Such a remarkable improvement cannot be realized by conventional MOFs or their derived carbons.

  20. Coarse-graining errors and numerical optimization using a relative entropy framework.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2011-03-07

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.
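
    The variational principle can be demonstrated numerically on a one-dimensional toy problem: given samples from a reference ("fine-grained") ensemble, choose the parameters of a simpler ("coarse-grained") model by minimizing the relative entropy, which, up to a parameter-independent constant, is the cross-entropy estimated from the samples. The double-well reference and Gaussian coarse model below are illustrative choices, not the systems studied in the paper.

      # Relative-entropy fit of a Gaussian coarse model to samples from a double-well ensemble.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      U = lambda x: (x**2 - 1.0)**2                 # reference potential (Boltzmann weight exp(-U))

      # Metropolis sampling of the reference ensemble.
      x, samples = 0.0, []
      for step in range(60_000):
          prop = x + 0.5 * rng.standard_normal()
          if rng.random() < np.exp(U(x) - U(prop)):
              x = prop
          if step > 5_000 and step % 10 == 0:
              samples.append(x)
      samples = np.array(samples)

      def cross_entropy(theta):
          # -E_p[log q_theta], which equals S_rel(p || q_theta) up to a constant in theta.
          mu, log_sigma = theta
          sigma = np.exp(log_sigma)
          logq = -0.5 * ((samples - mu) / sigma)**2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
          return -logq.mean()

      res = minimize(cross_entropy, x0=[0.1, 0.0])
      print("relative-entropy-optimal Gaussian: mu = %.3f, sigma = %.3f"
            % (res.x[0], np.exp(res.x[1])))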

  1. An adaptive-management framework for optimal control of hiking near golden eagle nests in Denali National Park

    USGS Publications Warehouse

    Martin, Julien; Fackler, Paul L.; Nichols, James D.; Runge, Michael C.; McIntyre, Carol L.; Lubow, Bruce L.; McCluskie, Maggie C.; Schmutz, Joel A.

    2011-01-01

    Unintended effects of recreational activities in protected areas are of growing concern. We used an adaptive-management framework to develop guidelines for optimally managing hiking activities to maintain desired levels of territory occupancy and reproductive success of Golden Eagles (Aquila chrysaetos) in Denali National Park (Alaska, U.S.A.). The management decision was to restrict human access (hikers) to particular nesting territories to reduce disturbance. The management objective was to minimize restrictions on hikers while maintaining reproductive performance of eagles above some specified level. We based our decision analysis on predictive models of site occupancy of eagles developed using a combination of expert opinion and data collected from 93 eagle territories over 20 years. The best predictive model showed that restricting human access to eagle territories had little effect on occupancy dynamics. However, when considering important sources of uncertainty in the models, including environmental stochasticity, imperfect detection of hares on which eagles prey, and model uncertainty, restricting hikers' access to territories improved eagle reproduction substantially. An adaptive-management framework such as ours may help reduce uncertainty about the effects of hiking activities on Golden Eagles.

  2. Optimal Power Flow for Distribution Systems under Uncertain Forecasts: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    2016-12-01

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.
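
    One standard way to obtain chance-constraint bounds that hold for arbitrary error distributions, in the spirit of the convex approximation mentioned above, is a Cantelli-type tightening: a constraint violated with probability at most eps is enforced by backing off the limit by sqrt((1-eps)/eps) standard deviations of the uncertain term. The numbers below are placeholders, and this is a generic construction, not the paper's exact OPF reformulation.

      # Distribution-free vs. Gaussian tightening of a single voltage chance constraint (toy numbers).
      import numpy as np

      eps = 0.05
      Sigma_w = np.diag([0.01, 0.01, 0.01])**2   # forecast-error covariance (assumed)
      s = np.array([0.5, 0.3, 0.8])              # sensitivity of the voltage to the errors
      sigma = np.sqrt(s @ Sigma_w @ s)           # std. dev. of the uncertain voltage term

      k_free = np.sqrt((1 - eps) / eps)          # valid for any zero-mean error distribution
      k_gauss = 1.6449                           # 95% Gaussian quantile, for comparison

      v0, vmax = 1.00, 1.05
      print("uncertainty std. dev.: %.4f p.u." % sigma)
      print("margin needed, distribution-free: %.4f p.u." % (k_free * sigma))
      print("margin needed, Gaussian errors  : %.4f p.u." % (k_gauss * sigma))
      print("deterministic constraint v0 + k*sigma <= vmax holds:", v0 + k_free * sigma <= vmax)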

  3. Closed-loop neuromodulation of spinal sensorimotor circuits controls refined locomotion after complete spinal cord injury.

    PubMed

    Wenger, Nikolaus; Moraud, Eduardo Martin; Raspopovic, Stanisa; Bonizzato, Marco; DiGiovanna, Jack; Musienko, Pavel; Morari, Manfred; Micera, Silvestro; Courtine, Grégoire

    2014-09-24

    Neuromodulation of spinal sensorimotor circuits improves motor control in animal models and humans with spinal cord injury. With common neuromodulation devices, electrical stimulation parameters are tuned manually and remain constant during movement. We developed a mechanistic framework to optimize neuromodulation in real time to achieve high-fidelity control of leg kinematics during locomotion in rats. We first uncovered relationships between neuromodulation parameters and recruitment of distinct sensorimotor circuits, resulting in predictive adjustments of leg kinematics. Second, we established a technological platform with embedded control policies that integrated robust movement feedback and feed-forward control loops in real time. These developments allowed us to conceive a neuroprosthetic system that controlled a broad range of foot trajectories during continuous locomotion in paralyzed rats. Animals with complete spinal cord injury performed more than 1000 successive steps without failure, and were able to climb staircases of various heights and lengths with precision and fluidity. Beyond therapeutic potential, these findings provide a conceptual and technical framework to personalize neuromodulation treatments for other neurological disorders. Copyright © 2014, American Association for the Advancement of Science.

  4. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    PubMed

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  5. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    NASA Astrophysics Data System (ADS)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  6. Optimal Power Flow for Distribution Systems under Uncertain Forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    The paper focuses on distribution systems featuring renewable energy sources and energy storage devices, and develops an optimal power flow (OPF) approach to optimize the system operation in spite of forecasting errors. The proposed method builds on a chance-constrained multi-period AC OPF formulation, where probabilistic constraints are utilized to enforce voltage regulation with a prescribed probability. To enable a computationally affordable solution approach, a convex reformulation of the OPF task is obtained by resorting to i) pertinent linear approximations of the power flow equations, and ii) convex approximations of the chance constraints. Particularly, the approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive optimization strategy is then obtained by embedding the proposed OPF task into a model predictive control framework.

  7. Adaptive effort investment in cognitive and physical tasks: a neurocomputational model

    PubMed Central

    Verguts, Tom; Vassena, Eliana; Silvetti, Massimo

    2015-01-01

    Despite its importance in everyday life, the computational nature of effort investment remains poorly understood. We propose an effort model obtained from optimality considerations, and a neurocomputational approximation to the optimal model. Both are couched in the framework of reinforcement learning. It is shown that choosing when or when not to exert effort can be adaptively learned, depending on rewards, costs, and task difficulty. In the neurocomputational model, the limbic loop comprising anterior cingulate cortex (ACC) and ventral striatum in the basal ganglia allocates effort to cortical stimulus-action pathways whenever this is valuable. We demonstrate that the model approximates optimality. Next, we consider two hallmark effects from the cognitive control literature, namely proportion congruency and sequential congruency effects. It is shown that the model exerts both proactive and reactive cognitive control. Then, we simulate two physical effort tasks. In line with empirical work, impairing the model's dopaminergic pathway leads to apathetic behavior. Thus, we conceptually unify the exertion of cognitive and physical effort, studied across a variety of literatures (e.g., motivation and cognitive control) and animal species. PMID:25805978

  8. Emerging technology for advancing the treatment of epilepsy using a dynamic control framework.

    PubMed

    Stanslaski, Scott; Giftakis, John; Stypulkowski, Paul; Carlson, Dave; Afshar, Pedram; Cong, Peng; Denison, Timothy

    2011-01-01

    We briefly describe a dynamic control system framework for neuromodulation for epilepsy, with an emphasis on its practical challenges and the preliminary validation of key prototype technologies in a chronic animal model. The current state of neuromodulation can be viewed as a classical dynamic control framework such that the nervous system is the classical "plant", the neural stimulator is the controller/actuator, clinical observation, patient diaries and/or measured bio-markers are the sensor, and clinical judgment applied to these sensor inputs forms the state estimator. Technology can potentially address two main factors contributing to the performance limitations of existing systems: "observability," the ability to observe the state of the system from output measurements, and "controllability," the ability to drive the system to a desired state. In addition to improving sensors and actuator performance, methods and tools to better understand disease state dynamics and state estimation are also critical for improving therapy outcomes. We describe our preliminary validation of key "observability" and "controllability" technology blocks using an implanted research tool in an epilepsy disease model. This model allows for testing the key emerging technologies in a representative neural network of therapeutic importance. In the future, we believe these technologies might enable both first principles understanding of neural network behavior for optimizing therapy design, and provide a practical pathway towards clinical translation.

  9. Teaching excellence in nursing education: a caring framework.

    PubMed

    Sawatzky, Jo-Ann V; Enns, Carol L; Ashcroft, Terri J; Davis, Penny L; Harder, B Nicole

    2009-01-01

    Nursing education plays a central role in the ability to practice effectively. It follows that an optimally educated nursing workforce begets optimal patient care. A framework for excellence in nursing education could guide the development of novice educators, establish the basis for evaluating teaching excellence, and provide the impetus for research in this area. However, a review of the social sciences and nursing literature as well as a search for existing models for teaching excellence revealed an apparent dearth of evidence specific to excellence in nursing education. Therefore, we developed the Caring Framework for Excellence in Nursing Education. This framework evolved from a review of the generic constructs that exemplify teaching excellence: excellence in teaching practice, teaching scholarship, and teaching leadership. Nursing is grounded in the ethic of caring. Hence, caring establishes the foundation for this uniquely nursing framework. Because a teaching philosophy is intimately intertwined with one's nursing philosophy and the ethic of caring, it is also fundamental to the caring framework. Ideally, this framework will contribute to excellence in nursing education and as a consequence excellence in nursing practice and optimal patient care.

  10. Full-order optimal compensators for flow control: the multiple inputs case

    NASA Astrophysics Data System (ADS)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H∞ methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find an excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.
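
    For a system small enough that the Riccati equation is tractable, the optimal gain that such iterative schemes approximate can be computed directly; the sketch below does so for a random multi-input LQR problem with SciPy. The matrices are arbitrary placeholders rather than a discretized flow model, where the state dimension is precisely what makes this direct solve infeasible and motivates ADA-type iterations.

      # Reference LQR solution via the continuous-time algebraic Riccati equation (small toy system).
      import numpy as np
      from scipy.linalg import solve_continuous_are

      rng = np.random.default_rng(2)
      n, m = 6, 2                                # state dimension, number of actuators
      A = rng.standard_normal((n, n)) - 2.0 * np.eye(n)
      B = rng.standard_normal((n, m))
      Q = np.eye(n); R = np.eye(m)

      P = solve_continuous_are(A, B, Q, R)       # solves A'P + PA - PBR^{-1}B'P + Q = 0
      K = np.linalg.solve(R, B.T @ P)            # optimal full-state feedback u = -K x

      print("closed-loop eigenvalue real parts:",
            np.round(np.linalg.eigvals(A - B @ K).real, 3))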

  11. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE PAGES

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    2017-11-09

    Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.

  12. Sequential-Optimization-Based Framework for Robust Modeling and Design of Heterogeneous Catalytic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos

    Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.

  13. An optimization-based framework for anisotropic simplex mesh adaptation

    NASA Astrophysics Data System (ADS)

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  14. Neural signal processing and closed-loop control algorithm design for an implanted neural recording and stimulation system.

    PubMed

    Hamilton, Lei; McConley, Marc; Angermueller, Kai; Goldberg, David; Corba, Massimiliano; Kim, Louis; Moran, James; Parks, Philip D; Sang Chin; Widge, Alik S; Dougherty, Darin D; Eskandar, Emad N

    2015-08-01

    A fully autonomous intracranial device is built to continually record neural activities in different parts of the brain, process these sampled signals, decode features that correlate to behaviors and neuropsychiatric states, and use these features to deliver brain stimulation in a closed-loop fashion. In this paper, we describe the sampling and stimulation aspects of such a device. We first describe the signal processing algorithms of two unsupervised spike sorting methods. Next, we describe the LFP time-frequency analysis and feature derivation from the two spike sorting methods. Spike sorting includes a novel approach to constructing a dictionary learning algorithm in a Compressed Sensing (CS) framework. We present a joint prediction scheme to determine the class of neural spikes in the dictionary learning framework; the second approach is a modified OSort algorithm implemented in a distributed system optimized for power efficiency. Furthermore, sorted spikes and time-frequency analysis of LFP signals can be used to generate derived features (including cross-frequency coupling and spike-field coupling). We then show how these derived features can be used in the design and development of novel decode and closed-loop control algorithms that are optimized to apply deep brain stimulation based on a patient's neuropsychiatric state. For the control algorithm, we define the state vector as representative of a patient's impulsivity, avoidance, inhibition, etc. Controller parameters are optimized to apply stimulation based on the state vector's current value as well as its historical values. The overall algorithm and software design for our implantable neural recording and stimulation system uses an innovative, adaptable, and reprogrammable architecture that enables advancement of the state of the art in closed-loop neural control, while also meeting the challenges of system power constraints and of concurrent development with ongoing scientific research designed to define brain network connectivity and neural network dynamics that vary at the individual patient level and over time.

  15. Flexible real-time magnetic resonance imaging framework.

    PubMed

    Santos, Juan M; Wright, Graham A; Pauly, John M

    2004-01-01

    The extension of MR imaging to new applications has demonstrated the limitations of the architecture of current real-time systems. Traditional real-time implementations provide continuous acquisition of data and modification of basic sequence parameters on the fly. We have extended the concept of real-time MRI by designing a system that drives the examinations from a real-time localizer and is then reconfigured for different imaging modes. Upon operator request or automatic feedback, the system can immediately generate a new pulse sequence or change fundamental aspects of the acquisition such as gradient waveforms, excitation pulses, and scan planes. This framework has been implemented by connecting a data processing and control workstation to a conventional clinical scanner. Key components in the design of this framework are the data communication and control mechanisms, reconstruction algorithms optimized for real-time operation and adaptability, a flexible user interface, and extensible user interaction. In this paper we describe the various components that comprise this system. Some of the applications implemented in this framework include real-time catheter tracking embedded in high-frame-rate real-time imaging and immediate switching between a real-time localizer and high-resolution volume imaging for coronary angiography applications.

  16. Minimizing the health and climate impacts of emissions from heavy-duty public transportation bus fleets through operational optimization.

    PubMed

    Gouge, Brian; Dowlatabadi, Hadi; Ries, Francis J

    2013-04-16

    In contrast to capital control strategies (i.e., investments in new technology), the potential of operational control strategies (e.g., vehicle scheduling optimization) to reduce the health and climate impacts of the emissions from public transportation bus fleets has not been widely considered. This case study demonstrates that heterogeneity in the emission levels of different bus technologies and the exposure potential of bus routes can be exploited through optimization (e.g., how vehicles are assigned to routes) to minimize these impacts as well as operating costs. The magnitude of the benefits of the optimization depends on the specific transit system and region. Health impacts were found to be particularly sensitive to different vehicle assignments and ranged from worst- to best-case assignment by more than a factor of 2, suggesting there is significant potential to reduce health impacts. Trade-offs between climate, health, and cost objectives were also found. Transit agencies that do not consider these objectives in an integrated framework and, for example, optimize for costs and/or climate impacts alone, risk inadvertently increasing health impacts by as much as 49%. Cost-benefit analysis was used to evaluate trade-offs between objectives, but large uncertainties make identifying an optimal solution challenging.
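
    The core operational decision described above, which vehicle runs on which route, can be illustrated with a toy assignment problem: pair buses with routes so that the summed product of emission rate and route exposure potential is minimized. The rates and exposure weights are invented, and the study additionally weighs climate impacts and operating costs.

      # Toy bus-to-route assignment minimizing a health-impact proxy.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      emission_rate = np.array([2.0, 5.0, 1.2, 3.5])   # pollutant emission rate per bus (hypothetical)
      exposure = np.array([8.0, 1.5, 4.0, 2.5])        # exposure potential per route (hypothetical)

      impact = np.outer(emission_rate, exposure)       # impact[i, j]: bus i operating route j
      rows, cols = linear_sum_assignment(impact)       # optimal one-to-one assignment

      worst = float(np.sort(emission_rate) @ np.sort(exposure))   # dirtiest buses on highest-exposure routes
      print("optimal assignment (bus -> route):", dict(zip(rows.tolist(), cols.tolist())))
      print("health-impact proxy: optimal = %.1f, worst case = %.1f"
            % (impact[rows, cols].sum(), worst))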

  17. The Discounted Method and Equivalence of Average Criteria for Risk-Sensitive Markov Decision Processes on Borel Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m

    2010-04-15

    This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.
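
    A finite-state special case helps make the criterion concrete: for a fixed policy on an irreducible finite chain with transition matrix P and cost c, the risk-sensitive average cost lim (1/(n*lam)) log E[exp(lam * sum of costs)] equals (1/lam) log of the spectral radius of diag(exp(lam*c)) P, by a Perron-Frobenius argument. The sketch below evaluates this on a made-up three-state chain; the paper itself treats the far more general Borel-space, policy-optimization setting.

      # Risk-sensitive vs. risk-neutral average cost of a fixed policy on a small chain.
      import numpy as np

      P = np.array([[0.9, 0.1, 0.0],
                    [0.2, 0.7, 0.1],
                    [0.1, 0.3, 0.6]])
      c = np.array([0.0, 1.0, 3.0])              # one-stage costs
      lam = 0.5                                  # risk-sensitivity coefficient

      Q = np.diag(np.exp(lam * c)) @ P
      rho = max(abs(np.linalg.eigvals(Q)))       # Perron root of the positive matrix Q
      risk_sensitive_avg = np.log(rho) / lam

      # Risk-neutral comparison: stationary expectation of c under P.
      evals, evecs = np.linalg.eig(P.T)
      pi = np.real(evecs[:, np.argmin(abs(evals - 1.0))]); pi /= pi.sum()
      risk_neutral_avg = float(pi @ c)

      print("risk-sensitive average cost: %.4f" % risk_sensitive_avg)
      print("risk-neutral average cost  : %.4f" % risk_neutral_avg)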

  18. Singular perturbation techniques for real time aircraft trajectory optimization and control

    NASA Technical Reports Server (NTRS)

    Calise, A. J.; Moerder, D. D.

    1982-01-01

    The usefulness of singular perturbation methods for developing real time computer algorithms to control and optimize aircraft flight trajectories is examined. A minimum time intercept problem using F-8 aerodynamic and propulsion data is used as a baseline. This provides a framework within which issues relating to problem formulation, solution methodology and real time implementation are examined. Theoretical questions relating to separability of dynamics are addressed. With respect to implementation, situations leading to numerical singularities are identified, and procedures for dealing with them are outlined. Also, particular attention is given to identifying quantities that can be precomputed and stored, thus greatly reducing the on-board computational load. Numerical results are given to illustrate the minimum time algorithm, and the resulting flight paths. An estimate is given for execution time and storage requirements.

  19. A single sensor and single actuator approach to performance tailoring over a prescribed frequency band.

    PubMed

    Wang, Jiqiang

    2016-03-01

    Restricted sensing and actuation control represents an important area of research that has been overlooked in most design methodologies. In many practical control engineering problems, it is necessary to implement the design through a single sensor and a single actuator for multivariate performance variables. In this paper, a novel approach is proposed for the solution of the single sensor and single actuator control problem in which performance over any prescribed frequency band can also be tailored. The results are obtained for the broad-band control design based on the formulation for discrete frequency control. It is shown that the single sensor and single actuator control problem over a frequency band can be cast as a Nevanlinna-Pick interpolation problem. An optimal controller can then be obtained via convex optimization over LMIs. Remarkably, robustness issues can also be tackled in this framework. A numerical example of broad-band attenuation of rotor blade vibration is provided to illustrate the proposed design procedures. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Spatially-Distributed Cost–Effectiveness Analysis Framework to Control Phosphorus from Agricultural Diffuse Pollution

    PubMed Central

    Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N.; Meng, Fande

    2015-01-01

    Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, assessing the benefits of BMP implementation for receiving water quality at multiple spatial scales remains an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal locations for implementing multiple BMPs and to estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed as the measure of effectiveness of BMPs strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed, located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies in which BMPs were targeted to areas at high risk of P loss decreased P loads compared with strategies in which BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario for decreasing P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales, from field to farm, to watershed, to region. Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated modeling approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds. PMID:26313561
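
    The targeted-versus-random contrast reported above can be illustrated with a toy budgeted-placement experiment: rank fields by phosphorus-loss reduction per unit cost and fund BMPs greedily until the budget is exhausted, then compare with random placement. The field data and the greedy heuristic are invented for illustration; the study itself relies on a P index, HSPF simulations, and a dedicated BMP placement tool.

      # Toy comparison of targeted (greedy) vs. random BMP placement under a budget.
      import numpy as np

      rng = np.random.default_rng(3)
      n_fields = 200
      p_loss = rng.lognormal(mean=0.0, sigma=1.0, size=n_fields)   # P loss per field (kg/yr, hypothetical)
      efficiency = rng.uniform(0.3, 0.6, size=n_fields)            # fraction of P loss removed by a BMP
      cost = rng.uniform(1.0, 3.0, size=n_fields)                  # cost of one BMP on each field
      budget = 100.0

      def total_reduction(order):
          spent, reduction = 0.0, 0.0
          for i in order:
              if spent + cost[i] > budget:
                  continue                      # skip fields that no longer fit in the budget
              spent += cost[i]
              reduction += efficiency[i] * p_loss[i]
          return reduction

      targeted = total_reduction(np.argsort(-(efficiency * p_loss) / cost))
      random_mean = np.mean([total_reduction(rng.permutation(n_fields)) for _ in range(50)])
      print("P-load reduction: targeted = %.1f kg/yr, random = %.1f kg/yr" % (targeted, random_mean))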

  1. Spatially-Distributed Cost-Effectiveness Analysis Framework to Control Phosphorus from Agricultural Diffuse Pollution.

    PubMed

    Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N; Meng, Fande

    2015-01-01

    Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, assessing the benefits of BMP implementation for receiving water quality at multiple spatial scales remains an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal locations for implementing multiple BMPs and to estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed as the measure of effectiveness of BMPs strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed, located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies in which BMPs were targeted to areas at high risk of P loss decreased P loads compared with strategies in which BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario for decreasing P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales, from field to farm, to watershed, to region. Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated modeling approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds.

  2. Modeling and control of operator functional state in a unified framework of fuzzy inference petri nets.

    PubMed

    Zhang, Jian-Hua; Xia, Jia-Jun; Garibaldi, Jonathan M; Groumpos, Petros P; Wang, Ru-Bin

    2017-06-01

    In human-machine (HM) hybrid control systems, a human operator and a machine cooperate to achieve the control objectives. To enhance the overall HM system performance, the discrete manual control task-load of the operator must be dynamically allocated in accordance with the continuous-time fluctuation of the psychophysiological functional status of the operator, the so-called operator functional state (OFS). The behavior of the HM system is hybrid in nature due to the co-existence of a discrete task-load (control) variable and a continuous operator performance (system output) variable. Petri nets are an effective tool for modeling discrete-event systems, but for hybrid systems that also involve continuous dynamics the Petri net model generally has to be extended. Instead of using different tools to represent the continuous and discrete components of a hybrid system, this paper proposes a fuzzy inference Petri net (FIPN) method to represent the HM hybrid system, comprising a Mamdani-type fuzzy model of the OFS and a logical switching controller in a unified framework, in which the task-load level is dynamically reallocated between the operator and the machine based on the model-predicted OFS. Furthermore, this paper uses a multi-model approach to predict the operator performance based on three electroencephalographic (EEG) input variables (features) via the Wang-Mendel (WM) fuzzy modeling method. The membership function parameters of the fuzzy OFS model for each experimental participant were optimized using the artificial bee colony (ABC) evolutionary algorithm. Three performance indices, RMSE, MRE, and EPR, were computed to evaluate the overall modeling accuracy. Experimental data from six participants were analyzed. The results show that the proposed method (FIPN with adaptive task allocation) yields a lower breakdown rate (reduced from 14.8% to 3.27%) and higher human performance (improved from 90.30% to 91.99%). The simulation results of the FIPN-based adaptive HM (AHM) system on six experimental participants demonstrate that the FIPN framework provides an effective way to model and regulate/optimize the OFS in HM hybrid systems composed of a continuous-time OFS model and a discrete-event switching controller. Copyright © 2017 Elsevier B.V. All rights reserved.
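
    As a small illustration of the model-evaluation step, the sketch below computes the RMSE and MRE indices for a predicted operator-performance series against observed values; the arrays are hypothetical placeholders, and EPR is omitted because its definition is not given in the abstract.

        # Hypothetical sketch of the RMSE/MRE model-accuracy indices mentioned above.
        import numpy as np

        observed  = np.array([0.82, 0.78, 0.74, 0.80, 0.85])   # placeholder operator performance
        predicted = np.array([0.80, 0.77, 0.76, 0.79, 0.83])   # placeholder fuzzy-model output

        rmse = np.sqrt(np.mean((observed - predicted) ** 2))             # root-mean-square error
        mre  = np.mean(np.abs(observed - predicted) / np.abs(observed))  # mean relative error

        print(f"RMSE = {rmse:.4f}, MRE = {mre:.4%}")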

  3. ProjectQ Software Framework

    NASA Astrophysics Data System (ADS)

    Steiger, Damian S.; Haener, Thomas; Troyer, Matthias

    Quantum computers promise to transform our notions of computation by offering a completely new paradigm. A high level quantum programming language and optimizing compilers are essential components to achieve scalable quantum computation. In order to address this, we introduce the ProjectQ software framework - an open source effort to support both theorists and experimentalists by providing intuitive tools to implement and run quantum algorithms. Here, we present our ProjectQ quantum compiler, which compiles a quantum algorithm from our high-level Python-embedded language down to low-level quantum gates available on the target system. We demonstrate how this compiler can be used to control actual hardware and to run high-performance simulations.
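
    A minimal example of the kind of Python-embedded program the ProjectQ compiler processes is sketched below; it follows the library's basic documented usage (a MainEngine with the default simulator backend) and simply produces one random bit by measuring a qubit in superposition.

        # Minimal ProjectQ program: prepare |+> and measure, yielding a random bit.
        from projectq import MainEngine
        from projectq.ops import H, Measure

        eng = MainEngine()              # default backend: the built-in simulator
        qubit = eng.allocate_qubit()    # allocate one qubit
        H | qubit                       # Hadamard gate puts the qubit in superposition
        Measure | qubit                 # measure in the computational basis
        eng.flush()                     # send the circuit through the compiler/backend
        print("Measured:", int(qubit))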

  4. Toward city-scale water quality control: building a theory for smart stormwater systems

    NASA Astrophysics Data System (ADS)

    Kerkez, B.; Mullapudi, A. M.; Wong, B. P.

    2016-12-01

    Urban stormwater systems are rarely designed as actual systems. Rather, it is often assumed that individual Best Management Practices (BMPs) will add up to achieve desired watershed outcomes. Given the rise of BMPs and green infrastructure, we ask: does doing "best" at the local scale guarantee the "best" at the global scale? Existing studies suggest that the system-level performance of distributed stormwater practices may actually adversely impact watersheds by increasing downstream erosion and reducing water quality. Optimizing spatial placement may not be sufficient, however, since precipitation variability and other sources of uncertainty can drive the overall system into undesirable states. To that end, it is also important to control the temporal behavior of the system, which can be achieved by equipping stormwater elements (ponds, wetlands, basins, bioswales, etc.) with "smart" sensors and valves. Rather than building new infrastructure, this permits existing assets to be repurposed and controlled to adapt to individual storm events. While we have learned how to build and deploy the necessary sensing and control technologies, we do not have a framework or theory that combines our knowledge of hydrology, hydraulics, water quality and control. We discuss the development of such a framework and investigate how existing water domain knowledge can be transferred into a system-theoretic context to enable real-time, city-scale stormwater control. We apply this framework to water quality control in an urban watershed in southeast Michigan, which has been heavily instrumented and retrofitted for control over the past year.

  5. A Cost Comparison Framework for Use in Optimizing Ground Water Pump and Treat Systems

    EPA Pesticide Factsheets

    This fact sheet has been prepared to provide a framework for conducting cost comparisons to evaluate whether or not to pursue potential opportunities from an optimization evaluation for improving, replacing, or supplementing the P&T system.

  6. Delivery of meaningful cancer care: a retrospective cohort study assessing cost and benefit with the ASCO and ESMO frameworks.

    PubMed

    Del Paggio, Joseph C; Sullivan, Richard; Schrag, Deborah; Hopman, Wilma M; Azariah, Biju; Pramesh, C S; Tannock, Ian F; Booth, Christopher M

    2017-07-01

    The American Society of Clinical Oncology (ASCO) and the European Society for Medical Oncology (ESMO) have developed frameworks that quantify survival gains in light of toxicity and quality of life to assess the benefits of cancer therapies. We applied these frameworks to a cohort of contemporary randomised controlled trials to explore agreement between the two approaches and to assess the relation between treatment benefit and cost. We identified all randomised controlled trials of systemic therapies in non-small-cell lung cancer, breast cancer, colorectal cancer, and pancreatic cancer published between Jan 1, 2011, and Dec 31, 2015, and assessed their abstracts and methods. Trials were eligible for inclusion in our cohort if significant differences favouring the experimental group in a prespecified primary or secondary outcome were reported (secondary outcomes were assessed only if primary outcomes were not significant). We assessed trial endpoints with the ASCO and ESMO frameworks at two timepoints 3 months apart to confirm intra-rater reliability. Cohen's κ statistic was calculated to establish agreement between the two frameworks on the basis of the median ASCO score, which was used as an arbitrary threshold of benefit, and the framework-recommended ESMO threshold. Differences in monthly drug cost between the experimental and control groups of each randomised controlled trial (ie, incremental drug cost) were derived from 2016 average wholesale prices. 109 randomised controlled trials were eligible for inclusion, 42 (39%) in non-small-cell lung cancer, 36 (33%) in breast cancer, 25 (23%) in colorectal cancer, and six (6%) in pancreatic cancer. ASCO scores ranged from 2 to 77; median score was 25 (IQR 16-35). 41 (38%) trials met the benefit thresholds in the ESMO framework. Agreement between the two frameworks was fair (κ=0·326). Among the 100 randomised controlled trials for which drug costing data were available, ASCO benefit score and monthly incremental drug costs were negatively correlated (ρ=-0·207; p=0·039). Treatments that met ESMO benefit thresholds had a lower median incremental drug cost than did those that did not meet benefit thresholds (US$2981 [IQR 320-9059] vs $8621 [1174-13 930]; p=0·018). There is only fair correlation between these two major value care frameworks, and negative correlations between framework outputs and drug costs. Delivery of optimal cancer care in a sustainable health system will necessitate future oncologists, investigators, and policy makers to reconcile the disconnect between drug cost and clinical benefit. None. Copyright © 2017 Elsevier Ltd. All rights reserved.
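
    The agreement statistic reported above can be reproduced in principle with standard tooling; the sketch below computes Cohen's kappa for two hypothetical binary "meaningful benefit" classifications of the same trials (the labels are invented placeholders, not the study's data).

        # Hypothetical sketch: Cohen's kappa between two framework-based benefit classifications.
        from sklearn.metrics import cohen_kappa_score

        # 1 = trial judged to show substantial benefit, 0 = not; placeholder labels only.
        asco_benefit = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
        esmo_benefit = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

        kappa = cohen_kappa_score(asco_benefit, esmo_benefit)
        print(f"Cohen's kappa = {kappa:.3f}")   # values around 0.2-0.4 indicate 'fair' agreement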

  7. The multidisciplinary design optimization of a distributed propulsion blended-wing-body aircraft

    NASA Astrophysics Data System (ADS)

    Ko, Yan-Yee Andy

    The purpose of this study is to examine the multidisciplinary design optimization (MDO) of a distributed propulsion blended-wing-body (BWB) aircraft. The BWB is a hybrid shape resembling a flying wing, placing the payload in the inboard sections of the wing. The distributed propulsion concept involves replacing a small number of large engines with many smaller engines. The distributed propulsion concept considered here ducts part of the engine exhaust to exit along the trailing edge of the wing. The distributed propulsion concept affects almost every aspect of the BWB design. Methods to model these effects and integrate them into an MDO framework were developed. The most important effect modeled is the impact on the propulsive efficiency. There has been conjecture that there will be an increase in propulsive efficiency when there is blowing out of the trailing edge of a wing. A mathematical formulation was derived to explain this. The formulation showed that the jet 'fills in' the wake behind the body, improving the overall aerodynamic/propulsion system and resulting in an increased propulsive efficiency. The distributed propulsion concept also replaces the conventional elevons with a vectored thrust system for longitudinal control. An extension of Spence's Jet Flap theory was developed to estimate the effects of this vectored thrust system on the aircraft longitudinal control. It was found to provide a reasonable estimate of the control capability of the aircraft. An MDO framework was developed, integrating all the distributed propulsion effects modeled. Using a gradient-based optimization algorithm, the distributed propulsion BWB aircraft was optimized and compared with a similarly optimized conventional BWB design. Both designs are for an 800-passenger, Mach 0.85 cruise, 7000 nmi mission. The MDO results showed that the distributed propulsion BWB aircraft differs from the conventional design by about 4% in takeoff gross weight and 2% in fuel weight. Both designs have similar planform shapes, although the planform area of the distributed propulsion BWB design is 10% smaller. Through parametric studies, it was also found that the aircraft was most sensitive to the amount of savings in propulsive efficiency and the weight of the ducts used to divert the engine exhaust.

  8. Health benefit modelling and optimization of vehicular pollution control strategies

    NASA Astrophysics Data System (ADS)

    Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra

    2012-12-01

    This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented could be used for decision making on the basis of cost effectiveness when the strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied one at a time on the basis of change in pollution concentration. The adequacy and practicality of such an approach is studied in the present work. Also, the assessment of the respective benefits of these strategies has been carried out when they are implemented simultaneously. An integrated model has been developed which can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by the U.S. EPA, has been applied for estimation of health and economic benefits associated with various management strategies. Valuation of health benefits has been done for the impact indicators of premature mortality, hospital admissions and respiratory syndrome. An optimization model has been developed to maximize overall social benefits with determination of optimized percentage implementations for multiple strategies. The model has been applied to a suburban region of Mumbai for the vehicular sector. Several control scenarios have been considered, such as revised emission standards and electric, CNG, LPG and hybrid vehicles. Reductions in concentration and the resultant health benefits for the pollutants CO, NOx and particulate matter are estimated for different control scenarios. Finally, an optimization model has been applied to determine the optimized percentage implementation of specific control strategies with maximization of social benefits when these strategies are applied simultaneously.
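
    A much-simplified version of the final optimization step can be sketched as a linear program: choose percentage implementation levels for each strategy to maximize total health benefit under a budget. The benefit and cost coefficients below are invented placeholders, and the real model's structure (BenMAP-derived benefits, dispersion modelling) is not reproduced.

        # Hypothetical sketch: choose percentage implementation of control strategies
        # to maximize monetized health benefit under a budget (linear approximation).
        from scipy.optimize import linprog

        # Strategies: [emission standards, electric, CNG, LPG, hybrid]; placeholder values.
        benefit_per_pct = [2.0, 3.5, 1.8, 1.2, 2.4]   # benefit units per % implementation
        cost_per_pct    = [1.0, 4.0, 1.5, 0.8, 2.5]   # cost units per % implementation
        budget = 300.0

        res = linprog(
            c=[-b for b in benefit_per_pct],          # maximize benefit = minimize -benefit
            A_ub=[cost_per_pct], b_ub=[budget],       # total cost within budget
            bounds=[(0, 100)] * 5,                    # each strategy: 0-100 % implementation
            method="highs",
        )
        print("optimal % implementation:", [round(x, 1) for x in res.x])
        print("total health benefit    :", -res.fun)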

  9. Quantitative impurity analysis of monoclonal antibody size heterogeneity by CE-LIF: example of development and validation through a quality-by-design framework.

    PubMed

    Michels, David A; Parker, Monica; Salas-Solano, Oscar

    2012-03-01

    This paper describes the framework of quality by design applied to the development, optimization and validation of a sensitive capillary electrophoresis-sodium dodecyl sulfate (CE-SDS) assay for monitoring impurities, potentially impacting drug efficacy or patient safety, produced in the manufacture of therapeutic MAb products. Drug substance or drug product samples are derivatized with fluorogenic 3-(2-furoyl)quinoline-2-carboxaldehyde and nucleophilic cyanide before separation by CE-SDS coupled to LIF detection. Three design-of-experiments studies enabled critical labeling parameters to meet method requirements for detecting minor impurities while building precision and robustness into the assay during development. The screening design predicted optimal conditions to control labeling artifacts, while two full factorial designs demonstrated method robustness through control of the temperature and cyanide parameters within the normal operating range. Subsequent validation according to the guidelines of the International Conference on Harmonisation showed the CE-SDS/LIF assay was specific, accurate, and precise (RSD ≤ 0.8%) for relative peak distribution and linear (R > 0.997) over the range of 0.5-1.5 mg/mL, with LOD and LOQ of 10 ng/mL and 35 ng/mL, respectively. Validation confirmed the system suitability criteria used as a level of control to ensure reliable method performance. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Power-rate-distortion analysis for wireless video communication under energy constraint

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Liang, Yongfang; Ahmad, Ishfaq

    2004-01-01

    In video coding and streaming over wireless communication networks, the power-demanding video encoding operates on mobile devices with a limited energy supply. To analyze, control, and optimize the rate-distortion (R-D) behavior of the wireless video communication system under the energy constraint, we need to develop a power-rate-distortion (P-R-D) analysis framework, which extends the traditional R-D analysis by including another dimension, the power consumption. Specifically, in this paper, we analyze the encoding mechanism of typical video encoding systems and develop a parametric video encoding architecture which is fully scalable in computational complexity. Using dynamic voltage scaling (DVS), a hardware technology recently developed in CMOS circuit design, the complexity scalability can be translated into power consumption scalability of the video encoder. We investigate the rate-distortion behaviors of the complexity control parameters and establish an analytic framework to explore the P-R-D behavior of the video encoding system. Both theoretically and experimentally, we show that, using this P-R-D model, the encoding system is able to automatically adjust its complexity control parameters to match the available energy supply of the mobile device while maximizing the picture quality. The P-R-D model provides a theoretical guideline for system design and performance optimization in wireless video communication under energy constraints, especially over wireless video sensor networks.
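
    The adaptation loop described above (match encoder complexity to the available energy while maximizing quality) can be caricatured with a small sketch; the power and distortion numbers are invented placeholders and do not reproduce the paper's analytic P-R-D model.

        # Hypothetical sketch: choose the highest-quality complexity setting whose power
        # draw fits the available energy budget. Numbers are illustrative placeholders.
        settings = [  # (complexity level, power in mW, expected distortion as MSE)
            (1,  80, 60.0), (2, 140, 42.0), (3, 220, 30.0), (4, 330, 24.0), (5, 480, 21.0),
        ]

        def pick_setting(power_budget_mw):
            feasible = [s for s in settings if s[1] <= power_budget_mw]
            # Among feasible settings, minimize distortion (i.e., maximize picture quality).
            return min(feasible, key=lambda s: s[2]) if feasible else None

        print(pick_setting(250))   # -> (3, 220, 30.0)
        print(pick_setting(100))   # -> (1, 80, 60.0)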

  11. Optimization-based additive decomposition of weakly coercive problems with applications

    DOE PAGES

    Bochev, Pavel B.; Ridzal, Denis

    2016-01-27

    In this study, we present an abstract mathematical framework for an optimization-based additive decomposition of a large class of variational problems into a collection of concurrent subproblems. The framework replaces a given monolithic problem by an equivalent constrained optimization formulation in which the subproblems define the optimization constraints and the objective is to minimize the mismatch between their solutions. The significance of this reformulation stems from the fact that one can solve the resulting optimality system by an iterative process involving only solutions of the subproblems. Consequently, assuming that stable numerical methods and efficient solvers are available for every subproblem, our reformulation leads to robust and efficient numerical algorithms for a given monolithic problem by breaking it into subproblems that can be handled more easily. An application of the framework to the Oseen equations illustrates its potential.

  12. Shape Optimization and Modular Discretization for the Development of a Morphing Wingtip

    NASA Astrophysics Data System (ADS)

    Morley, Joshua

    Better knowledge in the areas of aerodynamics and optimization has allowed designers to develop efficient wingtip structures in recent years. However, the requirements faced by wingtip devices can be considerably different amongst an aircraft's flight regimes. Traditional static wingtip devices are then a compromise between conflicting requirements, resulting in less than optimal performance within each regime. Alternatively, a morphing wingtip can reconfigure, leading to improved performance over a range of dissimilar flight conditions. Developed within this thesis is a modular morphing wingtip concept that centers on the use of variable geometry truss mechanisms to permit morphing. A conceptual design framework is established to aid in the development of the concept. The framework uses a metaheuristic optimization procedure to determine optimal continuous wingtip configurations. The configurations are then discretized for the modular concept. The functionality of the framework is demonstrated through a design study on a hypothetical wing/winglet within the thesis.

  13. Rapid indirect trajectory optimization on highly parallel computing architectures

    NASA Astrophysics Data System (ADS)

    Antony, Thomas

    Trajectory optimization is a field which can benefit greatly from the advantages offered by parallel computing. The current state-of-the-art in trajectory optimization focuses on the use of direct optimization methods, such as the pseudo-spectral method. These methods are favored due to their ease of implementation and large convergence regions, while indirect methods have largely been ignored in the literature in the past decade except for specific applications in astrodynamics. It has been shown that the shortcomings conventionally associated with indirect methods can be overcome by the use of a continuation method, in which complex trajectory solutions are obtained by solving a sequence of progressively more difficult optimization problems. High performance computing hardware is trending towards more parallel architectures as opposed to powerful single-core processors. Graphics Processing Units (GPU), which were originally developed for 3D graphics rendering, have gained popularity in the past decade as high-performance, programmable parallel processors. The Compute Unified Device Architecture (CUDA) framework, a parallel computing architecture and programming model developed by NVIDIA, is one of the most widely used platforms in GPU computing. GPUs have been applied to a wide range of fields that require the solution of complex, computationally demanding problems. A GPU-accelerated indirect trajectory optimization methodology which uses the multiple shooting method and continuation is developed using the CUDA platform. The various algorithmic optimizations used to exploit the parallelism inherent in the indirect shooting method are described. The resulting rapid optimal control framework enables the construction of high quality optimal trajectories that satisfy problem-specific constraints and fully satisfy the necessary conditions of optimality. The benefits of the framework are highlighted by construction of maximum terminal velocity trajectories for a hypothetical long range weapon system. The techniques used to construct an initial guess from an analytic near-ballistic trajectory and the methods used to formulate the necessary conditions of optimality in a manner that is transparent to the designer are discussed. Various hypothetical mission scenarios that enforce different combinations of initial, terminal, interior point and path constraints demonstrate the rapid construction of complex trajectories without requiring any a priori insight into the structure of the solutions. Trajectory problems of this kind were previously considered impractical to solve using indirect methods. The performance of the GPU-accelerated solver is found to be 2x-4x faster than MATLAB's bvp4c, even while running on GPU hardware that is five years behind the state-of-the-art.
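
    The indirect method reduces the optimal control problem to a two-point boundary value problem in the states and costates. The sketch below solves such a BVP for a textbook minimum-energy double-integrator transfer with SciPy's solve_bvp; it is a CPU illustration of the problem class only, not the GPU multiple-shooting solver described in the thesis.

        # Indirect-method illustration: minimum-energy double integrator, x1'=x2, x2'=u,
        # cost 0.5*integral(u^2); Pontryagin gives u=-p2, with p1'=0 and p2'=-p1.
        import numpy as np
        from scipy.integrate import solve_bvp

        def odes(t, y):
            x1, x2, p1, p2 = y
            return np.vstack((x2, -p2, np.zeros_like(p1), -p1))

        def bc(ya, yb):
            # Boundary conditions: start at rest at 0, end at rest at 1.
            return np.array([ya[0], ya[1], yb[0] - 1.0, yb[1]])

        t = np.linspace(0.0, 1.0, 50)
        y0 = np.zeros((4, t.size))          # trivial initial guess for states and costates
        sol = solve_bvp(odes, bc, t, y0)

        print("converged:", sol.success)
        print("u(0) = -p2(0) =", -sol.sol(0.0)[3])   # the analytic optimum gives u(0) = 6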

  14. Optimal control of a coupled partial and ordinary differential equations system for the assimilation of polarimetry Stokes vector measurements in tokamak free-boundary equilibrium reconstruction with application to ITER

    NASA Astrophysics Data System (ADS)

    Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric

    2017-08-01

    The modelization of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios where high current and electron density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.
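
    As a generic illustration of the SQP step used for the numerical resolution, the sketch below fits two parameters of a toy profile to noisy synthetic measurements with SciPy's SLSQP implementation; it is unrelated to the actual free-boundary equilibrium code and Stokes model, and all quantities are invented placeholders.

        # Toy SQP-style identification: fit parameters of a profile to noisy synthetic data.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        r = np.linspace(0.0, 1.0, 40)
        true = (2.0, 3.0)                               # hypothetical "unknown profile" parameters
        measurements = true[0] * (1 - r**2) ** true[1] + 0.01 * rng.standard_normal(r.size)

        def misfit(p):
            model = p[0] * (1 - r**2) ** p[1]
            return np.sum((model - measurements) ** 2)  # least-squares data misfit

        res = minimize(misfit, x0=[1.0, 1.0], method="SLSQP",
                       bounds=[(0.1, 10.0), (0.1, 10.0)])
        print("identified parameters:", res.x)          # should be close to (2, 3)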

  15. Optimal deployment of resources for maximizing impact in spreading processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lokhov, Andrey Y.; Saad, David

    The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of “influential spreaders” for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. Here, we show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on a variety of real-world examples.

  16. Optimal deployment of resources for maximizing impact in spreading processes

    DOE PAGES

    Lokhov, Andrey Y.; Saad, David

    2017-09-12

    The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of “influential spreaders” for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. Here, we show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on a variety of real-world examples.

  17. Integration of a CAD System Into an MDO Framework

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Samareh, J. A.; Weston, R. P.; Zorumski, W. E.

    1998-01-01

    NASA Langley has developed a heterogeneous distributed computing environment, called the Framework for Inter-disciplinary Design Optimization, or FIDO. Its purpose has been to demonstrate framework technical feasibility and usefulness for optimizing the preliminary design of complex systems and to provide a working environment for testing optimization schemes. Its initial implementation has been for a simplified model of the preliminary design of a high-speed civil transport. Upgrades being considered for the FIDO system include a more complete geometry description, required by high-fidelity aerodynamics and structures codes and based on a commercial Computer Aided Design (CAD) system. This report presents the philosophy behind some of the decisions that have shaped the FIDO system and gives a brief case study of the problems and successes encountered in integrating a CAD system into the FIDO framework.

  18. Adjoint Algorithm for CAD-Based Shape Optimization Using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.

    2004-01-01

    Adjoint solutions of the governing flow equations are becoming increasingly important for the development of efficient analysis and optimization algorithms. A well-known use of the adjoint method is gradient-based shape optimization. Given an objective function that defines some measure of performance, such as the lift and drag functionals, its gradient is computed at a cost that is essentially independent of the number of design variables (geometric parameters that control the shape). More recently, emerging adjoint applications focus on the analysis problem, where the adjoint solution is used to drive mesh adaptation, as well as to provide estimates of functional error bounds and corrections. The attractive feature of this approach is that the mesh-adaptation procedure targets a specific functional, thereby localizing the mesh refinement and reducing computational cost. Our focus is on the development of adjoint-based optimization techniques for a Cartesian method with embedded boundaries. In contrast to implementations on structured and unstructured grids, Cartesian methods decouple the surface discretization from the volume mesh. This feature makes Cartesian methods well suited for the automated analysis of complex geometry problems, and consequently a promising approach to aerodynamic optimization. Melvin et al. developed an adjoint formulation for the TRANAIR code, which is based on the full-potential equation with viscous corrections. More recently, Dadone and Grossman presented an adjoint formulation for the Euler equations. In both approaches, a boundary condition is introduced to approximate the effects of the evolving surface shape, resulting in accurate gradient computation. Central to automated shape optimization algorithms is the issue of geometry modeling and control. The need to optimize complex, "real-life" geometry provides a strong incentive for the use of parametric-CAD systems within the optimization procedure. In previous work, we presented an effective optimization framework that incorporates a direct-CAD interface. In this work, we enhance the capabilities of this framework with efficient gradient computations using the discrete adjoint method. We present details of the adjoint numerical implementation, which reuses the domain decomposition, multigrid, and time-marching schemes of the flow solver. Furthermore, we explain and demonstrate the use of CAD in conjunction with the Cartesian adjoint approach. The final paper will contain a number of complex geometry, industrially relevant examples with many design variables to demonstrate the effectiveness of the adjoint method on Cartesian meshes.
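
    The claim that the gradient cost is essentially independent of the number of design variables can be made concrete with a small linear-algebra sketch (not the Cartesian flow solver itself): for a discrete state equation A(x)u = b and objective J = c^T u, one adjoint solve A^T lambda = c yields every component dJ/dx_i = -lambda^T (dA/dx_i) u. The matrices and parametrization below are hypothetical.

        # Adjoint gradient sketch for J(x) = c^T u with A(x) u = b, A(x) = A0 + sum_i x_i * Ai.
        # One adjoint solve yields the full gradient, independent of the number of design variables.
        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 6, 4                                   # state size, number of design variables
        A0 = 4.0 * np.eye(n)
        Ai = [0.1 * rng.standard_normal((n, n)) for _ in range(m)]
        b, c = rng.standard_normal(n), rng.standard_normal(n)
        x = 0.1 * rng.standard_normal(m)

        assemble = lambda x: A0 + sum(xi * Aim for xi, Aim in zip(x, Ai))

        u = np.linalg.solve(assemble(x), b)           # state solve
        lam = np.linalg.solve(assemble(x).T, c)       # single adjoint solve
        grad = np.array([-lam @ (Aim @ u) for Aim in Ai])

        # Finite-difference check; its cost grows with m, unlike the adjoint gradient.
        eps = 1e-6
        fd = np.array([(c @ np.linalg.solve(assemble(x + eps * e), b) - c @ u) / eps
                       for e in np.eye(m)])
        print("adjoint vs finite differences agree:", np.allclose(grad, fd, atol=1e-4))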

  19. A general framework for multicharacter segmentation and its application in recognizing multilingual Asian documents

    NASA Astrophysics Data System (ADS)

    Wen, Di; Ding, Xiaoqing

    2003-12-01

    In this paper we propose a general framework for character segmentation in complex multilingual documents, which is an endeavor to combine the traditionally separated segmentation and recognition processes into a cooperative system. The framework contains three basic steps: Dissection, Local Optimization and Global Optimization, which are designed to fuse various properties of the segmentation hypotheses hierarchically into a composite evaluation to decide the final recognition results. Experimental results show that this framework is general enough to be applied in variety of documents. A sample system based on this framework to recognize Chinese, Japanese and Korean documents and experimental performance is reported finally.

  20. Statistical model based iterative reconstruction in clinical CT systems. Part III. Task-based kV/mAs optimization for radiation dose reduction

    PubMed Central

    Li, Ke; Gomez-Cardona, Daniel; Hsieh, Jiang; Lubner, Meghan G.; Pickhardt, Perry J.; Chen, Guang-Hong

    2015-01-01

    Purpose: For a given imaging task and patient size, the optimal selection of x-ray tube potential (kV) and tube current-rotation time product (mAs) is pivotal in achieving the maximal radiation dose reduction while maintaining the needed diagnostic performance. Although contrast-to-noise (CNR)-based strategies can be used to optimize kV/mAs for computed tomography (CT) imaging systems employing the linear filtered backprojection (FBP) reconstruction method, a more general framework needs to be developed for systems using the nonlinear statistical model-based iterative reconstruction (MBIR) method. The purpose of this paper is to present such a unified framework for the optimization of kV/mAs selection for both FBP- and MBIR-based CT systems. Methods: The optimal selection of kV and mAs was formulated as a constrained optimization problem to minimize the objective function, Dose(kV,mAs), under the constraint that the achievable detectability index d′(kV,mAs) is not lower than the prescribed value of d℞′ for a given imaging task. Since it is difficult to analytically model the dependence of d′ on kV and mAs for the highly nonlinear MBIR method, this constrained optimization problem is solved with comprehensive measurements of Dose(kV,mAs) and d′(kV,mAs) at a variety of kV–mAs combinations, after which the overlay of the dose contours and d′ contours is used to graphically determine the optimal kV–mAs combination to achieve the lowest dose while maintaining the needed detectability for the given imaging task. As an example, d′ for a 17 mm hypoattenuating liver lesion detection task was experimentally measured with an anthropomorphic abdominal phantom at four tube potentials (80, 100, 120, and 140 kV) and fifteen mA levels (25 and 50–700) with a sampling interval of 50 mA at a fixed rotation time of 0.5 s, which corresponded to a dose (CTDIvol) range of [0.6, 70] mGy. Using the proposed method, the optimal kV and mA that minimized dose for the prescribed detectability level of d℞′=16 were determined. As another example, the optimal kV and mA for an 8 mm hyperattenuating liver lesion detection task were also measured using the developed framework. Both an in vivo animal and human subject study were used as demonstrations of how the developed framework can be applied to the clinical work flow. Results: For the first task, the optimal kV and mAs were measured to be 100 and 500, respectively, for FBP, which corresponded to a dose level of 24 mGy. In comparison, the optimal kV and mAs for MBIR were 80 and 150, respectively, which corresponded to a dose level of 4 mGy. The topographies of the iso-d′ map and the iso-CNR map were the same for FBP; thus, the use of d′- and CNR-based optimization methods generated the same results for FBP. However, the topographies of the iso-d′ and iso-CNR map were significantly different in MBIR; the CNR-based method overestimated the performance of MBIR, predicting an overly aggressive dose reduction factor. For the second task, the developed framework generated the following optimization results: for FBP, kV = 140, mA = 350, dose = 37.5 mGy; for MBIR, kV = 120, mA = 250, dose = 18.8 mGy. Again, the CNR-based method overestimated the performance of MBIR. Results of the preliminary in vivo studies were consistent with those of the phantom experiments. Conclusions: A unified and task-driven kV/mAs optimization framework has been developed in this work. The framework is applicable to both linear and nonlinear CT systems such as those using the MBIR method. 
As expected, the developed framework can be reduced to the conventional CNR-based kV/mAs optimization frameworks if the system is linear. For MBIR-based nonlinear CT systems, however, the developed task-based kV/mAs optimization framework is needed to achieve the maximal dose reduction while maintaining the desired diagnostic performance. PMID:26328971
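
    The graphical overlay of dose and d′ contours amounts to a constrained search over measured (kV, mAs) combinations; the sketch below performs the same search numerically on a small table of invented placeholder measurements (not the paper's data): pick the combination with the lowest dose among those meeting the prescribed detectability.

        # Hypothetical sketch of the constrained kV/mA selection: minimize dose subject to
        # achieving the prescribed detectability index. All numbers are invented placeholders.
        import numpy as np

        kvs = np.array([80, 100, 120, 140])
        mas = np.array([50, 150, 250, 350, 500])
        # dose[i, j] in mGy and dprime[i, j] measured at (kvs[i], mas[j]); placeholders only.
        dose   = np.array([[1, 3, 5, 7, 10],
                           [2, 5, 8, 12, 17],
                           [3, 8, 13, 18, 26],
                           [4, 11, 18, 25, 36]], dtype=float)
        dprime = np.array([[6, 12, 15, 17, 19],
                           [7, 13, 16, 18, 21],
                           [8, 14, 17, 19, 22],
                           [8, 14, 17, 20, 23]], dtype=float)

        d_required = 16.0
        feasible = dprime >= d_required                   # detectability constraint
        masked_dose = np.where(feasible, dose, np.inf)    # exclude infeasible settings
        i, j = np.unravel_index(np.argmin(masked_dose), dose.shape)
        print(f"optimal setting: {kvs[i]} kV, {mas[j]} mA "
              f"(dose {dose[i, j]} mGy, d' {dprime[i, j]})")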

  1. An inverse problem of determining the implied volatility in option pricing

    NASA Astrophysics Data System (ADS)

    Deng, Zui-Cha; Yu, Jian-Ning; Yang, Liu

    2008-04-01

    In the Black-Scholes world, volatility is an important quantity which cannot be observed directly but has a major impact on the option value. In practice, traders usually work with what is known as implied volatility, which is implied by option prices observed in the market. In this paper, we use an optimal control framework to discuss an inverse problem of determining the implied volatility when the average option premium, namely the average value of the option premium corresponding to a fixed strike price and all possible maturities from the current time to a chosen future time, is known. The issue is converted into a terminal control problem by the Green function method. The existence and uniqueness of the minimum of the control functional are addressed by the optimal control method, and the necessary condition which must be satisfied by the minimum is also given. The results obtained in the paper may be useful for those who engage in risk management or volatility trading.
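
    For orientation, the standard single-option implied-volatility inversion (root-finding on the Black-Scholes formula) is sketched below; this is the conventional market practice mentioned in the abstract, not the paper's optimal-control formulation based on the average option premium, and the inputs are illustrative placeholders.

        # Standard Black-Scholes implied volatility for a single European call (not the
        # optimal-control method of the paper). Inputs below are illustrative placeholders.
        from math import log, sqrt, exp
        from scipy.optimize import brentq
        from scipy.stats import norm

        def bs_call(S, K, r, T, sigma):
            d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
            d2 = d1 - sigma * sqrt(T)
            return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

        def implied_vol(price, S, K, r, T):
            # Find sigma such that the Black-Scholes price matches the observed price.
            return brentq(lambda s: bs_call(S, K, r, T, s) - price, 1e-4, 5.0)

        print(implied_vol(price=10.45, S=100.0, K=100.0, r=0.05, T=1.0))  # ~0.20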

  2. Stochastic modelling of slow-progressing tumors: Analysis and applications to the cell interplay and control of low grade gliomas

    NASA Astrophysics Data System (ADS)

    Rodríguez, Clara Rojas; Fernández Calvo, Gabriel; Ramis-Conde, Ignacio; Belmonte-Beitia, Juan

    2017-08-01

    Tumor-normal cell interplay defines the course of a neoplastic malignancy. The outcome of this dual relation is the ultimate prevailing of one of the cells and the death or retreat of the other. In this paper we study the mathematical principles that underlie one important scenario: that of slow-progressing cancers. For this, we develop, within a stochastic framework, a mathematical model to account for tumor-normal cell interaction in such a clinically relevant situation and derive a number of deterministic approximations from the stochastic model. We consider in detail the existence and uniqueness of the solutions of the deterministic model and study their stability. We then focus our model on the specific case of low grade gliomas, where we introduce an optimal control problem for different objective functionals under the administration of chemotherapy. We derive the conditions under which singular and bang-bang controls exist and calculate the optimal control and states.

  3. Evaluating large-scale propensity score performance through real-world and synthetic data experiments.

    PubMed

    Tian, Yuxi; Schuemie, Martijn J; Suchard, Marc A

    2018-06-22

    Propensity score adjustment is a popular approach for confounding control in observational studies. Reliable frameworks are needed to determine relative propensity score performance in large-scale studies, and to establish optimal propensity score model selection methods. We detail a propensity score evaluation framework that includes synthetic and real-world data experiments. Our synthetic experimental design extends the 'plasmode' framework and simulates survival data under known effect sizes, and our real-world experiments use a set of negative control outcomes with presumed null effect sizes. In reproductions of two published cohort studies, we compare two propensity score estimation methods that contrast in their model selection approach: L1-regularized regression that conducts a penalized likelihood regression, and the 'high-dimensional propensity score' (hdPS) that employs a univariate covariate screen. We evaluate methods on a range of outcome-dependent and outcome-independent metrics. L1-regularization propensity score methods achieve superior model fit, covariate balance and negative control bias reduction compared with the hdPS. Simulation results are mixed and fluctuate with simulation parameters, revealing a limitation of simulation under the proportional hazards framework. Including regularization with the hdPS reduces commonly reported non-convergence issues but has little effect on propensity score performance. L1-regularization incorporates all covariates simultaneously into the propensity score model and offers propensity score performance superior to the hdPS marginal screen.
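
    A bare-bones version of the L1-regularized propensity score estimation compared in the study can be sketched with scikit-learn; the simulated covariates and treatment assignment below are placeholders, and the full pipeline (negative controls, hdPS screen, survival outcomes) is not reproduced.

        # Minimal L1-regularized propensity score sketch on simulated data (placeholders only).
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)
        n, p = 2000, 50
        X = rng.standard_normal((n, p))                          # baseline covariates
        logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]                    # only two true confounders
        treated = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))  # treatment assignment

        # The L1 penalty performs covariate selection while fitting the propensity model.
        model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        model.fit(X, treated)
        propensity = model.predict_proba(X)[:, 1]                # estimated propensity scores

        print("nonzero coefficients:", int(np.sum(model.coef_ != 0)))
        print("propensity score range:", propensity.min().round(3), propensity.max().round(3))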

  4. Experimental analysis of chaotic neural network models for combinatorial optimization under a unifying framework.

    PubMed

    Kwok, T; Smith, K A

    2000-09-01

    The aim of this paper is to study both the theoretical and experimental properties of chaotic neural network (CNN) models for solving combinatorial optimization problems. Previously we have proposed a unifying framework which encompasses the three main model types, namely, Chen and Aihara's chaotic simulated annealing (CSA) with decaying self-coupling, Wang and Smith's CSA with decaying timestep, and the Hopfield network with chaotic noise. Each of these models can be represented as a special case under the framework for certain conditions. This paper combines the framework with experimental results to provide new insights into the effect of the chaotic neurodynamics of each model. By solving the N-queen problem of various sizes with computer simulations, the CNN models are compared in different parameter spaces, with optimization performance measured in terms of feasibility, efficiency, robustness and scalability. Furthermore, characteristic chaotic neurodynamics crucial to effective optimization are identified, together with a guide to choosing the corresponding model parameters.

  5. Optimal moment determination in POME-copula based hydrometeorological dependence modelling

    NASA Astrophysics Data System (ADS)

    Liu, Dengfeng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi

    2017-07-01

    Copula has been commonly applied in multivariate modelling in various fields where marginal distribution inference is a key element. To develop a flexible, unbiased mathematical inference framework in hydrometeorological multivariate applications, the principle of maximum entropy (POME) is being increasingly coupled with copula. However, in previous POME-based studies, determination of optimal moment constraints has generally not been considered. The main contribution of this study is the determination of optimal moments for POME for developing a coupled optimal moment-POME-copula framework to model hydrometeorological multivariate events. In this framework, margins (marginals, or marginal distributions) are derived with the use of POME, subject to optimal moment constraints. Then, various candidate copulas are constructed according to the derived margins, and finally the most probable one is determined, based on goodness-of-fit statistics. This optimal moment-POME-copula framework is applied to model the dependence patterns of three types of hydrometeorological events: (i) single-site streamflow-water level; (ii) multi-site streamflow; and (iii) multi-site precipitation, with data collected from Yichang and Hankou in the Yangtze River basin, China. Results indicate that the optimal-moment POME is more accurate in margin fitting and the corresponding copulas show good statistical performance in correlation simulation. Also, the derived copulas, which capture more patterns than traditional correlation coefficients can reflect, provide an efficient approach for other applied scenarios involving hydrometeorological multivariate modelling.

  6. Poster — Thur Eve — 61: A new framework for MPERT plan optimization using MC-DAO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, M; Lloyd, S AM; Townson, R

    2014-08-15

    This work combines the inverse planning technique known as Direct Aperture Optimization (DAO) with Intensity Modulated Radiation Therapy (IMRT) and combined electron and photon therapy plans. In particular, determining conditions under which Modulated Photon/Electron Radiation Therapy (MPERT) produces better dose conformality and sparing of organs at risk than traditional IMRT plans is central to the project. Presented here are the materials and methods used to generate and manipulate the DAO procedure. Included is the introduction of a powerful Java-based toolkit, the Aperture-based Monte Carlo (MC) MPERT Optimizer (AMMO), that serves as a framework for optimization and provides streamlined access to underlying particle transport packages. Comparison of the toolkit's dose calculations to those produced by the Eclipse TPS and the demonstration of a preliminary optimization are presented as first benchmarks. Excellent agreement is illustrated between the Eclipse TPS and AMMO for a 6MV photon field. The results of a simple optimization show the functioning of the optimization framework, while significant research remains to characterize appropriate constraints.

  7. Optimal control of soybean aphid in the presence of natural enemies and the implied value of their ecosystem services.

    PubMed

    Zhang, Wei; Swinton, Scott M

    2012-04-15

    By suppressing pest populations, natural enemies provide an important ecosystem service that maintains the stability of agricultural ecosystems and potentially mitigates producers' pest control costs. Integrating natural control services into decisions about pesticide-based control has the potential to significantly improve the economic efficiency of pesticide use, with socially desirable outcomes. Two gaps have hindered the incorporation of natural enemies into pest management decision rules: (1) insufficient knowledge of pest and predator population dynamics and (2) lack of a decision framework for the economic tradeoffs among pest control options. Using a new intra-seasonal, dynamic bioeconomic optimization model, this study assesses how predation by natural enemies contributes to profit-maximizing pest management strategies. The model is applied to the management of the invasive soybean aphid, the most serious insect threat to soybean production in North America. The resulting lower bound estimate of the value of natural pest control ecosystem services was $84 million for the states of Illinois, Indiana, Iowa, Michigan and Minnesota in 2005. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. An algorithmic framework for multiobjective optimization.

    PubMed

    Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with more than two objectives. In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization.

  9. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with more than two objectives. In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
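
    The weighted-sum scalarization mentioned above can be illustrated in a few lines: sweep the weight between two objectives and minimize each scalarized problem to trace an approximate Pareto front. The toy objectives below are hypothetical and unrelated to the paper's benchmark problems.

        # Weighted-sum scalarization sketch for a toy two-objective problem.
        import numpy as np
        from scipy.optimize import minimize_scalar

        f1 = lambda x: (x - 1.0) ** 2          # objective 1 (hypothetical)
        f2 = lambda x: (x + 1.0) ** 2          # objective 2 (hypothetical)

        pareto = []
        for w in np.linspace(0.0, 1.0, 11):
            res = minimize_scalar(lambda x: w * f1(x) + (1.0 - w) * f2(x))
            pareto.append((round(f1(res.x), 3), round(f2(res.x), 3)))

        print(pareto)   # trade-off points between the two objectives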

  10. An Integrated Framework for Parameter-based Optimization of Scientific Workflows.

    PubMed

    Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel

    2009-01-01

    Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.

  11. Safer passenger car front shapes for pedestrians: A computational approach to reduce overall pedestrian injury risk in realistic impact scenarios.

    PubMed

    Li, Guibing; Yang, Jikuang; Simms, Ciaran

    2017-03-01

    Vehicle front shape has a significant influence on pedestrian injuries and the optimal design for overall pedestrian protection remains an elusive goal, especially considering the variability of vehicle-to-pedestrian accident scenarios. Therefore this study aims to develop and evaluate an efficient framework for vehicle front shape optimization for pedestrian protection accounting for the broad range of real world impact scenarios and their distributions in recent accident data. Firstly, a framework for vehicle front shape optimization for pedestrian protection was developed based on coupling of multi-body simulations and a genetic algorithm. This framework was then applied for optimizing passenger car front shape for pedestrian protection, and its predictions were evaluated using accident data and kinematic analyses. The results indicate that the optimization shows a good convergence and predictions of the optimization framework are corroborated when compared to the available accident data, and the optimization framework can distinguish 'good' and 'poor' vehicle front shapes for pedestrian safety. Thus, it is feasible and reliable to use the optimization framework for vehicle front shape optimization for reducing overall pedestrian injury risk. The results also show the importance of considering the broad range of impact scenarios in vehicle front shape optimization. A safe passenger car for overall pedestrian protection should have a wide and flat bumper (covering pedestrians' legs from the lower leg up to the shaft of the upper leg with generally even contacts), a bonnet leading edge height around 750mm, a short bonnet (<800mm) with a shallow or steep angle (either >17° or <12°) and a shallow windscreen (≤30°). Sensitivity studies based on simulations at the population level indicate that the demands for a safe passenger car front shape for head and leg protection are generally consistent, but partially conflict with pelvis protection. In particular, both head and leg injury risk increase with increasing bumper lower height and depth, and decrease with increasing bonnet leading edge height, while pelvis injury risk increases with increasing bonnet leading edge height. However, the effects of bonnet leading edge height and windscreen design on head injury risk are complex and require further analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Design optimization for active twist rotor blades

    NASA Astrophysics Data System (ADS)

    Mok, Ji Won

    This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with locations of the center of gravity and elastic axis, blade mass per unit span, fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst loading conditions. Through pre-processing, limitations of the proposed process have been studied. When limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation of the mesh generator, and the initial-point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and showed a significant impact on vibration reduction, the proposed optimization process showed that the design could be improved significantly. The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to explore the nonlinear design space of a complex planform. Especially for this case, detailed design is carried out to make the actual blade manufacturable. The proposed optimization framework is shown to be an effective tool to design high authority active twist blades to reduce vibration in future helicopter rotor blades.

  13. Dynamic optimization and its relation to classical and quantum constrained systems

    NASA Astrophysics Data System (ADS)

    Contreras, Mauricio; Pellicer, Rely; Villena, Marcelo

    2017-08-01

    We study the structure of a simple dynamic optimization problem consisting of one state and one control variable, from a physicist's point of view. By using an analogy to a physical model, we study this system in the classical and quantum frameworks. Classically, the dynamic optimization problem is equivalent to a classical mechanics constrained system, so we must use the Dirac method to analyze it in a correct way. We find that there are two second-class constraints in the model: one fixes the momenta associated with the control variables, and the other is a reminder of the optimal control law. The dynamic evolution of this constrained system is given by the Dirac bracket of the canonical variables with the Hamiltonian. This dynamics turns out to be identical to the unconstrained one given by the Pontryagin equations, which are the correct classical equations of motion for our physical optimization problem. In the same Pontryagin scheme, by imposing a closed-loop λ-strategy, the optimality condition for the action gives a consistency relation, which is associated with the Hamilton-Jacobi-Bellman equation of the dynamic programming method. A similar result is achieved by quantizing the classical model. By setting the wave function Ψ(x,t) = e^{iS(x,t)} in the quantum Schrödinger equation, a non-linear partial differential equation is obtained for the S function. For the right-hand side quantization, this is the Hamilton-Jacobi-Bellman equation, when S(x,t) is identified with the optimal value function. Thus, the Hamilton-Jacobi-Bellman equation in Bellman's maximum principle can be interpreted as the quantum approach to the optimization problem.
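
    The substitution mentioned above can be made explicit; the following lines sketch it for a standard single-particle Schrödinger Hamiltonian (with ħ kept explicit), which may differ in detail from the optimization Hamiltonian used in the paper:

        \Psi(x,t) = e^{\,iS(x,t)/\hbar}, \qquad
        i\hbar\,\partial_t \Psi = -\frac{\hbar^2}{2m}\,\partial_{xx}\Psi + V(x)\,\Psi
        \;\Longrightarrow\;
        -\partial_t S = \frac{(\partial_x S)^2}{2m} + V(x) - \frac{i\hbar}{2m}\,\partial_{xx} S .

    In the limit ħ → 0 the imaginary term drops out and the classical Hamilton-Jacobi equation is recovered; identifying S with the optimal value function then gives the Hamilton-Jacobi-Bellman interpretation described above.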

  14. Adaptive Multi-Agent Systems for Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the (probability distribution of) the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing, that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.

  15. Optimism and well-being: A prospective multi-method and multi-dimensional examination of optimism as a resilience factor following the occurrence of stressful life events

    PubMed Central

    Kleiman, Evan M.; Chiara, Alexandra M.; Liu, Richard T.; Jager-Hyman, Shari G.; Choi, Jimmy Y.; Alloy, Lauren B.

    2016-01-01

    Optimism has been conceptualized variously as positive expectations for the future (Scheier & Carver, 1985), optimistic attributions (Peterson & Seligman, 1984), illusion of control (Alloy & Abramson, 1979), and self-enhancing biases (Weinstein, 1980). Relatively little research has examined these multiple dimensions of optimism in relation to psychological and physical health. The current study assessed the multidimensional nature of optimism within a prospective vulnerability-stress framework. Initial principal component analyses revealed the following dimensions: Positive Expectations (PE), Inferential Style (IS), Sense of Invulnerability (SI), and Overconfidence (O). Prospective follow-up analyses demonstrated that PE was associated with fewer depressive episodes and moderated the effect of stressful life events on depressive symptoms. SI also moderated the effect of life stress on anxiety symptoms. Generally, our findings indicated that optimism is a multifaceted construct and that not all forms of optimism have the same effects on well-being. Specifically, our findings indicated that PE may be the most relevant to depression, whereas SI may be the most relevant to anxiety. PMID:26558316

  16. Fleet Assignment Using Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Antoine, Nicolas E.; Bieniawski, Stefan R.; Kroo, Ilan M.; Wolpert, David H.

    2004-01-01

    Product distribution theory is a new collective intelligence-based framework for analyzing and controlling distributed systems. Its usefulness in distributed stochastic optimization is illustrated here through an airline fleet assignment problem. This problem involves the allocation of aircraft to a set of flight legs in order to meet passenger demand, while satisfying a variety of linear and non-linear constraints. Over the course of the day, the routing of each aircraft is determined in order to minimize the number of required flights for a given fleet. The associated flow continuity and aircraft count constraints have led researchers to focus on obtaining quasi-optimal solutions, especially at larger scales. In this paper, the authors propose the application of this new stochastic optimization algorithm to a non-linear objective cold start fleet assignment problem. Results show that the optimizer can successfully solve such highly constrained problems (130 variables, 184 constraints).

  17. Optimal Control of Connected and Automated Vehicles at Roundabouts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Liuhui; Malikopoulos, Andreas; Rios-Torres, Jackeline

    Connectivity and automation in vehicles provide an intriguing opportunity for enabling users to better monitor transportation network conditions and make better operating decisions to improve safety and reduce pollution, energy consumption, and travel delays. This study investigates the implications of optimally coordinating vehicles that are wirelessly connected to each other and to an infrastructure in roundabouts to achieve a smooth traffic flow without stop-and-go driving. We apply an optimization framework and an analytical solution that allows optimal coordination of vehicles for merging in such traffic scenarios. The effectiveness of the proposed approach is validated through simulation, and it is shown that coordination of vehicles can reduce total travel time by 3-49% and fuel consumption by 2-27% across different traffic levels. In addition, network throughput is improved by up to 25% due to the elimination of stop-and-go driving behavior.

  18. Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DallAnese, Emiliano; Baker, Kyri; Summers, Tyler

    This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
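
    A quick numerical illustration of how a chance constraint is typically replaced by a deterministic margin. This toy assumes Gaussian forecast errors purely for illustration (the paper's bounds are distribution-free), and the sensitivity, error statistics and limits below are invented numbers, not values from the study.

    ```python
    # Toy reformulation of Pr(voltage <= v_max) >= 1 - eps under a Gaussian error assumption.
    import numpy as np
    from scipy.stats import norm

    v_nom = 1.02        # nominal voltage magnitude (p.u.) from a linearized power-flow model
    sens = 0.004        # assumed sensitivity of voltage to net-injection forecast error (p.u. per kW)
    sigma_err = 3.0     # standard deviation of the RES/load forecast error (kW)
    v_max, eps = 1.05, 0.05   # voltage limit and allowed violation probability

    # The chance constraint Pr(v_nom + sens*err <= v_max) >= 1 - eps becomes a fixed margin:
    margin = norm.ppf(1 - eps) * sens * sigma_err
    ok = v_nom + margin <= v_max
    print("required margin %.4f p.u.; constraint %s" % (margin, "satisfied" if ok else "violated"))
    ```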

  19. Optimal population prediction of sandhill crane recruitment based on climate-mediated habitat limitations.

    PubMed

    Gerber, Brian D; Kendall, William L; Hooten, Mevin B; Dubovsky, James A; Drewien, Roderick C

    2015-09-01

    1. Prediction is fundamental to scientific enquiry and application; however, ecologists tend to favour explanatory modelling. We discuss a predictive modelling framework to evaluate ecological hypotheses and to explore novel/unobserved environmental scenarios to assist conservation and management decision-makers. We apply this framework to develop an optimal predictive model for juvenile (<1 year old) sandhill crane Grus canadensis recruitment of the Rocky Mountain Population (RMP). We consider spatial climate predictors motivated by hypotheses of how drought across multiple time-scales and spring/summer weather affects recruitment. 2. Our predictive modelling framework focuses on developing a single model that includes all relevant predictor variables, regardless of collinearity. This model is then optimized for prediction by controlling model complexity using a data-driven approach that marginalizes or removes irrelevant predictors from the model. Specifically, we highlight two approaches of statistical regularization, Bayesian least absolute shrinkage and selection operator (LASSO) and ridge regression. 3. Our optimal predictive Bayesian LASSO and ridge regression models were similar and on average 37% superior in predictive accuracy to an explanatory modelling approach. Our predictive models confirmed a priori hypotheses that drought and cold summers negatively affect juvenile recruitment in the RMP. The effects of long-term drought can be alleviated by short-term wet spring-summer months; however, the alleviation of long-term drought has a much greater positive effect on juvenile recruitment. The number of freezing days and snowpack during the summer months can also negatively affect recruitment, while spring snowpack has a positive effect. 4. Breeding habitat, mediated through climate, is a limiting factor on population growth of sandhill cranes in the RMP, which could become more limiting with a changing climate (i.e. increased drought). These effects are likely not unique to cranes. The alteration of hydrological patterns and water levels by drought may impact many migratory, wetland nesting birds in the Rocky Mountains and beyond. 5. Generalizable predictive models (trained by out-of-sample fit and based on ecological hypotheses) are needed by conservation and management decision-makers. Statistical regularization improves predictions and provides a general framework for fitting models with a large number of predictors, even those with collinearity, to simultaneously identify an optimal predictive model while conducting rigorous Bayesian model selection. Our framework is important for understanding population dynamics under a changing climate and has direct applications for making harvest and habitat management decisions. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
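
    As a minimal illustration of the regularization idea (not the authors' Bayesian implementation), the scikit-learn sketch below fits LASSO and ridge regressions to synthetic, deliberately collinear "climate" predictors; every variable name and coefficient here is invented.

    ```python
    # Regularized prediction with collinear predictors: LASSO shrinks irrelevant coefficients to zero,
    # ridge keeps them all but shrinks their magnitudes (frequentist stand-ins for the Bayesian versions).
    import numpy as np
    from sklearn.linear_model import LassoCV, RidgeCV
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n, p = 40, 12                                   # years of data, candidate climate predictors
    X = rng.normal(size=(n, p))
    X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)    # deliberately collinear pair (e.g. two drought indices)
    beta = np.zeros(p)
    beta[[0, 3, 7]] = [-0.8, 0.5, -0.3]             # only a few predictors actually matter
    y = X @ beta + 0.3 * rng.normal(size=n)         # synthetic recruitment index

    Xs = StandardScaler().fit_transform(X)
    lasso = LassoCV(cv=5).fit(Xs, y)
    ridge = RidgeCV(alphas=np.logspace(-3, 3, 25)).fit(Xs, y)
    print("LASSO nonzero coefficients:", np.flatnonzero(lasso.coef_))
    print("ridge coefficients:", np.round(ridge.coef_, 2))
    ```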

  20. Research on regularized mean-variance portfolio selection strategy with modified Roy safety-first principle.

    PubMed

    Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan

    2016-01-01

    We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first principle for financial portfolio selection is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal, stable and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies have better out-of-sample performance than the selected alternative portfolio rules in the literature and control the downside risk of the portfolio returns.
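
    The sketch below shows the general shape of a norm-regularized mean-variance problem, not the paper's consolidated variance/safety-first measure: synthetic returns, an L2 penalty on the weights, and a full-investment constraint; all figures are invented.

    ```python
    # Norm-regularized mean-variance portfolio: minimize risk minus expected return plus an L2 penalty
    # that discourages extreme, unstable weights (a stand-in for the paper's norm regularization).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    R = rng.normal(0.0005, 0.01, size=(250, 8))       # synthetic daily returns for 8 assets
    mu, Sigma = R.mean(axis=0), np.cov(R, rowvar=False)
    risk_aversion, gamma = 5.0, 0.1                   # risk/return trade-off and penalty weight

    def objective(w):
        return risk_aversion * (w @ Sigma @ w) - mu @ w + gamma * (w @ w)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)   # fully invested
    res = minimize(objective, np.full(8, 1 / 8), constraints=cons)
    print("weights:", np.round(res.x, 3), "sum:", round(res.x.sum(), 3))
    ```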

  1. Integrated chassis control for a three-axle electric bus with distributed driving motors and active rear steering system

    NASA Astrophysics Data System (ADS)

    Liu, Wei; He, Hongwen; Sun, Fengchun; Lv, Jiangyi

    2017-05-01

    This paper describes an integrated chassis control framework for a novel three-axle electric bus with an active rear steering (ARS) axle and four motors at the middle and rear wheels. The proposed integrated framework consists of four parts: (1) an active speed limiting controller is designed for anti-body-slip control and rollover prevention; (2) an ARS controller is designed for coordinating the tyre wear between the driving wheels; (3) an inter-axle torque distribution controller is designed for optimal torque distribution between the axles, considering anti-wheel-slip and battery power limitations; and (4) a data acquisition and estimation module collects the measured and estimated vehicle states. To verify the performance, a simulation platform is established in TruckSim combined with Simulink. Three test cases are specifically designed to show the performance. The proposed algorithm is compared with a simple even-distribution control algorithm. The test results show satisfactory lateral stability and rollover prevention performance under severe steering conditions. The desired tyre wear coordination performance is also realised, and the wheel slip ratios are restricted to the stable region during intensive driving and emergency braking under complicated road conditions.

  2. Feasibility of Decentralized Linear-Quadratic-Gaussian Control of Autonomous Distributed Spacecraft

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    1999-01-01

    A distributed satellite formation, modeled as an arbitrary number of fully connected nodes in a network, could be controlled using a decentralized controller framework that distributes operations in parallel over the network. For such problems, a solution that minimizes data transmission requirements, in the context of linear-quadratic-Gaussian (LQG) control theory, was given by Speyer. This approach is advantageous because it is non-hierarchical, detected failures gracefully degrade system performance, fewer local computations are required than for a centralized controller, and it is optimal with respect to the standard LQG cost function. Disadvantages of the approach are that it requires a fully connected communications network, that the total number of operations performed over all the nodes is greater than for a centralized controller, and that it is formulated for linear time-invariant systems. To investigate the feasibility of the decentralized approach to satellite formation flying, a simple centralized LQG design for a spacecraft orbit control problem is adapted to the decentralized framework. The simple design uses a fixed reference trajectory (an equatorial, Keplerian, circular orbit), and by appropriate choice of coordinates and measurements is formulated as a linear time-invariant system.
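
    To make the "standard LQG cost function" concrete, the sketch below computes the centralized discrete-time LQR gain for a toy double-integrator orbit-error model; the matrices are illustrative stand-ins, and Speyer's decentralization of the estimator and controller across nodes is not reproduced.

    ```python
    # Centralized discrete-time LQR gain via the algebraic Riccati equation (toy spacecraft axis model).
    import numpy as np
    from scipy.linalg import solve_discrete_are

    dt = 1.0
    A = np.array([[1.0, dt], [0.0, 1.0]])     # position/velocity error propagation
    B = np.array([[0.5 * dt**2], [dt]])       # thrust acceleration input
    Q = np.diag([1.0, 0.1])                   # penalize position error more than velocity error
    R = np.array([[10.0]])                    # penalize control (propellant) use

    P = solve_discrete_are(A, B, Q, R)                       # steady-state Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)        # optimal state-feedback gain, u = -K x
    print("LQR gain K:", np.round(K, 4))
    ```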

  3. Consolidating strategic planning and operational frameworks for integrated vector management in Eritrea.

    PubMed

    Chanda, Emmanuel; Ameneshewa, Birkinesh; Mihreteab, Selam; Berhane, Araia; Zehaie, Assefash; Ghebrat, Yohannes; Usman, Abdulmumini

    2015-12-02

    Contemporary malaria vector control relies on the use of insecticide-based indoor residual spraying (IRS) and long-lasting insecticidal nets (LLINs). However, malaria-endemic countries, including Eritrea, have struggled to deploy these tools effectively due to technical and operational challenges, including the selection of insecticide resistance in malaria vectors. This manuscript outlines the processes undertaken in consolidating strategic planning and operational frameworks for vector control to expedite malaria elimination in Eritrea. The effort to strengthen strategic frameworks for vector control in Eritrea was the 'case' for this study. The integrated vector management (IVM) strategy was developed in 2010 but was not well executed, resulting in a rise in malaria transmission and prompting a process to redefine and relaunch the IVM strategy with the integration of other vector-borne diseases (VBDs) as the focus. The information sources for this study included all available data and accessible archived documentary records on malaria vector control in Eritrea. Structured literature searches of published, peer-reviewed sources using online scientific bibliographic databases, Google Scholar, PubMed and WHO, and a combination of search terms were utilized to gather data. The literature was reviewed, adapted to the local context and translated into the consolidated strategic framework. In Eritrea, communities are grappling with the challenge of VBDs of public health concern, including malaria. The Global Fund financed the scale-up of IRS and LLIN programmes in 2014. Eritrea is transitioning towards malaria elimination, and strategic frameworks for vector control have been consolidated by: developing an integrated vector management (IVM) strategy (2015-2019); updating IRS and larval source management (LSM) guidelines; developing training manuals for IRS and LSM; training national staff in malaria entomology and vector control, including insecticide resistance monitoring techniques; initiating the global plan for insecticide resistance management; conducting needs assessments and developing standard operating procedures for insectaries; and developing a guidance document on malaria vector control based on eco-epidemiological strata, a vector surveillance plan, and harmonized mapping, data collection and reporting tools. Eritrea has successfully consolidated strategic frameworks for vector control. Rational decision-making remains critical to ensure that the interventions are effective and their choice is evidence-based, and to optimize the use of resources for vector control. Implementation of effective IVM requires proper collaboration and coordination, and consistent technical and financial capacity and support, to offer greater benefits.

  4. Network planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2008-11-01

    One of the main focuses of network planning is the optimization of the network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. The failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that is insensitive to the uncertain conditions during the network planning process. As early as the 1960s, network planning problems with traffic requirements varying over time were studied. This kind of network planning problem is still being actively researched today, especially for VPN network design. Another kind of network planning problem under uncertainty that has been studied actively in the past decade addresses the fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network under uncertainties brought by fluctuations in topology, to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called the Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework covering more general uncertainty conditions that allows the problems to be solved in a more systematic way. With a unified framework, the seemingly diverse models and algorithms can be intimately related, and possibly more insights and improvements can be brought out for solving the problem. This motivates us to seek a generic framework for solving the network planning problem under uncertainties. In addition to reviewing the various network planning problems involving uncertainties, we also propose that a unified framework based on robust optimization can be used to solve a rather large segment of network planning problems under uncertainties. Robust optimization was first introduced in the operations research literature and is a framework that incorporates information about the uncertainty sets for the parameters in the optimization model. Even though robust optimization originated from tackling uncertainty in the optimization process, it can serve as a comprehensive and suitable framework for tackling generic network planning problems under uncertainties. In this paper, we begin by explaining the main ideas behind the robust optimization approach. Then we demonstrate the capabilities of the proposed framework by giving examples of how the robust optimization framework can be applied to common network planning problems under uncertain environments. Next, we list some practical considerations for solving the network planning problem under uncertainties with the proposed framework. Finally, we conclude this article with some thoughts on future directions for applying this framework to other network planning problems.
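
    As a minimal, self-contained example of the robust-optimization recipe the article advocates (replacing an uncertain constraint by its worst-case counterpart), the sketch below provisions two links against an interval-uncertain demand; the numbers are invented and the real problems in the article are far richer.

    ```python
    # Robust capacity planning toy: serve every demand in [d_nom - d_dev, d_nom + d_dev] at minimum cost.
    # With box uncertainty, the robust counterpart simply binds at the worst-case demand d_nom + d_dev.
    import numpy as np
    from scipy.optimize import linprog

    cost = np.array([1.0, 1.6])          # per-unit capacity cost on link 1 and link 2
    d_nom, d_dev = 10.0, 3.0             # nominal demand and uncertainty half-width
    cap_max = [8.0, 12.0]                # physical upper bounds on each link

    res = linprog(c=cost,
                  A_ub=[[-1.0, -1.0]], b_ub=[-(d_nom + d_dev)],   # x1 + x2 >= worst-case demand
                  bounds=[(0, cap_max[0]), (0, cap_max[1])])
    print("robust capacities:", np.round(res.x, 2), "total cost:", round(res.fun, 2))
    ```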

  5. Optimizing Wartime Materiel Delivery: An Overview of DoD containerization. Volume 2. Framework for Action to Address DoD Containerization Issues

    DOT National Transportation Integrated Search

    1988-10-01

    This second volume of the study, entitled Optimizing Wartime Materiel Delivery: An Overview of DOD Containerization Efforts, outlines a framework for action to address containerization issues identified in Volume I. The objectives of the study inclu...

  6. NEMA NU 4-Optimized Reconstructions for Therapy Assessment in Cancer Research with the Inveon Small Animal PET/CT System.

    PubMed

    Lasnon, Charline; Dugue, Audrey Emmanuelle; Briand, Mélanie; Blanc-Fournier, Cécile; Dutoit, Soizic; Louis, Marie-Hélène; Aide, Nicolas

    2015-06-01

    We compared conventional filtered back-projection (FBP), two-dimensional ordered-subsets expectation maximization (OSEM) and maximum a posteriori (MAP) NEMA NU 4-optimized reconstructions for therapy assessment. Varying reconstruction settings were used to determine the parameters for optimal image quality with two NEMA NU 4 phantom acquisitions. Subsequently, data from two experiments in which nude rats bearing subcutaneous tumors had received a dual PI3K/mTOR inhibitor were reconstructed with the NEMA NU 4-optimized parameters. Mann-Whitney tests were used to compare mean standardized uptake value (SUV(mean)) variations among groups. All NEMA NU 4-optimized reconstructions showed the same 2-deoxy-2-[(18)F]fluoro-D-glucose ([(18)F]FDG) kinetic patterns and detected a significant difference in SUV(mean) relative to day 0 between control and treated groups at all time points, with comparable p values. In the framework of therapy assessment in rats bearing subcutaneous tumors, all algorithms available on the Inveon system performed equally.

  7. WE-EF-207-01: FEATURED PRESENTATION and BEST IN PHYSICS (IMAGING): Task-Driven Imaging for Cone-Beam CT in Interventional Guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gang, G; Stayman, J; Ouadah, S

    2015-06-15

    Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction, and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions, and the coefficients are then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)

  8. Challenges in the Clinical Application of the American Society of Clinical Oncology Value Framework: A Medicare Cost-Benefit Analysis in Chronic Lymphocytic Leukemia.

    PubMed

    Seymour, Erlene K; Schiffer, Charles A; de Souza, Jonas A

    2017-12-01

    The ASCO Value Framework calculates the value of cancer therapies. Given costly novel therapeutics for chronic lymphocytic leukemia, we used the framework to compare the net health benefit (NHB) and cost within Medicare of all regimens listed in the National Comprehensive Cancer Network (NCCN) guidelines. The current NCCN guidelines for chronic lymphocytic leukemia were reviewed. All referenced studies were screened, and only randomized controlled prospective trials were included. The revised ASCO Value Framework was used to calculate NHB. Medicare drug pricing was used to calculate the cost of therapies. Forty-nine studies were screened. The following observations were made: only 10 studies (20%) could be evaluated; when comparing regimens studied against the same control arm, the ranking of NHB scores was comparable to their preference in guidelines; NHB scores varied depending on which variables were used, and there were no clinically validated thresholds for low or high values; treatment-related deaths were not weighted in the toxicity scores; and six of the 10 studies used less potent control arms, ranked as the least-preferred NCCN-recommended regimens. The ASCO Value Framework is an important initial step toward quantifying the value of therapies. Essential limitations include the lack of clinically relevant validated thresholds for NHB scores and the lack of incorporation of grade 5 toxicities/treatment-related mortality into its methodology. To optimize its application for clinical practice, we urge investigators/sponsors to incorporate and report the variables required to calculate the NHB of regimens and encourage trials with stronger comparator arms to properly quantify the relative value of therapies.

  9. Bimetallic Metal-Organic Frameworks for Controlled Catalytic Graphitization of Nanoporous Carbons

    PubMed Central

    Tang, Jing; Salunkhe, Rahul R.; Zhang, Huabin; Malgras, Victor; Ahamad, Tansir; Alshehri, Saad M.; Kobayashi, Naoya; Tominaka, Satoshi; Ide, Yusuke; Kim, Jung Ho; Yamauchi, Yusuke

    2016-01-01

    Single metal-organic frameworks (MOFs), constructed from the coordination between a single type of metal ion and organic linkers, show limited functionalities when used as precursors for nanoporous carbon materials. Herein, we propose to merge the advantages of zinc and cobalt metal ions into one single MOF crystal (i.e., bimetallic MOFs). The organic linkers that coordinate with cobalt ions tend to yield graphitic carbons after carbonization, unlike those bridging with zinc ions, due to the controlled catalytic graphitization by the cobalt nanoparticles. In this work, we demonstrate a feasible method to achieve nanoporous carbon materials with tailored properties, including specific surface area, pore size distribution, degree of graphitization, and content of heteroatoms. The bimetallic-MOF-derived nanoporous carbons are systematically characterized, highlighting the importance of precisely controlling the properties of the carbon materials. This can be done by finely tuning the components in the bimetallic MOF precursors, and thus designing optimal carbon materials for specific applications. PMID:27471193

  10. FPGA-Based Efficient Hardware/Software Co-Design for Industrial Systems with Consideration of Output Selection

    NASA Astrophysics Data System (ADS)

    Deliparaschos, Kyriakos M.; Michail, Konstantinos; Zolotas, Argyrios C.; Tzafestas, Spyros G.

    2016-05-01

    This work presents a field programmable gate array (FPGA)-based embedded software platform coupled with a software-based plant, forming a hardware-in-the-loop (HIL) setup that is used to validate a systematic sensor selection framework. The systematic sensor selection framework combines multi-objective optimization, linear-quadratic-Gaussian (LQG)-type control, and the nonlinear model of a maglev suspension. A robustness analysis of the closed loop follows (prior to implementation), supporting the appropriateness of the solution under parametric variation. The analysis also shows that quantization is robust under different controller gains. While the LQG controller is implemented on an FPGA, the physical process is realized in a high-level system modeling environment. FPGA technology enables rapid evaluation of the algorithms and test designs under realistic scenarios, avoiding the heavy time penalty associated with hardware description language (HDL) simulators. The HIL technique facilitates a significant speed-up in the required execution time compared to its software-based counterpart model.

  11. An Evolutionary Optimization Framework for Neural Networks and Neuromorphic Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuman, Catherine D; Plank, James; Disney, Adam

    2016-01-01

    As new neural network and neuromorphic architectures are being developed, new training methods that operate within the constraints of the new architectures are required. Evolutionary optimization (EO) is a convenient training method for new architectures. In this work, we review a spiking neural network architecture and a neuromorphic architecture, and we describe an EO training framework for these architectures. We present the results of this training framework on four classification data sets and compare those results to other neural network and neuromorphic implementations. We also discuss how this EO framework may be extended to other architectures.
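
    The sketch below shows the bare mechanics of evolutionary optimization of fixed-topology network weights (a mutation-only population loop on XOR). It is only a schematic: the paper's EO framework also evolves network structure and works within spiking/neuromorphic hardware constraints, none of which appear here.

    ```python
    # (mu + lambda)-style evolutionary optimization of the weights of a tiny 2-3-1 network on XOR.
    import numpy as np

    rng = np.random.default_rng(2)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])

    def forward(w, X):
        W1, b1, W2, b2 = w[:6].reshape(2, 3), w[6:9], w[9:12], w[12]
        h = np.tanh(X @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))

    def fitness(w):
        return -np.mean((forward(w, X) - y) ** 2)     # higher is better (negative MSE)

    pop = rng.normal(size=(30, 13))
    for gen in range(300):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-10:]]       # keep the 10 fittest individuals
        children = parents[rng.integers(0, 10, size=20)] + 0.2 * rng.normal(size=(20, 13))
        pop = np.vstack([parents, children])          # next generation: parents + mutated offspring
    print("best MSE:", -max(fitness(w) for w in pop))
    ```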

  12. Managing time-substitutable electricity usage using dynamic controls

    DOEpatents

    Ghosh, Soumyadip; Hosking, Jonathan R.; Natarajan, Ramesh; Subramaniam, Shivaram; Zhang, Xiaoxuan

    2017-02-07

    A predictive-control approach allows an electricity provider to monitor and proactively manage peak and off-peak residential intra-day electricity usage in an emerging smart energy grid using time-dependent dynamic pricing incentives. The daily load is modeled as time-shifted, but cost-differentiated and substitutable, copies of the continuously-consumed electricity resource, and a consumer-choice prediction model is constructed to forecast the corresponding intra-day shares of total daily load according to this model. This is embedded within an optimization framework for managing the daily electricity usage. A series of transformations are employed, including the reformulation-linearization technique (RLT) to obtain a Mixed-Integer Programming (MIP) model representation of the resulting nonlinear optimization problem. In addition, various regulatory and pricing constraints are incorporated in conjunction with the specified profit and capacity utilization objectives.
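
    To give a feel for the two coupled pieces the patent describes, the toy sketch below pairs a logit consumer-choice model (predicting how a fixed, time-substitutable daily load splits between peak and off-peak as prices change) with a brute-force search for profit-maximizing prices under a peak-capacity limit. Every coefficient is made up, and the patent's RLT/MIP reformulation is replaced here by a simple grid search.

    ```python
    # Logit share model plus price search for time-substitutable load (illustrative only).
    import numpy as np

    total_load = 100.0                  # daily energy that can shift between peak and off-peak (units)
    cost, peak_cap = 0.05, 60.0         # marginal generation cost ($/unit) and peak-period capacity
    beta_price = 8.0                    # price sensitivity in the consumer-choice model

    def peak_share(p_peak, p_off):
        u_peak = 1.0 - beta_price * p_peak          # baseline preference for peak-time consumption
        u_off = -beta_price * p_off
        return np.exp(u_peak) / (np.exp(u_peak) + np.exp(u_off))

    best = None
    for p_peak in np.arange(0.06, 0.31, 0.01):
        for p_off in np.arange(0.06, p_peak + 1e-9, 0.01):      # off-peak never priced above peak
            s = peak_share(p_peak, p_off)
            load_peak, load_off = s * total_load, (1 - s) * total_load
            if load_peak > peak_cap:                            # peak capacity constraint
                continue
            profit = (p_peak - cost) * load_peak + (p_off - cost) * load_off
            if best is None or profit > best[0]:
                best = (profit, p_peak, p_off, load_peak)
    print("profit %.2f at peak price %.2f, off-peak price %.2f, peak load %.1f" % best)
    ```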

  13. Managing time-substitutable electricity usage using dynamic controls

    DOEpatents

    Ghosh, Soumyadip; Hosking, Jonathan R.; Natarajan, Ramesh; Subramaniam, Shivaram; Zhang, Xiaoxuan

    2017-02-21

    A predictive-control approach allows an electricity provider to monitor and proactively manage peak and off-peak residential intra-day electricity usage in an emerging smart energy grid using time-dependent dynamic pricing incentives. The daily load is modeled as time-shifted, but cost-differentiated and substitutable, copies of the continuously-consumed electricity resource, and a consumer-choice prediction model is constructed to forecast the corresponding intra-day shares of total daily load according to this model. This is embedded within an optimization framework for managing the daily electricity usage. A series of transformations are employed, including the reformulation-linearization technique (RLT) to obtain a Mixed-Integer Programming (MIP) model representation of the resulting nonlinear optimization problem. In addition, various regulatory and pricing constraints are incorporated in conjunction with the specified profit and capacity utilization objectives.

  14. Dynamics and Control of Three-Dimensional Perching Maneuver under Dynamic Stall Influence

    NASA Astrophysics Data System (ADS)

    Feroskhan, Mir Alikhan Bin Mohammad

    Perching is a type of aggressive maneuver performed by species of the class Aves to attain precision point landing with a generally short landing distance. Perching capability is desirable on unmanned aerial vehicles (UAVs) due to its efficient deceleration process, which potentially expands the functionality and flight envelope of the aircraft. This dissertation extends previous work on perching, which is mostly limited to two-dimensional (2D) cases, to its state-of-the-art three-dimensional (3D) variety. This dissertation presents the aerodynamic modeling and optimization framework adopted to generate unprecedented variants of the 3D perching maneuver, including the sideslip perching trajectory, which ameliorates the existing 2D perching concept by eliminating the undesirable undershoot and reliance on gravity. The sideslip perching technique methodically utilizes the lateral and longitudinal drag mechanisms through consecutive phases of yawing and pitching-up motion. Since the perching maneuver involves high rates of change in the angles of attack and large turn rates, the introduction of three internal variables becomes necessary for addressing the influence of dynamic stall delay on the UAV's transient post-stall behavior. These variables are then integrated into a static nonlinear aerodynamic model, developed using empirical and analytical methods, and into an optimization framework that generates a trajectory of the sideslip perching maneuver, achieving over 70% velocity reduction. An impact study of the dynamic stall influence on the optimal perching trajectories suggests that consideration of dynamic stall delay is essential due to the significant discrepancies in the corresponding control inputs required. A comparative study between 2D and 3D perching is also conducted to examine the different drag mechanisms employed by 2D and 3D perching respectively. 3D perching is presented as a more efficient deceleration technique with respect to spatial costs and initial altitude range. Contraction analysis is shown to be a useful technique for identifying the state variables that must be tracked to attain stability of optimal perching trajectories. Based on the selected tracking variables, two sliding control strategies are proposed and comparatively examined to close the control loop and provide the required robustness and convergence to the optimal perching trajectory in the presence of perturbations and dynamic stall model inaccuracies. This dissertation concludes that the sliding controller with the adaptive gain feature is more effective and essential in providing better tracking performance, as illustrated by the corresponding convergence area and by behavior at higher intensities of perturbation.

  15. Tuning the Adsorption-Induced Phase Change in the Flexible Metal–Organic Framework Co(bdp)

    DOE PAGES

    Taylor, Mercedes K.; Runčevski, Tomče; Oktawiec, Julia; ...

    2016-11-02

    Metal–organic frameworks that flex to undergo structural phase changes upon gas adsorption are promising materials for gas storage and separations, and achieving synthetic control over the pressure at which these changes occur is crucial to the design of such materials for specific applications. To this end, a new family of materials based on the flexible metal–organic framework Co(bdp) (bdp2– = 1,4-benzenedipyrazolate) has been prepared via the introduction of fluorine, deuterium, and methyl functional groups on the bdp2– ligand, namely, Co(F-bdp), Co(p-F2-bdp), Co(o-F2-bdp), Co(D4-bdp), and Co(p-Me2-bdp). These frameworks are isoreticular to the parent framework and exhibit similar structural flexibility, transitioning from a low-porosity, collapsed phase to high-porosity, expanded phases with increasing gas pressure. Powder X-ray diffraction studies reveal that fluorination of the aryl ring disrupts edge-to-face π–π interactions, which work to stabilize the collapsed phase at low gas pressures, while deuteration preserves these interactions and methylation strengthens them. In agreement with these observations, high-pressure CH4 adsorption isotherms show that the pressure of the CH4-induced framework expansion can be systematically controlled by ligand functionalization, as materials without edge-to-face interactions in the collapsed phase expand at lower CH4 pressures, while frameworks with strengthened edge-to-face interactions expand at higher pressures. This work puts forth a general design strategy relevant to many other families of flexible metal–organic frameworks, which will be a powerful tool in optimizing these phase-change materials for industrial applications.

  16. Optimization and Control of Agent-Based Models in Biology: A Perspective.

    PubMed

    An, G; Fitzpatrick, B G; Christley, S; Federico, P; Kanarek, A; Neilan, R Miller; Oremland, M; Salinas, R; Laubenbacher, R; Lenhart, S

    2017-01-01

    Agent-based models (ABMs) have become an increasingly important mode of inquiry for the life sciences. They are particularly valuable for systems that are not understood well enough to build an equation-based model. These advantages, however, are counterbalanced by the difficulty of analyzing and using ABMs, due to the lack of the type of mathematical tools available for more traditional models, which leaves simulation as the primary approach. As models become large, simulation becomes challenging. This paper proposes a novel approach to two mathematical aspects of ABMs, optimization and control, and it presents a few first steps outlining how one might carry out this approach. Rather than viewing the ABM as a model, it is to be viewed as a surrogate for the actual system. For a given optimization or control problem (which may change over time), the surrogate system is modeled instead, using data from the ABM and a modeling framework for which ready-made mathematical tools exist, such as differential equations, or for which control strategies can be explored more easily. Once the optimization problem is solved for the model of the surrogate, the solution is lifted to the surrogate and tested. The final step is to lift the optimization solution from the surrogate system to the actual system. This program is illustrated with published work, using two relatively simple ABMs as a demonstration, Sugarscape and a consumer-resource ABM. Specific techniques discussed include dimension reduction and approximation of an ABM by difference equations as well as by systems of PDEs, related to certain specific control objectives. This demonstration illustrates the very challenging mathematical problems that need to be solved before this approach can be realistically applied to complex and large ABMs, current and future. The paper outlines a research program to address them.

  17. On the use of PGD for optimal control applied to automated fibre placement

    NASA Astrophysics Data System (ADS)

    Bur, N.; Joyot, P.

    2017-10-01

    Automated Fibre Placement (AFP) is an incipient manufacturing process for composite structures. Despite its conceptual simplicity, it involves many complexities related to the need to melt the thermoplastic at the tape-substrate interface, to ensure consolidation (which requires the diffusion of molecules), and to control the build-up of residual stresses responsible for the residual deformations of the formed parts. The optimisation of the process and the determination of the process window cannot be achieved in a traditional way, since this would require a plethora of trial-and-error experiments or numerical simulations, because many parameters are involved in the characterisation of the material and the process. Using reduced order modelling, such as the so-called Proper Generalised Decomposition method, allows the construction of multi-parametric solutions taking many parameters into account. This leads to virtual charts that can be explored on-line in real time in order to perform process optimisation or on-line simulation-based control. Thus, for a given set of parameters, determining the power leading to an optimal temperature becomes easy. However, instead of controlling the power from the known temperature field by particularizing a pre-computed chart, we propose here an approach based on optimal control: we solve by PGD a dual problem derived from the heat equation and an optimality criterion. To circumvent numerical issues due to the ill-conditioned system, we propose an algorithm based on Uzawa's method. In this way, we are able to solve the dual problem, setting the desired state as an extra coordinate in the PGD framework. In a single computation, we obtain both the temperature field and the heat flux required to reach a prescribed optimal temperature on a given zone.
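
    Since the abstract names Uzawa's method without detail, the sketch below shows the plain Uzawa iteration on a small equality-constrained quadratic problem (the constraint playing the role of the prescribed temperature on the controlled zone); the matrices are arbitrary stand-ins, not the PGD operators.

    ```python
    # Uzawa iteration for: minimize 0.5*x'Ax - b'x  subject to  Bx = c  (saddle-point problem).
    import numpy as np

    A = np.diag([2.0, 3.0, 4.0, 5.0])             # symmetric positive-definite "stiffness" block
    B = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])          # constraint operator (observation of the target zone)
    b = np.array([1.0, 0.0, 2.0, -1.0])
    c = np.array([1.0, 0.5])                      # prescribed values (the "desired state")

    x, lam, rho = np.zeros(4), np.zeros(2), 1.0
    for _ in range(200):
        x = np.linalg.solve(A, b - B.T @ lam)     # primal update: minimize the Lagrangian in x
        lam = lam + rho * (B @ x - c)             # dual ascent on the constraint residual
    print("constraint residual:", np.linalg.norm(B @ x - c))
    ```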

  18. Quantifying uncertainty in partially specified biological models: how can optimal control theory help us?

    PubMed

    Adamson, M W; Morozov, A Y; Kuzenkov, O A

    2016-09-01

    Mathematical models in biology are highly simplified representations of a complex underlying reality, and there is always a high degree of uncertainty with regard to model function specification. This uncertainty becomes critical for models in which the use of different functions fitting the same dataset can yield substantially different predictions, a property known as structural sensitivity. Thus, even if the model is purely deterministic, the uncertainty in the model functions carries through into uncertainty in model predictions, and new frameworks are required to tackle this fundamental problem. Here, we consider a framework that uses partially specified models, in which some functions are not represented by a specific form. The main idea is to project the infinite-dimensional function space into a low-dimensional space taking into account biological constraints. The key question of how to carry out this projection has so far remained a serious mathematical challenge and hindered the use of partially specified models. Here, we propose and demonstrate a potentially powerful technique to perform such a projection by using optimal control theory to construct functions with the specified global properties. This approach opens up the prospect of a flexible and easy-to-use method for carrying out uncertainty analysis of biological models.

  19. Does High-Dose Antimicrobial Chemotherapy Prevent the Evolution of Resistance?

    PubMed Central

    Day, Troy; Read, Andrew F.

    2016-01-01

    High-dose chemotherapy has long been advocated as a means of controlling drug resistance in infectious diseases but recent empirical studies have begun to challenge this view. We develop a very general framework for modeling and understanding resistance emergence based on principles from evolutionary biology. We use this framework to show how high-dose chemotherapy engenders opposing evolutionary processes involving the mutational input of resistant strains and their release from ecological competition. Whether such therapy provides the best approach for controlling resistance therefore depends on the relative strengths of these processes. These opposing processes typically lead to a unimodal relationship between drug pressure and resistance emergence. As a result, the optimal drug dose lies at either end of the therapeutic window of clinically acceptable concentrations. We illustrate our findings with a simple model that shows how a seemingly minor change in parameter values can alter the outcome from one where high-dose chemotherapy is optimal to one where using the smallest clinically effective dose is best. A review of the available empirical evidence provides broad support for these general conclusions. Our analysis opens up treatment options not currently considered as resistance management strategies, and it also simplifies the experiments required to determine the drug doses which best retard resistance emergence in patients. PMID:26820986

  20. Does High-Dose Antimicrobial Chemotherapy Prevent the Evolution of Resistance?

    PubMed

    Day, Troy; Read, Andrew F

    2016-01-01

    High-dose chemotherapy has long been advocated as a means of controlling drug resistance in infectious diseases but recent empirical studies have begun to challenge this view. We develop a very general framework for modeling and understanding resistance emergence based on principles from evolutionary biology. We use this framework to show how high-dose chemotherapy engenders opposing evolutionary processes involving the mutational input of resistant strains and their release from ecological competition. Whether such therapy provides the best approach for controlling resistance therefore depends on the relative strengths of these processes. These opposing processes typically lead to a unimodal relationship between drug pressure and resistance emergence. As a result, the optimal drug dose lies at either end of the therapeutic window of clinically acceptable concentrations. We illustrate our findings with a simple model that shows how a seemingly minor change in parameter values can alter the outcome from one where high-dose chemotherapy is optimal to one where using the smallest clinically effective dose is best. A review of the available empirical evidence provides broad support for these general conclusions. Our analysis opens up treatment options not currently considered as resistance management strategies, and it also simplifies the experiments required to determine the drug doses which best retard resistance emergence in patients.

  1. iTOUGH2: A multiphysics simulation-optimization framework for analyzing subsurface systems

    NASA Astrophysics Data System (ADS)

    Finsterle, S.; Commer, M.; Edmiston, J. K.; Jung, Y.; Kowalsky, M. B.; Pau, G. S. H.; Wainwright, H. M.; Zhang, Y.

    2017-11-01

    iTOUGH2 is a simulation-optimization framework for the TOUGH suite of nonisothermal multiphase flow models and related simulators of geophysical, geochemical, and geomechanical processes. After appropriate parameterization of subsurface structures and their properties, iTOUGH2 runs simulations for multiple parameter sets and analyzes the resulting output for parameter estimation through automatic model calibration, local and global sensitivity analyses, data-worth analyses, and uncertainty propagation analyses. Development of iTOUGH2 is driven by scientific challenges and user needs, with new capabilities continually added to both the forward simulator and the optimization framework. This review article provides a summary description of methods and features implemented in iTOUGH2, and discusses the usefulness and limitations of an integrated simulation-optimization workflow in support of the characterization and analysis of complex multiphysics subsurface systems.

  2. Robust Brain-Machine Interface Design Using Optimal Feedback Control Modeling and Adaptive Point Process Filtering

    PubMed Central

    Carmena, Jose M.

    2016-01-01

    Much progress has been made in brain-machine interfaces (BMI) using decoders such as Kalman filters and finding their parameters with closed-loop decoder adaptation (CLDA). However, current decoders do not model the spikes directly, and hence may limit the processing time-scale of BMI control and adaptation. Moreover, while specialized CLDA techniques for intention estimation and assisted training exist, a unified and systematic CLDA framework that generalizes across different setups is lacking. Here we develop a novel closed-loop BMI training architecture that allows for processing, control, and adaptation using spike events, enables robust control and extends to various tasks. Moreover, we develop a unified control-theoretic CLDA framework within which intention estimation, assisted training, and adaptation are performed. The architecture incorporates an infinite-horizon optimal feedback-control (OFC) model of the brain’s behavior in closed-loop BMI control, and a point process model of spikes. The OFC model infers the user’s motor intention during CLDA—a process termed intention estimation. OFC is also used to design an autonomous and dynamic assisted training technique. The point process model allows for neural processing, control and decoder adaptation with every spike event and at a faster time-scale than current decoders; it also enables dynamic spike-event-based parameter adaptation unlike current CLDA methods that use batch-based adaptation on much slower adaptation time-scales. We conducted closed-loop experiments in a non-human primate over tens of days to dissociate the effects of these novel CLDA components. The OFC intention estimation improved BMI performance compared with current intention estimation techniques. OFC assisted training allowed the subject to consistently achieve proficient control. Spike-event-based adaptation resulted in faster and more consistent performance convergence compared with batch-based methods, and was robust to parameter initialization. Finally, the architecture extended control to tasks beyond those used for CLDA training. These results have significant implications towards the development of clinically-viable neuroprosthetics. PMID:27035820

  3. Microprocessor-based integration of microfluidic control for the implementation of automated sensor monitoring and multithreaded optimization algorithms.

    PubMed

    Ezra, Elishai; Maor, Idan; Bavli, Danny; Shalom, Itai; Levy, Gahl; Prill, Sebastian; Jaeger, Magnus S; Nahmias, Yaakov

    2015-08-01

    Microfluidic applications range from combinatorial synthesis to high-throughput screening, with platforms integrating analog perfusion components, digitally controlled micro-valves and a range of sensors that demand a variety of communication protocols. Currently, discrete control units are used to regulate and monitor each component, resulting in scattered control interfaces that limit data integration and synchronization. Here, we present a microprocessor-based control unit, built on the MS Gadgeteer open framework, that integrates all aspects of microfluidics through a high-current electronic circuit that supports and synchronizes digital and analog signals for perfusion components, pressure elements, and arbitrary sensor communication protocols using a plug-and-play interface. The control unit supports an integrated touch screen and a TCP/IP interface that provides local and remote control of flow and data acquisition. To establish the ability of our control unit to integrate and synchronize complex microfluidic circuits, we developed an equi-pressure combinatorial mixer. We demonstrate the generation of complex perfusion sequences, allowing the automated sampling, washing, and calibrating of an electrochemical lactate sensor continuously monitoring hepatocyte viability following exposure to the pesticide rotenone. Importantly, integration of an optical sensor allowed us to implement automated optimization protocols that pose different computational challenges, including prioritized data structures in a genetic algorithm, distributed computational effort in multiple hill-climbing searches, and real-time realization of probabilistic models in simulated annealing. Our system offers a comprehensive solution for establishing optimization protocols and perfusion sequences in complex microfluidic circuits.
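
    As a flavor of one of the optimization strategies mentioned, the sketch below runs a bare-bones simulated-annealing search on a one-dimensional multimodal objective; the objective, noise scale and cooling schedule are placeholders rather than anything from the actual sensor-calibration protocol.

    ```python
    # Minimal simulated annealing: accept uphill moves always, downhill moves with probability exp(delta/T).
    import math, random

    random.seed(0)

    def objective(x):                     # toy multimodal response to maximize
        return math.exp(-(x - 2.0) ** 2) + 0.8 * math.exp(-(x + 1.5) ** 2 / 0.1)

    x = random.uniform(-4, 4)
    best = (objective(x), x)
    T = 1.0
    for step in range(2000):
        cand = x + random.gauss(0, 0.5)                           # random perturbation of the setting
        delta = objective(cand) - objective(x)
        if delta > 0 or random.random() < math.exp(delta / T):    # Metropolis acceptance rule
            x = cand
            if objective(x) > best[0]:
                best = (objective(x), x)
        T = max(1e-3, T * 0.995)                                  # geometric cooling schedule
    print("best value %.3f at x = %.3f" % best)
    ```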

  4. Optimization of Systems with Uncertainty: Initial Developments for Performance, Robustness and Reliability Based Designs

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of probability density functions and moments, optimization problems that pursue performance-, robustness- and reliability-based designs are studied. By specifying the desired outputs first in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of the design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem, the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.

  5. Prototype software model for designing intruder detection systems with simulation

    NASA Astrophysics Data System (ADS)

    Smith, Jeffrey S.; Peters, Brett A.; Curry, James C.; Gupta, Dinesh

    1998-08-01

    This article explores using discrete-event simulation for the design and control of defence-oriented, fixed-sensor-based detection systems in a facility housing items of significant interest to enemy forces. The key issues discussed include software development, simulation-based optimization within a modeling framework, and the expansion of the framework to create real-time control tools and training simulations. The software discussed in this article is a flexible simulation environment where the data for the simulation are stored in an external database and the simulation logic is implemented using a commercial simulation package. The simulation assesses the overall security level of a building against various intruder scenarios. A series of simulation runs with different inputs can determine the change in security level with changes in the sensor configuration, building layout, and intruder/guard strategies. In addition, the simulation model developed for the design stage of the project can be modified to produce a control tool for the testing, training, and real-time control of systems with humans and sensor hardware in the loop.

  6. Dynamic optimization of chemical processes using ant colony framework.

    PubMed

    Rajesh, J; Gupta, K; Kusumakar, H S; Jayaraman, V K; Kulkarni, B D

    2001-11-01

    The ant colony framework is illustrated by considering the dynamic optimization of six important benchmark examples. This new computational tool is simple to implement and can tackle problems with state as well as terminal constraints in a straightforward fashion. It requires fewer grid points to reach the global optimum, at relatively low computational effort. The examples analyzed here, with varying degrees of complexity, illustrate its potential for solving a large class of process optimization problems in chemical engineering.
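
    The sketch below is a toy version of the idea: each "ant" samples a discretized control profile from pheromone-weighted probabilities on a time grid, and pheromone is reinforced along the best profile found; the plant, cost and parameters are invented for illustration and are not one of the paper's benchmark problems.

    ```python
    # Ant-colony search over a discretized control profile for a toy first-order tracking problem.
    import numpy as np

    rng = np.random.default_rng(4)
    levels = np.linspace(-1.0, 1.0, 9)            # admissible control values
    N, dt = 20, 0.1                               # number of time grid points and step size

    def cost(u_profile):                          # integrate a simple plant and a quadratic cost
        x, J = 0.0, 0.0
        for u in u_profile:
            x += dt * (u - x)
            J += dt * ((x - 0.5) ** 2 + 0.05 * u ** 2)
        return J

    tau = np.ones((N, levels.size))               # pheromone on (grid point, control level) pairs
    best_cost, best_idx = np.inf, None
    for it in range(100):
        for _ in range(20):                       # 20 ants per iteration
            idx = [rng.choice(levels.size, p=tau[k] / tau[k].sum()) for k in range(N)]
            c = cost(levels[idx])
            if c < best_cost:
                best_cost, best_idx = c, idx
        tau *= 0.9                                # pheromone evaporation
        for k, j in enumerate(best_idx):          # reinforce the best profile found so far
            tau[k, j] += 1.0 / (1e-6 + best_cost)
    print("best cost found: %.4f" % best_cost)
    ```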

  7. An alternative laser driven photodissociation mechanism of pyrrole via 1πσ*/S0 conical intersection.

    PubMed

    Nandipati, K R; Lan, Z; Singh, H; Mahapatra, S

    2017-06-07

    A first principles quantum dynamics study of N-H photodissociation of pyrrole on the S0-1πσ*(1A2) coupled electronic states is carried out with the aid of an optimally designed UV-laser pulse. A new photodissociation path, as compared to the conventional barrier crossing on the 1πσ* state, opens up upon electronic transitions under the influence of pump-dump laser pulses, which efficiently populate both the dissociation channels. The interplay of electronic transitions due both to vibronic coupling and the laser pulse is observed in the control mechanism and discussed in detail. The proposed control mechanism seems to be robust, and not discussed in the literature so far, and is expected to trigger future experiments on the 1πσ* photochemistry of molecules of chemical and biological importance. The design of the optimal pulses and their application to enhance the overall dissociation probability is carried out within the framework of optimal control theory. The quantum dynamics of the system in the presence of pulse is treated by solving the time-dependent Schrödinger equation in the semi-classical dipole approximation.
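
    For orientation, the working equations behind this kind of optimal-control pulse design can be sketched as follows; the notation (field-free Hamiltonian H0, dipole operator μ, dissociation projector P_diss, penalty factor α) is generic textbook usage and an assumption on our part, not necessarily the paper's.

    ```latex
    % Field-driven TDSE in the semi-classical dipole approximation, and a typical OCT functional
    % maximized over the pulse E(t) subject to that equation of motion (assumed notation).
    \begin{align*}
      i\hbar\,\frac{\partial}{\partial t}\,\Psi(\mathbf{x},t)
        &= \big[\hat{H}_0 - \hat{\mu}\,E(t)\big]\,\Psi(\mathbf{x},t) \\
      J[E] &= \langle \Psi(T)\,|\,\hat{P}_{\mathrm{diss}}\,|\,\Psi(T)\rangle
              \;-\; \alpha \int_0^T |E(t)|^2\,\mathrm{d}t
    \end{align*}
    ```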

  8. An alternative laser driven photodissociation mechanism of pyrrole via 1πσ*/S0 conical intersection

    PubMed Central

    Nandipati, K. R.; Lan, Z.; Singh, H.; Mahapatra, S.

    2017-01-01

    A first principles quantum dynamics study of N–H photodissociation of pyrrole on the S0−1πσ*(1A2) coupled electronic states is carried out with the aid of an optimally designed UV-laser pulse. A new photodissociation path, as compared to the conventional barrier crossing on the 1πσ* state, opens up upon electronic transitions under the influence of pump-dump laser pulses, which efficiently populate both the dissociation channels. The interplay of electronic transitions due both to vibronic coupling and the laser pulse is observed in the control mechanism and discussed in detail. The proposed control mechanism seems to be robust, and not discussed in the literature so far, and is expected to trigger future experiments on the 1πσ* photochemistry of molecules of chemical and biological importance. The design of the optimal pulses and their application to enhance the overall dissociation probability is carried out within the framework of optimal control theory. The quantum dynamics of the system in the presence of pulse is treated by solving the time-dependent Schrödinger equation in the semi-classical dipole approximation. PMID:28595406

  9. An alternative laser driven photodissociation mechanism of pyrrole via ¹πσ*/S0 conical intersection

    NASA Astrophysics Data System (ADS)

    Nandipati, K. R.; Lan, Z.; Singh, H.; Mahapatra, S.

    2017-06-01

    A first-principles quantum dynamics study of N-H photodissociation of pyrrole on the S0-¹πσ*(¹A2) coupled electronic states is carried out with the aid of an optimally designed UV-laser pulse. A new photodissociation path, as compared to the conventional barrier crossing on the ¹πσ* state, opens up upon electronic transitions under the influence of pump-dump laser pulses, which efficiently populate both dissociation channels. The interplay of electronic transitions due both to vibronic coupling and to the laser pulse is observed in the control mechanism and discussed in detail. The proposed control mechanism appears to be robust, has not been discussed in the literature so far, and is expected to trigger future experiments on the ¹πσ* photochemistry of molecules of chemical and biological importance. The design of the optimal pulses and their application to enhance the overall dissociation probability are carried out within the framework of optimal control theory. The quantum dynamics of the system in the presence of the pulse is treated by solving the time-dependent Schrödinger equation in the semi-classical dipole approximation.

  10. Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming

    NASA Astrophysics Data System (ADS)

    Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita

    2018-03-01

    We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and Fredkin (controlled-SWAP) gates, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor with high fidelity. Toffoli and Fredkin gates, in conjunction with single-qubit Hadamard gates, form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution with it than with existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.

  11. Research on bulbous bow optimization based on the improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Sheng-long; Zhang, Bao-ji; Tezdogan, Tahsin; Xu, Le-ping; Lai, Yu-yang

    2017-08-01

    In order to reduce the total resistance of a hull, an optimization framework for bulbous bow optimization was presented. The total resistance in calm water was selected as the objective function, and the overset mesh technique was used for mesh generation. The RANS method was used to calculate the total resistance of the hull. In order to improve the efficiency and smoothness of the geometric reconstruction, the arbitrary shape deformation (ASD) technique was introduced to change the shape of the bulbous bow. To improve the global search ability of the particle swarm optimization (PSO) algorithm, an improved particle swarm optimization (IPSO) algorithm was proposed to set up the optimization model. After a series of optimization analyses, the optimal hull form was found. It can be concluded that the simulation-based design framework built in this paper is a promising method for bulbous bow optimization.
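
    For orientation, the sketch below shows a standard PSO loop of the kind the IPSO variant improves upon; the objective is a cheap placeholder standing in for the RANS resistance evaluation of an ASD-deformed bow shape, and all coefficients are illustrative.

```python
import numpy as np

# Standard particle swarm optimization (PSO) loop, shown as a generic stand-in
# for the paper's improved PSO. The objective is a cheap placeholder; in the
# real framework each evaluation would be a RANS computation of total
# resistance for a bulbous-bow shape deformed by the ASD parameters.

rng = np.random.default_rng(1)
dim, n_particles, n_iter = 6, 20, 100      # design variables, swarm size
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights
lb, ub = -1.0, 1.0                         # bounds on the shape parameters


def resistance(x):
    """Placeholder objective (to be minimized)."""
    return np.sum(x ** 2) + 0.1 * np.sum(np.cos(5 * x))


x = rng.uniform(lb, ub, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([resistance(p) for p in x])
g = pbest[np.argmin(pbest_f)]              # global best position

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + v, lb, ub)
    f = np.array([resistance(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    g = pbest[np.argmin(pbest_f)]

print("best resistance:", pbest_f.min(), "at", g)
```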

  12. Task-driven optimization of CT tube current modulation and regularization in model-based iterative reconstruction

    NASA Astrophysics Data System (ADS)

    Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2017-06-01

    Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM; it includes both the regularization strength, which controls overall smoothness, and the directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF), each carrying dependencies on TCM and regularization. For the single-location optimization, the local detectability index (d') of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP under a minimum-variance criterion yielded worse task-based performance than an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views. Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS that results from the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with a maximum improvement in d' of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction, and that strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
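
    For reference, one common Fourier-domain form of such a detectability index (the prewhitening-observer version, shown only to illustrate how the MTF and NPS enter the metric; the paper's observer model may differ) is

```latex
\[
{d'}^{\,2} \;=\; \int
  \frac{\mathrm{MTF}(\mathbf{f})^{2}\,\bigl|W_{\mathrm{task}}(\mathbf{f})\bigr|^{2}}
       {\mathrm{NPS}(\mathbf{f})}\;
  \mathrm{d}\mathbf{f}
\]
```

    where W_task(f) is the Fourier template of the imaging task.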

  13. A computational framework for simultaneous estimation of muscle and joint contact forces and body motion using optimization and surrogate modeling.

    PubMed

    Eskinazi, Ilan; Fregly, Benjamin J

    2018-04-01

    Concurrent estimation of muscle activations, joint contact forces, and joint kinematics by means of gradient-based optimization of musculoskeletal models is hindered by computationally expensive and non-smooth joint contact and muscle wrapping algorithms. We present a framework that simultaneously speeds up computation and removes sources of non-smoothness from muscle force optimizations using a combination of parallelization and surrogate modeling, with special emphasis on a novel method for modeling joint contact as a surrogate model of a static analysis. The approach allows one to efficiently introduce elastic joint contact models within static and dynamic optimizations of human motion. We demonstrate the approach by performing two optimizations, one static and one dynamic, using a pelvis-leg musculoskeletal model undergoing a gait cycle. We observed convergence on the order of seconds for a static optimization time frame and on the order of minutes for an entire dynamic optimization. The presented framework may facilitate model-based efforts to predict how planned surgical or rehabilitation interventions will affect post-treatment joint and muscle function. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.

  14. Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks

    NASA Technical Reports Server (NTRS)

    Farrell, Logan C.; Strawser, Phil; Hambuchen, Kimberly; Baker, Will; Badger, Julia

    2017-01-01

    Teleoperation is the dominant mode of performing dexterous robotic tasks in the field. However, there are many use cases in which direct teleoperation is not feasible, such as disaster areas with poor communication, as posed in the DARPA Robotics Challenge, or robot operations on spacecraft far from Earth with long communication delays. Presented is a solution that combines the Affordance Template Framework for object interaction with TaskForce for supervisory control in order to accomplish high-level task objectives with basic autonomous behavior from the robot. TaskForce is a new commanding infrastructure that allows for optimal development of task execution, clear feedback to the user to aid in off-nominal situations, and the capability to add autonomous verification and corrective actions. This framework has allowed the robot to take corrective actions before requesting assistance from the user. The framework is demonstrated with Robonaut 2 removing a Cargo Transfer Bag from a simulated logistics resupply vehicle for spaceflight using a single operator command. This was executed with 80% success with no human involvement, and 95% success with limited human interaction. This technology sets the stage for any number of high-level tasks using a similar framework, allowing the robot to accomplish tasks with minimal to no human interaction.

  15. Supervision of Facilitators in a Multisite Study: Goals, Process, and Outcomes

    PubMed Central

    2010-01-01

    Objective To describe the aims, implementation, and desired outcomes of facilitator supervision for both interventions (treatment and control) in Project Eban and to present the Eban Theoretical Framework for Supervision that guided the facilitators’ supervision. The qualifications and training of supervisors and facilitators are also described. Design This article provides a detailed description of supervision in a multisite behavioral intervention trial. The Eban Theoretical Framework for Supervision is guided by 3 theories: cognitive behavior therapy, the Life-long Model of Supervision, and “Empowering supervisees to empower others: a culturally responsive supervision model.” Methods Supervision is based on the Eban Theoretical Framework for Supervision, which provides guidelines for implementing both interventions using goals, process, and outcomes. Results Because of effective supervision, the interventions were implemented with fidelity to the protocol and were standardized across the multiple sites. Conclusions Supervision of facilitators is a crucial aspect of quality assurance in multisite intervention research. It provides facilitators with expert advice, optimizes their effectiveness, and increases adherence to the protocol across multiple sites. Based on the experience in this trial, some of the challenges that arise when conducting a multisite randomized controlled trial, and how they can be handled by implementing the Eban Theoretical Framework for Supervision, are described. PMID:18724192

  16. Minimizing Uncertainties Impact in Decision Making with an Applicability Study for Economic Power Dispatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hong; Wang, Shaobu; Fan, Rui

    This report summarizes the work performed under the LDRD project on the preliminary study of knowledge automation, with specific focus on investigating the impact of uncertainties in human decision making on the optimization of process operation. First, the statistics of signals from the brain-computer interface (BCI) are analyzed so as to characterize the uncertainties of human operators during the decision-making phase using electroencephalogram (EEG) signals. This is followed by a discussion of an architecture that reveals the equivalence between optimization and closed-loop feedback control design, where it has been shown that all optimization problems can be transformed into control design problems for closed-loop systems. This has led to a “closed-loop” framework in which the structure of the decision making is subject to both process disturbances and controller uncertainties; the latter represent the uncertainties or randomness that occur during the human decision-making phase. As a result, a stochastic optimization problem has been formulated and a novel solution has been proposed using probability density function (PDF) shaping for both the cost function and the constraints, based on the stochastic distribution control concept. A sufficient condition has been derived that guarantees the convergence of the optimal solution, and both the total probabilistic solution and chance-constrained optimization, which have been well studied in the optimal power flow (OPF) area, are discussed. A simple case study has been carried out for the economic dispatch of power in a grid system with distributed energy resources (DERs), and encouraging results have been obtained, showing that significant savings in generation cost can be expected.

  17. Achieving Optimal Best: Instructional Efficiency and the Use of Cognitive Load Theory in Mathematical Problem Solving

    ERIC Educational Resources Information Center

    Phan, Huy P.; Ngu, Bing H.; Yeung, Alexander S.

    2017-01-01

    We recently developed the "Framework of Achievement Bests" to explain the importance of effective functioning, personal growth, and enrichment of well-being experiences. This framework postulates a concept known as "optimal achievement best," which stipulates the idea that individuals may, in general, strive to achieve personal…

  18. Web Service Distributed Management Framework for Autonomic Server Virtualization

    NASA Astrophysics Data System (ADS)

    Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea

    Virtualization for the x86 platform has recently established itself as a technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through the provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.

  19. Popularity versus similarity in growing networks

    NASA Astrophysics Data System (ADS)

    Krioukov, Dmitri; Papadopoulos, Fragkiskos; Kitsak, Maksim; Serrano, Mariangeles; Boguna, Marian

    2012-02-01

    Preferential attachment is a powerful mechanism explaining the emergence of scaling in growing networks. If new connections are established preferentially to more popular nodes in a network, then the network is scale-free. Here we show that not only popularity but also similarity is a strong force shaping the network structure and dynamics. We develop a framework where new connections, instead of preferring popular nodes, optimize certain trade-offs between popularity and similarity. The framework admits a geometric interpretation, in which preferential attachment emerges from local optimization processes. In contrast to preferential attachment, the optimization framework accurately describes the large-scale evolution of technological (Internet), social (web of trust), and biological (E. coli metabolic) networks, predicting the probability of new links in them with remarkable precision. The developed framework can thus be used for predicting new links in evolving networks, and provides a different perspective on preferential attachment as an emergent phenomenon.
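
    A minimal sketch of this trade-off is given below, under simplifying assumptions of my own: birth order serves as the popularity proxy, a random angular coordinate as the similarity proxy, and the network size and link count m are arbitrary.

```python
import numpy as np

# Toy growing network in which each new node optimizes a popularity-similarity
# trade-off: it links to the m existing nodes s minimizing (s + 1) * theta(s, t),
# where the birth order s proxies (inverse) popularity and the angular distance
# theta proxies dissimilarity. Simplified illustration only; m and the network
# size are arbitrary choices.

rng = np.random.default_rng(2)
n_nodes, m = 200, 2
angles = rng.uniform(0.0, 2.0 * np.pi, n_nodes)   # similarity coordinates
edges = []

for t in range(m, n_nodes):                        # node t joins the network
    s = np.arange(t)                               # existing nodes
    dtheta = np.pi - np.abs(np.pi - np.abs(angles[s] - angles[t]))
    score = (s + 1) * dtheta                       # popularity x similarity
    targets = s[np.argsort(score)[:m]]             # best trade-offs win
    edges.extend((t, int(u)) for u in targets)

deg = np.bincount(np.array(edges).ravel(), minlength=n_nodes)
print("max degree:", deg.max(), "mean degree:", deg.mean())
```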

  20. A Framework for Cloudy Model Optimization and Database Storage

    NASA Astrophysics Data System (ADS)

    Calvén, Emilia; Helton, Andrew; Sankrit, Ravi

    2018-01-01

    We present a framework for producing Cloudy photoionization models of the nebular emission from novae ejecta and storing a subset of the results in an SQL database for later use. The database can be searched for the models that best fit observed spectral line ratios. Additionally, the framework includes an optimization feature that can be used in tandem with the database to search for and improve on models by creating new Cloudy models while varying the parameters. The database search and optimization can be used to explore the structures of nebulae by deriving their properties from the best-fit models. The goal is to provide the community with a large database of Cloudy photoionization models, generated from parameters reflecting conditions within novae ejecta, that can be easily fitted to observed spectral lines, either by directly accessing the database using the framework code or through a website made specifically for this purpose.
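
    A minimal sketch of such a line-ratio lookup is shown below; the table name, columns, and line ratios are hypothetical placeholders rather than the project's actual schema.

```python
import sqlite3

# Rank stored photoionization models by distance to observed line ratios.
# The table `models` and its columns (density, temperature, two line ratios)
# are hypothetical placeholders, not the project's actual schema.

conn = sqlite3.connect("cloudy_models.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS models
       (id INTEGER PRIMARY KEY, hden REAL, temp REAL,
        ratio_oiii_hbeta REAL, ratio_nii_halpha REAL)"""
)

obs = {"ratio_oiii_hbeta": 3.2, "ratio_nii_halpha": 0.45}

rows = conn.execute(
    """SELECT id, hden, temp,
              (ratio_oiii_hbeta - ?) * (ratio_oiii_hbeta - ?) +
              (ratio_nii_halpha - ?) * (ratio_nii_halpha - ?) AS dist2
       FROM models ORDER BY dist2 LIMIT 5""",
    (obs["ratio_oiii_hbeta"], obs["ratio_oiii_hbeta"],
     obs["ratio_nii_halpha"], obs["ratio_nii_halpha"]),
).fetchall()

for row in rows:
    print("model", row[0], "hden", row[1], "temp", row[2], "distance^2", row[3])
```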

  1. Task-Driven Tube Current Modulation and Regularization Design in Computed Tomography with Penalized-Likelihood Reconstruction.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2016-02-01

    This work applies task-driven optimization to the design of CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL for comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information about the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict the local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolution strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
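
    The sketch below illustrates the Gaussian-basis parameterization of the tube-current profile described above; the number of basis functions, their widths, and the example weights are arbitrary choices, and in the task-driven design the weights would be the variables tuned by the evolutionary optimizer.

```python
import numpy as np

# Tube-current modulation profile expressed as a linear combination of Gaussian
# basis functions of gantry angle. The number of bumps, their width, and the
# example weights are illustrative; in a task-driven design the weights would
# be the free variables handed to the optimizer (e.g., CMA-ES).

angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
centers = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
sigma = 0.5                                          # basis width (radians)

# Wrap-around (circular) Gaussian bumps so the profile is periodic in angle.
d = np.abs(angles[:, None] - centers[None, :])
d = np.minimum(d, 2.0 * np.pi - d)
basis = np.exp(-0.5 * (d / sigma) ** 2)              # shape (360, 8)

weights = np.array([1.0, 0.6, 0.4, 0.6, 1.0, 0.6, 0.4, 0.6])  # example design
tcm = basis @ weights                                 # relative mA vs. angle
tcm *= 360.0 / tcm.sum()                              # normalize total fluence
print("min/max relative tube current: %.2f / %.2f" % (tcm.min(), tcm.max()))
```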

  2. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

    The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing-edge flap position to achieve a design lift coefficient for a three-element airfoil.

  3. Dynamic motion planning of 3D human locomotion using gradient-based optimization.

    PubMed

    Kim, Hyung Joo; Wang, Qian; Rahmatalla, Salam; Swan, Colby C; Arora, Jasbir S; Abdel-Malek, Karim; Assouline, Jose G

    2008-06-01

    Since humans can walk with an infinite variety of postures and limb movements, there is no unique solution to the modeling problem to predict human gait motions. Accordingly, we test herein the hypothesis that the redundancy of human walking mechanisms makes solving for human joint profiles and force time histories an indeterminate problem best solved by inverse dynamics and optimization methods. A new optimization-based human-modeling framework is thus described for predicting three-dimensional human gait motions on level and inclined planes. The basic unknowns in the framework are the joint motion time histories of a 25-degree-of-freedom human model and its six global degrees of freedom. The joint motion histories are calculated by minimizing an objective function such as deviation of the trunk from upright posture that relates to the human model's performance. A variety of important constraints are imposed on the optimization problem, including (1) satisfaction of dynamic equilibrium equations by requiring the model's zero moment point (ZMP) to lie within the instantaneous geometrical base of support, (2) foot collision avoidance, (3) limits on ground-foot friction, and (4) vanishing yawing moment. Analytical forms of objective and constraint functions are presented and discussed for the proposed human-modeling framework in which the resulting optimization problems are solved using gradient-based mathematical programming techniques. When the framework is applied to the modeling of bipedal locomotion on level and inclined planes, acyclic human walking motions that are smooth and realistic as opposed to less natural robotic motions are obtained. The aspects of the modeling framework requiring further investigation and refinement, as well as potential applications of the framework in biomechanics, are discussed.
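
    For constraint (1), a commonly used expression for the zero moment point of a multibody model (written here for the sagittal plane and neglecting segment angular-momentum rate terms; the paper's exact formulation may differ) is

```latex
\[
x_{\mathrm{ZMP}} \;=\;
\frac{\displaystyle\sum_{i} m_{i}\,(\ddot{z}_{i}+g)\,x_{i}
      \;-\; \sum_{i} m_{i}\,\ddot{x}_{i}\,z_{i}}
     {\displaystyle\sum_{i} m_{i}\,(\ddot{z}_{i}+g)}
\]
```

    where m_i and (x_i, z_i) are the mass and mass-center coordinates of segment i and g is the gravitational acceleration; the lateral coordinate follows analogously.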

  4. Application of a COTS Resource Optimization Framework to the SSN Sensor Tasking Domain - Part I: Problem Definition

    NASA Astrophysics Data System (ADS)

    Tran, T.

    With the onset of the SmallSat era, the RSO catalog is expected to see continuing growth in the near future. This presents a significant challenge to the current sensor tasking of the SSN. The Air Force is in need of a sensor tasking system that is robust, efficient, scalable, and able to respond in real time to interruptive events that can change the tracking requirements of the RSOs. Furthermore, the system must be capable of using processed data from heterogeneous sensors to improve tasking efficiency. The SSN sensor tasking can be regarded as an economic problem of supply and demand: the amount of tracking data needed by each RSO represents the demand side, while the SSN sensor tasking represents the supply side. As the number of RSOs to be tracked grows, demand exceeds supply. The decision-maker is faced with the problem of how to allocate resources in the most efficient manner. Braxton recently developed a framework called Multi-Objective Resource Optimization using Genetic Algorithm (MOROUGA) as one of its modern COTS software products. This optimization framework took advantage of the maturing technology of evolutionary computation over the last 15 years. The framework was applied successfully to the resource allocation of an AFSCN-like problem. In any resource allocation problem, there are five key elements: (1) the resource pool, (2) the tasks using the resources, (3) a set of constraints on the tasks and the resources, (4) the objective functions to be optimized, and (5) the demand levied on the resources. In this paper we explain in detail how the design features of this optimization framework are directly applicable to the SSN sensor tasking domain. We also discuss our validation effort and present results for the AFSCN resource allocation domain using a prototype based on this optimization framework.

  5. Multidisciplinary Environments: A History of Engineering Framework Development

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Gillian, Ronnie E.

    2006-01-01

    This paper traces the history of engineering frameworks and their use by Multidisciplinary Design Optimization (MDO) practitioners. The approach is to reference papers that have been presented at one of the ten previous Multidisciplinary Analysis and Optimization (MA&O) conferences. By limiting the search to MA&O papers, the authors can (1) identify the key ideas that led to general purpose MDO frameworks and (2) uncover roadblocks that delayed the development of these ideas. The authors make no attempt to assign credit for revolutionary ideas or to assign blame for missed opportunities. Rather, the goal is to trace the various threads of computer architecture and software framework research and to observe how these threads contributed to the commercial framework products available today.

  6. A computational fluid dynamics simulation framework for ventricular catheter design optimization.

    PubMed

    Weisenberg, Sofy H; TerMaath, Stephanie C; Barbier, Charlotte N; Hill, Judith C; Killeffer, James A

    2017-11-10

    OBJECTIVE Cerebrospinal fluid (CSF) shunts are the primary treatment for patients suffering from hydrocephalus. While proven effective in symptom relief, these shunt systems are plagued by high failure rates and often require repeated revision surgeries to replace malfunctioning components. One of the leading causes of CSF shunt failure is obstruction of the ventricular catheter by aggregations of cells, proteins, blood clots, or fronds of choroid plexus that occlude the catheter's small inlet holes or even the full internal catheter lumen. Such obstructions can disrupt CSF diversion out of the ventricular system or impede it entirely. Previous studies have suggested that altering the catheter's fluid dynamics may help to reduce the likelihood of complete ventricular catheter failure caused by obstruction. However, systematic correlation between a ventricular catheter's design parameters and its performance, specifically its likelihood to become occluded, still remains unknown. Therefore, an automated, open-source computational fluid dynamics (CFD) simulation framework was developed for use in the medical community to determine optimized ventricular catheter designs and to rapidly explore parameter influence for a given flow objective. METHODS The computational framework was developed by coupling a 3D CFD solver and an iterative optimization algorithm and was implemented in a high-performance computing environment. The capabilities of the framework were demonstrated by computing an optimized ventricular catheter design that provides uniform flow rates through the catheter's inlet holes, a common design objective in the literature. The baseline computational model was validated using 3D nuclear imaging to provide flow velocities at the inlet holes and through the catheter. RESULTS The optimized catheter design achieved through use of the automated simulation framework improved significantly on previous attempts to reach a uniform inlet flow rate distribution using the standard catheter hole configuration as a baseline. While the standard ventricular catheter design featuring uniform inlet hole diameters and hole spacing has a standard deviation of 14.27% for the inlet flow rates, the optimized design has a standard deviation of 0.30%. CONCLUSIONS This customizable framework, paired with high-performance computing, provides a rapid method of design testing to solve complex flow problems. While a relatively simplified ventricular catheter model was used to demonstrate the framework, the computational approach is applicable to any baseline catheter model, and it is easily adapted to optimize catheters for the unique needs of different patients as well as for other fluid-based medical devices.
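
    The flow-uniformity objective described above can be scored, for example, as the relative standard deviation of the per-hole flow rates; the sketch below does this with a toy stand-in for the CFD evaluation, so the numbers carry no physical meaning.

```python
import numpy as np

# Score a candidate catheter design by the relative standard deviation of the
# per-hole flow rates (lower is more uniform). The CFD solve is replaced by a
# placeholder that returns made-up flow rates for a vector of hole diameters.


def hole_flow_rates(diameters):
    """Placeholder for the CFD evaluation of per-hole volumetric flow rates."""
    # Toy model: flow scales with hole area, with holes nearer the catheter
    # tip penalized slightly (a stand-in for the real pressure distribution).
    area = np.pi * (np.asarray(diameters, dtype=float) / 2.0) ** 2
    bias = np.linspace(1.0, 0.7, len(diameters))
    q = area * bias
    return q / q.sum()


def uniformity_cost(diameters):
    """Relative standard deviation (%) of inlet flow rates."""
    q = hole_flow_rates(diameters)
    return 100.0 * q.std() / q.mean()


uniform_design = [1.0] * 8                       # equal hole diameters
graded_design = np.linspace(0.8, 1.2, 8)         # alternative graded design
print("uniform holes:", round(uniformity_cost(uniform_design), 2), "%")
print("graded holes :", round(uniformity_cost(graded_design), 2), "%")
```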

  7. YAPPA: a Compiler-Based Parallelization Framework for Irregular Applications on MPSoCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lovergine, Silvia; Tumeo, Antonino; Villa, Oreste

    Modern embedded systems include hundreds of cores. Because of the difficulty in providing a fast, coherent memory architecture, these systems usually rely on non-coherent, non-uniform memory architectures with private memories for each core. However, programming these systems poses significant challenges. The developer must extract large amounts of parallelism while orchestrating communication among cores to optimize application performance. These issues become even more significant with irregular applications, which present data sets that are difficult to partition, unpredictable memory accesses, unbalanced control flow, and fine-grained communication. Hand-optimizing every single aspect is hard and time-consuming, and it often does not lead to the expected performance. There is a growing gap between such complex, highly parallel architectures and the high-level languages used to describe the specification, which were designed for simpler systems and do not consider these new issues. In this paper we introduce YAPPA (Yet Another Parallel Programming Approach), an LLVM-based compilation framework for the automatic parallelization of irregular applications on modern MPSoCs. We start by considering an efficient parallel programming approach for irregular applications on distributed memory systems. We then propose a set of transformations that can reduce the development and optimization effort. The results of our initial prototype confirm the correctness of the proposed approach.

  8. Hollow carbon nanobubbles: monocrystalline MOF nanobubbles and their pyrolysis

    PubMed Central

    Zhang, Wei; Jiang, Xiangfen; Zhao, Yanyi; Carné-Sánchez, Arnau; Malgras, Victor; Kim, Jeonghun; Kim, Jung Ho; Wang, Shaobin; Jiang, Ji-Sen

    2017-01-01

    While bulk-sized metal–organic frameworks (MOFs) face limits to their utilization in various research fields such as energy storage applications, nanoarchitectonics is believed to be a possible solution. It is highly challenging to realize MOF nanobubbles with monocrystalline frameworks. Here, by a spatially controlled etching approach, we achieve the synthesis of zeolitic imidazolate framework (ZIF-8) nanobubbles with a uniform size of less than 100 nm. Interestingly, the ZIF-8 nanobubbles possess a monocrystalline nanoshell with a thickness of around 10 nm. Under optimal pyrolytic conditions, the ZIF-8 nanobubbles can be converted into hollow carbon nanobubbles while keeping their original shapes. The nanobubble structure enhances fast Na+/K+ ion intercalation performance. Such remarkable improvement cannot be realized by conventional MOFs or their derived carbons. PMID:28580098

  9. Leveraging advances in biology to design biomaterials

    NASA Astrophysics Data System (ADS)

    Darnell, Max; Mooney, David J.

    2017-12-01

    Biomaterials have dramatically increased in functionality and complexity, allowing unprecedented control over the cells that interact with them. From these engineering advances arises the prospect of improved biomaterial-based therapies, yet practical constraints favour simplicity. Tools from the biology community are enabling high-resolution and high-throughput bioassays that, if incorporated into a biomaterial design framework, could help achieve unprecedented functionality while minimizing the complexity of designs by identifying the most important material parameters and biological outputs. However, to avoid data explosions and to effectively match the information content of an assay with the goal of the experiment, material screens and bioassays must be arranged in specific ways. By borrowing methods to design experiments and workflows from the bioprocess engineering community, we outline a framework for the incorporation of next-generation bioassays into biomaterials design to effectively optimize function while minimizing complexity. This framework can inspire biomaterials designs that maximize functionality and translatability.

  10. Optimal dynamic control of invasions: applying a systematic conservation approach.

    PubMed

    Adams, Vanessa M; Setterfield, Samantha A

    2015-06-01

    The social, economic, and environmental impacts of invasive plants are well recognized. However, these variable impacts are rarely accounted for in the spatial prioritization of funding for weed management. We examine how current spatially explicit prioritization methods can be extended to identify optimal budget allocations to both eradication and control measures of invasive species to minimize the costs and likelihood of invasion. Our framework extends recent approaches to systematic prioritization of weed management to account for multiple values that are threatened by weed invasions with a multi-year dynamic prioritization approach. We apply our method to the northern portion of the Daly catchment in the Northern Territory, which has significant conservation values that are threatened by gamba grass (Andropogon gayanus), a highly invasive species recognized by the Australian government as a Weed of National Significance (WONS). We interface Marxan, a widely applied conservation planning tool, with a dynamic biophysical model of gamba grass to optimally allocate funds to eradication and control programs under two budget scenarios comparing maximizing gain (MaxGain) and minimizing loss (MinLoss) optimization approaches. The prioritizations support previous findings that a MinLoss approach is a better strategy when threats are more spatially variable than conservation values. Over a 10-year simulation period, we find that a MinLoss approach reduces future infestations by ~8% compared to MaxGain in the constrained budget scenarios and ~12% in the unlimited budget scenarios. We find that, due to the extensive current invasion and rapid rate of spread, allocating the annual budget to control efforts is more efficient than funding eradication efforts when the budget is constrained. Under a constrained budget, applying the most efficient optimization scenario (control, MinLoss) reduces spread by ~27% compared to no control. Conversely, if the budget is unlimited, it is more efficient to fund eradication efforts, which reduces spread by ~65% compared to no control.

  11. Airfoil optimization for unsteady flows with application to high-lift noise reduction

    NASA Astrophysics Data System (ADS)

    Rumpfkeil, Markus Peer

    The use of steady-state aerodynamic optimization methods in the computational fluid dynamics (CFD) community is fairly well established. In particular, the use of adjoint methods has proven to be very beneficial because their cost is independent of the number of design variables. The application of numerical optimization to airframe-generated noise, however, has not received as much attention, but with the significant quieting of modern engines, airframe noise now competes with engine noise. Optimal control techniques for unsteady flows are needed in order to reduce airframe-generated noise. In this thesis, a general framework is formulated to calculate the gradient of a cost function in a nonlinear unsteady flow environment via the discrete adjoint method. The unsteady optimization algorithm developed in this work utilizes a Newton-Krylov approach: the gradient-based optimizer uses the quasi-Newton method BFGS, Newton's method is applied to the nonlinear flow problem, GMRES is used to solve the resulting linear problem inexactly, and the linear adjoint problem is solved using Bi-CGSTAB. The flow is governed by the unsteady two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized using structured grids and a finite difference approach. The effectiveness of the unsteady optimization algorithm is demonstrated by applying it to several problems of interest including shock tubes, pulses in converging-diverging nozzles, rotating cylinders, transonic buffeting, and an unsteady trailing-edge flow. In order to address radiated far-field noise, an acoustic wave propagation program based on the Ffowcs Williams and Hawkings (FW-H) formulation is implemented and validated. The general framework is then used to derive the adjoint equations for a novel hybrid URANS/FW-H optimization algorithm in order to optimize the shape of airfoils based on their calculated far-field pressure fluctuations. Validation and application results for this novel hybrid URANS/FW-H optimization algorithm show that it is possible to optimize the shape of an airfoil in an unsteady flow environment to minimize its radiated far-field noise while maintaining good aerodynamic performance.
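
    A minimal sketch of the outer quasi-Newton design loop is given below; the unsteady flow objective and adjoint gradient are hypothetical placeholder functions, and SciPy's BFGS driver merely stands in for the optimizer described in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

# Outer design loop of a gradient-based shape optimization in which the
# objective (e.g., a time-averaged noise or drag functional) and its gradient
# would come from an unsteady flow solve plus a discrete adjoint solve. Both
# solves are hypothetical placeholders here; SciPy's BFGS is only a stand-in
# for the quasi-Newton optimizer described above.


def unsteady_flow_objective(shape_params):
    """Placeholder for J(shape): run the unsteady solver, return the cost."""
    x = np.asarray(shape_params)
    return float(np.sum((x - 0.3) ** 2) + 0.05 * np.sum(np.sin(4 * x) ** 2))


def adjoint_gradient(shape_params):
    """Placeholder for dJ/dshape, as obtained from a discrete adjoint solve."""
    x = np.asarray(shape_params)
    return 2.0 * (x - 0.3) + 0.4 * np.sin(4 * x) * np.cos(4 * x)


x0 = np.zeros(5)                        # initial airfoil shape parameters
result = minimize(unsteady_flow_objective, x0, jac=adjoint_gradient,
                  method="BFGS", options={"gtol": 1e-8})
print("optimized shape parameters:", result.x)
print("objective value:", result.fun)
```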

  12. FRANOPP: Framework for analysis and optimization problems user's guide

    NASA Technical Reports Server (NTRS)

    Riley, K. M.

    1981-01-01

    The Framework for Analysis and Optimization Problems (FRANOPP) is a software aid for the study and solution of design (optimization) problems; it provides the driving program and plotting capability for a user-generated programming system. In addition to FRANOPP, the programming system also contains the optimization code CONMIN and two user-supplied codes, one for analysis and one for output. With FRANOPP the user is provided with five options for studying a design problem. Three of the options utilize the plot capability and present an in-depth study of the design problem. The study can be focused on a history of the optimization process or on the interaction of variables within the design problem.

  13. A Rigorous Framework for Optimization of Expensive Functions by Surrogates

    NASA Technical Reports Server (NTRS)

    Booker, Andrew J.; Dennis, J. E., Jr.; Frank, Paul D.; Serafini, David B.; Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    The goal of the research reported here is to develop rigorous optimization algorithms to apply to some engineering design problems for which the direct application of traditional optimization approaches is not practical. This paper presents and analyzes a framework for generating a sequence of approximations to the objective function and managing the use of these approximations as surrogates for optimization. The result is to obtain convergence to a minimizer of an expensive objective function subject to simple constraints. The approach is widely applicable because it does not require, or even explicitly approximate, derivatives of the objective. Numerical results are presented for a 31-variable helicopter rotor blade design example and for a standard optimization test example.

  14. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    PubMed

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD-optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.

  15. Simultaneous fault detection and control design for switched systems with two quantized signals.

    PubMed

    Li, Jian; Park, Ju H; Ye, Dan

    2017-01-01

    The problem of simultaneous fault detection and control design for switched systems with two quantized signals is addressed in this paper. Dynamic quantizers are employed, respectively, before the output is passed to the fault detector and before the control input is transmitted to the switched system. Taking the quantization errors into account, the robust performance for this kind of system is given. Furthermore, sufficient conditions for the existence of the fault detector/controller are presented in the framework of linear matrix inequalities, and the fault detector/controller gains and the supremum of the quantizer range are derived by a convex optimization method. Finally, two illustrative examples demonstrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  16. [Controlling and operation management in hospitals].

    PubMed

    Vagts, Dierk A

    2010-03-01

    The economic pressure on the health system, and especially on hospitals, is growing rapidly. Hence, economic knowledge is becoming imperative for people in medical executive positions. In advanced, forward-looking hospitals, controlling is gaining more and more weight because it takes on a coordinating responsibility. Ideally, controlling guides the teamwork of managers (CEOs) and medical executives by weighing medical necessities against the economic framework. Controlling contributes to achieving optimal efficiency of a hospital in a highly competitive environment by providing medical and economic data on a regular basis. A close, open-minded, and trusting cooperation between all people involved is imperative. Hence, controlling in the proper sense of the word cannot flourish in dominant and hierarchic hospital structures. Georg Thieme Verlag Stuttgart * New York.

  17. Frontal Midline Theta Reflects Anxiety and Cognitive Control: Meta-Analytic Evidence

    PubMed Central

    Cavanagh, James F.; Shackman, Alexander J.

    2014-01-01

    Evidence from imaging and anatomical studies suggests that the midcingulate cortex (MCC) is a dynamic hub lying at the interface of affect and cognition. In particular, this neural system appears to integrate information about conflict and punishment in order to optimize behavior in the face of action-outcome uncertainty. In a series of meta-analyses, we show how recent human electrophysiological research provides compelling evidence that frontal-midline theta signals reflecting MCC activity are moderated by anxiety and predict adaptive behavioral adjustments. These findings underscore the importance of frontal theta activity to a broad spectrum of control operations. We argue that frontal-midline theta provides a neurophysiologically plausible mechanism for optimally adjusting behavior to uncertainty, a hallmark of situations that elicit anxiety and demand cognitive control. These observations compel a new perspective on the mechanisms guiding motivated learning and behavior and provide a framework for understanding the role of the MCC in temperament and psychopathology. PMID:24787485

  18. Neural network robust tracking control with adaptive critic framework for uncertain nonlinear systems.

    PubMed

    Wang, Ding; Liu, Derong; Zhang, Yun; Li, Hongyi

    2018-01-01

    In this paper, we tackle the neural robust tracking control problem for a class of nonlinear systems using the adaptive critic technique. The main contribution is that a neural-network-based robust tracking control scheme is established for nonlinear systems involving matched uncertainties. The augmented system comprising the tracking error and the reference trajectory is formulated and then addressed under the adaptive critic optimal control formulation, where an initial stabilizing controller is not needed. The approximate control law is derived by solving the Hamilton-Jacobi-Bellman equation related to the nominal augmented system, followed by closed-loop stability analysis. The robust tracking control performance is guaranteed theoretically via a Lyapunov approach and also verified through simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
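
    For reference, the continuous-time Hamilton-Jacobi-Bellman equation that the critic network approximates can be written in a generic infinite-horizon form (not the paper's exact augmented-system notation) as

```latex
\[
0 \;=\; \min_{u}\Bigl[\, r(x,u) \;+\; \nabla V^{*}(x)^{\top} f(x,u) \,\Bigr],
\qquad
u^{*}(x) \;=\; \arg\min_{u}\Bigl[\, r(x,u) \;+\; \nabla V^{*}(x)^{\top} f(x,u) \,\Bigr]
\]
```

    where f is the (augmented) system dynamics, r the utility/cost rate, and V* the optimal value function that the critic approximates.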

  19. iDriving (Intelligent Driving)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malikopoulos, Andreas

    2012-09-17

    iDriving identifies the driving-style factors that have a major impact on fuel economy. An optimization framework is used with the aim of optimizing a driving style with respect to these factors. A set of polynomial metamodels is constructed to reflect the responses produced in fuel economy by changing the driving factors. The optimization framework is used to develop a real-time feedback system, including visual instructions, that enables drivers to alter their driving styles in response to actual driving conditions to improve fuel efficiency.
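
    A minimal sketch of the metamodel idea is shown below: fit a polynomial response surface to (driving factor, fuel economy) samples and search it for a better driving style. The synthetic data, factor names, and quadratic model form are illustrative placeholders, not the project's actual metamodels.

```python
import numpy as np

# Illustrative polynomial metamodel of fuel economy as a function of two
# driving-style factors (say, mean acceleration and mean cruise speed). The
# synthetic data and the quadratic model form are placeholders for the logged
# driving data and metamodels of the actual system.

rng = np.random.default_rng(3)
accel = rng.uniform(0.5, 3.0, 200)          # driving factor 1
speed = rng.uniform(40.0, 120.0, 200)       # driving factor 2
mpg = 45 - 3.0 * accel - 0.004 * (speed - 70) ** 2 + rng.normal(0, 0.5, 200)

# Least-squares fit of a quadratic response surface mpg ~ p(accel, speed).
X = np.column_stack([np.ones_like(accel), accel, speed,
                     accel ** 2, speed ** 2, accel * speed])
coef, *_ = np.linalg.lstsq(X, mpg, rcond=None)


def predicted_mpg(a, s):
    return coef @ np.array([1.0, a, s, a ** 2, s ** 2, a * s])


# Simple grid search over admissible driving styles using the metamodel only.
grid_a = np.linspace(0.5, 3.0, 50)
grid_s = np.linspace(40.0, 120.0, 50)
best = max((predicted_mpg(a, s), a, s) for a in grid_a for s in grid_s)
print("metamodel-optimal style: accel=%.2f, speed=%.1f, mpg=%.1f"
      % (best[1], best[2], best[0]))
```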

  20. A new approach to mixed H2/H infinity controller synthesis using gradient-based parameter optimization methods

    NASA Technical Reports Server (NTRS)

    Ly, Uy-Loi; Schoemig, Ewald

    1993-01-01

    In the past few years, the mixed H2/H-infinity control problem has been the object of much research interest, since it allows the incorporation of robust stability into the LQG framework. The general mixed H2/H-infinity design problem has yet to be solved analytically. Numerous schemes have considered upper bounds for the H2-performance criterion and/or imposed restrictive constraints on the class of systems under investigation. Furthermore, many modern control applications rely on dynamic models obtained from finite-element analysis and thus involve high-order plant models. Hence the capability to design low-order (fixed-order) controllers is of great importance. In this research a new design method was developed that optimizes the exact H2-norm of a certain subsystem subject to robust stability in terms of H-infinity constraints and a minimal number of system assumptions. The derived algorithm is based on a differentiable scalar time-domain penalty function that represents the H-infinity constraints in the overall optimization. The scheme is capable of handling multiple plant conditions, and hence multiple performance criteria and H-infinity constraints, and incorporates additional constraints such as fixed-order and/or fixed-structure controllers. The defined penalty function is applicable to any constraint that is expressible in the form of a real symmetric matrix inequality.

  1. Nanoarchitectures for Metal-Organic Framework-Derived Nanoporous Carbons toward Supercapacitor Applications.

    PubMed

    Salunkhe, Rahul R; Kaneti, Yusuf Valentino; Kim, Jeonghun; Kim, Jung Ho; Yamauchi, Yusuke

    2016-12-20

    Future advances in supercapacitors depend on the development of novel carbon materials with optimized porous structures, high surface area, high conductivity, and high electrochemical stability. Traditionally, nanoporous carbons (NPCs) have been prepared by a variety of methods, such as templated synthesis, carbonization of polymer precursors, physical and chemical activation, etc. Inorganic solid materials such as mesoporous silica and zeolites have been successfully utilized as templates to prepare NPCs. However, the hard-templating methods typically involve several synthetic steps, such as preparation of the original templates, formation of carbon frameworks, and removal of the original templates. Therefore, these methods are not favorable for large-scale production. Metal-organic frameworks (MOFs) with high surface areas and large pore volumes have been studied over the years, and recently, enormous efforts have been made to utilize MOFs for electrochemical applications. However, their low conductivity and poor stability still present major challenges toward their practical applications in supercapacitors. MOFs can be used as precursors for the preparation of NPCs with high porosity. Their parent MOFs can be prepared with endless combinations of organic and inorganic constituents by simple coordination chemistry, and it is possible to control their porous architectures, pore volumes, surface areas, etc. These unique properties of MOF-derived NPCs make them highly attractive for many technological applications. Compared with carbonaceous materials prepared using conventional precursors, MOF-derived carbons have significant advantages in terms of a simple synthesis with inherent diversity affording precise control over porous architectures, pore volumes, and surface areas. In this Account, we will summarize our recent research developments on the preparation of three-dimensional (3-D) MOF-derived carbons for supercapacitor applications. This Account will be divided into three main sections: (1) useful background on carbon materials for supercapacitor applications, (2) the importance of MOF-derived carbons, and (3) potential future developments of MOF-derived carbons for supercapacitors. This Account focuses mostly on carbons derived from two types of MOFs, namely, zeolitic imidazolate framework-8 (ZIF-8) and ZIF-67. By using examples from our previous works, we will show the uniqueness of these carbons for achieving high performance by control of the chemical reactions/conditions as well as proper utilization in asymmetric/symmetric supercapacitor configurations. This Account will promote further developments of MOF-derived multifunctional carbon materials with controlled porous architectures for optimization of their electrochemical performance toward supercapacitor applications.

  2. Optimal Control of Hybrid Systems in Air Traffic Applications

    NASA Astrophysics Data System (ADS)

    Kamgarpour, Maryam

    Growing concerns over the scalability of air traffic operations, air transportation fuel emissions and prices, as well as the advent of communication and sensing technologies motivate improvements to the air traffic management system. To address such improvements, in this thesis a hybrid dynamical model as an abstraction of the air traffic system is considered. Wind and hazardous weather impacts are included using a stochastic model. This thesis focuses on the design of algorithms for verification and control of hybrid and stochastic dynamical systems and the application of these algorithms to air traffic management problems. In the deterministic setting, a numerically efficient algorithm for optimal control of hybrid systems is proposed based on extensions of classical optimal control techniques. This algorithm is applied to optimize the trajectory of an Airbus 320 aircraft in the presence of wind and storms. In the stochastic setting, the verification problem of reaching a target set while avoiding obstacles (reach-avoid) is formulated as a two-player game to account for external agents' influence on system dynamics. The solution approach is applied to air traffic conflict prediction in the presence of stochastic wind. Due to the uncertainty in forecasts of the hazardous weather, and hence the unsafe regions of airspace for aircraft flight, the reach-avoid framework is extended to account for stochastic target and safe sets. This methodology is used to maximize the probability of the safety of aircraft paths through hazardous weather. Finally, the problem of modeling and optimization of arrival air traffic and runway configuration in dense airspace subject to stochastic weather data is addressed. This problem is formulated as a hybrid optimal control problem and is solved with a hierarchical approach that decouples safety and performance. As illustrated with this problem, the large scale of air traffic operations motivates future work on the efficient implementation of the proposed algorithms.

  3. Functional Basis for Efficient Physical Layer Classical Control in Quantum Processors

    NASA Astrophysics Data System (ADS)

    Ball, Harrison; Nguyen, Trung; Leong, Philip H. W.; Biercuk, Michael J.

    2016-12-01

    The rapid progress seen in the development of quantum-coherent devices for information processing has motivated serious consideration of quantum computer architecture and organization. One topic that remains open for investigation and optimization relates to the design of the classical-quantum interface, where control operations on individual qubits are applied according to higher-level algorithms; accommodating competing demands on performance and scalability remains a major outstanding challenge. In this work, we present a resource-efficient, scalable framework for the implementation of embedded physical layer classical controllers for quantum-information systems. Design drivers and key functionalities are introduced, leading to the selection of Walsh functions as an effective functional basis for both programming and controller hardware implementation. This approach leverages the simplicity of real-time Walsh-function generation in classical digital hardware, and the fact that a wide variety of physical layer controls, such as dynamic error suppression, are known to fall within the Walsh family. We experimentally implement a real-time field-programmable-gate-array-based Walsh controller producing Walsh timing signals and Walsh-synthesized analog waveforms appropriate for critical tasks in error-resistant quantum control and noise characterization. These demonstrations represent the first step towards a unified framework for the realization of physical layer controls compatible with large-scale quantum-information processing.
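
    As an illustration of how cheaply such basis signals can be produced (not the authors' FPGA implementation), the sketch below generates Walsh functions in Hadamard (natural) ordering from simple bit operations.

```python
import numpy as np

# Walsh functions on 2**n sample points, in Hadamard (natural) ordering:
# W_k[j] = (-1) ** popcount(k & j). This only shows how cheaply the +/-1
# sequences can be generated digitally; sequency ordering would additionally
# require an index remapping, and real hardware would stream bits instead.


def walsh(k: int, n: int) -> np.ndarray:
    """Return the k-th Walsh function sampled on 2**n points (values +/-1)."""
    j = np.arange(2 ** n)
    parity = np.array([bin(k & int(jj)).count("1") & 1 for jj in j])
    return 1 - 2 * parity           # parity 0 -> +1, parity 1 -> -1


n = 3
W = np.array([walsh(k, n) for k in range(2 ** n)])
print(W)                            # rows form an 8x8 Hadamard matrix
print("orthogonality check:", np.allclose(W @ W.T, (2 ** n) * np.eye(2 ** n)))
```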

  4. A Framework for Dimensioning VDL-2 Air-Ground Networks

    NASA Technical Reports Server (NTRS)

    Ribeiro, Leila Z.; Monticone, Leone C.; Snow, Richard E.; Box, Frank; Apaza, Rafel; Bretmersky, Steven

    2014-01-01

    This paper describes a framework developed at MITRE for dimensioning a Very High Frequency (VHF) Digital Link Mode 2 (VDL-2) Air-to-Ground network. This framework was developed to support the FAA's Data Communications (Data Comm) program by providing estimates of the expected capacity required for the air-ground network services that will support Controller-Pilot Data Link Communications (CPDLC), as well as the spectrum needed to operate the system at required levels of performance. The Data Comm program is part of the FAA's NextGen initiative to implement advanced communication capabilities in the National Airspace System (NAS). The first component of the framework is the radio-frequency (RF) coverage design for the network ground stations. Then we describe the approach used to assess the aircraft geographical distribution and the data traffic demand expected in the network. The next step is resource allocation, which utilizes optimization algorithms developed in MITRE's Spectrum Prospector™ tool to propose frequency assignment solutions and a NASA-developed VDL-2 tool to perform simulations and determine whether a proposed plan meets the desired performance requirements. The framework presented is capable of providing quantitative estimates of multiple variables related to the air-ground network, in order to satisfy established coverage, capacity, and latency performance requirements. Outputs include: coverage provided at different altitudes; data capacity required in the network, aggregated or on a per-ground-station basis; spectrum (pool of frequencies) needed for the system to meet a target performance; an optimized frequency plan for a given scenario; expected performance given the available spectrum; and estimates of throughput distributions for a given scenario. We conclude with a discussion aimed at providing insight into the tradeoffs and challenges identified with respect to radio resource management for VDL-2 air-ground networks.

  5. Optimal Reservoir Operation using Stochastic Model Predictive Control

    NASA Astrophysics Data System (ADS)

    Sahu, R.; McLaughlin, D.

    2016-12-01

    Hydropower operations are typically designed to fulfill contracts negotiated with consumers who need reliable energy supplies, despite uncertainties in reservoir inflows. In addition to providing reliable power, the reservoir operator needs to take into account environmental factors such as downstream flooding or compliance with minimum flow requirements. From a dynamical systems perspective, the reservoir operating strategy must cope with conflicting objectives in the presence of random disturbances. In order to achieve optimal performance, the reservoir system needs to continually adapt to disturbances in real time. Model Predictive Control (MPC) is a real-time control technique that adapts by deriving the reservoir release at each decision time from the current state of the system. Here an ensemble-based version of MPC (SMPC) is applied to a generic reservoir to determine both the optimal power contract, considering future inflow uncertainty, and a real-time operating strategy that attempts to satisfy the contract. Contract selection and real-time operation are coupled in an optimization framework that also defines a Pareto trade-off between the revenue generated from energy production and the environmental damage resulting from uncontrolled reservoir spills. Further insight is provided by a sensitivity analysis of key parameters specified in the SMPC technique. The results demonstrate that SMPC is suitable for multi-objective planning and associated real-time operation of a wide range of hydropower reservoir systems.
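
    To make the receding-horizon idea concrete, the following Python sketch implements one ensemble-based (stochastic) MPC step for a single generic reservoir; the storage limits, price, spill penalty, and inflow distribution are hypothetical placeholders rather than values from the study.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    H, N_ENS = 12, 50                  # decision horizon and inflow-ensemble size
    S_MAX, R_MAX, PRICE, SPILL_PEN = 100.0, 20.0, 1.0, 5.0   # hypothetical parameters

    def expected_cost(releases, s0, inflow_ens):
        """Average over inflow scenarios: negative revenue plus a penalty on spills."""
        cost = 0.0
        for inflows in inflow_ens:
            s = s0
            for r, q in zip(releases, inflows):
                s = s + q - r
                spill = max(s - S_MAX, 0.0)            # storage overflow is spilled
                s = min(max(s, 0.0), S_MAX)
                cost += -PRICE * r + SPILL_PEN * spill
        return cost / len(inflow_ens)

    def smpc_release(s0, inflow_ens):
        """Solve the open-loop problem, then apply only the first release (receding horizon)."""
        res = minimize(expected_cost, x0=np.full(H, R_MAX / 2), args=(s0, inflow_ens),
                       bounds=[(0.0, R_MAX)] * H)
        return res.x[0]

    rng = np.random.default_rng(0)
    release_now = smpc_release(s0=60.0, inflow_ens=rng.gamma(2.0, 5.0, size=(N_ENS, H)))
    ```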

  6. Network Community Detection based on the Physarum-inspired Computational Framework.

    PubMed

    Gao, Chao; Liang, Mingxin; Li, Xianghua; Zhang, Zili; Wang, Zhen; Zhou, Zhili

    2016-12-13

    Community detection is a crucial and essential problem in the structure analytics of complex networks, which can help us understand and predict the characteristics and functions of complex networks. Many methods, ranging from optimization-based algorithms to heuristic-based algorithms, have been proposed for solving such a problem. Due to the inherent complexity of identifying network structure, how to design an effective algorithm with higher accuracy and lower computational cost still remains an open problem. Inspired by the computational capability and positive feedback mechanism observed during the foraging process of Physarum, a large amoeba-like cell consisting of a dendritic network of tube-like pseudopodia, a general Physarum-based computational framework for community detection is proposed in this paper. Based on the proposed framework, the inter-community edges can be identified from the intra-community edges in a network and the positive feedback of the solving process in an algorithm can be further enhanced, which are used to improve the efficiency of original optimization-based and heuristic-based community detection algorithms, respectively. Some typical algorithms (e.g., genetic algorithm, ant colony optimization algorithm, and Markov clustering algorithm) and real-world datasets have been used to evaluate the efficiency of our proposed computational framework. Experiments show that the algorithms optimized by the Physarum-inspired computational framework perform better than the original ones, in terms of accuracy and computational cost. Moreover, a computational complexity analysis verifies the scalability of our framework.

  7. Computer-Simulation Surrogates for Optimization: Application to Trapezoidal Ducts and Axisymmetric Bodies

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Paraschivoiu, Marius; Yesilyurt, Serhat; Patera, Anthony T.

    1995-01-01

    Engineering design and optimization efforts using computational systems rapidly become resource intensive. The goal of the surrogate-based approach is to perform a complete optimization with limited resources. In this paper we present a Bayesian-validated approach that informs the designer as to how well the surrogate performs; in particular, our surrogate framework provides precise (albeit probabilistic) bounds on the errors incurred in the surrogate-for-simulation substitution. The theory and algorithms of our computer-simulation surrogate framework are first described. The utility of the framework is then demonstrated through two illustrative examples: maximization of the flowrate of fully developed flow in trapezoidal ducts; and design of an axisymmetric body that achieves a target Stokes drag.

  8. Two-phase framework for near-optimal multi-target Lambert rendezvous

    NASA Astrophysics Data System (ADS)

    Bang, Jun; Ahn, Jaemyung

    2018-03-01

    This paper proposes a two-phase framework to obtain a near-optimal solution of the multi-target Lambert rendezvous problem. The objective of the problem is to determine the minimum-cost rendezvous sequence and trajectories to visit a given set of targets within a maximum mission duration. The first phase solves a series of single-target rendezvous problems for all departure-arrival object pairs to generate the elementary solutions, which provide candidate rendezvous trajectories. The second phase formulates a variant of the traveling salesman problem (TSP) using the elementary solutions prepared in the first phase and determines the final rendezvous sequence and trajectories of the multi-target rendezvous problem. The validity of the proposed optimization framework is demonstrated through an asteroid exploration case study.
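
    The second phase can be viewed as a small traveling-salesman-style search over the phase-one costs. The Python sketch below illustrates that structure with hypothetical targets and delta-v values; the paper's formulation additionally enforces a maximum mission duration, which is omitted here.

    ```python
    import itertools

    # Phase 1 (assumed precomputed): minimum-cost single-target transfers for every
    # departure-arrival pair, reduced here to hypothetical delta-v numbers.
    costs = {('A', 'B'): 3.1, ('A', 'C'): 2.4, ('B', 'A'): 3.0,
             ('B', 'C'): 1.8, ('C', 'A'): 2.5, ('C', 'B'): 1.9}

    def best_sequence(start, targets):
        """Phase 2: brute-force search over visiting orders (adequate for a handful of
        targets; the paper formulates this as a TSP variant for larger sets)."""
        best_cost, best_tour = float('inf'), None
        for order in itertools.permutations(targets):
            tour = [start, *order]
            cost = sum(costs[a, b] for a, b in zip(tour, tour[1:]))
            if cost < best_cost:
                best_cost, best_tour = cost, tour
        return best_cost, best_tour

    print(best_sequence('A', ['B', 'C']))    # picks the cheaper of A->B->C and A->C->B
    ```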

  9. Perspectives on MEMS in bioengineering: a novel capacitive position microsensor.

    PubMed

    Pedrocchi, A; Hoen, S; Ferrigno, G; Pedotti, A

    2000-01-01

    We describe a novel capacitive position sensor using micromachining to achieve high sensitivity and a large range of motion. These sensors require a new theoretical framework to describe and optimize their performance. Employing a complete description of the electric fields, the analysis shows that the sensor should deviate from the standard geometries used for capacitive sensors; this optimization yields a twofold increase in sensitivity. Results on a 10x PC-board model imply that the micromachined sensor should achieve a sensitivity of less than 10 nm over a 500-micron range of travel. Some bioengineering applications are addressed, including positioning of micromirrors for laser surgery and dose control for implantable drug delivery systems.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yue J.; Malikopoulos, Andreas; Cassandras, Christos G.

    We address the problem of coordinating online a continuous flow of connected and automated vehicles (CAVs) crossing two adjacent intersections in an urban area. We present a decentralized optimal control framework whose solution yields for each vehicle the optimal acceleration/deceleration at any time in the sense of minimizing fuel consumption. The solution, when it exists, allows the vehicles to cross the intersections without the use of traffic lights, without creating congestion on the connecting road, and under the hard safety constraint of collision avoidance. The effectiveness of the proposed solution is validated through simulation considering two intersections located in downtown Boston, and it is shown that coordination of CAVs can reduce significantly both fuel consumption and travel time.
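
    For orientation, a hedged sketch of the per-vehicle energy-minimization problem that such decentralized frameworks typically solve is given below (double-integrator dynamics; the paper's formulation additionally carries speed, acceleration, and collision-avoidance constraints):

    ```latex
    \[
      \min_{u_i(\cdot)} \; \frac{1}{2}\int_{t_i^{0}}^{t_i^{f}} u_i^{2}(t)\,dt
      \qquad \text{subject to} \quad \dot p_i = v_i, \quad \dot v_i = u_i ,
    \]
    % In the unconstrained case, Hamiltonian analysis yields a control that is linear in time,
    % u_i^*(t) = a_i t + b_i, with the constants fixed by the boundary conditions at t_i^0 and t_i^f.
    ```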

  11. Automated and Cooperative Vehicle Merging at Highway On-Ramps

    DOE PAGES

    Rios-Torres, Jackeline; Malikopoulos, Andreas A.

    2016-08-05

    Recognition of the need for connected and automated vehicles (CAVs) is gaining momentum. CAVs can improve both transportation network efficiency and safety through control algorithms that can harmonically use all existing information to coordinate the vehicles. This paper addresses the problem of optimally coordinating CAVs at merging roadways to achieve smooth traffic flow without stop-and-go driving. Here we present an optimization framework and an analytical closed-form solution that allows online coordination of vehicles at merging zones. The effectiveness of the proposed solution is validated through simulation, and it is shown that coordination of vehicles can significantly reduce both fuel consumption and travel time.

  12. Optimization method for an evolutional type inverse heat conduction problem

    NASA Astrophysics Data System (ADS)

    Deng, Zui-Cha; Yu, Jian-Ning; Yang, Liu

    2008-01-01

    This paper deals with the determination of a pair (q, u) in the heat conduction equation $u_t - u_{xx} + q(x,t)u = 0$, with initial and boundary conditions $u(x,0) = u_0(x)$, $u_x|_{x=0} = u_x|_{x=1} = 0$, from the overspecified data $u(x,t) = g(x,t)$. By the time semi-discrete scheme, the problem is transformed into a sequence of inverse problems in which the unknown coefficients are purely space dependent. Based on the optimal control framework, the existence, uniqueness and stability of the solution (q, u) are proved. A necessary condition, which is a coupled system of a parabolic equation and a parabolic variational inequality, is deduced.
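
    As an illustration of the time semi-discrete step (the paper's exact scheme may differ), a backward-Euler discretization with step $\tau$ and $u^n(x) \approx u(x, n\tau)$ turns each time level into an inverse problem for a purely space-dependent coefficient $q^n(x)$:

    ```latex
    \[
      \frac{u^{n}(x) - u^{n-1}(x)}{\tau} - u^{n}_{xx}(x) + q^{n}(x)\,u^{n}(x) = 0,
      \qquad u^{n}_{x}(0) = u^{n}_{x}(1) = 0 ,
    \]
    % with q^n recovered at each level from the overspecified data g(x, n*tau)
    % via the optimal control (regularized least-squares) formulation.
    ```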

  13. A Mixed Integer Efficient Global Optimization Framework: Applied to the Simultaneous Aircraft Design, Airline Allocation and Revenue Management Problem

    NASA Astrophysics Data System (ADS)

    Roy, Satadru

    Traditional approaches to design and optimize a new system, often, use a system-centric objective and do not take into consideration how the operator will use this new system alongside of other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to a sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then provide this system to the operator who decides the best way to use this new system along with the existing systems. The motivating example in this dissertation presents one such similar problem that includes aircraft design, airline operations and revenue management "subspaces". The research here develops an approach that could simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, which are extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravate the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that the previous decomposition approaches may not have exploited, addresses mixed-integer/discrete type design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. This approach then demonstrates its ability to solve an 11 route airline network problem consisting of 94 decision variables including 33 integer and 61 continuous type variables. This application problem is a representation of an interacting group of systems and provides key challenges to the optimization framework to solve the MINLP problem, as reflected by the presence of a moderate number of integer and continuous type design variables and expensive analysis tool. The result indicates simultaneously solving the subspaces could lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems including categorically discrete variables, indicating that the framework could have broader application than the new aircraft design-fleet allocation-revenue management problem.

  14. Grey fuzzy optimization model for water quality management of a river system

    NASA Astrophysics Data System (ADS)

    Karmakar, Subhankar; Mujumdar, P. P.

    2006-07-01

    A grey fuzzy optimization model is developed for water quality management of a river system to address the uncertainty involved in fixing the membership functions for different goals of the Pollution Control Agency (PCA) and dischargers. The present model, the Grey Fuzzy Waste Load Allocation Model (GFWLAM), has the capability to incorporate the conflicting goals of the PCA and dischargers in a deterministic framework. The imprecision associated with specifying the water quality criteria and fractional removal levels is modeled in a fuzzy mathematical framework. To address the imprecision in fixing the lower and upper bounds of membership functions, the membership functions themselves are treated as fuzzy in the model and the membership parameters are expressed as interval grey numbers, a closed and bounded interval with known lower and upper bounds but unknown distribution information. The model provides flexibility for the PCA and dischargers to specify their aspirations independently, as the membership parameters for the different membership functions, specified for different imprecise goals, are interval grey numbers in place of a deterministic real number. In the final solution, optimal fractional removal levels of the pollutants are obtained in the form of interval grey numbers. This enhances the flexibility and applicability in decision-making, as the decision-maker gets a range of optimal solutions for fixing the final decision scheme considering technical and economic feasibility of the pollutant treatment levels. Application of the GFWLAM is illustrated with a case study of the Tunga-Bhadra river system in India.

  15. Performance bounds for nonlinear systems with a nonlinear ℒ2-gain property

    NASA Astrophysics Data System (ADS)

    Zhang, Huan; Dower, Peter M.

    2012-09-01

    Nonlinear ℒ2-gain is a finite-gain concept that generalises the notion of conventional (linear) finite ℒ2-gain to admit the application of ℒ2-gain analysis tools to a broader class of nonlinear systems. The computation of tight comparison function bounds for this nonlinear ℒ2-gain property is important in applications such as small gain design. This article presents an approximation framework for these comparison function bounds through the formulation and solution of an optimal control problem. Key to the solution of this problem is the lifting of an ℒ2-norm input constraint, which is facilitated via the introduction of an energy saturation operator. This admits the solution of the optimal control problem of interest via dynamic programming and associated numerical methods, leading to the computation of the proposed bounds. Two examples are presented to demonstrate this approach.
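
    For readers unfamiliar with the property, the nonlinear ℒ2-gain condition is commonly written in comparison-function form roughly as follows (notation approximate; see the article for the precise definition used there):

    ```latex
    \[
      \int_{0}^{T} \lVert y(t) \rVert^{2}\,dt
      \;\le\; \gamma\!\left( \int_{0}^{T} \lVert u(t) \rVert^{2}\,dt \right) + \beta\bigl(x(0)\bigr)
      \qquad \text{for all } T \ge 0,
    \]
    % where \gamma is a nondecreasing comparison function and \beta bounds the effect of the
    % initial state; conventional (linear) finite L2-gain is the special case \gamma(s) = g^2 s.
    ```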

  16. A DDS-Based Energy Management Framework for Small Microgrid Operation and Control

    DOE PAGES

    Youssef, Tarek A.; El Hariri, Mohamad; Elsayed, Ahmed T.; ...

    2017-09-26

    The smart grid is seen as a power system with real-time communication and control capabilities between the consumer and the utility. This modern platform facilitates the optimization of energy usage based on several factors including environmental conditions, price preferences, and system technical issues. In this paper, a real-time energy management system (EMS) for microgrids or nanogrids was developed. The developed system involves an online optimization scheme to adapt its parameters based on previous, current, and forecasted future system states. The communication requirements for all EMS modules were analyzed and are all integrated over a data distribution service (DDS) Ethernet network with appropriate quality of service (QoS) profiles. In conclusion, the developed EMS was emulated with actual residential energy consumption and irradiance data from Miami, Florida, and proved its effectiveness in reducing consumers’ bills and achieving flat peak load profiles.

  17. Controlling imported malaria cases in the United States of America.

    PubMed

    Dembele, Bassidy; Yakubu, Abdul-Aziz

    2017-02-01

    We extend the mathematical malaria epidemic model framework of Dembele et al. and use it to "capture" the 2013 Centers for Disease Control and Prevention (CDC) reported data on the 2011 number of imported malaria cases in the USA. Furthermore, we use our "fitted" malaria models for the top 20 countries of malaria acquisition by USA residents to study the impact of protecting USA residents from malaria infection when they travel to malaria endemic areas, the impact of protecting residents of malaria endemic regions from mosquito bites, and the impact of killing mosquitoes in those endemic areas on the CDC number of imported malaria cases in the USA. To significantly reduce the number of imported malaria cases in the USA, for each top 20 country of malaria acquisition by USA travelers, we compute the optimal proportion of USA international travelers that must be protected against malaria infection and the optimal proportion of mosquitoes that must be killed.

  18. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products such as evapotranspiration (ET) and gross primary productivity (GPP) are now produced by integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models with large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on the hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selections and maximize the independency from the whole model. For example, the snow sub-model is first optimized using MODIS snow cover product, followed by soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method; Support Vector Regression (SVR) method; Yang et al. 2007), photosynthesis model optimized by satellite-based GPP (based on SVR method), and respiration and residual carbon cycle models optimized by biomass data. As a result of initial assessment, we found that most of default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forest and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis shows that terrestrial carbon and water cycle simulations in monsoon Asia were greatly improved, and the use of multiple satellite observations with this framework is an effective way for improving terrestrial biosphere models.

  19. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate the ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
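
    The Python sketch below illustrates, under assumed parameter values for the logistic example, how the three criteria can be evaluated from a finite-difference sensitivity matrix; it covers only the Fisher-information bookkeeping behind the criteria, not the authors' Prohorov metric-based framework.

    ```python
    import numpy as np

    def logistic(t, theta):
        K, r = theta
        x0 = 1.0                                        # assumed known initial population
        return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

    def sensitivity(t, theta, h=1e-6):
        """Finite-difference sensitivities of the model output w.r.t. each parameter."""
        S = np.zeros((len(t), len(theta)))
        for j in range(len(theta)):
            tp, tm = np.array(theta, float), np.array(theta, float)
            tp[j] += h; tm[j] -= h
            S[:, j] = (logistic(t, tp) - logistic(t, tm)) / (2 * h)
        return S

    def criteria(t, theta, sigma=1.0):
        S = sensitivity(t, theta)
        F = S.T @ S / sigma**2                          # Fisher information matrix
        d_opt = np.linalg.det(F)                        # D-optimal: maximize det(F)
        e_opt = np.linalg.eigvalsh(F)[0]                # E-optimal: maximize smallest eigenvalue
        se_opt = float(np.sum(np.diag(np.linalg.inv(F)) / np.asarray(theta)**2))  # SE: minimize
        return d_opt, e_opt, se_opt

    theta = (17.5, 0.7)                                 # hypothetical (K, r)
    print(criteria(np.linspace(0.0, 25.0, 10), theta))  # uniform sampling design
    print(criteria(np.concatenate([np.linspace(0.0, 8.0, 5), np.linspace(20.0, 25.0, 5)]), theta))
    ```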

  20. Network efficient power control for wireless communication systems.

    PubMed

    Campos-Delgado, Daniel U; Luna-Rivera, Jose Martin; Martinez-Sánchez, C J; Gutierrez, Carlos A; Tecpanecatl-Xihuitl, J L

    2014-01-01

    We introduce a two-loop power control that allows an efficient use of the overall power resources for commercial wireless networks based on cross-layer optimization. This approach maximizes the network's utility in the outer-loop as a function of the averaged signal to interference-plus-noise ratio (SINR) by considering adaptively the changes in the network characteristics. For this purpose, the concavity property of the utility function was verified with respect to the SINR, and an iterative search was proposed with guaranteed convergence. In addition, the outer-loop is in charge of selecting the detector that minimizes the overall power consumption (transmission and detection). Next the inner-loop implements a feedback power control in order to achieve the optimal SINR in the transmissions despite channel variations and roundtrip delays. In our proposal, the utility maximization process and detector selection and feedback power control are decoupled problems, and as a result, these strategies are implemented at two different time scales in the two-loop framework. Simulation results show that substantial utility gains may be achieved by improving the power management in the wireless network.

  1. Network Efficient Power Control for Wireless Communication Systems

    PubMed Central

    Campos-Delgado, Daniel U.; Luna-Rivera, Jose Martin; Martinez-Sánchez, C. J.; Gutierrez, Carlos A.; Tecpanecatl-Xihuitl, J. L.

    2014-01-01

    We introduce a two-loop power control that allows an efficient use of the overall power resources for commercial wireless networks based on cross-layer optimization. This approach maximizes the network's utility in the outer-loop as a function of the averaged signal to interference-plus-noise ratio (SINR) by considering adaptively the changes in the network characteristics. For this purpose, the concavity property of the utility function was verified with respect to the SINR, and an iterative search was proposed with guaranteed convergence. In addition, the outer-loop is in charge of selecting the detector that minimizes the overall power consumption (transmission and detection). Next the inner-loop implements a feedback power control in order to achieve the optimal SINR in the transmissions despite channel variations and roundtrip delays. In our proposal, the utility maximization process and detector selection and feedback power control are decoupled problems, and as a result, these strategies are implemented at two different time scales in the two-loop framework. Simulation results show that substantial utility gains may be achieved by improving the power management in the wireless network. PMID:24683350
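
    As a concrete picture of the inner loop, the classic distributed SINR-tracking update (each transmitter scales its power by the ratio of target to measured SINR) is sketched below in Python with hypothetical channel gains; the paper's actual control law and outer-loop utility update are richer than this.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, noise, gamma_target = 4, 0.01, 2.0
    G = rng.uniform(0.01, 0.1, (N, N))                   # weak cross-link gains (feasible target)
    np.fill_diagonal(G, rng.uniform(1.0, 2.0, N))        # strong direct-link gains
    p = np.full(N, 0.1)                                  # initial transmit powers

    for _ in range(50):
        interference = G @ p - np.diag(G) * p + noise    # received interference plus noise
        sinr = np.diag(G) * p / interference
        p = np.clip(p * gamma_target / sinr, 0.0, 1.0)   # feedback update with a power cap

    interference = G @ p - np.diag(G) * p + noise
    print(np.round(np.diag(G) * p / interference, 2))    # SINRs approach the target when feasible
    ```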

  2. MonALISA, an agent-based monitoring and control system for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.

    2017-10-01

    MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing program of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on a Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs, or services, system control, and global optimization services for complex systems. A short overview and status of MonALISA is given in this paper.

  3. Optimization of wind plant layouts using an adjoint approach

    DOE PAGES

    King, Ryan N.; Dykes, Katherine; Graf, Peter; ...

    2017-03-10

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.

  4. Optimization of wind plant layouts using an adjoint approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, Ryan N.; Dykes, Katherine; Graf, Peter

    Using adjoint optimization and three-dimensional steady-state Reynolds-averaged Navier–Stokes (RANS) simulations, we present a new gradient-based approach for optimally siting wind turbines within utility-scale wind plants. By solving the adjoint equations of the flow model, the gradients needed for optimization are found at a cost that is independent of the number of control variables, thereby permitting optimization of large wind plants with many turbine locations. Moreover, compared to the common approach of superimposing prescribed wake deficits onto linearized flow models, the computational efficiency of the adjoint approach allows the use of higher-fidelity RANS flow models which can capture nonlinear turbulent flow physics within a wind plant. The steady-state RANS flow model is implemented in the Python finite-element package FEniCS and the derivation and solution of the discrete adjoint equations are automated within the dolfin-adjoint framework. Gradient-based optimization of wind turbine locations is demonstrated for idealized test cases that reveal new optimization heuristics such as rotational symmetry, local speedups, and nonlinear wake curvature effects. Layout optimization is also demonstrated on more complex wind rose shapes, including a full annual energy production (AEP) layout optimization over 36 inflow directions and 5 wind speed bins.
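
    To illustrate the adjoint pattern that dolfin-adjoint automates, here is a schematic FEniCS/dolfin-adjoint snippet in Python for a toy Poisson problem standing in for the RANS model; the mesh, control, and functional are placeholders, not the wind-plant model from the paper.

    ```python
    from fenics import *             # requires FEniCS; dolfin-adjoint overloads solve/assemble
    from dolfin_adjoint import *

    # Toy stand-in for the flow model: -div(grad u) = m on the unit square, u = 0 on the boundary.
    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "CG", 1)
    m = interpolate(Constant(1.0), V)          # stands in for the turbine-siting parameters
    u, v = TrialFunction(V), TestFunction(V)
    u_sol = Function(V, name="state")
    solve(inner(grad(u), grad(v)) * dx == m * v * dx, u_sol,
          DirichletBC(V, 0.0, "on_boundary"))

    J = assemble(0.5 * u_sol * u_sol * dx)     # stands in for (negative) power production
    rf = ReducedFunctional(J, Control(m))      # taped reduced functional J(m)
    dJdm = rf.derivative()                     # one adjoint solve yields the full gradient,
                                               # at a cost independent of the number of controls
    # m_opt = minimize(rf)                     # gradient-based layout optimization would follow
    ```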

  5. Rational risk-based decision support for drinking water well managers by optimized monitoring designs

    NASA Astrophysics Data System (ADS)

    Enzenhöfer, R.; Geiges, A.; Nowak, W.

    2011-12-01

    Advection-based well-head protection zones are commonly used to manage the contamination risk of drinking water wells. Considering the insufficient knowledge about hazards and transport properties within the catchment, current Water Safety Plans recommend that catchment managers and stakeholders know, control and monitor all possible hazards within the catchments and perform rational risk-based decisions. Our goal is to supply catchment managers with the required probabilistic risk information, and to generate tools that allow for optimal and rational allocation of resources between improved monitoring versus extended safety margins and risk mitigation measures. To support risk managers with the indispensable information, we address the epistemic uncertainty of advective-dispersive solute transport and well vulnerability (Enzenhoefer et al., 2011) within a stochastic simulation framework. Our framework can separate between uncertainty of contaminant location and actual dilution of peak concentrations by resolving heterogeneity with high-resolution Monte-Carlo simulation. To keep computational costs low, we solve the reverse temporal moment transport equation. Only in post-processing, we recover the time-dependent solute breakthrough curves and the deduced well vulnerability criteria from temporal moments by non-linear optimization. Our first step towards optimal risk management is optimal positioning of sampling locations and optimal choice of data types to reduce best the epistemic prediction uncertainty for well-head delineation, using the cross-bred Likelihood Uncertainty Estimator (CLUE, Leube et al., 2011) for optimal sampling design. Better monitoring leads to more reliable and realistic protection zones and thus helps catchment managers to better justify smaller, yet conservative safety margins. In order to allow an optimal choice in sampling strategies, we compare the trade-off in monitoring versus the delineation costs by accounting for ill-delineated fractions of protection zones. Within an illustrative simplified 2D synthetic test case, we demonstrate our concept, involving synthetic transmissivity and head measurements for conditioning. We demonstrate the worth of optimally collected data in the context of protection zone delineation by assessing the reduced areal demand of delineated area at user-specified risk acceptance level. Results indicate that, thanks to optimally collected data, risk-aware delineation can be made at low to moderate additional costs compared to conventional delineation strategies.

  6. Economic growth and carbon emission control

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenyu

    The question about whether environmental improvement is compatible with continued economic growth remains unclear and requires further study in a specific context. This study intends to provide insight on the potential for carbon emissions control in the absence of international agreement, and connect the empirical analysis with theoretical framework. The Chinese electricity generation sector is used as a case study to demonstrate the problem. Both social planner and private problems are examined to derive the conditions that define the optimal level of production and pollution. The private problem will be demonstrated under the emission regulation using an emission tax, an input tax and an abatement subsidy respectively. The social optimal emission flow is imposed into the private problem. To provide tractable analytical results, a Cobb-Douglas type production function is used to describe the joint production process of the desired output and undesired output (i.e., electricity and emissions). A modified Hamiltonian approach is employed to solve the system and the steady state solutions are examined for policy implications. The theoretical analysis suggests that the ratio of emissions to desired output (refer to 'emission factor'), is a function of productive capital and other parameters. The finding of non-constant emission factor shows that reducing emissions without further cutting back the production of desired outputs is feasible under some circumstances. Rather than an ad hoc specification, the optimal conditions derived from our theoretical framework are used to examine the relationship between desired output and emission level. Data comes from the China Statistical Yearbook and China Electric Power Yearbook and provincial information of electricity generation for the year of 1993-2003 are used to estimate the Cobb-Douglas type joint production by the full information maximum likelihood (FIML) method. The empirical analysis shed light on the optimal policies of emissions control required for achieving the social goal in a private context. The results suggest that the efficiency of abatement technology is crucial for the timing of executing the emission tax. And emission tax is preferred to an input tax, as long as the detection of emissions is not costly and abatement technology is efficient. Keywords: Economic growth, Carbon emission, Power generation, Joint production, China

  7. Robust model predictive control for multi-step short range spacecraft rendezvous

    NASA Astrophysics Data System (ADS)

    Zhu, Shuyi; Sun, Ran; Wang, Jiaolong; Wang, Jihe; Shao, Xiaowei

    2018-07-01

    This work presents a robust model predictive control (MPC) approach for the multi-step short range spacecraft rendezvous problem. During the specific short range phase concerned, the chaser is supposed to be initially outside the line-of-sight (LOS) cone. Therefore, the rendezvous process naturally includes two steps: the first step is to transfer the chaser into the LOS cone and the second step is to transfer the chaser into the aimed region with its motion confined within the LOS cone. A novel MPC framework named after Mixed MPC (M-MPC) is proposed, which is the combination of the Variable-Horizon MPC (VH-MPC) framework and the Fixed-Instant MPC (FI-MPC) framework. The M-MPC framework enables the optimization for the two steps to be implemented jointly rather than to be separated factitiously, and its computation workload is acceptable for the usually low-power processors onboard spacecraft. Then considering that disturbances including modeling error, sensor noise and thrust uncertainty may induce undesired constraint violations, a robust technique is developed and it is attached to the above M-MPC framework to form a robust M-MPC approach. The robust technique is based on the chance-constrained idea, which ensures that constraints can be satisfied with a prescribed probability. It improves the robust technique proposed by Gavilan et al., because it eliminates the unnecessary conservativeness by explicitly incorporating known statistical properties of the navigation uncertainty. The efficacy of the robust M-MPC approach is shown in a simulation study.

  8. Digital Alchemy for Materials Design: Colloids and Beyond

    NASA Astrophysics Data System (ADS)

    van Anders, Greg; Klotsa, Daphne; Karas, Andrew; Dodd, Paul; Glotzer, Sharon

    Starting with the early alchemists, a holy grail of science has been to make desired materials by manipulating basic building blocks. Building blocks that show promise for assembling new complex materials can be synthesized at the nanoscale with attributes that would astonish the ancient alchemists in their versatility. However, this versatility means that connecting building-block attributes to bulk structure is both necessary for rationally engineering materials and difficult because building block attributes can be altered in many ways. We show how to exploit the malleability of colloidal nanoparticle "elements" to quantitatively link building-block attributes to bulk structure through a statistical thermodynamic framework we term "digital alchemy". We use this framework to optimize building blocks for a given target structure and to determine which building-block attributes are most important to control for self-assembly, through a set of novel thermodynamic response functions. We thereby establish direct links between the attributes of colloidal building blocks and the bulk structures they form. Moreover, our results give concrete solutions to the more general conceptual challenge of optimizing emergent behaviors in nature and can be applied to other types of matter.

  9. A framework to determine the locations of the environmental monitoring in an estuary of the Yellow Sea.

    PubMed

    Kim, Nam-Hoon; Hwang, Jin Hwan; Cho, Jaegab; Kim, Jae Seong

    2018-06-04

    The characteristics of an estuary are determined by various factors such as tide, waves, and river discharge, which also control the water quality of the estuary. Therefore, detecting changes in these characteristics is critical for managing environmental quality and pollution, and so the monitoring locations should be selected carefully. The present study proposes a framework to deploy the monitoring systems based on a graphical method of spatial and temporal optimization. With well-validated numerical simulation results, the monitoring locations are determined to capture the changes of water qualities and pollutants depending on the variations of tide, current, and freshwater discharge. The deployment strategy to find the appropriate monitoring locations is designed with the constrained optimization method, which finds solutions by constraining the objective function into the feasible regions. The objective and constraint functions are constructed with an interpolation technique such as objective analysis. Even with a smaller number of monitoring locations, the present method performs equivalently to an arbitrarily and evenly deployed monitoring system.

  10. Active inference and robot control: a case study

    PubMed Central

    Nizard, Ange; Friston, Karl; Pezzulo, Giovanni

    2016-01-01

    Active inference is a general framework for perception and action that is gaining prominence in computational and systems neuroscience but is less known outside these fields. Here, we discuss a proof-of-principle implementation of the active inference scheme for the control of the 7-DoF arm of a (simulated) PR2 robot. By manipulating visual and proprioceptive noise levels, we show under which conditions robot control under the active inference scheme is accurate. Besides accurate control, our analysis of the internal system dynamics (e.g. the dynamics of the hidden states inferred during control) sheds light on key aspects of the framework such as the quintessentially multimodal nature of control and the differential roles of proprioception and vision. In the discussion, we consider the potential importance of being able to implement active inference in robots. In particular, we briefly review the opportunities for modelling psychophysiological phenomena such as sensory attenuation and related failures of gain control, of the sort seen in Parkinson's disease. We also consider the fundamental difference between active inference and optimal control formulations, showing that in the former the heavy lifting shifts from solving a dynamical inverse problem to creating deep forward or generative models with dynamics, whose attracting sets prescribe desired behaviours. PMID:27683002

  11. A Control-Theoretic Approach for the Combined Management of Quality-of-Service and Energy in Service Centers

    NASA Astrophysics Data System (ADS)

    Poussot-Vassal, Charles; Tanelli, Mara; Lovera, Marco

    The complexity of Information Technology (IT) systems is steadily increasing and system complexity has been recognised as the main obstacle to further advancements of IT. This fact has recently raised energy management issues. Control techniques have been proposed and successfully applied to design Autonomic Computing systems, trading-off system performance with energy saving goals. As users behaviour is highly time varying and workload conditions can change substantially within the same business day, the Linear Parametrically Varying (LPV) framework is particularly promising for modeling such systems. In this chapter, a control-theoretic method to investigate the trade-off between Quality of Service (QoS) requirements and energy saving objectives in the case of admission control in Web service systems is proposed, considering as control variables the server CPU frequency and the admission probability. To quantitatively evaluate the trade-off, a dynamic model of the admission control dynamics is estimated via LPV identification techniques. Based on this model, an optimisation problem within the Model Predictive Control (MPC) framework is setup, by means of which it is possible to investigate the optimal trade-off policy to manage QoS and energy saving objectives at design time and taking into explicit account the system dynamics.

  12. Boom Minimization Framework for Supersonic Aircraft Using CFD Analysis

    NASA Technical Reports Server (NTRS)

    Ordaz, Irian; Rallabhandi, Sriram K.

    2010-01-01

    A new framework is presented for shape optimization using analytical shape functions and high-fidelity computational fluid dynamics (CFD) via Cart3D. The focus of the paper is the system-level integration of several key enabling analysis tools and automation methods to perform shape optimization and reduce sonic boom footprint. A boom mitigation case study subject to performance, stability and geometrical requirements is presented to demonstrate a subset of the capabilities of the framework. Lastly, a design space exploration is carried out to assess the key parameters and constraints driving the design.

  13. Axiomatic Design of a Framework for the Comprehensive Optimization of Patient Flows in Hospitals

    PubMed Central

    Matt, Dominik T.

    2017-01-01

    Lean Management and Six Sigma are nowadays applied not only to the manufacturing industry but also to the service industry and public administration. The manifold variables affecting the Health Care system minimize the effect of a narrow Lean intervention. Therefore, this paper aims to discuss a comprehensive, system-based approach to achieve a factual holistic optimization of patient flows. This paper debates the efficacy of Lean principles applied to the optimization of patient flows and related activities, structures, and resources, developing a theoretical framework based on the principles of Axiomatic Design. The demand for patient-oriented and efficient health services leads to the use of these methodologies to improve hospital processes. In the framework, patients with similar characteristics are clustered in families to achieve homogeneous flows through the value stream. An optimization checklist is outlined as the result of the mapping between Functional Requirements and Design Parameters, with the right sequence of steps to optimize the patient flow according to the principles of Axiomatic Design. The Axiomatic Design-based top-down implementation of Health Care evidence, according to Lean principles, results in a holistic optimization of hospital patient flows, by reducing the complexity of the system. PMID:29065578

  14. Axiomatic Design of a Framework for the Comprehensive Optimization of Patient Flows in Hospitals.

    PubMed

    Arcidiacono, Gabriele; Matt, Dominik T; Rauch, Erwin

    2017-01-01

    Lean Management and Six Sigma are nowadays applied not only to the manufacturing industry but also to the service industry and public administration. The manifold variables affecting the Health Care system minimize the effect of a narrow Lean intervention. Therefore, this paper aims to discuss a comprehensive, system-based approach to achieve a factual holistic optimization of patient flows. This paper debates the efficacy of Lean principles applied to the optimization of patient flows and related activities, structures, and resources, developing a theoretical framework based on the principles of Axiomatic Design. The demand for patient-oriented and efficient health services leads to the use of these methodologies to improve hospital processes. In the framework, patients with similar characteristics are clustered in families to achieve homogeneous flows through the value stream. An optimization checklist is outlined as the result of the mapping between Functional Requirements and Design Parameters, with the right sequence of steps to optimize the patient flow according to the principles of Axiomatic Design. The Axiomatic Design-based top-down implementation of Health Care evidence, according to Lean principles, results in a holistic optimization of hospital patient flows, by reducing the complexity of the system.

  15. Axiomatic Design of a Framework for the Comprehensive Optimization of Patient Flows in Hospitals

    PubMed

    Arcidiacono, Gabriele; Matt, Dominik T.; Rauch, Erwin

    2017-01-01

    Lean Management and Six Sigma are nowadays applied not only to the manufacturing industry but also to the service industry and public administration. The manifold variables affecting the Health Care system minimize the effect of a narrow Lean intervention. Therefore, this paper aims to discuss a comprehensive, system-based approach to achieve a factual holistic optimization of patient flows. This paper debates the efficacy of Lean principles applied to the optimization of patient flows and related activities, structures, and resources, developing a theoretical framework based on the principles of Axiomatic Design. The demand for patient-oriented and efficient health services leads to the use of these methodologies to improve hospital processes. In the framework, patients with similar characteristics are clustered in families to achieve homogeneous flows through the value stream. An optimization checklist is outlined as the result of the mapping between Functional Requirements and Design Parameters, with the right sequence of steps to optimize the patient flow according to the principles of Axiomatic Design. The Axiomatic Design-based top-down implementation of Health Care evidence, according to Lean principles, results in a holistic optimization of hospital patient flows, by reducing the complexity of the system.

  16. Popularity versus similarity in growing networks.

    PubMed

    Papadopoulos, Fragkiskos; Kitsak, Maksim; Serrano, M Ángeles; Boguñá, Marián; Krioukov, Dmitri

    2012-09-27

    The principle that 'popularity is attractive' underlies preferential attachment, which is a common explanation for the emergence of scaling in growing networks. If new connections are made preferentially to more popular nodes, then the resulting distribution of the number of connections possessed by nodes follows power laws, as observed in many real networks. Preferential attachment has been directly validated for some real networks (including the Internet), and can be a consequence of different underlying processes based on node fitness, ranking, optimization, random walks or duplication. Here we show that popularity is just one dimension of attractiveness; another dimension is similarity. We develop a framework in which new connections optimize certain trade-offs between popularity and similarity, instead of simply preferring popular nodes. The framework has a geometric interpretation in which popularity preference emerges from local optimization. As opposed to preferential attachment, our optimization framework accurately describes the large-scale evolution of technological (the Internet), social (trust relationships between people) and biological (Escherichia coli metabolic) networks, predicting the probability of new links with high precision. The framework that we have developed can thus be used for predicting new links in evolving networks, and provides a different perspective on preferential attachment as an emergent phenomenon.
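
    A hedged Python sketch of the trade-off idea follows: each new node connects to the m existing nodes with the smallest product of popularity (modeled here simply by birth order) and angular similarity distance. This is a simplified reading of the model; the paper's precise attractiveness function and its geometric interpretation differ in detail.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    m, N = 2, 300
    angles, edges = [], []                     # node t has similarity coordinate angles[t-1]

    for t in range(1, N + 1):
        theta_t = rng.uniform(0.0, 2.0 * np.pi)
        if t <= m:                             # the first m nodes connect to all earlier nodes
            edges.extend((s, t) for s in range(1, t))
        else:
            d_ang = np.pi - np.abs(np.pi - np.abs(np.asarray(angles) - theta_t))
            score = np.arange(1, t) * d_ang    # birth time x angular distance (small = attractive)
            for s in np.argsort(score)[:m]:    # link to the m most attractive existing nodes
                edges.append((int(s) + 1, t))
        angles.append(theta_t)

    deg = np.bincount(np.asarray(edges).ravel())   # a heavy-tailed degree distribution emerges
    print(int(deg.max()), float(np.median(deg[1:])))
    ```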

  17. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE PAGES

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul; ...

    2017-12-20

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.

  18. pyomo.dae: a modeling and automatic discretization framework for optimization with differential and algebraic equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John D.; Watson, Jean-Paul

    We describe pyomo.dae, an open source Python-based modeling framework that enables high-level abstract specification of optimization problems with differential and algebraic equations. The pyomo.dae framework is integrated with the Pyomo open source algebraic modeling language, and is available at http://www.pyomo.org. One key feature of pyomo.dae is that it does not restrict users to standard, predefined forms of differential equations, providing a high degree of modeling flexibility and the ability to express constraints that cannot be easily specified in other modeling frameworks. Other key features of pyomo.dae are the ability to specify optimization problems with high-order differential equations and partial differential equations, defined on restricted domain types, and the ability to automatically transform high-level abstract models into finite-dimensional algebraic problems that can be solved with off-the-shelf solvers. Moreover, pyomo.dae users can leverage existing capabilities of Pyomo to embed differential equation models within stochastic and integer programming models and mathematical programs with equilibrium constraint formulations. Collectively, these features enable the exploration of new modeling concepts, discretization schemes, and the benchmarking of state-of-the-art optimization solvers.
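
    The Python sketch below shows the pyomo.dae pattern on a hypothetical toy optimal-control problem (the ODE and variable names are illustrative, not from the paper): declare a ContinuousSet and a DerivativeVar, then apply an automatic discretization transformation before handing the resulting algebraic problem to an off-the-shelf NLP solver.

    ```python
    import pyomo.environ as pyo
    from pyomo.dae import ContinuousSet, DerivativeVar

    # Toy problem: minimize x(1) for x'(t) = -x(t) + u(t), x(0) = 1, with bounded control u.
    m = pyo.ConcreteModel()
    m.t = ContinuousSet(bounds=(0, 1))
    m.x = pyo.Var(m.t)
    m.u = pyo.Var(m.t, bounds=(-1, 1))
    m.dxdt = DerivativeVar(m.x, wrt=m.t)

    m.ode = pyo.Constraint(m.t, rule=lambda m, t: m.dxdt[t] == -m.x[t] + m.u[t])
    m.ic = pyo.Constraint(expr=m.x[0] == 1.0)
    m.obj = pyo.Objective(expr=m.x[1], sense=pyo.minimize)

    # Automatic transformation into a finite-dimensional NLP via orthogonal collocation:
    pyo.TransformationFactory('dae.collocation').apply_to(m, nfe=20, ncp=3)
    # pyo.SolverFactory('ipopt').solve(m)      # requires an NLP solver such as Ipopt
    ```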

  19. History matching through dynamic decision-making

    PubMed Central

    Maschio, Célio; Santos, Antonio Alberto; Schiozer, Denis; Rocha, Anderson

    2017-01-01

    History matching is the process of modifying the uncertain attributes of a reservoir model to reproduce the real reservoir performance. It is a classical reservoir engineering problem and plays an important role in reservoir management, since the resulting models are used to support decisions in other tasks such as economic analysis and production strategy. This work introduces a dynamic decision-making optimization framework for history matching problems in which new models are generated based on, and guided by, the dynamic analysis of the data of available solutions. The optimization framework follows a ‘learning-from-data’ approach, and includes two optimizer components that use machine learning techniques, such as unsupervised learning and statistical analysis, to uncover patterns of input attributes that lead to good output responses. These patterns are used to support the decision-making process while generating new, and better, history matched solutions. The proposed framework is applied to a benchmark model (UNISIM-I-H) based on the Namorado field in Brazil. Results show the potential the dynamic decision-making optimization framework has for improving the quality of history matching solutions using a substantially smaller number of simulations when compared with a previous work on the same benchmark. PMID:28582413

  20. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects or so-called video object planes (VOPs) that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects with different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties and psycho-visual characteristics such that the bit budget can be distributed properly to video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that the human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of the video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with traditional verification model bit allocation and the optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of the object with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.

  1. Equilibrium Field Theoretic and Dynamic Mean Field Simulations of Inhomogeneous Polymeric Materials

    NASA Astrophysics Data System (ADS)

    Chao, Huikuan

    Inhomogeneous polymeric materials are a large family of promising materials including, but not limited to, block copolymers (BCPs), polymer nanocomposites (PNCs) and microscopically confined polymer films. The promising applications of these materials originate from their unique microstructures, which offer enhanced mechanical, thermal, optical and electrical properties. Due to the complex interactions and the large parameter space, behaviors of the microstructures formed by grafted nanoparticles and nanorods in PNCs are difficult to understand. Separately, because of relatively weak interactions, the microstructures are typically achieved through rapid processing that is kinetically controlled and beyond equilibrium. However, an efficient simulation framework to study the nonequilibrium dynamics of these materials is currently not available. To attack the first difficulty, I extended an efficient simulation framework, polymer nanocomposite field theory (PNC-FT), to incorporate grafted nanoparticles and nanorods. This extended framework is demonstrated against existing experimental studies and implemented to study how nanoparticle design affects the nanoparticle distribution in binary homopolymer blends. The grafted nanoparticle model is also used as a platform to adopt an advanced optimization method to inversely design nanoparticles that are able to self-assemble into targeted two-dimensional lattices. The nanorod model under the PNC-FT framework is used to investigate the design of nanorod and block copolymer thin films to control the nanorod distribution. To attack the second difficulty, I established an efficient framework (SCMF-LD) based on a recently proposed dynamic mean field theory and used SCMF-LD to study how to kinetically control the nanoparticle distribution at the end of solvent annealing of block copolymer thin films. The framework is then extended to incorporate hydrodynamics (SCMF-DPD) and the extended framework is implemented to study morphology development in phase-inversion processing of polymer thin films, where hydrodynamic effects play an important role. By exploring both equilibrium and nonequilibrium properties in a spectrum of inhomogeneous polymeric material systems, I successfully extended PNC-FT and established the SCMF-LD and SCMF-DPD frameworks, which are expected to be efficient and powerful tools in studies of inhomogeneous polymeric material design and processing.

  2. Study on the water resources optimal operation based on riverbed wind erosion control in West Liaohe River plain

    NASA Astrophysics Data System (ADS)

    Wanguang, Sun; Chengzhen, Li; Baoshan, Fan

    2018-06-01

    Rivers dry up frequently in the West Liaohe River plain, and the bare riverbeds present fine sand belts on land. These sand belts, which raise heavy dust on windy days as the riverbeds are eroded by wind, place considerable stress on the local environment. The optimal operation of water resources is thus one of the most important methods for preventing wind erosion of riverbeds. In this paper, an optimal operation model for water resources based on riverbed wind erosion control is established, comprising an objective function, constraints, and a solution method. The objective function considers factors including the water volume diverted into reservoirs, river length and the lower threshold of flow rate, etc. On the basis of ensuring the water requirement of each reservoir, destruction of riverbed vegetation by frequent river flow is avoided. A multi-core parallel solution method for optimal water resources operation in the West Liaohe River Plain is proposed, in which the optimal solution is found by the DPSA method under the POA framework and the parallel computing program is designed in Fork/Join mode. Based on the optimal operation results, the basic rules of water resources operation in the West Liaohe River Plain are summarized. Calculation results show that, while meeting the water volume requirement of every reservoir, the frequency of river flow in the reach from Taihekou to the Talagan Water Diversion Project on the Xinkai River is effectively reduced. The speedup and parallel efficiency of the parallel algorithm are 1.51 and 0.76 respectively, and the computing time is significantly decreased. The results presented in this paper can provide technical support for the prevention and control of riverbed wind erosion in the West Liaohe River plain.

  3. Fuzzy Linguistic Knowledge Based Behavior Extraction for Building Energy Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Milos Manic

    2013-08-01

    A significant portion of world energy production is consumed by building Heating, Ventilation and Air Conditioning (HVAC) units. Thus, along with occupant comfort, energy efficiency is also an important factor in HVAC control. Modern buildings use advanced Multiple Input Multiple Output (MIMO) control schemes to realize these goals. However, since the performance of HVAC units is dependent on many criteria including uncertainties in weather, number of occupants, and thermal state, the performance of current state-of-the-art systems is sub-optimal. Furthermore, because of the large number of sensors in buildings and the high frequency of data collection, a large amount of information is available. Therefore, important behavior of buildings that compromises energy efficiency or occupant comfort is difficult to identify. This paper presents an easy-to-use and understandable framework for identifying such behavior. The presented framework uses a human-understandable knowledge base to extract important behavior of buildings and present it to users via a graphical user interface. The presented framework was tested on a building in the Pacific Northwest and was shown to be able to identify important behavior that relates to energy efficiency and occupant comfort.

  4. Consensus for multi-agent systems with time-varying input delays

    NASA Astrophysics Data System (ADS)

    Yuan, Chengzhi; Wu, Fen

    2017-10-01

    This paper addresses the consensus control problem for linear multi-agent systems subject to uniform time-varying input delays and external disturbance. A novel state-feedback consensus protocol is proposed under the integral quadratic constraint (IQC) framework, which utilises not only the relative state information from neighbouring agents but also the real-time information of delays by means of the dynamic IQC system states for feedback control. Based on this new consensus protocol, the associated IQC-based control synthesis conditions are established and fully characterised as linear matrix inequalities (LMIs), such that the consensus control solution with optimal disturbance-attenuation performance can be synthesised efficiently via convex optimisation. A numerical example is used to demonstrate the proposed approach.

  5. Parallel Distributed Processing at 25: further explorations in the microstructure of cognition.

    PubMed

    Rogers, Timothy T; McClelland, James L

    2014-08-01

    This paper introduces a special issue of Cognitive Science initiated on the 25th anniversary of the publication of Parallel Distributed Processing (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP framework, the key issues the framework has addressed, and the debates the framework has spawned, and presents viewpoints on the current status of these issues. The articles focus on both historical roots and contemporary developments in learning, optimality theory, perception, memory, language, conceptual knowledge, cognitive control, and consciousness. Here we consider the approach more generally, reviewing the original motivations, the resulting framework, and the central tenets of the underlying theory. We then evaluate the impact of PDP both on the field at large and within specific subdomains of cognitive science and consider the current role of PDP models within the broader landscape of contemporary theoretical frameworks in cognitive science. Looking to the future, we consider the implications for cognitive science of the recent success of machine learning systems called "deep networks"-systems that build on key ideas presented in the PDP volumes. Copyright © 2014 Cognitive Science Society, Inc.

  6. Extraction of Photogenerated Electrons and Holes from a Covalent Organic Framework Integrated Heterojunction

    PubMed Central

    2014-01-01

    Covalent organic frameworks (COFs) offer a strategy to position molecular semiconductors within a rigid network in a highly controlled and predictable manner. The π-stacked columns of layered two-dimensional COFs enable electronic interactions between the COF sheets, thereby providing a path for exciton and charge carrier migration. Frameworks comprising two electronically separated subunits can form highly defined interdigitated donor–acceptor heterojunctions, which can drive the photogeneration of free charge carriers. Here we report the first example of a photovoltaic device that utilizes exclusively a crystalline organic framework with an inherent type II heterojunction as the active layer. The newly developed triphenylene–porphyrin COF was grown as an oriented thin film with the donor and acceptor units forming one-dimensional stacks that extend along the substrate normal, thus providing an optimal geometry for charge carrier transport. As a result of the degree of morphological precision that can be achieved with COFs and the enormous diversity of functional molecular building blocks that can be used to construct the frameworks, these materials show great potential as model systems for organic heterojunctions and might ultimately provide an alternative to the current disordered bulk heterojunctions. PMID:25412210

  7. Optimal digital filtering for tremor suppression.

    PubMed

    Gonzalez, J G; Heredia, E A; Rahman, T; Barner, K E; Arce, G R

    2000-05-01

    Remote manually operated tasks such as those found in teleoperation, virtual reality, or joystick-based computer access, require the generation of an intermediate electrical signal which is transmitted to the controlled subsystem (robot arm, virtual environment, or a cursor on a computer screen). When human movements are distorted, for instance, by tremor, performance can be improved by digitally filtering the intermediate signal before it reaches the controlled device. This paper introduces a novel tremor filtering framework in which digital equalizers are optimally designed through pursuit tracking task experiments. Due to inherent properties of the man-machine system, the design of tremor suppression equalizers presents two serious problems: 1) performance criteria leading to optimizations that minimize mean-squared error are not efficient for tremor elimination and 2) movement signals show ill-conditioned autocorrelation matrices, which often result in useless or unstable solutions. To address these problems, a new performance indicator in the context of tremor is introduced, and the optimal equalizer according to this new criterion is developed. Ill-conditioning of the autocorrelation matrix is overcome using a novel method which we call pulled-optimization. Experiments performed with artificially induced vibrations and a subject with Parkinson's disease show significant improvement in performance. Additional results, along with MATLAB source code of the algorithms, and a customizable demo for PC joysticks, are available on the Internet at http://tremor-suppression.com.
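
    The filter-design step can be pictured with the following rough Python sketch: an FIR equalizer is fitted by ridge-regularized least squares from a tremorous signal to the intended trajectory, with the ridge term standing in for the paper's pulled-optimization as a generic remedy for an ill-conditioned autocorrelation matrix. All signals and parameters here are synthetic assumptions, not the authors' data or criterion.

      import numpy as np

      def design_equalizer(x, d, n_taps=32, lam=1e-2):
          """Fit FIR weights w minimizing ||d - X w||^2 + lam ||w||^2, where X holds
          lagged copies of the measured (tremorous) signal x and d is the intended
          trajectory from a pursuit-tracking task. The ridge term counteracts the
          ill-conditioned autocorrelation matrix (a stand-in, not the paper's method)."""
          X = np.column_stack([np.roll(x, k) for k in range(n_taps)])
          X[:n_taps] = 0.0                       # discard wrapped-around samples
          return np.linalg.solve(X.T @ X + lam * np.eye(n_taps), X.T @ d)

      # toy data: intended slow trajectory plus an 8 Hz tremor component
      fs = 100
      t = np.arange(0, 20, 1 / fs)
      rng = np.random.default_rng(4)
      intended = np.sin(2 * np.pi * 0.3 * t)
      measured = intended + 0.4 * np.sin(2 * np.pi * 8 * t) + 0.05 * rng.normal(size=t.size)
      w = design_equalizer(measured, intended)
      filtered = np.convolve(measured, w)[: t.size]
      print("RMS error before/after:",
            round(np.sqrt(np.mean((measured - intended) ** 2)), 3),
            round(np.sqrt(np.mean((filtered - intended) ** 2)), 3))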

  8. Adaptive, Distributed Control of Constrained Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MASs), i.e., for distributed stochastic optimization using MASs. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution on the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N-Queens problem and the bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.

  9. Prototyping Control and Data Acquisition for the ITER Neutral Beam Test Facility

    NASA Astrophysics Data System (ADS)

    Luchetta, Adriano; Manduchi, Gabriele; Taliercio, Cesare; Soppelsa, Anton; Paolucci, Francesco; Sartori, Filippo; Barbato, Paolo; Breda, Mauro; Capobianco, Roberto; Molon, Federico; Moressa, Modesto; Polato, Sandro; Simionato, Paola; Zampiva, Enrico

    2013-10-01

    The ITER Neutral Beam Test Facility will be the project's R&D facility for heating neutral beam injectors (HNB) for fusion research operating with H/D negative ions. Its mission is to develop technology to build the HNB prototype injector meeting the stringent HNB requirements (16.5 MW injection power, -1 MeV acceleration energy, 40 A ion current and one hour continuous operation). Two test-beds will be built in sequence in the facility: first SPIDER, the ion source test-bed, to optimize the negative ion source performance, second MITICA, the actual prototype injector, to optimize ion beam acceleration and neutralization. The SPIDER control and data acquisition system is under design. To validate the main architectural choices, a system prototype has been assembled and performance tests have been executed to assess the prototype's capability to meet the control and data acquisition system requirements. The prototype is based on open-source software frameworks running under Linux. EPICS is the slow control engine, MDSplus is the data handler and MARTe is the fast control manager. The prototype addresses low and high-frequency data acquisition, 10 kS/s and 10 MS/s respectively, camera image acquisition, data archiving, data streaming, data retrieval and visualization, real time fast control with 100 μs control cycle and supervisory control.

  10. Optimization of hydrometric monitoring network in urban drainage systems using information theory.

    PubMed

    Yazdi, J

    2017-10-01

    Regular and continuous monitoring of urban runoff in both quality and quantity aspects is of great importance for controlling and managing surface runoff. Due to the considerable costs of establishing new gauges, optimization of the monitoring network is essential. This research proposes an approach for site selection of new discharge stations in urban areas, based on entropy theory in conjunction with multi-objective optimization tools and numerical models. The modeling framework provides an optimal trade-off between the maximum possible information content and the minimum shared information among stations. This approach was applied to the main surface-water collection system in Tehran to determine new optimal monitoring points under the cost considerations. Experimental results on this drainage network show that the obtained cost-effective designs noticeably outperform the consulting engineers' proposal in terms of both information contents and shared information. The research also determined the highly frequent sites at the Pareto front which might be important for decision makers to give a priority for gauge installation on those locations of the network.
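
    As a rough illustration of the entropy-based site-selection idea (not the paper's multi-objective formulation), the Python sketch below greedily picks gauges that add joint information while penalizing information shared with already-selected sites; the candidate discharge series are synthetic.

      import numpy as np

      def entropy(series, bins=10):
          """Shannon entropy (bits) of a discretized series."""
          counts, _ = np.histogram(series, bins=bins)
          p = counts[counts > 0] / counts.sum()
          return -np.sum(p * np.log2(p))

      def joint_entropy(series_list, bins=10):
          """Joint entropy of several discretized series."""
          joint, _ = np.histogramdd(np.column_stack(series_list), bins=bins)
          p = joint[joint > 0] / joint.sum()
          return -np.sum(p * np.log2(p))

      def greedy_gauge_selection(candidates, k, bins=10):
          """Pick k sites trading off joint information content against redundancy.
          candidates: dict site_name -> simulated discharge series (np.ndarray)."""
          chosen = []
          while len(chosen) < k:
              best_site, best_score = None, -np.inf
              for site in candidates:
                  if site in chosen:
                      continue
                  series = [candidates[s] for s in chosen + [site]]
                  info = joint_entropy(series, bins)                         # information content
                  redundancy = sum(entropy(s, bins) for s in series) - info  # total correlation
                  score = info - redundancy                                  # simple trade-off
                  if score > best_score:
                      best_site, best_score = site, score
              chosen.append(best_site)
          return chosen

      # toy usage with synthetic series at four candidate sites
      rng = np.random.default_rng(0)
      base = rng.gamma(2.0, 1.0, size=1000)
      cand = {"A": base + rng.normal(0, 0.1, 1000),
              "B": base + rng.normal(0, 0.5, 1000),
              "C": rng.gamma(2.0, 1.0, size=1000),
              "D": 0.5 * base + rng.gamma(1.0, 1.0, size=1000)}
      print(greedy_gauge_selection(cand, k=2))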

  11. Information-theoretic approach to interactive learning

    NASA Astrophysics Data System (ADS)

    Still, S.

    2009-01-01

    The principles of statistical mechanics and information theory play an important role in learning and have inspired both theory and the design of numerous machine learning algorithms. The new aspect in this paper is a focus on integrating feedback from the learner. A quantitative approach to interactive learning and adaptive behavior is proposed, integrating model- and decision-making into one theoretical framework. This paper follows simple principles by requiring that the observer's world model and action policy should result in maximal predictive power at minimal complexity. Classes of optimal action policies and of optimal models are derived from an objective function that reflects this trade-off between prediction and complexity. The resulting optimal models then summarize, at different levels of abstraction, the process's causal organization in the presence of the learner's actions. A fundamental consequence of the proposed principle is that the learner's optimal action policies balance exploration and control as an emerging property. Interestingly, the explorative component is present in the absence of policy randomness, i.e. in the optimal deterministic behavior. This is a direct result of requiring maximal predictive power in the presence of feedback.

  12. Robust design optimization using the price of robustness, robust least squares and regularization methods

    NASA Astrophysics Data System (ADS)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems that have parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to perturbations in the parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself is subject to perturbations instead of the parameters. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other non-linear. The methodology is compared with a prior method based on multiple Monte Carlo simulation runs, showing that the approach presented in this paper results in better performance.
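
    The second and third approaches can be illustrated with a short Python sketch comparing ordinary, Tikhonov-regularized, and worst-case (robust) least squares on an ill-conditioned toy problem; the robust variant uses the standard reformulation of bounded-perturbation robust least squares as a residual-plus-penalty objective, which may differ in detail from the paper's formulation.

      import numpy as np
      from scipy.optimize import minimize

      def ordinary_ls(A, b):
          return np.linalg.lstsq(A, b, rcond=None)[0]

      def tikhonov_ls(A, b, lam):
          """Regularized solution x = (A^T A + lam I)^{-1} A^T b, which limits the
          sensitivity of x to perturbations in A and b."""
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

      def robust_ls(A, b, rho):
          """Worst-case residual over perturbations ||dA|| <= rho reduces to
          minimizing ||Ax - b|| + rho*||x||; solved here with a generic solver."""
          obj = lambda x: np.linalg.norm(A @ x - b) + rho * np.linalg.norm(x)
          return minimize(obj, ordinary_ls(A, b), method="Nelder-Mead").x

      # ill-conditioned toy problem (Vandermonde design matrix)
      rng = np.random.default_rng(1)
      A = np.vander(np.linspace(0, 1, 20), 6)
      x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0, -1.0])
      b = A @ x_true + rng.normal(0, 1e-3, 20)
      for name, x in [("OLS", ordinary_ls(A, b)),
                      ("Tikhonov", tikhonov_ls(A, b, 1e-3)),
                      ("Robust", robust_ls(A, b, 1e-2))]:
          print(name, np.round(x, 2))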

  13. Stochastic optimization of broadband reflecting photonic structures.

    PubMed

    Estrada-Wiese, D; Del Río-Chanona, E A; Del Río, J A

    2018-01-19

    Photonic crystals (PCs) are built to control the propagation of light within their structure. These can be used for an assortment of applications where custom-designed devices are of interest. Among them, one-dimensional PCs can be produced to achieve the reflection of specific and broad wavelength ranges. However, their design and fabrication are challenging due to the diversity of periodic arrangements and layer configurations that each different PC needs. In this study, we present a framework to design highly reflecting PCs for any desired wavelength range. Our method combines three stochastic optimization algorithms (Random Search, Particle Swarm Optimization and Simulated Annealing) along with a reduced space-search methodology to obtain a custom and optimized PC configuration. The optimization procedure is evaluated through theoretical reflectance spectra calculated by using the Equispaced Thickness Method, which improves the simulations due to the consideration of incoherent light transmission. We prove the viability of our procedure by fabricating different reflecting PCs made of porous silicon and obtain good agreement between experiment and theory using a merit function. With this methodology, diverse reflecting PCs can be designed for a variety of applications and fabricated from different materials.
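
    One of the three stochastic algorithms, Simulated Annealing, is sketched below in Python against a standard characteristic-matrix reflectance model rather than the Equispaced Thickness Method used in the paper; the refractive indices, layer count, and target band are illustrative assumptions.

      import numpy as np

      def reflectance(thicknesses, indices, wavelengths, n_in=1.0, n_sub=3.5):
          """Normal-incidence reflectance of a 1D multilayer via the standard
          characteristic-matrix method (non-absorbing layers assumed)."""
          R = np.empty_like(wavelengths, dtype=float)
          for k, lam in enumerate(wavelengths):
              M = np.eye(2, dtype=complex)
              for d, n in zip(thicknesses, indices):
                  delta = 2 * np.pi * n * d / lam
                  M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                    [1j * n * np.sin(delta), np.cos(delta)]])
              num = n_in * (M[0, 0] + M[0, 1] * n_sub) - (M[1, 0] + M[1, 1] * n_sub)
              den = n_in * (M[0, 0] + M[0, 1] * n_sub) + (M[1, 0] + M[1, 1] * n_sub)
              R[k] = abs(num / den) ** 2
          return R

      def merit(thicknesses, indices, band):
          """Average reflectance over the target band (to be maximized)."""
          return reflectance(thicknesses, indices, band).mean()

      def simulated_annealing(n_layers=20, band=np.linspace(600e-9, 800e-9, 41),
                              T0=0.05, cooling=0.995, steps=1000, seed=0):
          rng = np.random.default_rng(seed)
          indices = np.where(np.arange(n_layers) % 2 == 0, 1.4, 2.0)  # two porosities (assumed)
          d = rng.uniform(50e-9, 200e-9, n_layers)                    # initial thicknesses
          best_d, best_f = d.copy(), merit(d, indices, band)
          f, T = best_f, T0
          for _ in range(steps):
              trial = np.clip(d + rng.normal(0, 5e-9, n_layers), 20e-9, 400e-9)
              f_trial = merit(trial, indices, band)
              if f_trial > f or rng.random() < np.exp((f_trial - f) / T):
                  d, f = trial, f_trial
                  if f > best_f:
                      best_d, best_f = d.copy(), f
              T *= cooling
          return best_d, best_f

      d_opt, R_avg = simulated_annealing()
      print(f"mean reflectance over band: {R_avg:.3f}")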

  14. An improved parent-centric mutation with normalized neighborhoods for inducing niching behavior in differential evolution.

    PubMed

    Biswas, Subhodip; Kundu, Souvik; Das, Swagatam

    2014-10-01

    In real life, we often need to find multiple optimally sustainable solutions of an optimization problem. Evolutionary multimodal optimization algorithms can be very helpful in such cases. They detect and maintain multiple optimal solutions during the run by incorporating specialized niching operations in their actual framework. Differential evolution (DE) is a powerful evolutionary algorithm (EA) well-known for its ability and efficiency as a single-peak global optimizer for continuous spaces. This article suggests a niching scheme integrated with DE for achieving a stable and efficient niching behavior by combining the newly proposed parent-centric mutation operator with a synchronous crowding replacement rule. The proposed approach is designed by considering the difficulties associated with problem-dependent niching parameters (like niche radius) and does not make use of any such control parameter. The mutation operator helps to maintain the population diversity at an optimum level by using well-defined local neighborhoods. Based on a comparative study involving 13 well-known state-of-the-art niching EAs tested on an extensive collection of benchmarks, we observe a consistent statistical superiority enjoyed by our proposed niching algorithm.
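
    A simplified Python sketch of niching in DE is given below, using plain DE/rand/1/bin with crowding replacement on a two-peak toy landscape; the paper's parent-centric mutation and normalized neighborhoods are not reproduced here.

      import numpy as np

      def de_crowding(f, bounds, pop_size=60, F=0.5, CR=0.9, gens=300, seed=0):
          """Basic DE/rand/1/bin with crowding replacement: an offspring replaces its
          nearest population member only if it is better (maximization here), which
          helps keep several peaks (niches) alive simultaneously."""
          rng = np.random.default_rng(seed)
          dim = len(bounds)
          lo, hi = np.array(bounds, float).T
          pop = rng.uniform(lo, hi, (pop_size, dim))
          fit = np.array([f(x) for x in pop])
          for _ in range(gens):
              for i in range(pop_size):
                  a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                  mutant = np.clip(a + F * (b - c), lo, hi)
                  cross = rng.random(dim) < CR
                  cross[rng.integers(dim)] = True
                  trial = np.where(cross, mutant, pop[i])
                  ft = f(trial)
                  # crowding: compare against the nearest individual, not parent i
                  j = np.argmin(np.linalg.norm(pop - trial, axis=1))
                  if ft > fit[j]:
                      pop[j], fit[j] = trial, ft
          return pop, fit

      # toy multimodal landscape with peaks at x = -1 and x = +1 (maximization)
      peaks = lambda x: np.exp(-8 * (x[0] - 1) ** 2) + np.exp(-8 * (x[0] + 1) ** 2)
      pop, fit = de_crowding(peaks, bounds=[(-2, 2)])
      print(np.round(np.sort(pop[fit > 0.9].ravel()), 2))  # members clustered near both peaks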

  15. Improved fuzzy PID controller design using predictive functional control structure.

    PubMed

    Wang, Yuzhong; Jin, Qibing; Zhang, Ridong

    2017-11-01

    In the conventional PID scheme, the ensemble control performance may be unsatisfactory due to the limited degrees of freedom under various kinds of uncertainty. To overcome this disadvantage, a novel PID control method that inherits the advantages of fuzzy PID control and predictive functional control (PFC) is presented and further verified on the temperature model of a coke furnace. Based on the framework of PFC, the prediction of the future process behavior is first obtained using the current process input signal. Then, fuzzy PID control based on the multi-step prediction is introduced to acquire the optimal control law. Finally, the case study on a temperature model of a coke furnace shows the effectiveness of the fuzzy PID control scheme when compared with conventional PID control and fuzzy self-adaptive PID control. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  16. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    NASA Astrophysics Data System (ADS)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data comes from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms-SNPs), and mammographic features in Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for three models. After we assigned utility values for each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on ROC curves to achieve maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under ROC curve (AUC) and MEU. SNPs improved sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel framework to evaluate prediction models in the realm of radiogenomics.
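
    The operating-point selection step can be sketched as follows (Python, with scikit-learn assumed available): given utilities for the four outcome categories and a prevalence, the threshold maximizing expected utility is read off the ROC curve. The utility values and scores below are illustrative, not the study's.

      import numpy as np
      from sklearn.metrics import roc_curve

      def max_expected_utility_point(y_true, scores, u_tp, u_fp, u_tn, u_fn, prevalence=None):
          """Pick the ROC operating point maximizing expected utility
          EU = p*(TPR*u_tp + (1-TPR)*u_fn) + (1-p)*(FPR*u_fp + (1-FPR)*u_tn)."""
          fpr, tpr, thr = roc_curve(y_true, scores)
          p = prevalence if prevalence is not None else np.mean(y_true)
          eu = p * (tpr * u_tp + (1 - tpr) * u_fn) + (1 - p) * (fpr * u_fp + (1 - fpr) * u_tn)
          k = int(np.argmax(eu))
          return thr[k], tpr[k], fpr[k], eu[k]

      # toy scores from a hypothetical risk model
      rng = np.random.default_rng(2)
      y = rng.integers(0, 2, 500)
      scores = y * rng.normal(0.6, 0.3, 500) + (1 - y) * rng.normal(0.4, 0.3, 500)
      print(max_expected_utility_point(y, scores, u_tp=0.9, u_fp=-0.1, u_tn=1.0, u_fn=-1.0))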

  17. A hydroeconomic modeling framework for optimal integrated management of forest and water

    NASA Astrophysics Data System (ADS)

    Garcia-Prats, Alberto; del Campo, Antonio D.; Pulido-Velazquez, Manuel

    2016-10-01

    Forests play a determinant role in the hydrologic cycle, with water being the most important ecosystem service they provide in semiarid regions. However, this contribution is usually neither quantified nor explicitly valued. The aim of this study is to develop a novel hydroeconomic modeling framework for assessing and designing the optimal integrated forest and water management for forested catchments. The optimization model explicitly integrates changes in water yield in the stands (increase in groundwater recharge) induced by forest management and the value of the additional water provided to the system. The model determines the optimal schedule of silvicultural interventions in the stands of the catchment in order to maximize the total net benefit in the system. Canopy cover and biomass evolution over time were simulated using growth and yield allometric equations specific for the species in Mediterranean conditions. Silvicultural operation costs according to stand density and canopy cover were modeled using local cost databases. Groundwater recharge was simulated using HYDRUS, calibrated and validated with data from the experimental plots. In order to illustrate the presented modeling framework, a case study was carried out in a planted pine forest (Pinus halepensis Mill.) located in south-western Valencia province (Spain). The optimized scenario increased groundwater recharge. This novel modeling framework can be used in the design of a "payment for environmental services" scheme in which water beneficiaries could contribute to fund and promote efficient forest management operations.

  18. Acquisition and production of skilled behavior in dynamic decision-making tasks

    NASA Technical Reports Server (NTRS)

    Kirlik, Alex

    1993-01-01

    Summaries of the four projects completed during the performance of this research are included. The four projects described are: Perceptual Augmentation Aiding for Situation Assessment, Perceptual Augmentation Aiding for Dynamic Decision-Making and Control, Action Advisory Aiding for Dynamic Decision-Making and Control, and Display Design to Support Time-Constrained Route Optimization. Papers based on each of these projects are currently in preparation. The theoretical framework upon which the first three projects are based, Ecological Task Analysis, was also developed during the performance of this research, and is described in a previous report. A project concerned with modeling strategies in human control of a dynamic system was also completed during the performance of this research.

  19. Power system security enhancement through direct non-disruptive load control

    NASA Astrophysics Data System (ADS)

    Ramanathan, Badri Narayanan

    The transition to a competitive market structure raises significant concerns regarding reliability of the power grid. A need to build tools for security assessment that produce operating limit boundaries for both static and dynamic contingencies is recognized. Besides, an increase in overall uncertainty in operating conditions makes corrective actions at times ineffective leaving the system vulnerable to instability. The tools that are in place for stability enhancement are mostly corrective and suffer from lack of robustness to operating condition changes. They often pose serious coordination challenges. With deregulation, there have also been ownership and responsibility issues associated with stability controls. However, the changing utility business model and the developments in enabling technologies such as two-way communication, metering, and control open up several new possibilities for power system security enhancement. This research proposes preventive modulation of selected loads through direct control for power system security enhancement. Two main contributions of this research are the following: development of an analysis framework and two conceptually different analysis approaches for load modulation to enhance oscillatory stability, and the development and study of algorithms for real-time modulation of thermostatic loads. The underlying analysis framework is based on the Structured Singular Value (SSV or mu) theory. Based on the above framework, two fundamentally different approaches towards analysis of the amount of load modulation for desired stability performance have been developed. Both the approaches have been tested on two different test systems: CIGRE Nordic test system and an equivalent of the Western Electric Coordinating Council test system. This research also develops algorithms for real-time modulation of thermostatic loads that use the results of the analysis. In line with some recent load management programs executed by utilities, two different algorithms based on dynamic programming are proposed for air-conditioner loads, while a decision-tree based algorithm is proposed for water-heater loads. An optimization framework has been developed employing the above algorithms. Monte Carlo simulations have been performed using this framework with the objective of studying the impact of different parameters and constraints on the effectiveness as well as the effect of control. The conclusions drawn from this research strongly advocate direct load control for stability enhancement from the perspectives of robustness and coordination, as well as economic viability and the developments towards availability of the institutional framework for load participation in providing system reliability services.

  20. Optimal bioprocess design through a gene regulatory network - growth kinetic hybrid model: Towards Replacing Monod kinetics.

    PubMed

    Tsipa, Argyro; Koutinas, Michalis; Usaku, Chonlatep; Mantalaris, Athanasios

    2018-05-02

    Currently, design and optimisation of biotechnological bioprocesses is performed either through exhaustive experimentation and/or with the use of empirical, unstructured growth kinetics models. Whereas elaborate systems biology approaches have recently been explored, mixed-substrate utilisation is predominantly ignored despite its significance in enhancing bioprocess performance. Herein, bioprocess optimisation for an industrially-relevant bioremediation process involving a mixture of highly toxic substrates, m-xylene and toluene, was achieved through application of a novel experimental-modelling gene regulatory network - growth kinetic (GRN-GK) hybrid framework. The GRN model described the TOL and ortho-cleavage pathways in Pseudomonas putida mt-2 and captured the transcriptional kinetics expression patterns of the promoters. The GRN model informed the formulation of the growth kinetics model replacing the empirical and unstructured Monod kinetics. The GRN-GK framework's predictive capability and potential as a systematic optimal bioprocess design tool, was demonstrated by effectively predicting bioprocess performance, which was in agreement with experimental values, when compared to four commonly used models that deviated significantly from the experimental values. Significantly, a fed-batch biodegradation process was designed and optimised through the model-based control of TOL Pr promoter expression resulting in 61% and 60% enhanced pollutant removal and biomass formation, respectively, compared to the batch process. This provides strong evidence of model-based bioprocess optimisation at the gene level, rendering the GRN-GK framework as a novel and applicable approach to optimal bioprocess design. Finally, model analysis using global sensitivity analysis (GSA) suggests an alternative, systematic approach for model-driven strain modification for synthetic biology and metabolic engineering applications. Copyright © 2018. Published by Elsevier Inc.

  1. Dominion. A game exploring information exploitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, Jacob Aaron

    FlipIt is a game theoretic framework published in 2012 [1] to investigate optimal strategies for managing security resources in response to Advanced Persistent Threats. It is a two-player game wherein a resource is controlled by exactly one player at any time. A player may move at any time to capture the resource, incurring a move cost, and is informed of the last time their opponent has moved only upon completing their move. Thus, moves may be wasted and takeover is considered "stealthy" with regard to the other player. The game is played for an unlimited period of time, and the goal of each player is to maximize the amount of time they are in control of the resource minus their total move cost, normalized by the current length of play. Marten Van Dijk and others [1] provided an analysis of various player strategies and proved optimal results for certain subclasses of players. We extend their work by providing a reformulation of the original game, wherein the optimal player strategies can be solved exactly, rather than only for certain subclasses. We call this reformulation Dominion, and place it within a broader framework of stealthy move games. We define Dominion to occur over a finite time scale (from 0 to 1), and give each player a certain number of moves to make within the time frame. Their expected score in this new scenario is the expected amount of time they have control, and the point of the game is to dominate as much of the unit interval as possible. We show how Dominion can be treated as a two-player, simultaneous, constant-sum, unit-square game, where the gradient of the benefit curves for the players is linear and possibly discontinuous. We derive Nash equilibria for a basic version of Dominion, and then further explore the roles of information asymmetry in its variants. We extend these results to FlipIt and other cyber security applications.

  2. SU-F-T-342: Dosimetric Constraint Prediction Guided Automatic Mulit-Objective Optimization for Intensity Modulated Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, T; Zhou, L; Li, Y

    Purpose: For intensity modulated radiotherapy, plan optimization is time consuming, with difficulties in selecting objectives and constraints and their relative weights. A fast and automatic multi-objective optimization algorithm able to predict optimal constraints and manage their trade-offs can help to solve this problem. Our purpose is to develop such a framework and algorithm for general inverse planning. Methods: There are three main components in the proposed multi-objective optimization framework: prediction of initial dosimetric constraints, further adjustment of constraints, and plan optimization. We first use our previously developed in-house geometry-dosimetry correlation model to predict the optimal patient-specific dosimetric endpoints and treat them as initial dosimetric constraints. Secondly, we build an endpoint (organ) priority list and a constraint adjustment rule to repeatedly tune these constraints from their initial values, until every single endpoint has no room for further improvement. Lastly, we implement a voxel-independent FMO algorithm for optimization. During the optimization, a model for tuning the voxel weighting factors with respect to the constraints is created. For framework and algorithm evaluation, we randomly selected 20 IMRT prostate cases from the clinic and compared them with our automatically generated plans, in both efficiency and plan quality. Results: For each evaluated plan, the proposed multi-objective framework ran smoothly and automatically. The number of voxel weighting factor iterations varied from 10 to 30 under an updated constraint, and the number of constraint tuning steps varied from 20 to 30 for every case until no stricter constraint was allowed. The average total computation time for the whole optimization procedure is ∼30 min. Comparing the DVHs, better OAR dose sparing could be observed in the automatically generated plans for 13 out of the 20 cases, while the others show competitive results. Conclusion: We have successfully developed a fast and automatic multi-objective optimization for intensity modulated radiotherapy. This work is supported by the National Natural Science Foundation of China (No: 81571771).

  3. LQR-Based Optimal Distributed Cooperative Design for Linear Discrete-Time Multiagent Systems.

    PubMed

    Zhang, Huaguang; Feng, Tao; Liang, Hongjing; Luo, Yanhong

    2017-03-01

    In this paper, a novel linear quadratic regulator (LQR)-based optimal distributed cooperative design method is developed for synchronization control of general linear discrete-time multiagent systems on a fixed, directed graph. Sufficient conditions are derived for synchronization, which restrict the graph eigenvalues into a bounded circular region in the complex plane. The synchronizing speed issue is also considered, and it turns out that the synchronizing region reduces as the synchronizing speed becomes faster. To obtain more desirable synchronizing capacity, the weighting matrices are selected by sufficiently utilizing the guaranteed gain margin of the optimal regulators. Based on the developed LQR-based cooperative design framework, an approximate dynamic programming technique is successfully introduced to overcome the (partially or completely) model-free cooperative design for linear multiagent systems. Finally, two numerical examples are given to illustrate the effectiveness of the proposed design methods.
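
    The local design ingredient, a discrete-time LQR gain obtained from the algebraic Riccati equation, can be sketched in a few lines of Python (SciPy); the coupling of this gain to the graph structure is only indicated in a comment and is a loose paraphrase, not the paper's exact synchronization conditions.

      import numpy as np
      from scipy.linalg import solve_discrete_are

      def dlqr(A, B, Q, R):
          """Discrete-time LQR: solve the algebraic Riccati equation and return the
          state-feedback gain K such that u = -K x minimizes sum(x'Qx + u'Ru)."""
          P = solve_discrete_are(A, B, Q, R)
          K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
          return K, P

      # single-agent double integrator used as the local LQR design
      A = np.array([[1.0, 1.0], [0.0, 1.0]])
      B = np.array([[0.0], [1.0]])
      K, P = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
      print("gain:", np.round(K, 3))
      # In a synchronization setting one could apply u_i = -c * K * sum_j a_ij (x_i - x_j),
      # choosing the coupling gain c so the scaled graph eigenvalues stay inside the
      # guaranteed LQR gain-margin region (a rough paraphrase, not the paper's design).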

  4. A Distribution-class Locational Marginal Price (DLMP) Index for Enhanced Distribution Systems

    NASA Astrophysics Data System (ADS)

    Akinbode, Oluwaseyi Wemimo

    The smart grid initiative is the impetus behind changes that are expected to culminate into an enhanced distribution system with the communication and control infrastructure to support advanced distribution system applications and resources such as distributed generation, energy storage systems, and price responsive loads. This research proposes a distribution-class analog of the transmission LMP (DLMP) as an enabler of the advanced applications of the enhanced distribution system. The DLMP is envisioned as a control signal that can incentivize distribution system resources to behave optimally in a manner that benefits economic efficiency and system reliability and that can optimally couple the transmission and the distribution systems. The DLMP is calculated from a two-stage optimization problem; a transmission system OPF and a distribution system OPF. An iterative framework that ensures accurate representation of the distribution system's price sensitive resources for the transmission system problem and vice versa is developed and its convergence problem is discussed. As part of the DLMP calculation framework, a DCOPF formulation that endogenously captures the effect of real power losses is discussed. The formulation uses piecewise linear functions to approximate losses. This thesis explores, with theoretical proofs, the breakdown of the loss approximation technique when non-positive DLMPs/LMPs occur and discusses a mixed integer linear programming formulation that corrects the breakdown. The DLMP is numerically illustrated in traditional and enhanced distribution systems and its superiority to contemporary pricing mechanisms is demonstrated using price responsive loads. Results show that the impact of the inaccuracy of contemporary pricing schemes becomes significant as flexible resources increase. At high elasticity, aggregate load consumption deviated from the optimal consumption by up to about 45 percent when using a flat or time-of-use rate. Individual load consumption deviated by up to 25 percent when using a real-time price. The superiority of the DLMP is more pronounced when important distribution network conditions are not reflected by contemporary prices. The individual load consumption incentivized by the real-time price deviated by up to 90 percent from the optimal consumption in a congested distribution network. While the DLMP internalizes congestion management, the consumption incentivized by the real-time price caused overloads.

  5. Chance-Constrained Guidance With Non-Convex Constraints

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro

    2011-01-01

    Missions to small bodies, such as comets or asteroids, require autonomous guidance for descent to these small bodies. Such guidance is made challenging by uncertainty in the position and velocity of the spacecraft, as well as the uncertainty in the gravitational field around the small body. In addition, the requirement to avoid collision with the asteroid represents a non-convex constraint that means finding the optimal guidance trajectory, in general, is intractable. In this innovation, a new approach is proposed for chance-constrained optimal guidance with non-convex constraints. Chance-constrained guidance takes into account uncertainty so that the probability of collision is below a specified threshold. In this approach, a new bounding method has been developed to obtain a set of decomposed chance constraints that is a sufficient condition of the original chance constraint. The decomposition of the chance constraint enables its efficient evaluation, as well as the application of the branch and bound method. Branch and bound enables non-convex problems to be solved efficiently to global optimality. Considering the problem of finite-horizon robust optimal control of dynamic systems under Gaussian-distributed stochastic uncertainty, with state and control constraints, a discrete-time, continuous-state linear dynamics model is assumed. Gaussian-distributed stochastic uncertainty is a more natural model for exogenous disturbances such as wind gusts and turbulence than the previously studied set-bounded models. However, with stochastic uncertainty, it is often impossible to guarantee that state constraints are satisfied, because there is typically a non-zero probability of having a disturbance that is large enough to push the state out of the feasible region. An effective framework to address robustness with stochastic uncertainty is optimization with chance constraints. These require that the probability of violating the state constraints (i.e., the probability of failure) is below a user-specified bound known as the risk bound. An example problem is to drive a car to a destination as fast as possible while limiting the probability of an accident to 10^-7. This framework allows users to trade conservatism against performance by choosing the risk bound. The more risk the user accepts, the better performance they can expect.
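
    The basic mechanics of a Gaussian chance constraint can be illustrated with the standard deterministic tightening of a single linear constraint, sketched below in Python; the record's contribution (decomposing joint non-convex chance constraints and solving by branch and bound) is not reproduced here, and the numbers are illustrative.

      import numpy as np
      from scipy.stats import norm

      def deterministic_tightening(a, b, mean, cov, eps):
          """Individual linear chance constraint P(a^T x <= b) >= 1 - eps with
          Gaussian state x ~ N(mean, cov) holds iff
              a^T mean + Phi^{-1}(1 - eps) * sqrt(a^T cov a) <= b.
          Returns the tightened left-hand side and whether it is satisfied."""
          margin = norm.ppf(1 - eps) * np.sqrt(a @ cov @ a)
          lhs = a @ mean + margin
          return lhs, bool(lhs <= b)

      # toy example: keep position x1 below 10 m with probability 1 - 1e-4
      mean = np.array([8.0, 1.0])          # predicted [position, velocity]
      cov = np.diag([0.25, 0.04])          # prediction covariance
      a, b = np.array([1.0, 0.0]), 10.0
      print(deterministic_tightening(a, b, mean, cov, eps=1e-4))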

  6. A Comparative Study of Probability Collectives Based Multi-agent Systems and Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Huang, Chien-Feng; Wolpert, David H.; Bieniawski, Stefan; Strauss, Charles E. M.

    2005-01-01

    We compare Genetic Algorithms (GA's) with Probability Collectives (PC), a new framework for distributed optimization and control. In contrast to GA's, PC-based methods do not update populations of solutions. Instead they update an explicitly parameterized probability distribution p over the space of solutions. That updating of p arises as the optimization of a functional of p. The functional is chosen so that any p that optimizes it should be peaked about good solutions. The PC approach works in both continuous and discrete problems. It does not suffer from the resolution limitation of the finite bit length encoding of parameters into GA alleles. It also has deep connections with both game theory and statistical physics. We review the PC approach using its motivation as the information theoretic formulation of bounded rationality for multi-agent systems. It is then compared with GA's on a diverse set of problems. To handle high dimensional surfaces, in the PC method investigated here p is restricted to a product distribution. Each distribution in that product is controlled by a separate agent. The test functions were selected for their difficulty using either traditional gradient descent or genetic algorithms. On those functions the PC-based approach significantly outperforms traditional GA's in rate of descent, avoidance of trapping in false minima, and long-term optimization.
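
    A toy Python sketch of the product-distribution update is shown below: each agent keeps an independent distribution over its moves and repeatedly re-fits it to a Boltzmann distribution over Monte Carlo estimates of its conditional expected cost while the temperature is annealed. The two-agent cost function and all parameters are invented for illustration and do not reproduce the paper's Newton-based update.

      import numpy as np

      def pc_minimize(G, n_iters=200, T0=1.0, cooling=0.98, n_samples=200, seed=0):
          """Probability Collectives sketch: agent i keeps an independent distribution
          q_i over its K moves; q_i is repeatedly set to a Boltzmann distribution over
          its estimated conditional expected cost E[G | move], with temperature T
          annealed toward 0 so the product distribution peaks on good joint moves."""
          rng = np.random.default_rng(seed)
          n_agents, K = 2, 5
          q = np.full((n_agents, K), 1.0 / K)
          T = T0
          for _ in range(n_iters):
              # Monte Carlo samples of the joint move and the resulting team cost
              moves = np.array([rng.choice(K, n_samples, p=q[i]) for i in range(n_agents)])
              costs = G(moves)
              for i in range(n_agents):
                  cond = np.array([costs[moves[i] == k].mean() if np.any(moves[i] == k)
                                   else costs.mean() for k in range(K)])
                  w = np.exp(-(cond - cond.min()) / T)
                  q[i] = w / w.sum()
              T *= cooling
          return q

      # toy team cost: agents should pick matching moves near index 2
      G = lambda m: (m[0] - m[1]) ** 2 + (m[0] - 2) ** 2
      print(np.round(pc_minimize(G), 2))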

  7. A common optimization principle for motor execution in healthy subjects and parkinsonian patients.

    PubMed

    Baraduc, Pierre; Thobois, Stéphane; Gan, Jing; Broussolle, Emmanuel; Desmurget, Michel

    2013-01-09

    Recent research on Parkinson's disease (PD) has emphasized that parkinsonian movement, although bradykinetic, shares many attributes with healthy behavior. This observation led to the suggestion that bradykinesia in PD could be due to a reduction in motor motivation. This hypothesis can be tested in the framework of optimal control theory, which accounts for many characteristics of healthy human movement while providing a link between the motor behavior and a cost/benefit trade-off. This approach offers the opportunity to interpret movement deficits of PD patients in the light of a computational theory of normal motor control. We studied 14 PD patients with bilateral subthalamic nucleus (STN) stimulation and 16 age-matched healthy controls, and tested whether reaching movements were governed by similar rules in these two groups. A single optimal control model accounted for the reaching movements of healthy subjects and PD patients, whatever the condition of STN stimulation (on or off). The choice of movement speed was explained in all subjects by the existence of a preset dynamic range for the motor signals. This range was idiosyncratic and applied to all movements regardless of their amplitude. In PD patients this dynamic range was abnormally narrow and correlated with bradykinesia. STN stimulation reduced bradykinesia and widened this range in all patients, but did not restore it to a normal value. These results, consistent with the motor motivation hypothesis, suggest that constrained optimization of motor effort is the main determinant of movement planning (choice of speed) and movement production, in both healthy and PD subjects.

  8. On the Use of CAD and Cartesian Methods for Aerodynamic Optimization

    NASA Technical Reports Server (NTRS)

    Nemec, M.; Aftosmis, M. J.; Pulliam, T. H.

    2004-01-01

    The objective of this paper is to present the development of an optimization capability for Cart3D, a Cartesian inviscid-flow analysis package. We present the construction of a new optimization framework and we focus on the following issues: 1) Component-based geometry parameterization approach using parametric-CAD models and CAPRI. A novel geometry server is introduced that addresses the issue of parallel efficiency while only sparingly consuming CAD resources; 2) The use of genetic and gradient-based algorithms for three-dimensional aerodynamic design problems. The influence of noise on the optimization methods is studied. Our goal is to create a responsive and automated framework that efficiently identifies design modifications that result in substantial performance improvements. In addition, we examine the architectural issues associated with the deployment of a CAD-based approach in a heterogeneous parallel computing environment that contains both CAD workstations and dedicated compute engines. We demonstrate the effectiveness of the framework for a design problem that features topology changes and complex geometry.

  9. The cost of hybrid waste water systems: A systematic framework for specifying minimum cost-connection rates.

    PubMed

    Eggimann, Sven; Truffer, Bernhard; Maurer, Max

    2016-10-15

    To determine the optimal connection rate (CR) for regional waste water treatment is a challenge that has recently gained the attention of academia and professional circles throughout the world. We contribute to this debate by proposing a framework for a total cost assessment of sanitation infrastructures in a given region for the whole range of possible CRs. The total costs comprise the treatment and transportation costs of centralised and on-site waste water management systems relative to specific CRs. We can then identify optimal CRs that either deliver waste water services at the lowest overall regional cost, or alternatively, CRs that result from households freely choosing whether they want to connect or not. We apply the framework to a Swiss region, derive a typology for regional cost curves and discuss whether and by how much the empirically observed CRs differ from the two optimal ones. Both optimal CRs may be reached by introducing specific regulatory incentive structures. Copyright © 2016 Elsevier Ltd. All rights reserved.
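
    The total-cost idea behind the framework can be caricatured with the short Python sketch below, which sweeps the connection rate and returns the cost-minimal value; the cost terms and coefficients are purely illustrative placeholders, not the paper's calibrated Swiss data.

      import numpy as np

      def total_cost(cr, n_households, central_fixed, central_var, onsite_cost, transport_coef):
          """Regional cost at connection rate cr (fraction of households on the central
          system): centralised treatment plus sewer transport for connected households,
          on-site systems for the rest. All cost terms are illustrative assumptions."""
          connected = cr * n_households
          central = central_fixed + central_var * connected + transport_coef * connected ** 1.2
          onsite = onsite_cost * (n_households - connected)
          return central + onsite

      crs = np.linspace(0, 1, 101)
      costs = [total_cost(c, 1000, 2e5, 300, 900, 15) for c in crs]
      print("cost-minimal connection rate:", crs[int(np.argmin(costs))])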

  10. Feedback stabilization of an oscillating vertical cylinder by POD Reduced-Order Model

    NASA Astrophysics Data System (ADS)

    Tissot, Gilles; Cordier, Laurent; Noack, Bernd R.

    2015-01-01

    The objective is to demonstrate the use of reduced-order models (ROM) based on proper orthogonal decomposition (POD) to stabilize the flow over a vertically oscillating circular cylinder in the laminar regime (Reynolds number equal to 60). The 2D Navier-Stokes equations are first solved with a finite element method, in which the moving cylinder is introduced via an ALE method. Since in fluid-structure interaction, the POD algorithm cannot be applied directly, we implemented the fictitious domain method of Glowinski et al. [1] where the solid domain is treated as a fluid undergoing an additional constraint. The POD-ROM is classically obtained by projecting the Navier-Stokes equations onto the first POD modes. At this level, the cylinder displacement is enforced in the POD-ROM through the introduction of Lagrange multipliers. For determining the optimal vertical velocity of the cylinder, a linear quadratic regulator framework is employed. After linearization of the POD-ROM around the steady flow state, the optimal linear feedback gain is obtained as solution of a generalized algebraic Riccati equation. Finally, when the optimal feedback control is applied, it is shown that the flow converges rapidly to the steady state. In addition, a vanishing control is obtained proving the efficiency of the control approach.

  11. Rational decision-making in inhibitory control.

    PubMed

    Shenoy, Pradeep; Yu, Angela J

    2011-01-01

    An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability.
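
    For readers unfamiliar with the paradigm, the classical independent race model mentioned above can be simulated in a few lines of Python; the parameter values are arbitrary and the sketch does not implement the authors' Bayes-optimal decision model.

      import numpy as np

      def race_model(ssd, n_trials=20000, go_mu=500, go_sd=100, ssrt=200, seed=0):
          """Classical independent race model: a response is produced when the go
          process finishes before the stop process (stop-signal delay + SSRT).
          Returns the probability of responding on stop trials at the given SSD (ms)."""
          rng = np.random.default_rng(seed)
          go_finish = rng.normal(go_mu, go_sd, n_trials)
          stop_finish = ssd + ssrt
          return np.mean(go_finish < stop_finish)

      # inhibition function: P(respond) rises with the stop-signal delay
      for ssd in (100, 200, 300, 400):
          print(ssd, round(race_model(ssd), 2))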

  12. Rational Decision-Making in Inhibitory Control

    PubMed Central

    Shenoy, Pradeep; Yu, Angela J.

    2011-01-01

    An important aspect of cognitive flexibility is inhibitory control, the ability to dynamically modify or cancel planned actions in response to changes in the sensory environment or task demands. We formulate a probabilistic, rational decision-making framework for inhibitory control in the stop signal paradigm. Our model posits that subjects maintain a Bayes-optimal, continually updated representation of sensory inputs, and repeatedly assess the relative value of stopping and going on a fine temporal scale, in order to make an optimal decision on when and whether to go on each trial. We further posit that they implement this continual evaluation with respect to a global objective function capturing the various reward and penalties associated with different behavioral outcomes, such as speed and accuracy, or the relative costs of stop errors and go errors. We demonstrate that our rational decision-making model naturally gives rise to basic behavioral characteristics consistently observed for this paradigm, as well as more subtle effects due to contextual factors such as reward contingencies or motivational factors. Furthermore, we show that the classical race model can be seen as a computationally simpler, perhaps neurally plausible, approximation to optimal decision-making. This conceptual link allows us to predict how the parameters of the race model, such as the stopping latency, should change with task parameters and individual experiences/ability. PMID:21647306

  13. Grid integration and smart grid implementation of emerging technologies in electric power systems through approximate dynamic programming

    NASA Astrophysics Data System (ADS)

    Xiao, Jingjie

    A key hurdle for implementing real-time pricing of electricity is a lack of consumer response. Solutions to overcome the hurdle include energy management systems that automatically optimize household appliance usage such as plug-in hybrid electric vehicle charging (and discharging with vehicle-to-grid) via two-way communication with the grid. Real-time pricing, combined with household automation devices, has the potential to accommodate an increasing penetration of plug-in hybrid electric vehicles. In addition, the intelligent energy controller on the consumer side can help increase the utilization rate of intermittent renewable resources, as the demand can be managed to match the output profile of renewables, thus making intermittent resources such as wind and solar more economically competitive in the long run. One of the main goals of this dissertation is to present how real-time retail pricing, aided by control automation devices, can be integrated into the wholesale electricity market under various uncertainties through approximate dynamic programming. What distinguishes this study from the existing work in the literature is that wholesale electricity prices are endogenously determined as we solve a system operator's economic dispatch problem on an hourly basis over the entire optimization horizon. This modeling and algorithm framework allows a feedback loop between electricity prices and electricity consumption to be fully captured. While we are interested in a near-optimal solution using approximate dynamic programming, deterministic linear programming benchmarks are used to demonstrate the quality of our solutions. The other goal of the dissertation is to use this framework to provide numerical evidence in the debate on whether real-time pricing is superior to the current flat rate structure in terms of both economic and environmental impacts. For this purpose, the modeling and algorithm framework is tested on a large-scale test case with hundreds of power plants based on data available for California, making our findings useful for policy makers, system operators and utility companies to gain a concrete understanding of the scale of the impact of real-time pricing.

  14. Complex optimization for big computational and experimental neutron datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  15. Complex optimization for big computational and experimental neutron datasets

    DOE PAGES

    Bao, Feng; Oak Ridge National Lab.; Archibald, Richard; ...

    2016-11-07

    Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  16. A transmission power optimization with a minimum node degree for energy-efficient wireless sensor networks with full-reachability.

    PubMed

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-03-20

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Unoptimized transmission power either causes interference with neighboring nodes or fails to link them. The optimization of transmission power depends on the expected node degree and the node distribution. In this study, an optimization approach for an energy-efficient, fully reachable wireless sensor network is proposed. The approach uses an adjustment model of the transmission range with a minimum node degree, focusing on topology control and optimization of the transmission range according to node degree and node density. The model balances the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network, and the two indices are compared through simulation results to demonstrate the practicability of the framework. Furthermore, the relationship between the indices under various node degrees is analyzed to generalize the characteristics of node densities. The results on the reliability and feasibility of the proposed approach will benefit future real-world deployments.
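
    The degree-range relationship the abstract relies on can be made concrete with a textbook geometric argument (a sketch under stated assumptions, not the paper's adjustment model): for nodes placed uniformly with density rho, a node with transmission range r has roughly rho * pi * r^2 - 1 expected neighbours, so a target minimum degree k suggests r = sqrt((k + 1) / (pi * rho)).

        import math

        def required_range(k_min: int, density: float) -> float:
            """Transmission range giving an expected node degree of about k_min
            for uniformly distributed nodes with the given density (nodes/m^2)."""
            return math.sqrt((k_min + 1) / (math.pi * density))

        # example: 0.01 nodes per square metre, target minimum degree of 6
        print(round(required_range(6, 0.01), 2), "m")    # ~14.93 m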

  17. A Transmission Power Optimization with a Minimum Node Degree for Energy-Efficient Wireless Sensor Networks with Full-Reachability

    PubMed Central

    Chen, Yi-Ting; Horng, Mong-Fong; Lo, Chih-Cheng; Chu, Shu-Chuan; Pan, Jeng-Shyang; Liao, Bin-Yih

    2013-01-01

    Transmission power optimization is the most significant factor in prolonging the lifetime and maintaining the connection quality of wireless sensor networks. Unoptimized transmission power either causes interference with neighboring nodes or fails to link them. The optimization of transmission power depends on the expected node degree and the node distribution. In this study, an optimization approach for an energy-efficient, fully reachable wireless sensor network is proposed. The approach uses an adjustment model of the transmission range with a minimum node degree, focusing on topology control and optimization of the transmission range according to node degree and node density. The model balances the tradeoff between energy efficiency and full reachability to obtain an ideal transmission range. In addition, connectivity and reachability are used as performance indices to evaluate the connection quality of a network, and the two indices are compared through simulation results to demonstrate the practicability of the framework. Furthermore, the relationship between the indices under various node degrees is analyzed to generalize the characteristics of node densities. The results on the reliability and feasibility of the proposed approach will benefit future real-world deployments. PMID:23519351

  18. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents are designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
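
    A minimal sketch of the bounded-rational-agents idea (hypothetical toy code, not the authors' implementation) is shown below: two agents each hold an independent distribution over their own moves, repeatedly relax toward a Boltzmann distribution of their expected cost given the other agent's distribution, and the temperature is annealed so the product distribution concentrates on the minimizing joint move.

        import numpy as np

        def G(x0, x1):
            # toy objective to minimize over binary moves; minimum at (1, 1)
            return (x0 - 1) ** 2 + (x1 - 1) ** 2 + 0.5 * x0 * x1

        moves = [0, 1]
        q = [np.full(2, 0.5), np.full(2, 0.5)]    # independent per-agent distributions
        T = 2.0                                   # "temperature" being annealed

        for step in range(50):
            for i in (0, 1):
                j = 1 - i
                # expected cost of each of agent i's moves, averaged over agent j
                exp_cost = np.array([
                    sum(q[j][mj] * (G(mi, mj) if i == 0 else G(mj, mi)) for mj in moves)
                    for mi in moves
                ])
                w = np.exp(-exp_cost / T)         # Boltzmann update for agent i
                q[i] = w / w.sum()
            T *= 0.9                              # anneal: tighten the joint distribution

        print("most probable joint move:", (int(q[0].argmax()), int(q[1].argmax())))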

  19. Big Data Challenges of High-Dimensional Continuous-Time Mean-Variance Portfolio Selection and a Remedy.

    PubMed

    Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying

    2017-08-01

    Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
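
    In the spirit of the constrained ℓ1 step described above (a toy sketch with invented data, not the authors' exact LPO formulation), the program below recovers a sparse weight vector by minimizing the ℓ1 norm of w subject to an infinity-norm bound on Sigma @ w - mu, recast as a linear program.

        import numpy as np
        from scipy.optimize import linprog

        # minimize ||w||_1  subject to  ||Sigma @ w - mu||_inf <= lam,
        # written as an LP in (w, u) with |w_i| <= u_i (toy data throughout).
        rng = np.random.default_rng(0)
        n = 20                                   # number of assets (toy size)
        returns = rng.normal(0.001, 0.02, size=(250, n))
        mu = returns.mean(axis=0)                # sample mean returns
        Sigma = np.cov(returns, rowvar=False)    # sample covariance
        lam = 0.5 * np.abs(mu).max()             # relaxation level (tuning parameter)

        I = np.eye(n)
        c = np.concatenate([np.zeros(n), np.ones(n)])     # objective: sum of u
        A_ub = np.vstack([
            np.hstack([ I, -I]),                          #  w - u <= 0
            np.hstack([-I, -I]),                          # -w - u <= 0
            np.hstack([ Sigma, np.zeros((n, n))]),        #  Sigma w - mu <= lam
            np.hstack([-Sigma, np.zeros((n, n))]),        #  mu - Sigma w <= lam
        ])
        b_ub = np.concatenate([np.zeros(2 * n), mu + lam, lam - mu])
        bounds = [(None, None)] * n + [(0, None)] * n     # w free, u >= 0

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        w = res.x[:n]
        print("nonzero weights:", int(np.sum(np.abs(w) > 1e-8)), "of", n)

    The ℓ1 objective is what drives the sparsity: tightening lam trades sparsity against fidelity to the estimated moments.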

  20. A modelling framework for predicting the optimal balance between control and surveillance effort in the local eradication of tuberculosis in New Zealand wildlife.

    PubMed

    Gormley, Andrew M; Holland, E Penelope; Barron, Mandy C; Anderson, Dean P; Nugent, Graham

    2016-03-01

    Bovine tuberculosis (TB) impacts livestock farming in New Zealand, where the introduced marsupial brushtail possum (Trichosurus vulpecula) is the wildlife maintenance host for Mycobacterium bovis. New Zealand has implemented a campaign to control TB using a co-ordinated programme of livestock diagnostic testing and large-scale culling of possums, with the long-term aim of TB eradication. For management of the disease in wildlife, methods that can optimise the balance between control and surveillance effort will facilitate the objective of eradication on a fixed or limited budget. We modelled and compared management options to optimise the balance between the two activities necessary to achieve and verify eradication of TB from New Zealand wildlife: the number of lethal population control operations required to halt the M. bovis infection cycle in possums, and the subsequent surveillance effort needed to confidently declare TB freedom post-control. The approach considered the costs of control and surveillance, as well as the potential costs of re-control resulting from false declaration of TB freedom. The required years of surveillance decreased with an increasing number of lethal possum control operations, but the overall time to declare TB freedom depended on additional factors, such as the probability of freedom from disease after control and the probability of success of mop-up control, i.e. retroactive culling following detection of persistent disease in the residual possum population. The total expected cost was also dependent on a number of factors, many of which had wide cost ranges, suggesting that an optimal strategy is unlikely to be singular and fixed, but will likely vary for each area being considered. Our approach provides a simple framework that considers the known and potential costs of possum control and TB surveillance, enabling managers to optimise the balance between these two activities to achieve and prove eradication of a wildlife disease, or the pest species that transmits it, in the most expedient and economical way. This cost- and risk-evaluation approach may be applicable to other wildlife disease problems where limited management funds exist. Copyright © 2016 Elsevier B.V. All rights reserved.
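
    The control-versus-surveillance trade-off can be illustrated with a deliberately simplified expected-cost calculation (all numbers and functional forms below are invented for illustration and are not the paper's model): more control operations shorten the surveillance phase and lower the chance of falsely declaring freedom, at the price of higher up-front control cost.

        control_cost_per_op = 120_000        # cost of one possum control operation (invented)
        surveillance_cost_per_year = 40_000  # annual surveillance cost (invented)
        recontrol_cost = 500_000             # cost if freedom is declared falsely (invented)

        def surveillance_years(n_ops):
            # assumed: each extra control operation removes two years of surveillance
            return max(1, 8 - 2 * n_ops)

        def prob_false_freedom(n_ops):
            # assumed: risk of wrongly declaring freedom halves with each operation
            return 0.4 * 0.5 ** n_ops

        def expected_cost(n_ops):
            return (n_ops * control_cost_per_op
                    + surveillance_years(n_ops) * surveillance_cost_per_year
                    + prob_false_freedom(n_ops) * recontrol_cost)

        for n in range(1, 6):
            print(n, "operations -> expected cost", round(expected_cost(n)))
        print("cheapest under these assumptions:",
              min(range(1, 6), key=expected_cost), "operations")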
